WorldWideScience

Sample records for surface normal parallel

  1. Density functional study of a typical thiol tethered on a gold surface: ruptures under normal or parallel stretch

    International Nuclear Information System (INIS)

    Wang, Guan M; Sandberg, William C; Kenny, Steven D

    2006-01-01

    The mechanical and dynamical properties of a model Au(111)/thiol surface system were investigated by using localized atomic-type orbital density functional theory in the local density approximation. Relaxing the system gives a configuration where the sulfur atom forms covalent bonds to two adjacent gold atoms as the lowest energy structure. Investigations based on ab initio molecular dynamics simulations at 300, 350 and 370 K show that this tethering system is stable. The rupture behaviour between the thiol and the surface was studied by displacing the free end of the thiol. Calculated energy profiles show a process of multiple successive ruptures that account for experimental observations. The process features successive ruptures of the two Au-S bonds followed by the extraction of one S-bonded Au atom from the surface. The force required to rupture the thiol from the surface was found to be dependent on the direction in which the thiol was displaced, with values comparable with AFM measurements. These results aid the understanding of failure dynamics of Au(111)-thiol-tethered biosurfaces in microfluidic devices where fluidic shear and normal forces are of concern

  2. The role of bed-parallel slip in the development of complex normal fault zones

    Science.gov (United States)

    Delogkos, Efstratios; Childs, Conrad; Manzocchi, Tom; Walsh, John J.; Pavlides, Spyros

    2017-04-01

    Normal faults exposed in Kardia lignite mine, Ptolemais Basin, NW Greece formed at the same time as bed-parallel slip-surfaces, so that while the normal faults grew they were intermittently offset by bed-parallel slip. Following offset by a bed-parallel slip-surface, further fault growth is accommodated by reactivation on one or both of the offset fault segments. Where one fault is reactivated the site of bed-parallel slip is a bypassed asperity. Where both faults are reactivated, they propagate past each other to form a volume between overlapping fault segments that displays many of the characteristics of relay zones, including elevated strains and transfer of displacement between segments. Unlike conventional relay zones, however, these structures contain either a repeated or a missing section of stratigraphy which has a thickness equal to the throw of the fault at the time of the bed-parallel slip event, and the displacement profiles along the relay-bounding fault segments have discrete steps at their intersections with bed-parallel slip-surfaces. With further increase in displacement, the overlapping fault segments connect to form a fault-bound lens. Conventional relay zones form during initial fault propagation, but with coeval bed-parallel slip, relay-like structures can form later in the growth of a fault. Geometrical restoration of cross-sections through selected faults shows that repeated bed-parallel slip events during fault growth can lead to complex internal fault zone structure that masks its origin. Bed-parallel slip, in this case, is attributed to flexural-slip arising from hanging-wall rollover associated with a basin-bounding fault outside the study area.

  3. Normal Isocurvature Surfaces and Special Isocurvature Circles (SIC)

    Science.gov (United States)

    Manoussakis, Gerassimos; Delikaraoglou, Demitris

    2010-05-01

An isocurvature surface of a gravity field is a surface on which the value of the plumblines' curvature is constant. Here we study the isocurvature surfaces of the Earth's normal gravity field. The normal gravity field is a symmetric gravity field; therefore the isocurvature surfaces are surfaces of revolution. But even in this case the relations needed for their study are far from simple. Therefore, to study an isocurvature surface we make special assumptions to form a vector equation which holds only for a small coordinate patch of the isocurvature surface. Yet from the definition of an isocurvature surface and the properties of the normal gravity field it is possible to express very interesting global geometrical properties of these surfaces without invoking surface differential calculus. The gradient of the plumblines' curvature function is normal to an isocurvature surface. If P is a point of an isocurvature surface and "Φ" is the angle of the gradient of the plumblines' curvature with the equatorial plane, then this direction is the direction along which the curvature of the plumbline decreases/increases the most, and therefore it is related to the strength of the normal gravity field. We show that this direction is constant along a line of curvature of the isocurvature surface and that this line is an isocurvature circle. In addition, we show that on each isocurvature surface there is at least one isocurvature circle along which the direction of maximum variation of the plumblines' curvature function is parallel to the equatorial plane of the ellipsoid of revolution. This circle is defined as a Special Isocurvature Circle (SIC). Finally, we prove that all these SICs lie on a special surface of revolution, the so-called SIC surface. That is to say, a SIC is not an isolated curve in three-dimensional space.

  4. A curvature theory for discrete surfaces based on mesh parallelity

    KAUST Repository

    Bobenko, Alexander Ivanovich; Pottmann, Helmut; Wallner, Johannes

    2009-01-01

We consider a general theory of curvatures of discrete surfaces equipped with edgewise parallel Gauss images, and where mean and Gaussian curvatures of faces are derived from the faces' areas and mixed areas. Remarkably, these notions are capable of unifying notable previously defined classes of surfaces, such as discrete isothermic minimal surfaces and surfaces of constant mean curvature.

  5. Surface tree languages and parallel derivation trees

    NARCIS (Netherlands)

    Engelfriet, Joost

    1976-01-01

    The surface tree languages obtained by top-down finite state transformation of monadic trees are exactly the frontier-preserving homomorphic images of sets of derivation trees of ETOL systems. The corresponding class of tree transformation languages is therefore equal to the class of ETOL languages.

  6. A curvature theory for discrete surfaces based on mesh parallelity

    KAUST Repository

    Bobenko, Alexander Ivanovich

    2009-12-18

We consider a general theory of curvatures of discrete surfaces equipped with edgewise parallel Gauss images, and where mean and Gaussian curvatures of faces are derived from the faces' areas and mixed areas. Remarkably, these notions are capable of unifying notable previously defined classes of surfaces, such as discrete isothermic minimal surfaces and surfaces of constant mean curvature. We discuss various types of natural Gauss images, the existence of principal curvatures, constant curvature surfaces, Christoffel duality, Koenigs nets, contact element nets, s-isothermic nets, and interesting special cases such as discrete Delaunay surfaces derived from elliptic billiards. © 2009 Springer-Verlag.
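
The face-based curvature notions mentioned above can be made concrete with a small sketch. Assuming a face and its parallel Gauss-image face are given as parallel planar polygons expressed in a common 2D frame, the mean and Gaussian curvatures follow from the ordinary and mixed areas via the discrete Steiner-type formula A(f + t s) = (1 - 2Ht + Kt²) A(f), as we read the construction; the function names and sample polygons below are illustrative only.

```python
import numpy as np

def polygon_area(pts):
    """Signed area of a planar polygon given as an (n, 2) array (shoelace formula)."""
    x, y = np.asarray(pts, dtype=float).T
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

def face_curvatures(face, gauss_face):
    """Mean and Gaussian curvature of a face f paired with its parallel Gauss-image
    face s, from A(f), A(s) and the mixed area A(f, s):
        H = -A(f, s) / A(f),   K = A(s) / A(f).
    Both polygons are assumed planar, parallel and expressed in one 2D frame."""
    face, gauss_face = np.asarray(face, dtype=float), np.asarray(gauss_face, dtype=float)
    area_f = polygon_area(face)
    area_s = polygon_area(gauss_face)
    mixed = 0.5 * (polygon_area(face + gauss_face) - area_f - area_s)
    return -mixed / area_f, area_s / area_f

# Illustrative example: a unit square face with a shrunken parallel Gauss-image face.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(face_curvatures(square, 0.1 * square))
```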

  7. Surface topography of parallel grinding process for nonaxisymmetric aspheric lens

    International Nuclear Information System (INIS)

    Zhang Ningning; Wang Zhenzhong; Pan Ri; Wang Chunjin; Guo Yinbiao

    2012-01-01

Workpiece surface profile, texture and roughness can be predicted by modeling the topography of the wheel surface and the kinematics of the grinding process, which together form an important part of precision grinding process theory. Parallel grinding technology is an important method for machining nonaxisymmetric aspheric lenses, but there are few reports on relevant simulations. In this paper, a simulation method based on parallel grinding for precision machining of aspheric lenses is proposed. The method combines modeling of the random surface of the wheel with modeling of the single-grain track based on arc wheel contact points. A mathematical algorithm for the surface topography is then proposed and applied under different machining parameters. The consistency between simulation and test results shows that the algorithm is correct and efficient. (authors)

  8. Normal Incidence for Graded Index Surfaces

    Science.gov (United States)

    Khankhoje, Uday K.; Van Zyl, Jakob

    2011-01-01

A plane wave is incident normally from vacuum (η0 = 1) onto a smooth surface. The substrate has three layers; the topmost layer has thickness d1 and permittivity ε1. The corresponding quantities for the next layer are d2 and ε2, while the third layer, which is semi-infinite, has index η3. The Hallikainen model [1] is used to relate volumetric soil moisture to the permittivity. Here, we consider the relation for the real part of the permittivity for a typical loam soil: ε'(mv) = 2.8571 + 3.9678·mv + 118.85·mv².
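
As a quick illustration, the quoted loam-soil relation can be evaluated directly; the following minimal sketch only encodes the polynomial above (the function name and the example moisture values are ours).

```python
# Minimal sketch of the quoted Hallikainen-type relation for a typical loam soil:
# real part of the relative permittivity as a quadratic polynomial in volumetric
# soil moisture mv (function name and example values are illustrative only).

def loam_permittivity_real(mv: float) -> float:
    """Real part of the relative permittivity for volumetric soil moisture mv."""
    return 2.8571 + 3.9678 * mv + 118.85 * mv ** 2

if __name__ == "__main__":
    for mv in (0.05, 0.15, 0.30):
        print(f"mv = {mv:.2f}  ->  eps' = {loam_permittivity_real(mv):.2f}")
```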

  9. Stress fields around a crack lying parallel to a free surface

    International Nuclear Information System (INIS)

    Higashida, Yutaka; Kamada, K.

    1980-12-01

A method of stress analysis for a two-dimensional crack, which is subjected to internal gas pressure and situated parallel to a free surface of a material, is presented. It is based on the concept of continuously distributed edge dislocations of two kinds, i.e. one with Burgers vector normal to the free surface and the other with Burgers vector parallel to it. The stress fields of individual dislocations are chosen so as to satisfy stress-free boundary conditions at the free surface, by taking account of image dislocations. The distributions of both kinds of dislocations in the crack are derived so as to give the internal gas pressure and, at the same time, to satisfy the shear-stress-free boundary condition on the crack surface. The stress fields σxx, σyy and σxy in the sub-surface layer are then determined from them. They have square-root singularities at the crack tip. (author)

  10. Dynamic surface-pressure instrumentation for rods in parallel flow

    International Nuclear Information System (INIS)

    Mulcahy, T.M.; Lawrence, W.

    1979-01-01

    Methods employed and experience gained in measuring random fluid boundary layer pressures on the surface of a small diameter cylindrical rod subject to dense, nonhomogeneous, turbulent, parallel flow in a relatively noise-contaminated flow loop are described. Emphasis is placed on identification of instrumentation problems; description of transducer construction, mounting, and waterproofing; and the pretest calibration required to achieve instrumentation capable of reliable data acquisition

  11. Stability analysis of rough surfaces in adhesive normal contact

    Science.gov (United States)

    Rey, Valentine; Bleyer, Jeremy

    2018-03-01

This paper deals with adhesive frictionless normal contact between one elastic flat solid and one stiff solid with a rough surface. After computing the equilibrium solution of the energy minimization principle subject to the contact constraints, we study the stability of this equilibrium solution. This study of stability amounts to solving an eigenvalue problem with inequality constraints. To achieve this goal, we propose a proximal algorithm which classifies the solution as stable or unstable and gives the instability modes. This method has a low computational cost since no linear system inversion is required, and it is also suitable for parallel implementation. Illustrations are given for Hertzian contact and for rough contact.

  12. Mechanics of curved surfaces, with application to surface-parallel cracks

    Science.gov (United States)

    Martel, Stephen J.

    2011-10-01

    The surfaces of many bodies are weakened by shallow enigmatic cracks that parallel the surface. A re-formulation of the static equilibrium equations in a curvilinear reference frame shows that a tension perpendicular to a traction-free surface can arise at shallow depths even under the influence of gravity. This condition occurs if σ11k1 + σ22k2 > ρg cosβ, where k1 and k2 are the principal curvatures (negative if convex) at the surface, σ11 and σ22 are tensile (positive) or compressive (negative) stresses parallel to the respective principal curvature arcs, ρ is material density, g is gravitational acceleration, and β is the surface slope. The curvature terms do not appear in equilibrium equations in a Cartesian reference frame. Compression parallel to a convex surface thus can cause subsurface cracks to open. A quantitative test of the relationship above accounts for where sheeting joints (prominent shallow surface-parallel fractures in rock) are abundant and for where they are scarce or absent in the varied topography of Yosemite National Park, resolving key aspects of a classic problem in geology: the formation of sheeting joints. Moreover, since the equilibrium equations are independent of rheology, the relationship above can be applied to delamination or spalling caused by surface-parallel cracks in many materials.
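
The inequality above is easy to check numerically. The sketch below simply evaluates σ11 k1 + σ22 k2 > ρ g cosβ with the stated sign conventions; the numerical values are illustrative placeholders (only the roughly -10 MPa surface-parallel compression echoes the Yosemite measurements cited in record 18).

```python
import math

def surface_parallel_tension_possible(sigma11, sigma22, k1, k2, rho, beta_deg, g=9.81):
    """Return True if sigma11*k1 + sigma22*k2 > rho*g*cos(beta).

    Sign convention from the abstract: curvatures are negative if convex,
    stresses are negative in compression; beta is the surface slope in degrees.
    """
    return sigma11 * k1 + sigma22 * k2 > rho * g * math.cos(math.radians(beta_deg))

# Illustrative numbers only: 10 MPa surface-parallel compression, a gently
# convex dome (radius of curvature ~500 m), granite-like density, 20 degree slope.
print(surface_parallel_tension_possible(
    sigma11=-10e6, sigma22=-10e6, k1=-1/500.0, k2=-1/500.0,
    rho=2700.0, beta_deg=20.0))
```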

  13. Pair-breaking effects by parallel magnetic field in electric-field-induced surface superconductivity

    International Nuclear Information System (INIS)

    Nabeta, Masahiro; Tanaka, Kenta K.; Onari, Seiichiro; Ichioka, Masanori

    2016-01-01

Highlights:
• The Zeeman effect shifts the superconducting gaps of the sub-band system towards pair-breaking.
• Higher-level sub-bands become normal-state-like electronic states under magnetic fields.
• The magnetic field dependence of the zero-energy DOS reflects multi-gap superconductivity.

Abstract: We study paramagnetic pair-breaking in electric-field-induced surface superconductivity when a magnetic field is applied parallel to the surface. The calculation is performed within Bogoliubov-de Gennes theory with s-wave pairing, including the screening effect of the electric fields by the induced carriers near the surface. Due to the Zeeman shift by the applied fields, electronic states at higher-level sub-bands become normal-state-like. Therefore, the magnetic field dependence of the Fermi-energy density of states reflects the multi-gap structure of the surface superconductivity.

  14. Surface tension of normal and heavy water

    International Nuclear Information System (INIS)

    Straub, J.; Rosner, N.; Grigull, V.

    1980-01-01

A Skeleton Table and a simple interpolation equation for the surface tension of light water were developed by Working Group III of the International Association for the Properties of Steam and are recommended as an International Standard. The Skeleton Table is based on all known measurements of the surface tension, and the individual data were weighted according to the accuracy of the measurements. The form of the interpolation equation is based on a physical concept: it represents an extension of the van der Waals equation, where the exponent conforms to the 'Scaling Laws'. In addition, for application purposes, simple relations for the Laplace coefficient and for the density difference between the liquid and gaseous phases of light water are given. The same form of interpolation equation for the surface tension can be used for heavy water, for which the coefficients are given. However, this equation is based only on a single set of data. (orig.)

  15. Symmetric and asymmetric capillary bridges between a rough surface and a parallel surface.

    Science.gov (United States)

    Wang, Yongxin; Michielsen, Stephen; Lee, Hoon Joo

    2013-09-03

    Although the formation of a capillary bridge between two parallel surfaces has been extensively studied, the majority of research has described only symmetric capillary bridges between two smooth surfaces. In this work, an instrument was built to form a capillary bridge by squeezing a liquid drop on one surface with another surface. An analytical solution that describes the shape of symmetric capillary bridges joining two smooth surfaces has been extended to bridges that are asymmetric about the midplane and to rough surfaces. The solution, given by elliptical integrals of the first and second kind, is consistent with a constant Laplace pressure over the entire surface and has been verified for water, Kaydol, and dodecane drops forming symmetric and asymmetric bridges between parallel smooth surfaces. This solution has been applied to asymmetric capillary bridges between a smooth surface and a rough fabric surface as well as symmetric bridges between two rough surfaces. These solutions have been experimentally verified, and good agreement has been found between predicted and experimental profiles for small drops where the effect of gravity is negligible. Finally, a protocol for determining the profile from the volume and height of the capillary bridge has been developed and experimentally verified.

  16. RPE cell surface proteins in normal and dystrophic rats

    International Nuclear Information System (INIS)

    Clark, V.M.; Hall, M.O.

    1986-01-01

Membrane-bound proteins in plasma membrane enriched fractions from cultured rat RPE were analyzed by two-dimensional gel electrophoresis. Membrane proteins were characterized on three increasingly specific levels. Total protein was visualized by silver staining. A maximum of 102 separate proteins were counted in silver-stained gels. Glycoproteins were labeled with 3H-glucosamine or 3H-fucose and detected by autoradiography. Thirty-eight fucose-labeled and 61-71 glucosamine-labeled proteins were identified. All of the fucose-labeled proteins were labeled with glucosamine-derived radioactivity. Proteins exposed at the cell surface were labeled by lactoperoxidase-catalyzed radioiodination prior to preparation of membranes for two-dimensional analysis. Forty separate 125I-labeled surface proteins were resolved by two-dimensional electrophoresis/autoradiography. Comparison with the glycoprotein map showed that a number of these surface-labeled proteins were glycoproteins. Two-dimensional maps of total protein, fucose-labeled and glucosamine-labeled glycoproteins, and 125I-labeled surface proteins of membranes from dystrophic (RCS rdy-p+) and normal (Long Evans or RCS rdy+p+) RPE were compared. No differences in the total protein or surface-labeled proteins were observed. However, the results suggest that a 183K glycoprotein is more heavily glycosylated with glucosamine and fucose in normal RPE membranes as compared to membranes from dystrophic RPE.

  17. Image reconstruction method for electrical capacitance tomography based on the combined series and parallel normalization model

    International Nuclear Information System (INIS)

    Dong, Xiangyuan; Guo, Shuqing

    2008-01-01

In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on the combined series and parallel normalization model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem. Also, the adaptive coefficient of the combined model is deduced by numerical optimization. Simulation results indicate that it produces higher-quality images than the algorithms based on the parallel or series models for the cases tested in this paper. It provides a new algorithm for ECT applications.
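
The abstract does not spell out the normalization formulas, so the sketch below uses the standard parallel (linear) and series normalization models from the ECT literature and a simple weighted combination as a stand-in for the adaptive coefficient; treat the weighting and the example capacitance values as assumptions.

```python
import numpy as np

def normalize_capacitance(cm, cl, ch, p=0.5):
    """Normalized capacitance for ECT image reconstruction.

    cm, cl, ch : measured, empty-pipe (low) and full-pipe (high) capacitances.
    p          : weighting coefficient of the combined model (the paper deduces
                 it by numerical optimization; 0.5 here is only a placeholder).
    Returns the parallel-model, series-model and combined normalized values.
    """
    cm, cl, ch = (np.asarray(x, dtype=float) for x in (cm, cl, ch))
    g_parallel = (cm - cl) / (ch - cl)                          # linear (parallel) model
    g_series = (1.0 / cm - 1.0 / cl) / (1.0 / ch - 1.0 / cl)    # series model
    g_combined = p * g_parallel + (1.0 - p) * g_series          # weighted combination
    return g_parallel, g_series, g_combined

print(normalize_capacitance(cm=1.3e-12, cl=1.0e-12, ch=2.0e-12))
```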

  18. Formation of Sheeting Joints as a Result of Compression Parallel to Convex Surfaces, With Examples from Yosemite National Park, California

    Science.gov (United States)

    Martel, S. J.

    2008-12-01

The formation of sheeting joints has been an outstanding problem in geology. New observations and analyses indicate that sheeting joints develop in response to a near-surface tension induced by compressive stresses parallel to a convex slope (hypothesis 1) rather than by removal of overburden by erosion, as conventionally assumed (hypothesis 2). Opening mode displacements across the joints together with the absence of mineral precipitates within the joints mean that sheeting joints open in response to a near-surface tension normal to the surface rather than a pressurized fluid. Consideration of a plot of this tensile stress as a function of depth normal to the surface reveals that a true tension must arise in the shallow subsurface if the rate of that tensile stress change with depth is positive at the surface. Static equilibrium requires this rate (derivative) to equal P22 k2 + P33 k3 - ρ g cosβ, where k2 and k3 are the principal curvatures of the surface, P22 and P33 are the respective surface-parallel normal stresses along the principal curvatures, ρ is the material density, g is gravitational acceleration, and β is the slope. This derivative will be positive and sheeting joints can open if at least one principal curvature is sufficiently convex (negative) and the surface-parallel stresses are sufficiently compressive (negative). At several sites with sheeting joints (e.g., Yosemite National Park in California), the measured topographic curvatures and the measured surface-parallel stresses of about -10 MPa combine to meet this condition. In apparent violation of hypothesis 1, sheeting joints occur locally at the bottom of Tenaya Canyon, one of the deepest glaciated, U-shaped (concave) canyons in the park. The canyon-bottom sheeting joints only occur, however, where the canyon is convex downstream, a direction that nearly coincides with the direction of the most compressive stress measured in the vicinity. The most compressive stress acting along the convex

  19. Body surface area prediction in normal, hypermuscular, and obese mice.

    Science.gov (United States)

    Cheung, Michael C; Spalding, Paul B; Gutierrez, Juan C; Balkan, Wayne; Namias, Nicholas; Koniaris, Leonidas G; Zimmers, Teresa A

    2009-05-15

Accurate determination of body surface area (BSA) in experimental animals is essential for modeling effects of burn injury or drug metabolism. Two-dimensional surface area is related to three-dimensional body volume, which in turn can be estimated from body mass. The Meeh equation relates body surface area to the two-thirds power of body mass through a constant, k, which must be determined empirically by species and size. We found older values of k overestimated BSA in certain mice; thus we empirically determined k for various strains of normal, obese, and hypermuscular mice. BSA was computed from digitally scanned pelts, and nonlinear regression analysis was used to determine the best-fit k. The empirically determined k for C57BL/6J mice of 9.82 was not significantly different from other inbred and outbred mouse strains of normal body composition. However, the mean k of the nearly spheroid, obese lepr(db/db) mice (k = 8.29) was significantly lower than for normals, as were values for dumbbell-shaped, hypermuscular mice with either targeted deletion of the myostatin gene (Mstn) (k = 8.48) or with skeletal muscle-specific expression of a dominant negative myostatin receptor (Acvr2b) (k = 8.80). Hypermuscular and obese mice differ substantially from normals in shape and density, resulting in considerably altered k values. This suggests Meeh constants should be determined empirically for animals of altered body composition. Use of these new, improved Meeh constants will allow greater accuracy in experimental models of burn injury and pharmacokinetics.
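
A minimal sketch of the Meeh relation with the k values reported above; the assumption that mass is in grams and BSA in cm² (the units customary for the Meeh equation), as well as the example 30 g body mass, is ours.

```python
def meeh_bsa(mass_g: float, k: float) -> float:
    """Body surface area (cm^2) from the Meeh equation: BSA = k * m^(2/3)."""
    return k * mass_g ** (2.0 / 3.0)

# Meeh constants reported in the abstract (mass assumed in grams, BSA in cm^2).
MEEH_K = {
    "normal (C57BL/6J)": 9.82,
    "obese lepr(db/db)": 8.29,
    "hypermuscular Mstn-null": 8.48,
    "hypermuscular dn-Acvr2b": 8.80,
}

for label, k in MEEH_K.items():
    print(f"{label}: BSA of a 30 g mouse ~ {meeh_bsa(30.0, k):.1f} cm^2")
```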

  20. Growth of contact area between rough surfaces under normal stress

    Science.gov (United States)

    Stesky, R. M.; Hannan, S. S.

    1987-05-01

The contact area between deforming rough surfaces in marble, alabaster, and quartz was measured from thin sections of surfaces bonded under load with a low-viscosity epoxy resin. The marble and alabaster samples had contact areas that increased with stress at an accelerating rate. This result suggests that the strength of the asperity contacts decreased progressively during the deformation, following some form of strain-weakening relationship. This conclusion is supported by petrographic observations of the thin sections, which indicate that much of the deformation was cataclastic, with minor twinning of calcite and kinking of gypsum. In the case of the quartz, the observed contact area was small and increased approximately linearly with normal stress. Only the irreversible cataclastic deformation was observed; however, strain-induced birefringence and cracking of the epoxy, not observed with the other rocks, suggest that significant elastic deformation occurred but was recovered during unloading.

  1. Log-Normality and Multifractal Analysis of Flame Surface Statistics

    Science.gov (United States)

    Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.

    2013-11-01

The turbulent flame surface is typically highly wrinkled and folded at a multitude of scales controlled by various flame properties. It is useful if the information contained in this complex geometry can be projected onto a simpler regular geometry for the use of spectral, wavelet or multifractal analyses. Here we investigate local flame surface statistics of a turbulent flame expanding under constant pressure. First, the statistics of the local length ratio are experimentally obtained from high-speed Mie scattering images. For a spherically expanding flame, the length ratio on the measurement plane, at predefined equiangular sectors, is defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming an isotropic distribution of such flame segments, we convolute suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at corresponding area-ratio pdfs. Both pdfs are found to be nearly log-normally distributed and show self-similar behavior with increasing radius. The near log-normality and rather intermittent behavior of the flame-length ratio suggest similarity with dissipation-rate quantities, which motivates multifractal analysis. Currently at Indian Institute of Science, India.
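
A small sketch of the near-log-normality check described above, using synthetic length-ratio samples in place of the values extracted from the Mie-scattering images; the distribution parameters below are placeholders.

```python
import numpy as np
from scipy import stats

# Hypothetical length-ratio samples standing in for values extracted from the
# Mie-scattering images (synthetic data for illustration only).
rng = np.random.default_rng(0)
length_ratio = rng.lognormal(mean=0.1, sigma=0.25, size=2000)

# Fit a log-normal with the location fixed at zero, as is usual for positive ratios.
shape, loc, scale = stats.lognorm.fit(length_ratio, floc=0)

# A simple check of near-log-normality: compare the fitted shape parameter with
# the standard deviation of the log-transformed samples.
print("fitted sigma:", shape, " log-space std:", np.log(length_ratio).std())
```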

  2. Contribution of diffuser surfaces to efficiency of tilted T shape parallel highway noise barriers

    Directory of Open Access Journals (Sweden)

    N. Javid Rouzi

    2009-04-01

Background and aims: The paper presents the results of an investigation on the acoustic performance of tilted-profile parallel barriers with quadratic residue diffuser (QRD) tops and faces.
Methods: A 2D boundary element method (BEM) is used to predict the barrier insertion loss. The results for rigid barriers and for barriers with absorptive coverage are also calculated for comparison. Using QRD on the top surface and faces of all tilted-profile parallel barrier models introduced here is found to improve the efficiency of the barriers compared with the equivalent rigid parallel barrier at the examined receiver positions.
Results: Applying a QRD with a design frequency of 400 Hz on a 5-degree tilted parallel barrier improves the overall performance of its equivalent rigid barrier by 1.8 dB(A). Increasing the treated surfaces with reactive elements shifts the effective performance toward lower frequencies. It is found that by tilting the barriers from 0 to 10 degrees in a parallel set-up, the degradation effects in parallel barriers are reduced, but the absorption effect of fibrous materials and also the diffusivity of the quadratic residue diffuser are reduced significantly. In this case all the designed barriers have better performance with 10 degrees of tilting in a parallel set-up.
Conclusion: The most economic traffic noise parallel barrier, which produces significantly high performance, is achieved by covering the top surface of the barrier closest to the receiver with just a QRD with a design frequency of 400 Hz and a tilting angle of 10 degrees. The average A-weighted insertion loss of this barrier is predicted to be 16.3 dB(A).

  3. Parallel Simulation of Three-Dimensional Free Surface Fluid Flow Problems

    International Nuclear Information System (INIS)

    BAER, THOMAS A.; SACKINGER, PHILIP A.; SUBIA, SAMUEL R.

    1999-01-01

Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high performance computing. We describe parallel computation of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully coupled governing conservation equations, and a ''pseudo-solid'' mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Other issues discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulations include problem decomposition to equally distribute computational work among the processors of an SPMD computer and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel computing environment due to the difficulty of solving large-scale systems. Parallel computations will be demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three-dimensional free-surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speed-ups for fixed problem size, a class of problems of immediate practical importance.

  4. X-ray diffraction study of surface-layer structure in parallel grazing rays

    International Nuclear Information System (INIS)

    Shtypulyak, N.I.; Yakimov, I.I.; Litvintsev, V.V.

    1989-01-01

    An x-ray diffraction method is described for study of thin polycrystalline and amorphous films and surface layers in an extremely asymmetrical diffraction system in parallel grazing rays using a DRON-3.0 diffractometer. The minimum grazing angles correspond to diffraction under conditions of total external reflection and a layer depth of ∼ 2.5-8 nm

  5. An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.

    Science.gov (United States)

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-09-01

Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported with regard to the problem of generating Poisson disks on surfaces due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to the conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
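
The priority-based conflict resolution described above can be sketched in a few lines. The version below is a single serial pass over candidates in the plane with a Euclidean distance test, whereas the paper works with the intrinsic (geodesic) metric on a surface and iterates to a maximal sample set; each candidate's test is independent, which is what makes the scheme parallelizable.

```python
import numpy as np

def priority_poisson_disk(candidates, r, seed=0):
    """Sketch of priority-based conflict resolution for Poisson disk sampling.

    Each candidate gets a random, unique priority; a candidate survives only if
    no candidate within distance r has a higher priority.  No two accepted
    samples can then be closer than r, although a single pass is not maximal.
    """
    pts = np.asarray(candidates, dtype=float)
    priority = np.random.default_rng(seed).permutation(len(pts))
    keep = []
    for i, p in enumerate(pts):                      # each iteration is independent
        d = np.linalg.norm(pts - p, axis=1)          # Euclidean stand-in for geodesic distance
        conflict = (d < r) & (d > 0) & (priority > priority[i])
        if not conflict.any():
            keep.append(i)
    return pts[keep]

samples = priority_poisson_disk(np.random.default_rng(1).random((500, 2)), r=0.05)
print(len(samples), "accepted samples")
```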

  6. Fast parallel diffractive multi-beam femtosecond laser surface micro-structuring

    Energy Technology Data Exchange (ETDEWEB)

    Zheng Kuang, E-mail: z.kuang@liv.ac.uk [Laser Group, Department of Engineering, University of Liverpool, Brodie Building, Liverpool L69 3GQ (United Kingdom); Dun Liu; Perrie, Walter; Edwardson, Stuart; Sharp, Martin; Fearon, Eamonn; Dearden, Geoff; Watkins, Ken [Laser Group, Department of Engineering, University of Liverpool, Brodie Building, Liverpool L69 3GQ (United Kingdom)

    2009-04-15

    Fast parallel femtosecond laser surface micro-structuring is demonstrated using a spatial light modulator (SLM). The Gratings and Lenses algorithm, which is simple and computationally fast, is used to calculate computer generated holograms (CGHs) producing diffractive multiple beams for the parallel processing. The results show that the finite laser bandwidth can significantly alter the intensity distribution of diffracted beams at higher angles resulting in elongated hole shapes. In addition, by synchronisation of applied CGHs and the scanning system, true 3D micro-structures are created on Ti6Al4V.

  7. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Loring, Burlen; Karimabadi, Homa; Rortershteyn, Vadim

    2014-07-01

The surface line integral convolution (LIC) visualization technique produces dense visualization of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.

  8. An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.

    Science.gov (United States)

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-03-08

Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.

  9. Memory effect on energy losses of charged particles moving parallel to solid surface

    International Nuclear Information System (INIS)

    Kwei, C.M.; Tu, Y.H.; Hsu, Y.H.; Tung, C.J.

    2006-01-01

Theoretical derivations were made for the induced potential and the stopping power of a charged particle moving close and parallel to the surface of a solid. It was shown that the induced potential produced by the interaction of the particle and the solid depends not only on the velocity but also on the previous velocity of the particle before its last inelastic interaction. In other words, the particle keeps a memory of its previous velocity, v′, in determining the stopping power for the particle of velocity v. Based on dielectric response theory, formulas were derived for the induced potential and the stopping power with memory effect. An extended Drude dielectric function with spatial dispersion was used in the application of these formulas to a proton moving parallel to a Si surface. It was found that the induced potential with memory effect lay between the induced potentials without memory effect for constant velocities v′ and v. The memory effect was manifest when the proton changed its velocity in the previous inelastic interaction. This memory effect also reduced the stopping power of the proton. The formulas derived in the present work can be applied to any solid surface and to a charged particle moving with an arbitrary parallel trajectory either inside or outside the solid.

  10. A Case Study of a Hybrid Parallel 3D Surface Rendering Graphics Architecture

    DEFF Research Database (Denmark)

    Holten-Lund, Hans Erik; Madsen, Jan; Pedersen, Steen

    1997-01-01

This paper presents a case study in the design strategy used in building a graphics computer for drawing very complex 3D geometric surfaces. The goal is to build a PC-based computer system capable of handling surfaces built from about 2 million triangles, and to be able to render a perspective view of these on a computer display at interactive frame rates, i.e. processing around 50 million triangles per second. The paper presents a hardware/software architecture called HPGA (Hybrid Parallel Graphics Architecture) which is likely to be able to carry out this task. The case study focuses on techniques to increase...

  11. From Massively Parallel Algorithms and Fluctuating Time Horizons to Nonequilibrium Surface Growth

    International Nuclear Information System (INIS)

    Korniss, G.; Toroczkai, Z.; Novotny, M. A.; Rikvold, P. A.

    2000-01-01

    We study the asymptotic scaling properties of a massively parallel algorithm for discrete-event simulations where the discrete events are Poisson arrivals. The evolution of the simulated time horizon is analogous to a nonequilibrium surface. Monte Carlo simulations and a coarse-grained approximation indicate that the macroscopic landscape in the steady state is governed by the Edwards-Wilkinson Hamiltonian. Since the efficiency of the algorithm corresponds to the density of local minima in the associated surface, our results imply that the algorithm is asymptotically scalable. (c) 2000 The American Physical Society
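
A minimal sketch of the one-site-per-processing-element update rule studied in this line of work, as we read it: a site advances its local simulated time only when it is a local minimum of the virtual-time horizon, and the advance is an exponentially distributed increment (Poisson arrivals). The utilization measured below corresponds to the density of local minima mentioned in the abstract; the ring size and step count are arbitrary.

```python
import numpy as np

def evolve_time_horizon(n_sites=1000, n_steps=5000, seed=0):
    """Evolve the virtual-time surface of a conservative parallel discrete-event
    simulation on a ring: a site may advance its local simulated time only if it
    is not ahead of either nearest neighbor, and then adds an exponentially
    distributed increment.  Returns the final surface and the mean utilization
    (fraction of sites that advance per step) over the second half of the run."""
    rng = np.random.default_rng(seed)
    tau = np.zeros(n_sites)
    utilization = []
    for _ in range(n_steps):
        left = np.roll(tau, 1)
        right = np.roll(tau, -1)
        active = (tau <= left) & (tau <= right)          # local minima may update
        tau[active] += rng.exponential(1.0, active.sum())
        utilization.append(active.mean())
    return tau, np.mean(utilization[n_steps // 2:])

tau, u = evolve_time_horizon()
print("steady-state utilization ~", round(u, 3), " surface width ~", round(tau.std(), 2))
```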

  12. Determination of Optimum Viewing Angles for the Angular Normalization of Land Surface Temperature over Vegetated Surface

    Directory of Open Access Journals (Sweden)

    Huazhong Ren

    2015-03-01

Multi-angular observation of land surface thermal radiation is considered to be a promising method of performing the angular normalization of land surface temperature (LST) retrieved from remote sensing data. This paper focuses on an investigation of the minimum requirements of viewing angles to perform such normalizations on LST. The kernel-driven bi-directional reflectance distribution function (BRDF) model is first extended to the thermal infrared (TIR) domain as the TIR-BRDF model, and its uncertainty is shown to be less than 0.3 K when used to fit the hemispheric directional thermal radiation. A local optimum three-angle combination is found and verified using the TIR-BRDF model based on two patterns: the single-point pattern and the linear-array pattern. The TIR-BRDF model is applied to an airborne multi-angular dataset to retrieve the LST at nadir (Te-nadir) from different viewing directions, and the results show that this model can obtain reliable Te-nadir from 3 to 4 directional observations with large angle intervals, thus corresponding to large temperature angular variations. The Te-nadir is generally larger than the temperature in the slant direction, with a difference of approximately 0.5-2.0 K for vegetated pixels and up to several kelvins for non-vegetated pixels. The findings of this paper will facilitate the future development of multi-angular thermal infrared sensors.

  13. Near surface silicide formation after off-normal Fe-implantation of Si(001) surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Khanbabaee, B., E-mail: khanbabaee@physik.uni-siegen.de; Pietsch, U. [Solid State Physics, University of Siegen, D-57068 Siegen (Germany); Lützenkirchen-Hecht, D. [Fachbereich C - Physik, Bergische Universität Wuppertal, D-42097 Wuppertal (Germany); Hübner, R.; Grenzer, J.; Facsko, S. [Helmholtz-Zentrum Dresden-Rossendorf, 01314 Dresden (Germany)

    2014-07-14

We report on the formation of non-crystalline Fe silicides of various stoichiometries below the amorphized surface of crystalline Si(001) after irradiation with 5 keV Fe+ ions under off-normal incidence. We examined samples prepared with ion fluences of 0.1 × 10^17 and 5 × 10^17 ions cm^-2, exhibiting a flat and a patterned surface morphology, respectively. Whereas the iron silicides are found across the whole surface of the flat sample, they are concentrated at the top of the ridges of the rippled surface. A depth-resolved analysis of the chemical states of the Si and Fe atoms in the near-surface region was performed by combining X-ray photoelectron spectroscopy and X-ray absorption spectroscopy (XAS) using synchrotron radiation. The chemical shift and the line shape of the Si 2p core levels and valence bands were measured and associated with the formation of silicide bonds of different stoichiometric composition, changing from an Fe-rich silicide (Fe3Si) close to the surface into a Si-rich silicide (FeSi2) towards the inner interface to the Si(001) substrate. This finding is supported by XAS analysis at the Fe K-edge, which shows changes of the chemical environment and the near-order atomic coordination of the Fe atoms in the region close to the surface. Because a similar Fe depth profile has been found for samples co-sputtered with Fe during Kr+ ion irradiation, our results suggest the importance of chemically bonded Fe in the surface region for the process of ripple formation.

  14. X-ray diffractometric study on the near-surface layer structure in parallel glancing rays

    International Nuclear Information System (INIS)

    Shtypulyak, N.I.; Yakimov, I.I.; Litvintsev, V.V.

    1988-01-01

An X-ray diffraction method is suggested for investigating thin films and near-surface layers under the conditions of total external reflection (TER) and in the geometry of parallel glancing rays. Experimental realization of the method using the DRON-3.0 diffractometer is described. A calculation of the required aperture width of the Soller slit system is presented. The described diffraction scheme is used to investigate thin-film crystal structure at glancing angles in the range from TER up to 8-10 deg. The thickness of the investigated layer in this case changes from 2.5-8 nm up to 10^3 nm. The suggested diffraction method in parallel glancing rays is especially important when investigating films with thicknesses below 1000-2000 Å.

  15. Pros and cons of rotating ground motion records to fault-normal/parallel directions for response history analysis of buildings

    Science.gov (United States)

    Kalkan, Erol; Kwong, Neal S.

    2014-01-01

    According to the regulatory building codes in the United States (e.g., 2010 California Building Code), at least two horizontal ground motion components are required for three-dimensional (3D) response history analysis (RHA) of building structures. For sites within 5 km of an active fault, these records should be rotated to fault-normal/fault-parallel (FN/FP) directions, and two RHAs should be performed separately (when FN and then FP are aligned with the transverse direction of the structural axes). It is assumed that this approach will lead to two sets of responses that envelope the range of possible responses over all nonredundant rotation angles. This assumption is examined here, for the first time, using a 3D computer model of a six-story reinforced-concrete instrumented building subjected to an ensemble of bidirectional near-fault ground motions. Peak values of engineering demand parameters (EDPs) were computed for rotation angles ranging from 0 through 180° to quantify the difference between peak values of EDPs over all rotation angles and those due to FN/FP direction rotated motions. It is demonstrated that rotating ground motions to FN/FP directions (1) does not always lead to the maximum responses over all angles, (2) does not always envelope the range of possible responses, and (3) does not provide maximum responses for all EDPs simultaneously even if it provides a maximum response for a specific EDP.

  16. Preliminary surface analysis of etched, bleached, and normal bovine enamel

    International Nuclear Information System (INIS)

    Ruse, N.D.; Smith, D.C.; Torneck, C.D.; Titley, K.C.

    1990-01-01

    X-ray photoelectron spectroscopic (XPS) and secondary ion-mass spectroscopic (SIMS) analyses were performed on unground un-pumiced, unground pumiced, and ground labial enamel surfaces of young bovine incisors exposed to four different treatments: (1) immersion in 35% H2O2 for 60 min; (2) immersion in 37% H3PO4 for 60 s; (3) immersion in 35% H2O2 for 60 min, in distilled water for two min, and in 37% H3PO4 for 60 s; (4) immersion in 37% H3PO4 for 60 s, in distilled water for two min, and in 35% H2O2 for 60 min. Untreated unground un-pumiced, unground pumiced, and ground enamel surfaces, as well as synthetic hydroxyapatite surfaces, served as controls for intra-tooth evaluations of the effects of different treatments. The analyses indicated that exposure to 35% H2O2 alone, besides increasing the nitrogen content, produced no other significant change in the elemental composition of any of the enamel surfaces investigated. Exposure to 37% H3PO4, however, produced a marked decrease in calcium (Ca) and phosphorus (P) concentrations and an increase in carbon (C) and nitrogen (N) concentrations in unground un-pumiced specimens only, and a decrease in C concentration in ground specimens. These results suggest that the reported decrease in the adhesive bond strength of resin to 35% H2O2-treated enamel is not caused by a change in the elemental composition of treated enamel surfaces. They also suggest that an organic-rich layer, unaffected by acid-etching, may be present on the unground un-pumiced surface of young bovine incisors. This layer can be removed by thorough pumicing or by grinding. An awareness of its presence is important when young bovine teeth are used in a model system for evaluation of resin adhesiveness

  17. Empirical valence bond models for reactive potential energy surfaces: a parallel multilevel genetic program approach.

    Science.gov (United States)

    Bellucci, Michael A; Coker, David F

    2011-07-28

We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower-level genetic programs are used to optimize coevolving populations in parallel, while the higher-level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower-level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increase the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency are tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in the gas phase and in protic solvent. © 2011 American Institute of Physics

  18. Surface Casimir densities and induced cosmological constant on parallel branes in AdS spacetime

    International Nuclear Information System (INIS)

    Saharian, Aram A.

    2004-01-01

    Vacuum expectation value of the surface energy-momentum tensor is evaluated for a massive scalar field with general curvature coupling parameter subject to Robin boundary conditions on two parallel branes located on (D+1)-dimensional anti-de Sitter bulk. The general case of different Robin coefficients on separate branes is considered. As a regularization procedure the generalized zeta function technique is used, in combination with contour integral representations. The surface energies on the branes are presented in the form of the sums of single brane and second brane-induced parts. For the geometry of a single brane both regions, on the left (L-region) and on the right (R-region), of the brane are considered. The surface densities for separate L- and R-regions contain pole and finite contributions. For an infinitely thin brane taking these regions together, in odd spatial dimensions the pole parts cancel and the total surface energy is finite. The parts in the surface densities generated by the presence of the second brane are finite for all nonzero values of the interbrane separation. It is shown that for large distances between the branes the induced surface densities give rise to an exponentially suppressed cosmological constant on the brane. In the Randall-Sundrum braneworld model, for the interbrane distances solving the hierarchy problem between the gravitational and electroweak mass scales, the cosmological constant generated on the visible brane is of the right order of magnitude with the value suggested by the cosmological observations

  19. Lining cells on normal human vertebral bone surfaces

    International Nuclear Information System (INIS)

    Henning, C.B.; Lloyd, E.L.

    1982-01-01

    Thoracic vertebrae from two individuals with no bone disease were studied with the electron microscope to determine cell morphology in relation to bone mineral. The work was undertaken to determine if cell morphology or spatial relationships between the bone lining cells and bone mineral could account for the relative infrequency of bone tumors which arise at this site following radium intake, when compared with other sites, such as the head of the femur. Cells lining the vertebral mineral were found to be generally rounded in appearance with varied numbers of cytoplasmic granules, and they appeared to have a high density per unit of surface area. These features contrasted with the single layer of flattened cells characteristic of the bone lining cells of the femur. A tentative discussion of the reasons for the relative infrequency of tumors in the vertebrae following radium acquisition is presented

  20. Solving very large scattering problems using a parallel PWTD-enhanced surface integral equation solver

    KAUST Repository

    Liu, Yang

    2013-07-01

The computational complexity and memory requirements of multilevel plane wave time domain (PWTD)-accelerated marching-on-in-time (MOT)-based surface integral equation (SIE) solvers scale as O(Nt Ns log^2 Ns) and O(Ns^1.5); here Nt and Ns denote the numbers of temporal and spatial basis functions discretizing the current [Shanker et al., IEEE Trans. Antennas Propag., 51, 628-641, 2003]. In the past, serial versions of these solvers have been successfully applied to the analysis of scattering from perfectly electrically conducting as well as homogeneous penetrable targets involving up to Ns ≈ 0.5 × 10^6 and Nt ≈ 10^3. To solve larger problems, parallel PWTD-enhanced MOT solvers are called for. Even though a simple parallelization strategy was demonstrated in the context of electromagnetic compatibility analysis [M. Lu et al., in Proc. IEEE Int. Symp. AP-S, 4, 4212-4215, 2004], by and large, progress in this area has been slow. The lack of progress can be attributed wholesale to difficulties associated with the construction of a scalable PWTD kernel. © 2013 IEEE.

  1. Parallel Computation of RCS of Electrically Large Platform with Coatings Modeled with NURBS Surfaces

    Directory of Open Access Journals (Sweden)

    Ying Yan

    2012-01-01

The significance of Radar Cross Section (RCS) in military applications makes its prediction an important problem. This paper uses large-scale parallel Physical Optics (PO) to realize the fast computation of the RCS of electrically large targets, which are modeled by Non-Uniform Rational B-Spline (NURBS) surfaces and coated with dielectric materials. Some numerical examples are presented to validate this paper's method. In addition, 1024 CPUs are used at the Shanghai Supercomputer Center (SSC) to perform the simulation of a model with a maximum electrical size of 1966.7 λ for the first time in China. The results show that this paper's method can greatly speed up the calculation and is capable of solving real-life RCS prediction problems.

  2. A Study of Parallels Between Antarctica South Pole Traverse Equipment and Lunar/Mars Surface Systems

    Science.gov (United States)

    Mueller, Robert P.; Hoffman, Stephen, J.; Thur, Paul

    2010-01-01

    The parallels between an actual Antarctica South Pole re-supply traverse conducted by the National Science Foundation (NSF) Office of Polar Programs in 2009 have been studied with respect to the latest mission architecture concepts being generated by the United States National Aeronautics and Space Administration (NASA) for lunar and Mars surface systems scenarios. The challenges faced by both endeavors are similar since they must both deliver equipment and supplies to support operations in an extreme environment with little margin for error in order to be successful. By carefully and closely monitoring the manifesting and operational support equipment lists which will enable this South Pole traverse, functional areas have been identified. The equipment required to support these functions will be listed with relevant properties such as mass, volume, spare parts and maintenance schedules. This equipment will be compared to space systems currently in use and projected to be required to support equivalent and parallel functions in Lunar and Mars missions in order to provide a level of realistic benchmarking. Space operations have historically required significant amounts of support equipment and tools to operate and maintain the space systems that are the primary focus of the mission. By gaining insight and expertise in Antarctic South Pole traverses, space missions can use the experience gained over the last half century of Antarctic operations in order to design for operations, maintenance, dual use, robustness and safety which will result in a more cost effective, user friendly, and lower risk surface system on the Moon and Mars. It is anticipated that the U.S Antarctic Program (USAP) will also realize benefits for this interaction with NASA in at least two areas: an understanding of how NASA plans and carries out its missions and possible improved efficiency through factors such as weight savings, alternative technologies, or modifications in training and

  3. Crossover from normal (N) Ohmic subdivision to superconducting (S) equipartition of current in parallel conductors at the N-S transition: Theory

    OpenAIRE

    Kumar, N.

    2007-01-01

The recently observed (1) equipartition of current in parallel conductors at and below the Normal-Superconducting (N-S) transition can be understood in terms of a Landau-Ginzburg order-parameter phenomenology. This complements the explanation proposed earlier (1), based on the flux-flow resistance providing a nonlinear negative current feedback towards equipartition when the transition is approached from above. The present treatment also unifies the usual textbook inductive subdivision expected much belo...

  4. Precise on-machine extraction of the surface normal vector using an eddy current sensor array

    International Nuclear Information System (INIS)

    Wang, Yongqing; Lian, Meng; Liu, Haibo; Ying, Yangwei; Sheng, Xianjun

    2016-01-01

    To satisfy the requirements of on-machine measurement of the surface normal during complex surface manufacturing, a highly robust normal vector extraction method using an Eddy current (EC) displacement sensor array is developed, the output of which is almost unaffected by surface brightness, machining coolant and environmental noise. A precise normal vector extraction model based on a triangular-distributed EC sensor array is first established. Calibration of the effects of object surface inclination and coupling interference on measurement results, and the relative position of EC sensors, is involved. A novel apparatus employing three EC sensors and a force transducer was designed, which can be easily integrated into the computer numerical control (CNC) machine tool spindle and/or robot terminal execution. Finally, to test the validity and practicability of the proposed method, typical experiments were conducted with specified testing pieces using the developed approach and system, such as an inclined plane and cylindrical and spherical surfaces. (paper)
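    A minimal sketch of how a triangular sensor arrangement can yield a normal estimate is given below; it assumes a locally planar surface and ignores the inclination/cross-coupling calibration described in the abstract, so it is an illustration rather than the paper's model (all names are hypothetical).

        import numpy as np

        def normal_from_three_gaps(sensor_xy, gaps):
            """Estimate a unit surface normal from three gap readings.

            sensor_xy : (3, 2) in-plane positions of the three eddy current
                        sensors (triangular layout), in consistent units.
            gaps      : length-3 distances from each sensor to the surface,
                        measured along the common sensor axis (z).
            """
            sensor_xy = np.asarray(sensor_xy, dtype=float)
            gaps = np.asarray(gaps, dtype=float)
            # Points where each sensor axis meets the surface.
            pts = np.column_stack([sensor_xy, -gaps])
            # Two edge vectors of the triangle spanned on the surface.
            v1, v2 = pts[1] - pts[0], pts[2] - pts[0]
            n = np.cross(v1, v2)
            n /= np.linalg.norm(n)
            # Orient the normal toward the sensor side (+z).
            return n if n[2] > 0 else -n

        # Example: equilateral layout (mm), surface tilted about the x axis.
        sensors = [(0.0, 10.0), (-8.66, -5.0), (8.66, -5.0)]
        print(normal_from_three_gaps(sensors, [2.0, 2.5, 2.5]))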

  5. Precise on-machine extraction of the surface normal vector using an eddy current sensor array

    Science.gov (United States)

    Wang, Yongqing; Lian, Meng; Liu, Haibo; Ying, Yangwei; Sheng, Xianjun

    2016-11-01

    To satisfy the requirements of on-machine measurement of the surface normal during complex surface manufacturing, a highly robust normal vector extraction method using an Eddy current (EC) displacement sensor array is developed, the output of which is almost unaffected by surface brightness, machining coolant and environmental noise. A precise normal vector extraction model based on a triangular-distributed EC sensor array is first established. Calibration of the effects of object surface inclination and coupling interference on measurement results, and the relative position of EC sensors, is involved. A novel apparatus employing three EC sensors and a force transducer was designed, which can be easily integrated into the computer numerical control (CNC) machine tool spindle and/or robot terminal execution. Finally, to test the validity and practicability of the proposed method, typical experiments were conducted with specified testing pieces using the developed approach and system, such as an inclined plane and cylindrical and spherical surfaces.

  6. Air bubble-induced detachment of polystyrene particles with different sizes from collector surfaces in a parallel plate flow chamber

    NARCIS (Netherlands)

    Gomez-Suarez, C; van der Mei, HC; Busscher, HJ

    2001-01-01

    Particle size was found to be an important factor in air bubble-induced detachment of colloidal particles from collector surfaces in a parallel plate flow chamber and generally polystyrene particles with a diameter of 806 nm detached less than particles with a diameter of 1400 nm. Particle

  7. Comparison of waxy and normal potato starch remaining granules after chemical surface gelatinization: Pasting behavior and surface morphology

    NARCIS (Netherlands)

    Huang, J.; Chen Zenghong,; Xu, Yalun; Li, Hongliang; Liu, Shuxing; Yang, Daqing; Schols, H.A.

    2014-01-01

    To understand the contribution of the granule inner portion to the pasting properties of starch, waxy potato starch, two normal potato starches and their acetylated starch samples were subjected to chemical surface gelatinization by 3.8 mol/L CaCl2 to obtain remaining granules. Native and acetylated,

  8. Continuum modeling of ion-beam eroded surfaces under normal incidence: Impact of stochastic fluctuations

    International Nuclear Information System (INIS)

    Dreimann, Karsten; Linz, Stefan J.

    2010-01-01

    Graphical abstract: Deterministic surface pattern (left) and its stochastic counterpart (right) arising in a stochastic damped Kuramoto-Sivashinsky equation that serves as a model equation for ion-beam eroded surfaces and is systematically investigated. - Abstract: Using a recently proposed field equation for the surface evolution of ion-beam eroded semiconductor target materials under normal incidence, we systematically explore the impact of additive stochastic fluctuations that are permanently present during the erosion process. Specifically, we investigate the dependence of the surface roughness, the underlying pattern forming properties and the bifurcation behavior on the strength of the fluctuations.
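    For orientation, a damped, stochastically forced Kuramoto-Sivashinsky equation of the type referred to above is often written (in suitable units, and up to the precise coefficient conventions of the paper) as

        \partial_t h = -\alpha h - \nu \nabla^{2} h - K \nabla^{4} h + \frac{\lambda}{2}(\nabla h)^{2} + \eta(\mathbf{x},t),

    where h is the surface height, \alpha the damping coefficient and \eta a Gaussian white noise whose strength is the fluctuation parameter whose influence is studied.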

  9. Detachment of polystyrene particles from collector surfaces by surface tension forces induced by air-bubble passage through a parallel plate flow chamber

    NARCIS (Netherlands)

    Wit, PJ; vanderMei, HC; Busscher, HJ

    1997-01-01

    By allowing an air-bubble to pass through a parallel plate flow chamber with negatively charged, colloidal polystyrene particles adhering to the bottom collector plate of the chamber, the detachment of adhering particles stimulated by surface tension forces induced by the passage of a liquid-air

  10. Asymptotic Normality of the Optimal Solution in Multiresponse Surface Mathematical Programming

    OpenAIRE

    Díaz-García, José A.; Caro-Lopera, Francisco J.

    2015-01-01

    An explicit form for the perturbation effect of the matrix of regression coefficients on the optimal solution in multiresponse surface methodology is obtained in this paper. Then, the sensitivity analysis of the optimal solution is studied and the critical point characterisation of the convex program, associated with the optimum of a multiresponse surface, is also analysed. Finally, the asymptotic normality of the optimal solution is derived by the standard methods.

  11. Selecting the induction heating for normalization of deposited surfaces of cylindrical parts

    Directory of Open Access Journals (Sweden)

    Олена Валеріївна Бережна

    2017-07-01

    Full Text Available The machine parts recovered by electric contact surfacing with a metal strip are characterized by high loading of the surface layer, which has a significant impact on their performance. Therefore, improvement of the operational stability of fast-wearing machine parts through the use of combined treatment technologies is required. Not the whole work-piece but only the worn zones are subjected to recovery by electric contact surfacing, the tape thickness and the depth of the heat-affected zone being no more than a few millimeters. The most suitable approach in this case is therefore local surface heating by high-frequency currents. This method is also economical, because there is no need to heat the entire work-piece. An induction heating mode at constant power density has been proposed and analytically investigated. The ratios that make it possible to determine the main heating parameters, and thus to design the inductor for the normalization of the reconstructed surface of cylindrical parts, are given. These parameters are: specific power, frequency and warm-up time. The proposed induction heating mode is intermediate between quenching and through (cross-section) heating and makes it possible to simultaneously obtain the required temperatures at the surface and at the predetermined depth of the heated layer of cylindrical parts during normalization of their surfaces restored by electric contact surfacing
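    The frequency enters such a design mainly through the electromagnetic penetration (skin) depth; a standard estimate, quoted here only for context and not as the specific design relations of the paper, is

        \delta = \sqrt{\frac{2\rho}{\mu_0 \mu_r \omega}} = \sqrt{\frac{\rho}{\pi \mu_0 \mu_r f}},

    so that the chosen frequency f fixes the depth of the layer in which most of the induced power is dissipated, while the specific surface power and the warm-up time then set the temperature reached at that depth.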

  12. Normal Contacts of Lubricated Fractal Rough Surfaces at the Atomic Scale

    NARCIS (Netherlands)

    Solhjoo, Soheil; Vakis, Antonis I.

    The friction of contacting interfaces is a function of surface roughness and applied normal load. Under boundary lubrication, this frictional behavior changes as a function of lubricant wettability, viscosity, and density, by practically decreasing the possibility of dry contact. Many studies on

  13. Anti-parallel polarization switching in a triglycine sulfate organic ferroelectric insulator: The role of surface charges

    Science.gov (United States)

    Ma, He; Wu, Zhuangchun; Peng, Dongwen; Wang, Yaojin; Wang, Yiping; Yang, Ying; Yuan, Guoliang

    2018-04-01

    Four consecutive ferroelectric polarization switchings and an abnormal ring-like domain pattern can be induced by a single tip bias of a piezoresponse force microscope in a (010) triglycine sulfate (TGS) crystal. An external electric field anti-parallel to the original polarization induces the first polarization switching; however, the surface charges of TGS can move toward the tip location and induce the second polarization switching once the tip bias is removed. These two switchings produce a ring-like pattern composed of a central domain with downward polarization and an outer domain with upward polarization. Once the two domains gradually disappear as a result of depolarization, the other two polarization switchings occur one by one at the location of the TGS crystal contacted by the tip. However, this backswitching phenomenon does not occur when the external electric field is parallel to the original polarization. These results can be explained in terms of the surface charges rather than charges injected into the crystal.

  14. Surface structures of normal paraffins and cyclohexane monolayers and thin crystals grown on the (111) crystal face of platinum. A low-energy electron diffraction study

    International Nuclear Information System (INIS)

    Firment, L.E.; Somorjai, G.A.

    1977-01-01

    The surfaces of the normal paraffins (C3-C8) and cyclohexane have been studied using low-energy electron diffraction (LEED). The samples were prepared by vapor deposition on the (111) face of a platinum single crystal in ultrahigh vacuum, and were studied both as thick films and as adsorbed monolayers. These molecules form ordered monolayers on the clean metal surface in the temperature range 100-220 K and at a vapor flux corresponding to 10^-7 Torr. In the adsorbed monolayers of the normal paraffins (C4-C8), the molecules lie with their chain axes parallel to the Pt surface and to Pt[110]. The paraffin monolayer structures undergo order-disorder transitions as a function of temperature. Multilayers condensed upon the ordered monolayers maintained the same orientation and packing as found in the monolayers. The surface structures of the growing organic crystals do not correspond to planes in their reported bulk crystal structures and are evidence for epitaxial growth of pseudomorphic crystal forms. Multilayers of n-octane and n-heptane condensed upon disordered monolayers have also grown with the (001) plane of the triclinic bulk crystal structures parallel to the surface. n-Butane has three monolayer structures on Pt(111), and one of the three is maintained during growth of the crystal. Cyclohexane forms an ordered monolayer, upon which a multilayer of cyclohexane grows exhibiting the (001) surface orientation of the monoclinic bulk crystal structure. Surface structures of saturated hydrocarbons are found to be very susceptible to electron-beam-induced damage. Surface charging interferes with LEED only at sample thicknesses greater than 200 Å.

  15. Large enhancement of thermoelectric effects in a tunneling-coupled parallel DQD-AB ring attached to one normal and one superconducting lead

    Science.gov (United States)

    Yao, Hui; Zhang, Chao; Li, Zhi-Jian; Nie, Yi-Hang; Niu, Peng-bin

    2018-05-01

    We theoretically investigate the thermoelectric properties of a tunneling-coupled parallel DQD-AB ring attached to one normal and one superconducting lead. The role of the intrinsic and extrinsic parameters in improving the thermoelectric properties is discussed. The peak value of the figure of merit near the gap edges increases as the asymmetry parameter decreases; in particular, when the asymmetry parameter is less than 0.5, the figure of merit near the gap edges rises rapidly. When the interdot coupling strength is less than the superconducting gap, the thermopower spectrum presents a single-platform structure, whereas when the interdot coupling strength is larger than the gap, a double-platform structure appears in the thermopower spectrum. Outside the gap the peak values of the figure of merit may reach the order of 10^2. On the basis of optimizing the internal parameters, the thermoelectric conversion efficiency of the device can be further improved by appropriately matching the total magnetic flux and the flux difference between the two subrings.

  16. Modeling of normal contact of elastic bodies with surface relief taken into account

    Science.gov (United States)

    Goryacheva, I. G.; Tsukanov, I. Yu

    2018-04-01

    An approach to account for the surface relief in normal contact problems for rough bodies, based on an additional displacement function for the asperities, is considered. The method and analytic expressions for calculating the additional displacement function for one-scale and two-scale wavy relief are presented. The influence of the microrelief geometric parameters, including the number of scales and the asperity density, on the additional displacements of the rough layer is analyzed.
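    As a point of reference for the one-scale case (and not the additional-displacement formulation of the paper itself), the classical Westergaard solution for an elastic half-plane pressed against a sinusoidal profile of amplitude \Delta and wavelength \lambda relates the mean pressure \bar{p} to the contact half-width a through

        \bar{p} = p^{*} \sin^{2}\!\left(\frac{\pi a}{\lambda}\right), \qquad p^{*} = \frac{\pi E^{*} \Delta}{\lambda},

    where p^{*} is the pressure at which contact becomes complete.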

  17. Improved Topographic Normalization for Landsat TM Images by Introducing the MODIS Surface BRDF

    Directory of Open Access Journals (Sweden)

    Yanli Zhang

    2015-05-01

    Full Text Available In rugged terrain, the accuracy of surface reflectance estimations is compromised by atmospheric and topographic effects. We propose a new method to simultaneously eliminate atmospheric and terrain effects in Landsat Thematic Mapper (TM) images based on a 30 m digital elevation model (DEM) and Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric products. Moreover, we define a normalized factor of the Bidirectional Reflectance Distribution Function (BRDF) to convert the reflectance of a sloping pixel into that of a flat pixel by using the Ross Thick-Li Sparse BRDF model (Ambrals algorithm) and MODIS BRDF/albedo kernel coefficient products. Atmospheric correction and topographic normalization were performed separately for TM images in the upper stream of the Heihe River Basin. The results show that using MODIS atmospheric products can effectively remove atmospheric effects compared with the Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) model and the Landsat Climate Data Record (CDR). Moreover, superior topographic effect removal can be achieved by considering the surface BRDF, compared with the Lambertian surface assumption of topographic normalization.
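    For reference, the kernel-driven model behind the MODIS BRDF/albedo product expresses the surface reflectance as a weighted sum of an isotropic term and two kernels,

        R(\theta_s, \theta_v, \phi; \lambda) = f_{\mathrm{iso}}(\lambda) + f_{\mathrm{vol}}(\lambda)\, K_{\mathrm{vol}}(\theta_s, \theta_v, \phi) + f_{\mathrm{geo}}(\lambda)\, K_{\mathrm{geo}}(\theta_s, \theta_v, \phi),

    and a BRDF-based normalization factor can then be formed, roughly, as the ratio of the modeled reflectances for the flat and the sloped illumination/viewing geometries; the exact factor used in the paper may differ in detail.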

  18. Renal function maturation in children: is normalization to surface area valid?

    International Nuclear Information System (INIS)

    Rutland, M.D.; Hassan, I.M.; Que, L.

    1999-01-01

    Full text: Gamma camera DTPA renograms were analysed to measure renal function by the rate at which the kidneys took up tracer from the blood. This was expressed either directly as the fractional uptake rate (FUR), which is not related to body size, or it was converted to a camera-based GFR by the formula GFR = blood volume x FUR, and this GFR was normalized to a body surface area of 1.73 m2. Most of the patients studied had one completely normal kidney, and one kidney with reflux but normal function and no large scars. The completely normal kidneys contributed, on average, 50% of the total renal function. The results were considered in age bands to display the effect of age on renal function. The camera-GFR measurements showed the conventional result of poor renal function in early childhood, with a slow rise to near-adult values by the age of 2 years, and somewhat low values throughout childhood. The uptake values showed a different pattern, with renal function rising to adult-equivalent values by the age of 4 months, and with children having better renal function than adults throughout most of their childhood. The standard deviations expressed as coefficients of variation (CV) were smaller for the FUR technique than for the GFR (Wilcoxon rank test, P < 0.01). These results resemble recent published measurements of absolute DMSA uptake, which are also unrelated to body size and show early renal maturation. The results also suggest that the reason children have lower serum creatinine levels than adults is that they have better renal function. If this were confirmed, it would raise doubts about the usefulness of normalizing renal function to body surface area in children
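    A minimal numerical illustration of the camera-based GFR described above; the function names and the example numbers are hypothetical, and only the relation GFR = blood volume x FUR and the 1.73 m2 surface-area normalization are taken from the text.

        def camera_gfr(blood_volume_ml, fur_per_min):
            """Camera-based GFR (mL/min) = blood volume x fractional uptake rate."""
            return blood_volume_ml * fur_per_min

        def normalize_to_bsa(gfr_ml_min, bsa_m2):
            """Scale a GFR to the conventional 1.73 m2 body surface area."""
            return gfr_ml_min * 1.73 / bsa_m2

        # Hypothetical child: 1.2 L blood volume, FUR of 0.05/min, BSA of 0.6 m2.
        gfr = camera_gfr(1200.0, 0.05)          # 60.0 mL/min
        print(gfr, normalize_to_bsa(gfr, 0.6))  # 60.0 173.0 (mL/min per 1.73 m2)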

  19. Dynamics of an optically confined nanoparticle diffusing normal to a surface.

    Science.gov (United States)

    Schein, Perry; O'Dell, Dakota; Erickson, David

    2016-06-01

    Here we measure the hindered diffusion of an optically confined nanoparticle in the direction normal to a surface, and we use this to determine the particle-surface interaction profile in terms of the absolute height. These studies are performed using the evanescent field of an optically excited single-mode silicon nitride waveguide, where the particle is confined in a height-dependent potential energy well generated from the balance of optical gradient and surface forces. Using a high-speed cmos camera, we demonstrate the ability to capture the short time-scale diffusion dominated motion for 800-nm-diam polystyrene particles, with measurement times of only a few seconds per particle. Using established theory, we show how this information can be used to estimate the equilibrium separation of the particle from the surface. As this measurement can be made simultaneously with equilibrium statistical mechanical measurements of the particle-surface interaction energy landscape, we demonstrate the ability to determine these in terms of the absolute rather than relative separation height. This enables the comparison of potential energy landscapes of particle-surface interactions measured under different experimental conditions, enhancing the utility of this technique.
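    The height dependence of the wall-normal diffusion coefficient probed by such measurements is often approximated with a closed-form fit to Brenner's exact series solution; the sketch below uses that common approximation as an assumption and is not the expression used by the authors.

        import numpy as np

        def hindered_normal_diffusion(D0, radius, gap):
            """Approximate near-wall hindered diffusion coefficient D_perp(h).

            D0     : bulk (Stokes-Einstein) diffusion coefficient
            radius : particle radius a
            gap    : surface-to-surface separation h (same units as radius)
            Uses D_perp/D0 ~ (6h^2 + 2ah) / (6h^2 + 9ah + 2a^2).
            """
            h = np.asarray(gap, dtype=float)
            a = float(radius)
            return D0 * (6 * h**2 + 2 * a * h) / (6 * h**2 + 9 * a * h + 2 * a**2)

        # Example: 400 nm radius sphere, bulk D0 = 1 (arbitrary units).
        print(hindered_normal_diffusion(1.0, 400.0, np.array([10.0, 100.0, 1000.0])))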

  20. Evolution of the Contact Area with Normal Load for Rough Surfaces: from Atomic to Macroscopic Scales.

    Science.gov (United States)

    Huang, Shiping

    2017-11-13

    The evolution of the contact area with normal load for rough surfaces has great fundamental and practical importance, ranging from earthquake dynamics to machine wear. This work bridges the gap between the atomic scale and the macroscopic scale for normal contact behavior. The real contact area, which is formed by a large ensemble of discrete contacts (clusters), is shown to be much smaller than the apparent surface area. The distribution of the discrete contact clusters and the interaction between them are key to revealing the mechanism of the contacting solids. To this end, Green's function molecular dynamics (GFMD) is used to study both how the contact clusters evolve from the atomic scale to the macroscopic scale and how the clusters interact. It is found that the interaction between clusters has a strong effect on their formation. The formation and distribution of the contact clusters are far more complicated than predicted by the asperity model, and ignoring the interaction between clusters leads to overestimating the contact force. In real contact, the contacting clusters are smaller and more discrete because of the interaction between the asperities. Understanding exactly how the contact area evolves with the normal load is essential for subsequent research on friction.

  1. Trajectories of cortical surface area and cortical volume maturation in normal brain development

    Directory of Open Access Journals (Sweden)

    Simon Ducharme

    2015-12-01

    Full Text Available This is a report of developmental trajectories of cortical surface area and cortical volume in the NIH MRI Study of Normal Brain Development. The quality-controlled sample included 384 individual typically-developing subjects with repeated scanning (1-3 scans per subject, total scans n=753) from 4.9 to 22.3 years of age. The best-fit model (cubic, quadratic, or first-order linear) was identified at each vertex using mixed-effects models, with statistical correction for multiple comparisons using random field theory. Analyses were performed with and without controlling for total brain volume. These data are provided for reference and comparison with other databases. Further discussion and interpretation of cortical developmental trajectories can be found in the associated article by Ducharme et al., "Trajectories of cortical thickness maturation in normal brain development - the importance of quality control procedures" (Ducharme et al., 2015) [1].

  2. A massively parallel GPU-accelerated model for analysis of fully nonlinear free surface waves

    DEFF Research Database (Denmark)

    Engsig-Karup, Allan Peter; Madsen, Morten G.; Glimberg, Stefan Lemvig

    2011-01-01

    -storage flexible-order accurate finite difference method that is known to be efficient and scalable on a CPU core (single thread). To achieve parallel performance of the relatively complex numerical model, we investigate a new trend in high-performance computing where many-core GPUs are utilized as high......-throughput co-processors to the CPU. We describe and demonstrate how this approach makes it possible to do fast desktop computations for large nonlinear wave problems in numerical wave tanks (NWTs) with close to 50/100 million total grid points in double/single precision with 4 GB global device memory...... available. A new code base has been developed in C++ and compute unified device architecture C and is found to improve the runtime more than an order of magnitude in double precision arithmetic for the same accuracy over an existing CPU (single thread) Fortran 90 code when executed on a single modern GPU

  3. Message-passing-interface-based parallel FDTD investigation on the EM scattering from a 1-D rough sea surface using uniaxial perfectly matched layer absorbing boundary.

    Science.gov (United States)

    Li, J; Guo, L-X; Zeng, H; Han, X-B

    2009-06-01

    A message-passing-interface (MPI)-based parallel finite-difference time-domain (FDTD) algorithm for the electromagnetic scattering from a 1-D randomly rough sea surface is presented. The uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different numbers of processors is illustrated for one sea surface realization, and the computation time of the parallel FDTD algorithm is dramatically reduced compared to a single-process implementation. Finally, some numerical results are shown, including the backscattering characteristics of the sea surface for different polarizations and the bistatic scattering from a sea surface at a large incident angle and a large wind speed.
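    A bare-bones sketch of the kind of one-dimensional domain decomposition such a parallel FDTD code rests on is given below; mpi4py is used only for illustration, the update equations are reduced to a 1-D free-space Yee scheme, and the UPML terms, sources and sea-surface model of the paper are not reproduced.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        nz_local = 100                  # grid cells owned by this rank
        ez = np.zeros(nz_local + 2)     # +2 ghost cells for neighbour data
        hy = np.zeros(nz_local + 2)

        up = rank + 1 if rank + 1 < size else MPI.PROC_NULL
        down = rank - 1 if rank - 1 >= 0 else MPI.PROC_NULL

        for step in range(200):
            # H update needs the E value just above the local slab.
            comm.Sendrecv(ez[1:2], dest=down, recvbuf=ez[-1:], source=up)
            hy[1:-1] += ez[2:] - ez[1:-1]
            # E update needs the H value just below the local slab.
            comm.Sendrecv(hy[-2:-1], dest=up, recvbuf=hy[0:1], source=down)
            ez[1:-1] += hy[1:-1] - hy[:-2]
            if rank == 0:
                ez[nz_local // 2] += np.exp(-((step - 30) ** 2) / 100.0)  # soft source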

  4. Internal structure of normal maize starch granules revealed by chemical surface gelatinization.

    Science.gov (United States)

    Pan, D D; Jane, J I

    2000-01-01

    Normal maize starch was fractionated into two sizes: large granules with diameters more than 5 microns and small granules with diameters less than 5 microns. The large granules were surface gelatinized by treating them with an aqueous LiCl solution (13 M) at 22-23 degrees C. Surface-gelatinized remaining granules were obtained by mechanical blending, and gelatinized surface starch was obtained by grinding with a mortar and a pestle. Starches of different granular sizes and radial locations, obtained after different degrees of surface gelatinization, were subjected to scanning electron microscopy, iodine potentiometric titration, gel-permeation chromatography, and amylopectin branch chain length analysis. Results showed that the remaining granules had a rough surface with a lamella structure. Amylose was more concentrated at the periphery than at the core of the granule. Amylopectin had longer long B-chains at the core than at the periphery of the granule. Greater proportions of the long B-chains were present at the core than at the periphery of the granule.

  5. Normal appearance of the prostate and seminal tract: MR imaging using an endorectal surface coil

    International Nuclear Information System (INIS)

    Kim, Myeong Jin; Lee, Jong Tae; Lee, Moo Sang; Choi, Pil Sik; Hong, Sung Joon; Lee, Yeon Hee; Choi, Hak Yong

    1994-01-01

    To assess the ability of MR imaging with an endorectal surface coil to depict the normal anatomical structure of the prostate and its adjacent organs, MR imaging using an endorectal surface coil was performed in 23 male patients (age 20-75) to evaluate various prostatic and vasovesicular disorders, i.e., 14 cases of ejaculatory problems, 3 cases of hypogonadism, 4 cases of prostatic cancer and 2 cases of benign prostatic hyperplasia. MR images were obtained with axial, sagittal and coronal fast spin echo long TR/TE sequences and axial spin echo short TR/TE sequences. The field of view was 10-12 cm and the scan thickness was 3-5 mm. Depiction of normal anatomical structures was excellent in all cases. On T2WI, the zonal anatomy of the prostate and the prostatic urethra, urethral crest, and ejaculatory duct were clearly visualized. On T1WI, the periprostatic fat plane was more clearly visualized. On transverse images, periprostatic structures were well visualized on T1WI, and on T2WI the anterior fibromuscular stroma, transition zone and peripheral zone could be readily differentiated. Coronal images were more helpful in visualizing both the central and peripheral zones; the vas deferens, ejaculatory duct and verumontanum were also more easily defined on these images. Sagittal images were helpful in depicting the anterior fibromuscular stroma, central zone and peripheral zone together with the prostatic urethra and ejaculatory duct in a single plane. High-resolution MR imaging with an endorectal surface coil can readily visualize the normal anatomy of the prostate and its related structures and may be useful in the evaluation of various diseases of the prostate and vasovesicular system.

  6. Normal appearance of the prostate and seminal tract: MR imaging using an endorectal surface coil

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myeong Jin; Lee, Jong Tae; Lee, Moo Sang; Choi, Pil Sik; Hong, Sung Joon; Lee, Yeon Hee; Choi, Hak Yong [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    1994-06-15

    To assess the ability of MR imaging with an endorectal surface coil to depict the normal anatomical structure of the prostate and its adjacent organs, MR imaging using an endorectal surface coil was performed in 23 male patients (age 20-75) to evaluate various prostatic and vasovesicular disorders, i.e., 14 cases of ejaculatory problems, 3 cases of hypogonadism, 4 cases of prostatic cancer and 2 cases of benign prostatic hyperplasia. MR images were obtained with axial, sagittal and coronal fast spin echo long TR/TE sequences and axial spin echo short TR/TE sequences. The field of view was 10-12 cm and the scan thickness was 3-5 mm. Depiction of normal anatomical structures was excellent in all cases. On T2WI, the zonal anatomy of the prostate and the prostatic urethra, urethral crest, and ejaculatory duct were clearly visualized. On T1WI, the periprostatic fat plane was more clearly visualized. On transverse images, periprostatic structures were well visualized on T1WI, and on T2WI the anterior fibromuscular stroma, transition zone and peripheral zone could be readily differentiated. Coronal images were more helpful in visualizing both the central and peripheral zones; the vas deferens, ejaculatory duct and verumontanum were also more easily defined on these images. Sagittal images were helpful in depicting the anterior fibromuscular stroma, central zone and peripheral zone together with the prostatic urethra and ejaculatory duct in a single plane. High-resolution MR imaging with an endorectal surface coil can readily visualize the normal anatomy of the prostate and its related structures and may be useful in the evaluation of various diseases of the prostate and vasovesicular system.

  7. Surface Reconstruction from Parallel Curves with Application to Parietal Bone Fracture Reconstruction.

    Directory of Open Access Journals (Sweden)

    Abdul Majeed

    Full Text Available Maxillofacial traumas are common, occurring secondary to road traffic accidents, sports injuries and falls, and require sophisticated radiological imaging for precise diagnosis. Direct surgical reconstruction is complex and requires clinical expertise. Bio-modelling helps in reconstructing a surface model from 2D contours. In this manuscript we construct the 3D surface using 2D Computerized Tomography (CT) scan contours. The fractured part of the cranial vault is reconstructed using a GC1 rational cubic Ball curve with three free parameters; the 2D contours are then flipped into 3D with an equidistant z component. The constructed surface is represented by a contour-blending interpolant. At the end of this manuscript a case report of a parietal bone fracture is also illustrated by employing this method, with a Graphical User Interface (GUI) illustration.
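    For context, the (non-rational) cubic Ball basis on t in [0, 1] is

        B_0(t) = (1-t)^2, \quad B_1(t) = 2t(1-t)^2, \quad B_2(t) = 2t^2(1-t), \quad B_3(t) = t^2,

    and a rational cubic Ball curve attaches a positive weight to each control point P_i,

        r(t) = \frac{\sum_{i=0}^{3} w_i B_i(t) P_i}{\sum_{i=0}^{3} w_i B_i(t)},

    with the weights (or equivalent shape parameters) playing the role of the free parameters mentioned above and the GC1 conditions constraining the end tangents of adjacent contour segments. The exact parameterization used by the authors may differ.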

  8. Identification of surface species by vibrational normal mode analysis. A DFT study

    Science.gov (United States)

    Zhao, Zhi-Jian; Genest, Alexander; Rösch, Notker

    2017-10-01

    Infrared spectroscopy is an important experimental tool for identifying molecular species adsorbed on a metal surface that can be used in situ. Often, vibrational modes in such IR spectra of surface species are assigned and identified by comparison with the vibrational spectra of related (molecular) compounds of known structure, e.g., an organometallic cluster analogue. To check the validity of this strategy, we carried out a computational study in which we compared the normal modes of three C2Hx species (x = 3, 4) in two types of systems: as adsorbates on the Pt(111) surface and as ligands in an organometallic cluster compound. The results of our DFT calculations reproduce the experimentally observed frequencies with deviations of at most 50 cm-1. However, the frequencies of the C2Hx species in both types of systems have to be interpreted with due caution if the coordination mode is unknown. The comparative identification strategy works satisfactorily when the coordination mode of the molecular species (ethylidyne) is similar on the surface and in the metal cluster. However, large shifts are encountered when the molecular species (vinyl) exhibits different coordination modes on the two types of substrates.

  9. Surface plasmon resonance biosensor for parallelized detection of protein biomarkers in diluted blood plasma

    Czech Academy of Sciences Publication Activity Database

    Piliarik, Marek; Bocková, Markéta; Homola, Jiří

    2010-01-01

    Roč. 26, č. 4 (2010), s. 1656-1661 ISSN 0956-5663 R&D Projects: GA AV ČR KAN200670701 Institutional research plan: CEZ:AV0Z20670512 Keywords : Surface plasmon resonance * Protein array * Cancer marker Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 5.361, year: 2010

  10. From Intensity Profile to Surface Normal: Photometric Stereo for Unknown Light Sources and Isotropic Reflectances.

    Science.gov (United States)

    Lu, Feng; Matsushita, Yasuyuki; Sato, Imari; Okabe, Takahiro; Sato, Yoichi

    2015-10-01

    We propose an uncalibrated photometric stereo method that works with general and unknown isotropic reflectances. Our method uses a pixel intensity profile, which is a sequence of radiance intensities recorded at a pixel under unknown varying directional illumination. We show that for general isotropic materials and uniformly distributed light directions, the geodesic distance between intensity profiles is linearly related to the angular difference between their corresponding surface normals, and that the intensity distribution of the intensity profile reveals reflectance properties. Based on these observations, we develop two methods for surface normal estimation: one for a general setting that uses only the recorded intensity profiles, the other for the case where a BRDF database is available while the exact BRDF of the target scene is still unknown. Quantitative and qualitative evaluations are conducted using both synthetic and real-world scenes, which show state-of-the-art accuracy of better than 10 degrees without using reference data and 5 degrees with reference data for all 100 materials in the MERL database.
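    The key observation can be summarized, in simplified form and not with the exact constants or estimator of the paper, as

        \arccos\left(\mathbf{n}_i \cdot \mathbf{n}_j\right) \approx c \, d_g(I_i, I_j),

    i.e. the angular difference between the surface normals at pixels i and j is approximately proportional to the geodesic distance d_g between their intensity profiles I_i and I_j, so that embedding the pairwise profile distances recovers the normal field up to a global ambiguity that is resolved by additional constraints.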

  11. Distribution of Different Sized Ocular Surface Vessels in Diabetics and Normal Individuals.

    Science.gov (United States)

    Banaee, Touka; Pourreza, Hamidreza; Doosti, Hassan; Abrishami, Mojtaba; Ehsaei, Asieh; Basiry, Mohsen; Pourreza, Reza

    2017-01-01

    To compare the distribution of different sized vessels using digital photographs of the ocular surface of diabetic and normal individuals. In this cross-sectional study, red-free conjunctival photographs of diabetic and normal individuals, aged 30-60 years, were taken under defined conditions and analyzed using a Radon transform-based algorithm for vascular segmentation. The image areas occupied by vessels (AOV) of different diameters were calculated. The main outcome measure was the distribution curve of mean AOV of different sized vessels. Secondary outcome measures included total AOV and standard deviation (SD) of AOV of different sized vessels. Two hundred and sixty-eight diabetic patients and 297 normal (control) individuals were included, differing in age (45.50 ± 5.19 vs. 40.38 ± 6.19 years, P distribution curves of mean AOV differed between patients and controls (smaller AOV for larger vessels in patients; P distribution curve of vessels compared to controls. Presence of diabetes mellitus is associated with contraction of larger vessels in the conjunctiva. Smaller vessels dilate with diabetic retinopathy. These findings may be useful in the photographic screening of diabetes mellitus and retinopathy.

  12. Parallel image-acquisition in continuous-wave electron paramagnetic resonance imaging with a surface coil array: Proof-of-concept experiments

    Science.gov (United States)

    Enomoto, Ayano; Hirata, Hiroshi

    2014-02-01

    This article describes a feasibility study of parallel image-acquisition using a two-channel surface coil array in continuous-wave electron paramagnetic resonance (CW-EPR) imaging. Parallel EPR imaging was performed by multiplexing of EPR detection in the frequency domain. The parallel acquisition system consists of two surface coil resonators and radiofrequency (RF) bridges for EPR detection. To demonstrate the feasibility of this method of parallel image-acquisition with a surface coil array, three-dimensional EPR imaging was carried out using a tube phantom. Technical issues in the multiplexing method of EPR detection were also clarified. We found that degradation in the signal-to-noise ratio due to the interference of RF carriers is a key problem to be solved.

  13. Bistatic scattering from a three-dimensional object above a two-dimensional randomly rough surface modeled with the parallel FDTD approach.

    Science.gov (United States)

    Guo, L-X; Li, J; Zeng, H

    2009-11-01

    We present an investigation of the electromagnetic scattering from a three-dimensional (3-D) object above a two-dimensional (2-D) randomly rough surface. A Message Passing Interface-based parallel finite-difference time-domain (FDTD) approach is used, and the uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different numbers of processors is illustrated for one rough surface realization and shows that the computation time of our parallel FDTD algorithm is dramatically reduced relative to a single-processor implementation. Finally, the composite scattering coefficients versus the scattering and azimuthal angles are presented and analyzed for different conditions, including the surface roughness, the dielectric constants, the polarization, and the size of the 3-D object.

  14. Evaluation of fault-normal/fault-parallel directions rotated ground motions for response history analysis of an instrumented six-story building

    Science.gov (United States)

    Kalkan, Erol; Kwong, Neal S.

    2012-01-01

    According to regulatory building codes in the United States (for example, the 2010 California Building Code), at least two horizontal ground-motion components are required for three-dimensional (3D) response history analysis (RHA) of buildings. For sites within 5 km of an active fault, these records should be rotated to the fault-normal/fault-parallel (FN/FP) directions, and two RHAs should be performed separately (with FN and then FP aligned with the transverse direction of the structural axes). It is assumed that this approach will lead to two sets of responses that envelop the range of possible responses over all nonredundant rotation angles. This assumption is examined here using a 3D computer model of a six-story reinforced-concrete instrumented building subjected to an ensemble of bidirectional near-fault ground motions. Peak responses of engineering demand parameters (EDPs) were obtained for rotation angles ranging from 0° through 180° to evaluate the FN/FP directions. It is demonstrated that rotating ground motions to the FN/FP directions (1) does not always lead to the maximum responses over all angles, (2) does not always envelop the range of possible responses, and (3) does not provide maximum responses for all EDPs simultaneously even if it provides a maximum response for a specific EDP.
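    A small helper of the kind used in such angle sweeps is sketched below: it rotates a pair of recorded horizontal components through an angle theta, with the FN/FP pair obtained when theta equals the angle from the recorded axes to the fault-normal direction. This is a generic illustration, not the authors' processing code.

        import numpy as np

        def rotate_components(a1, a2, theta_deg):
            """Rotate two orthogonal horizontal histories by theta (degrees)."""
            t = np.radians(theta_deg)
            r1 = np.cos(t) * a1 + np.sin(t) * a2
            r2 = -np.sin(t) * a1 + np.cos(t) * a2
            return r1, r2

        # Sweep the nonredundant angles and track the peak of some response proxy.
        a1, a2 = np.random.randn(2, 1000)   # placeholder records
        peaks = [np.abs(rotate_components(a1, a2, th)[0]).max() for th in range(180)]
        print(int(np.argmax(peaks)), max(peaks))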

  15. Parallel Study of HEND, RAD, and DAN Instrument Response to Martian Radiation and Surface Conditions

    Science.gov (United States)

    Martiniez Sierra, Luz Maria; Jun, Insoo; Litvak, Maxim; Sanin, Anton; Mitrofanov, Igor; Zeitlin, Cary

    2015-01-01

    Nuclear detection methods are being used to understand the radiation environment at Mars. JPL (Jet Propulsion Laboratory) assets at Mars include the 2001 Mars Odyssey orbiter [High Energy Neutron Detector (HEND)] and the Mars Science Laboratory rover Curiosity [Radiation Assessment Detector (RAD) and Dynamic Albedo Neutron (DAN)]. The spacecraft carry instruments able to detect ionizing and non-ionizing radiation. The response of these instruments, on orbit and on the surface of Mars, to space weather and local conditions is discussed. Data are available at the NASA Planetary Data System (PDS).

  16. RF pulse methods for use with surface coils: Frequency-modulated pulses and parallel transmission

    Science.gov (United States)

    Garwood, Michael; Uğurbil, Kamil

    2018-06-01

    The first use of a surface coil to obtain a 31P NMR spectrum from an intact rat by Ackerman and colleagues initiated a revolution in magnetic resonance imaging (MRI) and spectroscopy (MRS). Today, we take it for granted that one can detect signals in regions external to an RF coil; at the time, however, this concept was most unusual. In the approximately four decade long period since its introduction, this simple idea gave birth to an increasing number of innovations that has led to transformative changes in the way we collect data in an in vivo magnetic resonance experiment, particularly with MRI of humans. These innovations include spatial localization and/or encoding based on the non-uniform B1 field generated by the surface coil, leading to new spectroscopic localization methods, image acceleration, and unique RF pulses that deal with B1 inhomogeneities and even reduce power deposition. Without the surface coil, many of the major technological advances that define the extraordinary success of MRI in clinical diagnosis and in biomedical research, as exemplified by projects like the Human Connectome Project, would not have been possible.

  17. Potential fields on the ventricular surface of the exposed dog heart during normal excitation.

    Science.gov (United States)

    Arisi, G; Macchi, E; Baruffi, S; Spaggiari, S; Taccardi, B

    1983-06-01

    We studied the normal spread of excitation on the anterior and posterior ventricular surface of open-chest dogs by recording unipolar electrograms from an array of 1124 electrodes spaced 2 mm apart. The array had the shape of the ventricular surface of the heart. The electrograms were processed by a computer and displayed as epicardial equipotential maps at 1-msec intervals. Isochrone maps also were drawn. Several new features of epicardial potential fields were identified: (1) a high number of breakthrough points; (2) the topography, apparent widths, velocities of the wavefronts and the related potential drop; (3) the topography of positive potential peaks in relation to the wavefronts. Fifteen to 24 breakthrough points were located on the anterior, and 10 to 13 on the posterior ventricular surface. Some were in previously described locations and many others in new locations. Specifically, 3 to 5 breakthrough points appeared close to the atrioventricular groove on the anterior right ventricle and 2 to 4 on the posterior heart aspect; these basal breakthrough points appeared when a large portion of ventricular surface was still unexcited. Due to the presence of numerous breakthrough points on the anterior and posterior aspect of the heart which had not previously been described, the spread of excitation on the ventricular surface was "mosaic-like," with activation wavefronts spreading in all directions, rather than radially from the two breakthrough points, as traditionally described. The positive potential peaks which lay ahead of the expanding wavefronts moved along preferential directions which were probably related to the myocardial fiber direction.

  18. Coupling of morphology to surface transport in ion-beam-irradiated surfaces: normal incidence and rotating targets

    International Nuclear Information System (INIS)

    Munoz-Garcia, Javier; Cuerno, Rodolfo; Castro, Mario

    2009-01-01

    Continuum models have proved their applicability to describe nanopatterns produced by ion-beam sputtering of amorphous or amorphizable targets at low and medium energies. Here we pursue the recently introduced 'hydrodynamic approach' in the cases of bombardment at normal incidence, or of oblique incidence onto rotating targets, known to lead to self-organized arrangements of nanodots. Our approach stresses the dynamical roles of material (defect) transport at the target surface and of local redeposition. By applying results previously derived for arbitrary angles of incidence, we derive effective evolution equations for these geometries of incidence, which are then numerically studied. Moreover, we show that within our model these equations are identical (albeit with different coefficients) in both cases, provided surface tension is isotropic in the target. We thus account for the common dynamics for both types of incidence conditions, namely formation of dots with short-range order and long-wavelength disorder, and an intermediate coarsening of dot features that improves the local order of the patterns. We provide for the first time approximate analytical predictions for the dependence of stationary dot features (amplitude and wavelength) on phenomenological parameters, that improve upon previous linear estimates. Finally, our theoretical results are discussed in terms of experimental data.

  19. Parallel comparative studies on toxicity of quantum dots synthesized and surface engineered with different methods in vitro and in vivo

    Directory of Open Access Journals (Sweden)

    Liu F

    2017-07-01

    Full Text Available Fengjun Liu (1,*), Wen Ye (1,*), Jun Wang (2), Fengxiang Song (1), Yingsheng Cheng (3), Bingbo Zhang (2); (1) Department of Radiology, Shanghai Public Health Clinical Center; (2) Institute of Photomedicine, Shanghai Skin Disease Hospital, The Institute for Biomedical Engineering & Nano Science, Tongji University School of Medicine; (3) Department of Radiology, Shanghai Sixth People's Hospital, Shanghai Jiao Tong University, Shanghai, China; *These authors contributed equally to this work. Abstract: Quantum dots (QDs) have been considered to be promising probes for biosensing, bioimaging, and diagnosis. However, their toxicity issues caused by heavy metals in QDs remain to be addressed, in particular for their in vivo biomedical applications. In this study, a parallel comparative investigation in vitro and in vivo is presented to disclose the impact of synthetic methods and their subsequent surface modifications on the toxicity of QDs. Cellular assays after exposure to QDs were conducted, including cell viability assessment, DNA breakage study at the single-cell level, intracellular reactive oxygen species (ROS) receptor measurement, and transmission electron microscopy, to evaluate their toxicity in vitro. Mice experiments after QD administration, including analysis of hemobiological indices, pharmacokinetics, histological examination, and body weight, were further carried out to evaluate their systematic toxicity in vivo. Results show that QDs fabricated by the thermal decomposition approach in organic phase and encapsulated by an amphiphilic polymer (denoted as QDs-1) present the least toxicity in acute damage, compared with QDs surface engineered by glutathione-mediated ligand exchange (denoted as QDs-2) and the ones prepared by the coprecipitation approach in aqueous phase with mercaptopropionic acid capping (denoted as QDs-3). With the extension of the investigation time of mice respectively injected with QDs, we found that the damage caused by QDs to the organs can be

  20. Surface flatness measurement of quasi-parallel plates employing three-beam interference with strong reference beam

    Science.gov (United States)

    Sunderland, Zofia; Patorski, Krzysztof

    2016-12-01

    A big challenge for standard interferogram analysis methods such as Temporal Phase Shifting or the Fourier Transform is a parasitic set of fringes which might occur in the analyzed fringe pattern intensity distribution. It is encountered, for example, when transparent glass plates with quasi-parallel surfaces are tested in Fizeau or Twyman-Green interferometers. Besides the beams reflected from the plate front surface and from the interferometer reference, the beam reflected from the plate rear surface also plays an important role; its amplitude is comparable with the amplitudes of the other beams. As a result we face three families of high-contrast fringes which cannot be easily separated. Earlier we proposed a competitive solution for flatness measurements which relies on eliminating one of those fringe sets from the three-beam interferogram and separating the two remaining ones with the use of the 2D Continuous Wavelet Transform. In this work we cover the case when the intensity of the reference beam is significantly higher than the intensities of the two object beams. The main advantage of differentiating the beam intensities is the change in contrast of the individual fringe families. Processing of such three-beam interferograms is modified but still takes advantage of the 2D CWT. We show how to implement this method in Twyman-Green and Fizeau setups and compare this processing path and the measurement procedures with previously proposed solutions.

  1. Surface-modified CMOS IC electrochemical sensor array targeting single chromaffin cells for highly parallel amperometry measurements.

    Science.gov (United States)

    Huang, Meng; Delacruz, Joannalyn B; Ruelas, John C; Rathore, Shailendra S; Lindau, Manfred

    2018-01-01

    Amperometry is a powerful method to record quantal release events from chromaffin cells and is widely used to assess how specific drugs modify quantal size, kinetics of release, and early fusion pore properties. Surface-modified CMOS-based electrochemical sensor arrays allow simultaneous recordings from multiple cells. A reliable, low-cost technique is presented here for efficient targeting of single cells specifically to the electrode sites. An SU-8 microwell structure is patterned on the chip surface to provide insulation for the circuitry as well as cell trapping at the electrode sites. A shifted electrode design is also incorporated to increase the flexibility of the dimension and shape of the microwells. The sensitivity of the electrodes is validated by a dopamine injection experiment. Microwells with dimensions slightly larger than the cells to be trapped ensure excellent single-cell targeting efficiency, increasing the reliability and efficiency for on-chip single-cell amperometry measurements. The surface-modified device was validated with parallel recordings of live chromaffin cells trapped in the microwells. Rapid amperometric spikes with no diffusional broadening were observed, indicating that the trapped and recorded cells were in very close contact with the electrodes. The live cell recording confirms in a single experiment that spike parameters vary significantly from cell to cell but the large number of cells recorded simultaneously provides the statistical significance.

  2. The normalization of surface anisotropy effects present in SEVIRI reflectances by using the MODIS BRDF method

    DEFF Research Database (Denmark)

    Proud, Simon Richard; Zhang, Qingling; Schaaf, Crystal

    2014-01-01

    A modified version of the MODerate resolution Imaging Spectroradiometer (MODIS) bidirectional reflectance distribution function (BRDF) algorithm is presented for use in the angular normalization of surface reflectance data gathered by the Spinning Enhanced Visible and InfraRed Imager (SEVIRI...... acquisition period than the comparable MODIS products while, at the same time, removing many of the angular perturbations present within the original MSG data. The NBAR data are validated against reflectance data from the MODIS instrument and in situ data gathered at a field location in Africa throughout 2008....... It is found that the MSG retrievals are stable and are of high-quality across much of the SEVIRI disk while maintaining a higher temporal resolution than the MODIS BRDF products. However, a number of circumstances are discovered whereby the BRDF model is unable to function correctly with the SEVIRI...

  3. Characterizing the adhesion of motile and nonmotile Escherichia coli to a glass surface using a parallel-plate flow chamber.

    Science.gov (United States)

    McClaine, Jennifer W; Ford, Roseanne M

    2002-04-20

    A parallel-plate flow chamber was used to measure the attachment and detachment rates of Escherichia coli to a glass surface at various fluid velocities. The effect of flagella on adhesion was investigated by performing experiments with several E. coli strains: AW405 (motile); HCB136 (nonmotile mutant with paralyzed flagella); and HCB137 (nonmotile mutant without flagella). We compared the total attachment rates and the fraction of bacteria retained on the surface to determine how the presence and movement of the flagella influence transport to the surface and adhesion strength in this dynamic system. At the lower fluid velocities, there was no significant difference in the total attachment rates for the three bacterial strains; nonmotile strains settled at a rate that was of the same order of magnitude as the diffusion rate of the motile strain. At the highest fluid velocity, the effect of settling was minimized to better illustrate the importance of motility, and the attachment rates of both nonmotile strains were approximately five times slower than that of the motile bacteria. Thus, different processes controlled the attachment rate depending on the parameter regime in which the experiment was performed. The fractions of motile bacteria retained on the glass surface increased with increasing velocity, whereas the opposite trend was found for the nonmotile strains. This suggests that the rotation of the flagella enables cells to detach from the surface (at the lower fluid velocities) and strengthens adhesion (at higher fluid velocities), whereas nonmotile cells detach as a result of shear. There was no significant difference in the initial attachment rates of the two nonmotile species, which suggests that merely the presence of flagella was not important in this stage of biofilm development. Copyright 2002 Wiley Periodicals, Inc.

  4. Investigation of a cable-driven parallel mechanism for interaction with a variety of surface, applied to the cleaning of free-form buildings

    NARCIS (Netherlands)

    Voss, K.H.J.; van der Wijk, V.; Herder, Justus Laurens; Lenarcic, Jadran; Husty, Manfred

    2012-01-01

    In this paper, the capability of a specific cable-driven parallel mechanism to interact with a variety of surfaces is investigated. This capability could be of use in for example the cleaning of large building surfaces. A method is presented to investigate the workspace for which the cables do not

  5. Controlled parallel crystallization of lithium disilicate and diopside using a combination of internal and surface nucleation

    Directory of Open Access Journals (Sweden)

    Markus Rampf

    2016-10-01

    Full Text Available In the mid-20th century, Dr. Donald Stookey identified the importance and usability of nucleating agents and mechanisms for the development of glass-ceramic materials. Today, a number of internal and surface mechanisms, as well as combinations thereof, have been established in the production of glass-ceramic materials. In order to create new, innovative material properties the present study focuses on the precipitation of CaMgSi2O6 as a minor phase in Li2Si2O5-based glass-ceramics. In the base glass of the SiO2-Li2O-P2O5-Al2O3-K2O-MgO-CaO system, P2O5 serves as the nucleating agent for the internal precipitation of Li2Si2O5 crystals, while mechanical activation of the glass surface by means of ball milling is necessary to nucleate the minor CaMgSi2O6 crystal phase. For a successful precipitation of CaMgSi2O6, a minimum ratio of MgO and CaO in the range between 1.4 mol% and 2.9 mol% in the base glasses was determined. The nucleation and crystallization of both crystal phases take place during sintering of a powder compact. Depending on the quality of the sintering process, the dense Li2Si2O5-CaMgSi2O6 glass-ceramics show a mean biaxial strength of up to 392 ± 98 MPa. The microstructure of the glass-ceramics is formed by large (5-10 µm) bar-like CaMgSi2O6 crystals randomly embedded in a matrix of small (≤ 0.5 µm) plate-like Li2Si2O5 crystals arranged in an interlocking manner. While there is no significant influence of the minor CaMgSi2O6 phase on the strength of the material, the translucency of the material decreases upon precipitation of the minor phase.

  6. Cell surface glycopeptides from human intestinal epithelial cell lines derived from normal colon and colon adenocarcinomas

    International Nuclear Information System (INIS)

    Youakim, A.; Herscovics, A.

    1985-01-01

    The cell surface glycopeptides from an epithelial cell line (CCL 239) derived from normal human colon were compared with those from three cell lines (HCT-8R, HCT-15, and CaCo-2) derived independently from human colonic adenocarcinomas. Cells were incubated with D-[2-3H]mannose or L-[5,6-3H]fucose for 24 h and treated with trypsin to release cell surface components which were then digested exhaustively with Pronase and fractionated on Bio-Gel P-6 before and after treatment with endo-beta-N-acetylglucosaminidase H. The most noticeable difference between the labeled glycopeptides from the tumor and CCL 239 cells was the presence in the former of an endo-beta-N-acetylglucosaminidase H-resistant high molecular weight glycopeptide fraction which was eluted in the void volume of Bio-Gel P-6. This fraction was obtained with both labeled mannose and fucose as precursors. However, acid hydrolysis of this fraction obtained after incubation with [2-3H]mannose revealed that as much as 60-90% of the radioactivity was recovered as fucose. Analysis of the total glycopeptides (cell surface and cell pellet) obtained after incubation with [2-3H]mannose showed that from 40-45% of the radioactivity in the tumor cells and less than 10% of the radioactivity in the CCL 239 cells was recovered as fucose. After incubation of the HCT-8R cells with D-[1,6-3H]glucosamine and L-[1-14C]fucose, strong acid hydrolysis of the labeled glycopeptide fraction excluded from Bio-Gel P-6 produced 3H-labeled N-acetylglucosamine and N-acetylgalactosamine

  7. Cell-surface glycoproteins of human sarcomas: differential expression in normal and malignant tissues and cultured cells

    International Nuclear Information System (INIS)

    Rettig, W.F.; Garin-Chesa, P.; Beresford, H.R.; Oettgen, H.F.; Melamed, M.R.; Old, L.J.

    1988-01-01

    Normal differentiation and malignant transformation of human cells are characterized by specific changes in surface antigen phenotype. In the present study, the authors have defined six cell-surface antigens of human sarcomas and normal mesenchymal cells, by using mixed hemadsorption assays and immunochemical methods for the analysis of cultured cells and immunohistochemical staining for the analysis of normal tissues and > 200 tumor specimens. Differential patterns of F19, F24, G171, G253, S5, and Thy-1 antigen expression were found to characterize (i) subsets of cultured sarcoma cell lines, (ii) cultured fibroblasts derived from various organs, (iii) normal resting and activated mesenchymal tissues, and (iv) sarcoma and nonmesenchymal tumor tissues. These results provide a basic surface antigenic map for cultured mesenchymal cells and mesenchymal tissues and permit the classification of human sarcomas according to their antigenic phenotypes

  8. A fast and efficient adaptive parallel ray tracing based model for thermally coupled surface radiation in casting and heat treatment processes

    International Nuclear Information System (INIS)

    Fainberg, J; Schaefer, W

    2015-01-01

    A new algorithm for heat exchange between thermally coupled diffusely radiating interfaces is presented, which can be applied to closed and half-open transparent radiating cavities. Interfaces between opaque and transparent materials are automatically detected and subdivided into elementary radiation surfaces named tiles. Contrary to the classical view factor method, a fixed unit-sphere area subdivision oriented along the tile normal direction is projected onto the surrounding radiation mesh, and not vice versa. The total incident radiating flux of the receiver is then approximated as a direct sum of radiation intensities of representative “senders” with the same weight factor. A hierarchical scheme for the solid-angle subdivision is selected in order to minimize the total memory and computational demands during thermal calculations. Direct visibility is tested by means of a voxel-based ray tracing method accelerated by the anisotropic Chebyshev distance method, which reuses the computational grid as a Chebyshev one. The ray tracing algorithm is fully parallelized using MPI and takes advantage of the balanced distribution of all available tiles among all CPUs. This approach allows tracing of each particular ray without any communication. The algorithm has been implemented in a commercial casting process simulation software. The accuracy and computational performance of the new radiation model for heat treatment, investment and ingot casting applications is illustrated using industrial examples. (paper)
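
    The empty-space-skipping idea behind the Chebyshev-distance acceleration can be illustrated compactly. The sketch below is not the authors' implementation (it uses the isotropic chessboard distance rather than the anisotropic variant, and toy data): every transparent voxel stores its Chebyshev distance to the nearest opaque voxel, so a ray marched between a sender and a receiver tile may safely jump that many voxels at once.

        import numpy as np
        from scipy import ndimage

        def chebyshev_free_distance(opaque):
            """Chessboard (Chebyshev) distance from every empty voxel to the nearest opaque voxel."""
            return ndimage.distance_transform_cdt((~opaque).astype(np.uint8), metric="chessboard")

        def tile_visible(p0, p1, opaque, free):
            """Voxel ray marching between two tile centres with empty-space skipping."""
            p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
            d = p1 - p0
            length = np.linalg.norm(d)
            d /= length
            hi = np.array(opaque.shape) - 1
            t = 0.0
            while t < length:
                i, j, k = np.clip((p0 + t * d).astype(int), 0, hi)
                if opaque[i, j, k]:
                    return False                    # an opaque voxel blocks this sender/receiver pair
                t += max(free[i, j, k] - 1, 1)      # safe jump: nothing opaque within this L-inf radius
            return True

        # toy example: a 64^3 grid with an opaque slab between two probe points
        opaque = np.zeros((64, 64, 64), dtype=bool)
        opaque[:, 30:33, :] = True
        free = chebyshev_free_distance(opaque)
        print(tile_visible((5, 5, 5), (60, 60, 60), opaque, free))   # False: blocked by the slab
        print(tile_visible((5, 5, 5), (60, 5, 60), opaque, free))    # True: path stays below the slab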

  9. Research surface resistance of copper normal and abnormal skin-effects depending on the frequency of electromagnetic field

    International Nuclear Information System (INIS)

    Kutovyi, V.A.; Komir, A.I.

    2013-01-01

    The frequency dependence of the surface resistance of copper was studied for diffuse and specular reflection of electrons from the conducting surface of a high-frequency resonant system, in both the normal and the anomalous skin-effect regimes. It was found that, compared with its room-temperature value, the surface resistance of copper at liquid-helium temperature is reduced by more than a factor of 10 at frequencies f ≤ 173 MHz for diffuse reflection of conduction electrons from the conducting layer, and at frequencies f ≤ 346 MHz for specular reflection.
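
    For the normal (classical) skin effect mentioned above, the surface resistance of a good conductor follows R_s = sqrt(pi·f·mu·rho), i.e. it grows as the square root of frequency and falls with decreasing resistivity; in the anomalous regime reached at liquid-helium temperature this scaling breaks down because the electron mean free path exceeds the skin depth. A small illustrative calculation (approximate room-temperature copper resistivity assumed):

        import numpy as np

        MU0 = 4e-7 * np.pi      # vacuum permeability, H/m
        RHO_CU_300K = 1.7e-8    # approximate copper resistivity at room temperature, Ohm*m

        def surface_resistance_normal(f_hz, rho=RHO_CU_300K, mu=MU0):
            """Classical skin-effect surface resistance R_s = sqrt(pi * f * mu * rho)."""
            return np.sqrt(np.pi * f_hz * mu * rho)

        for f in (173e6, 346e6):
            print(f"{f / 1e6:.0f} MHz: R_s ~ {surface_resistance_normal(f) * 1e3:.2f} mOhm")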

  10. Development of novel series and parallel sensing system based on nanostructured surface enhanced Raman scattering substrate for biomedical application

    Science.gov (United States)

    Chang, Te-Wei

    With the advance of nanofabrication, the capability of nanoscale metallic structure fabrication opens a whole new field of study in nanoplasmonics, which is defined as the investigation of photon-electron interaction in the vicinity of nanoscale metallic structures. The strong oscillation of free electrons at the interface between metal and surrounding dielectric material caused by propagating surface plasmon resonance (SPR) or localized surface plasmon resonance (LSPR) enables a variety of new applications in different areas, especially biological sensing techniques. One of the promising biological sensing applications of surface plasmon polaritons is surface enhanced Raman spectroscopy (SERS), which significantly reinforces the feeble signal of traditional Raman scattering by at least 10^4 times. It enables highly sensitive and precise molecule identification with the assistance of a SERS substrate. Until now, the design of new SERS substrate fabrication processes is still thriving since no dominant design has emerged yet. The ideal process should be able to achieve both a high sensitivity and low cost device in a simple and reliable way. In this thesis two promising approaches for fabricating nanostructured SERS substrates are proposed: a thermal dewetting technique and a nanoimprint replica technique. These two techniques are demonstrated to show the capability of fabricating high performance SERS substrates in a reliable and cost efficient fashion. In addition, these two techniques have their own unique characteristics and can be integrated with other sensing techniques to build a serial or parallel sensing system. The breakthrough of a combination system with different sensing techniques overcomes the inherent limitations of SERS detection and leverages it to a whole new level of systematic sensing. The development of a sensing platform based on the thermal dewetting technique is covered as the first half of this thesis. The process optimization, selection of substrate material

  11. A study on fungal flora of the normal eye surface in Iranian native cattle

    Directory of Open Access Journals (Sweden)

    tohid nouri

    2014-11-01

    The microflora of the normal ocular surface is one of the sources supplying fungal agents for keratomycosis. This study was conducted to identify fungal isolates of the conjunctiva in clinically healthy Iranian native cattle in the Urmia district. Swabs were taken from both eyes of the cattle (n=45) and cultured onto Sabouraud dextrose agar with chloramphenicol and malt extract agar. Plates were incubated at 25°C and examined for 7 days. Data were analyzed for the effect of age and sex by Fisher's exact test. Thirteen cattle (28.89%) were found to be positive for fungal growth. The isolated fungal genera were Aspergillus spp. - 7 cases (53.84%), Penicillium spp. - 6 cases (46.15%), Rhodotorula sp. - 1 case (7.69%) and Candida sp. - 1 case (7.69%). Yeast genera represented 13.3% of all the isolates. Sex and age of cattle had no significant effect on the prevalence of isolates. The incidence of fungal colonization of the eyes compared with similar studies was low, which may reflect differences in season and technique of sampling. The unexpectedly high frequency of Aspergillus may be due to geographic differences.

  12. The Normalization of Surface Anisotropy Effects Present in SEVIRI Reflectances by Using the MODIS BRDF Method

    Science.gov (United States)

    Proud, Simon Richard; Zhang, Qingling; Schaaf, Crystal; Fensholt, Rasmus; Rasmussen, Mads Olander; Shisanya, Chris; Mutero, Wycliffe; Mbow, Cheikh; Anyamba, Assaf; Pak, Ed

    2014-01-01

    A modified version of the MODerate resolution Imaging Spectroradiometer (MODIS) bidirectional reflectance distribution function (BRDF) algorithm is presented for use in the angular normalization of surface reflectance data gathered by the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) aboard the geostationary Meteosat Second Generation (MSG) satellites. We present early and provisional daily nadir BRDF-adjusted reflectance (NBAR) data in the visible and near-infrared MSG channels. These utilize the high temporal resolution of MSG to produce BRDF retrievals with a greatly reduced acquisition period compared with the comparable MODIS products while, at the same time, removing many of the angular perturbations present within the original MSG data. The NBAR data are validated against reflectance data from the MODIS instrument and in situ data gathered at a field location in Africa throughout 2008. It is found that the MSG retrievals are stable and of high quality across much of the SEVIRI disk while maintaining a higher temporal resolution than the MODIS BRDF products. However, a number of circumstances are discovered whereby the BRDF model is unable to function correctly with the SEVIRI observations, primarily because of an insufficient spread of angular data due to the fixed sensor location or localized cloud contamination.

  13. The Parallel SBAS-DInSAR algorithm: an effective and scalable tool for Earth's surface displacement retrieval

    Science.gov (United States)

    Zinno, Ivana; De Luca, Claudio; Elefante, Stefano; Imperatore, Pasquale; Manunta, Michele; Casu, Francesco

    2014-05-01

    been carried out on real data acquired by ENVISAT and COSMO-SkyMed sensors. Moreover, the P-SBAS performances with respect to the size of the input dataset will also be investigated. This kind of analysis is essential for assessing the goodness of the P-SBAS algorithm and gaining insight into its applicability to different scenarios. Besides, such results will also become crucial to identify and evaluate how to appropriately exploit P-SBAS to process the forthcoming large Sentinel-1 data stream. References [1] Massonnet, D., Briole, P., Arnaud, A., "Deflation of Mount Etna monitored by Spaceborne Radar Interferometry", Nature, vol. 375, pp. 567-570, 1995. [2] Berardino, P., G. Fornaro, R. Lanari, and E. Sansosti, "A new algorithm for surface deformation monitoring based on small baseline differential SAR interferograms", IEEE Trans. Geosci. Remote Sens., vol. 40, no. 11, pp. 2375-2383, Nov. 2002. [3] Elefante, S., Imperatore, P. , Zinno, I., M. Manunta, E. Mathot, F. Brito, J. Farres, W. Lengert, R. Lanari, F. Casu, "SBAS-DINSAR Time series generation on cloud computing platforms", IEEE IGARSS 2013, July 2013, Melbourne (AU). [4] Zinno, P. Imperatore, S. Elefante, F. Casu, M. Manunta, E. Mathot, F. Brito, J. Farres, W. Lengert, R. Lanari, "A Novel Parallel Computational Framework for Processing Large INSAR Data Sets", Living Planet Symposium 2013, Sept. 9-13, 2013.

  14. Attractor hopping between polarization dynamical states in a vertical-cavity surface-emitting laser subject to parallel optical injection

    Science.gov (United States)

    Denis-le Coarer, Florian; Quirce, Ana; Valle, Angel; Pesquera, Luis; Rodríguez, Miguel A.; Panajotov, Krassimir; Sciamanna, Marc

    2018-03-01

    We present experimental and theoretical results of noise-induced attractor hopping between dynamical states found in a single transverse mode vertical-cavity surface-emitting laser (VCSEL) subject to parallel optical injection. These transitions involve dynamical states with different polarizations of the light emitted by the VCSEL. We report an experimental map identifying, in the injected power-frequency detuning plane, regions where attractor hopping between two, or even three, different states occur. The transition between these behaviors is characterized by using residence time distributions. We find multistability regions that are characterized by heavy-tailed residence time distributions. These distributions are characterized by a -1.83 ±0.17 power law. Between these regions we find coherence enhancement of noise-induced attractor hopping in which transitions between states occur regularly. Simulation results show that frequency detuning variations and spontaneous emission noise play a role in causing switching between attractors. We also find attractor hopping between chaotic states with different polarization properties. In this case, simulation results show that spontaneous emission noise inherent to the VCSEL is enough to induce this hopping.

  15. Improved performance of parallel surface/packed-bed discharge reactor for indoor VOCs decomposition: optimization of the reactor structure

    International Nuclear Information System (INIS)

    Jiang, Nan; Hui, Chun-Xue; Li, Jie; Lu, Na; Shang, Ke-Feng; Wu, Yan; Mizuno, Akira

    2015-01-01

    The purpose of this paper is to develop a high-efficiency air-cleaning system for volatile organic compounds (VOCs) existing in the workshop of a chemical factory. A novel parallel surface/packed-bed discharge (PSPBD) reactor, which utilized a combination of surface discharge (SD) plasma with packed-bed discharge (PBD) plasma, was designed and employed for VOCs removal in a closed vessel. In order to optimize the structure of the PSPBD reactor, the discharge characteristic, benzene removal efficiency, and energy yield were compared for different discharge lengths, quartz tube diameters, shapes of external high-voltage electrode, packed-bed discharge gaps, and packing pellet sizes, respectively. In the circulation test, 52.8% of benzene was removed and the energy yield achieved 0.79 mg kJ^-1 after a 210 min discharge treatment in the PSPBD reactor, which was 10.3% and 0.18 mg kJ^-1 higher, respectively, than in the SD reactor, and 21.8% and 0.34 mg kJ^-1 higher, respectively, than in the PBD reactor at 53 J l^-1. The improved performance in benzene removal and energy yield can be attributed to the plasma chemistry effect of the sequential processing in the PSPBD reactor. The VOCs mineralization and organic intermediates generated during discharge treatment were followed by COx selectivity and FT-IR analyses. The experimental results indicate that the PSPBD plasma process is an effective and energy-efficient approach for VOCs removal in an indoor environment. (paper)

  16. Study of MPI based on parallel MOM on PC clusters for EM-beam scattering by 2-D PEC rough surfaces

    International Nuclear Information System (INIS)

    Jun, Ma; Li-Xin, Guo; An-Qi, Wang

    2009-01-01

    This paper first applies finite impulse response (FIR) filter theory combined with the fast Fourier transform (FFT) method to generate a two-dimensional Gaussian rough surface. Using the electric field integral equation (EFIE), it introduces the method of moments (MOM) with the RWG vector basis function and Galerkin's method to investigate electromagnetic beam scattering by a two-dimensional PEC Gaussian rough surface on personal computer (PC) clusters. The details of the parallel conjugate gradient method (CGM) for solving the matrix equation are also presented, and the numerical simulations are obtained through the message passing interface (MPI) platform on the PC clusters. It is found that the parallel MOM supplies a novel technique for solving two-dimensional rough-surface electromagnetic-scattering problems. The influences of the root-mean-square height, the correlation length and the polarization on the beam scattering characteristics of two-dimensional PEC Gaussian rough surfaces are finally discussed. (classical areas of phenomenology)
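
    The FIR/FFT surface-generation step described above amounts to spectrally filtering white noise with the square root of the power spectrum of a Gaussian correlation function. A minimal numpy sketch of this idea (not the paper's code; the realization is rescaled to the requested rms height, so the absolute spectral normalization is omitted):

        import numpy as np

        def gaussian_rough_surface(n, dx, rms_height, corr_length, seed=0):
            """n x n realization of a Gaussian-correlated random rough surface via FFT filtering."""
            rng = np.random.default_rng(seed)
            k = 2 * np.pi * np.fft.fftfreq(n, dx)           # angular wavenumbers
            kx, ky = np.meshgrid(k, k, indexing="ij")
            # square root of the Gaussian power spectrum with correlation length corr_length
            filt = np.exp(-(kx**2 + ky**2) * corr_length**2 / 8.0)
            h = np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) * filt).real
            return h * rms_height / h.std()                 # enforce the requested rms height

        surface = gaussian_rough_surface(n=256, dx=0.05, rms_height=0.1, corr_length=1.0)
        print(surface.std())   # ~0.1 by construction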

  17. 3D modeling to characterize lamina cribrosa surface and pore geometries using in vivo images from normal and glaucomatous eyes

    Science.gov (United States)

    Sredar, Nripun; Ivers, Kevin M.; Queener, Hope M.; Zouridakis, George; Porter, Jason

    2013-01-01

    En face adaptive optics scanning laser ophthalmoscope (AOSLO) images of the anterior lamina cribrosa surface (ALCS) represent a 2D projected view of a 3D laminar surface. Using spectral domain optical coherence tomography images acquired in living monkey eyes, a thin plate spline was used to model the ALCS in 3D. The 2D AOSLO images were registered and projected onto the 3D surface that was then tessellated into a triangular mesh to characterize differences in pore geometry between 2D and 3D images. Following 3D transformation of the anterior laminar surface in 11 normal eyes, mean pore area increased by 5.1 ± 2.0% with a minimal change in pore elongation (mean change = 0.0 ± 0.2%). These small changes were due to the relatively flat laminar surfaces inherent in normal eyes (mean radius of curvature = 3.0 ± 0.5 mm). The mean increase in pore area was larger following 3D transformation in 4 glaucomatous eyes (16.2 ± 6.0%) due to their more steeply curved laminar surfaces (mean radius of curvature = 1.3 ± 0.1 mm), while the change in pore elongation was comparable to that in normal eyes (−0.2 ± 2.0%). This 3D transformation and tessellation method can be used to better characterize and track 3D changes in laminar pore and surface geometries in glaucoma. PMID:23847739
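
    The thin-plate-spline step above, and the resulting 2D-to-3D area correction, can be sketched with standard tools. The snippet below is only an illustration with synthetic points (not the study's data or code): a thin-plate-spline surface z = f(x, y) is fitted and the local area-scaling factor sqrt(1 + fx^2 + fy^2) quantifies how much a projected (en face) area element grows on the curved laminar surface.

        import numpy as np
        from scipy.interpolate import Rbf

        rng = np.random.default_rng(0)
        x, y = rng.random((2, 200))                                   # synthetic (x, y) sample locations
        z = 0.3 * ((x - 0.5)**2 + (y - 0.5)**2) + 0.005 * rng.standard_normal(200)  # cup-shaped toy "ALCS"

        tps = Rbf(x, y, z, function="thin_plate")                     # thin-plate-spline surface z = f(x, y)

        xg, yg = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
        zg = tps(xg, yg)
        fy, fx = np.gradient(zg, yg[:, 0], xg[0, :])                  # partial derivatives on the grid
        area_factor = np.sqrt(1.0 + fx**2 + fy**2)                    # local 3D/2D area ratio
        print("mean 3D/2D area ratio:", area_factor.mean())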

  18. Detachment of colloidal particles from collector surfaces with different electrostatic charge and hydrophobicity by attachment to air bubbles in a parallel plate flow chamber

    NARCIS (Netherlands)

    Suarez, CG; van der Mei, HC; Busscher, HJ

    1999-01-01

    The detachment of polystyrene particles adhering to collector surfaces with different electrostatic charge and hydrophobicity by attachment to a passing air bubble has been studied in a parallel plate flow chamber. Particle detachment decreased linearly with increasing air bubble velocity and

  19. Air bubble-induced detachment of positively and negatively charged polystyrene particles from collector surfaces in a parallel-plate flow chamber

    NARCIS (Netherlands)

    Gomez-Suarez, C; Van der Mei, HC; Busscher, HJ

    2000-01-01

    Electrostatic interactions between colloidal particles and collector surfaces were found to be important in particle detachment as induced by the passage of air bubbles in a parallel-plate flow chamber. Electrostatic interactions between adhering particles and passing air bubbles, however, were

  20. Arrays of surface-normal electroabsorption modulators for the generation and signal processing of microwave photonics signals

    NARCIS (Netherlands)

    Noharet, Bertrand; Wang, Qin; Platt, Duncan; Junique, Stéphane; Marpaung, D.A.I.; Roeloffzen, C.G.H.

    2011-01-01

    The development of an array of 16 surface-normal electroabsorption modulators operating at 1550nm is presented. The modulator array is dedicated to the generation and processing of microwave photonics signals, targeting a modulation bandwidth in excess of 5GHz. The hybrid integration of the

  1. Hydrogen-enriched non-premixed jet flames : analysis of the flame surface, flame normal, flame index and Wobbe index

    NARCIS (Netherlands)

    Ranga Dinesh, K.K.J.; Jiang, X.; Oijen, van J.A.

    2014-01-01

    A non-premixed impinging jet flame is studied using three-dimensional direct numerical simulation with detailed chemical kinetics in order to investigate the influence of fuel variability on flame surface, flame normal, flame index and Wobbe index for hydrogen-enriched combustion. Analyses indicate
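
    For reference, the two indices named in the title have simple definitions; the sketch below is a generic illustration (not the authors' post-processing code), with approximate fuel properties for hydrogen and methane. The Wobbe index measures fuel interchangeability, and the Takeno flame index distinguishes premixed-like from non-premixed-like burning from the alignment of fuel and oxidizer mass-fraction gradients.

        import numpy as np

        def wobbe_index(higher_heating_value, relative_density):
            """Wobbe index = HHV / sqrt(specific gravity relative to air)."""
            return higher_heating_value / np.sqrt(relative_density)

        def takeno_flame_index(grad_Y_fuel, grad_Y_oxid):
            """Takeno flame index: positive where fuel and oxidizer mass-fraction gradients align
            (premixed-like), negative where they oppose (non-premixed-like).
            Gradients are assumed stacked as (3, nx, ny, nz)."""
            return np.sum(grad_Y_fuel * grad_Y_oxid, axis=0)

        # approximate volumetric HHV (MJ/m^3) and density relative to air for H2 and CH4
        print("Wobbe H2 :", wobbe_index(12.7, 0.070))   # ~48 MJ/m^3
        print("Wobbe CH4:", wobbe_index(39.8, 0.554))   # ~53 MJ/m^3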

  2. Resonant and kinematical enhancement of He scattering from LiF(001) surface and pseudosurface vibrational normal modes

    International Nuclear Information System (INIS)

    Nichols, W.L.; Weare, J.H.

    1986-01-01

    One-phonon cross sections calculated from sagittally polarized vibrational normal modes account for the most salient inelastic-scattering intensities seen in the He-LiF(001) measurements published by Brusdeylins, Doak, and Toennies. We have found that most inelastic intensities which cannot be attributed to potential resonances can be explained as kinematically enhanced scattering from both surface and pseudosurface bulk modes.

  3. Surface morphology of active normal faults in hard rock: Implications for the mechanics of the Asal Rift, Djibouti

    Science.gov (United States)

    Pinzuti, Paul; Mignan, Arnaud; King, Geoffrey C. P.

    2010-10-01

    Tectonic-stretching models have previously been proposed to explain the process of continental break-up through the example of the Asal Rift, Djibouti, one of the few places where the early stages of seafloor spreading can be observed. In these models, deformation is distributed starting at the base of a shallow seismogenic zone, in which sub-vertical normal faults are responsible for subsidence whereas cracks accommodate extension. Alternative models suggest that extension results from localised magma intrusion, with normal faults accommodating extension and subsidence only above the maximum reach of the magma column. In these magmatic rifting models, or so-called magmatic intrusion models, normal faults have dips of 45-55° and root into dikes. Vertical profiles of normal fault scarps from a levelling campaign in the Asal Rift, where normal faults appear sub-vertical at the surface, have been analysed to discuss the creation and evolution of normal faults in massive fractured rocks (basalt lava flows), using mechanical and kinematic concepts. We show that the studied normal fault planes actually have an average dip ranging between 45° and 65° and are characterised by an irregular stepped form. We suggest that these normal fault scarps correspond to sub-vertical en echelon structures and that, at greater depth, these scarps combine and give birth to dipping normal faults. The results of our analysis are compatible with the magmatic intrusion models rather than the tectonic-stretching models. The geometry of faulting between the Fieale volcano and Lake Asal in the Asal Rift can be simply related to the depth of diking, which in turn can be related to magma supply. This new view supports the magmatic intrusion model of the early stages of continental break-up.

  4. PFLOTRAN User Manual: A Massively Parallel Reactive Flow and Transport Model for Describing Surface and Subsurface Processes

    Energy Technology Data Exchange (ETDEWEB)

    Lichtner, Peter C. [OFM Research, Redmond, WA (United States); Hammond, Glenn E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lu, Chuan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Karra, Satish [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bisht, Gautam [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Andre, Benjamin [National Center for Atmospheric Research, Boulder, CO (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mills, Richard [Intel Corporation, Portland, OR (United States); Univ. of Tennessee, Knoxville, TN (United States); Kumar, Jitendra [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-20

    PFLOTRAN solves a system of generally nonlinear partial differential equations describing multi-phase, multicomponent and multiscale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Parallelization is achieved through domain decomposition using the PETSc (Portable Extensible Toolkit for Scientific Computation) libraries for the parallelization framework (Balay et al., 1997). PFLOTRAN has been developed from the ground up for parallel scalability and has been run on up to 2^18 processor cores with problem sizes up to 2 billion degrees of freedom. Written in object-oriented Fortran 90, the code requires the latest compilers compatible with Fortran 2003. At the time of this writing this requires gcc 4.7.x, Intel 12.1.x and PGI compilers. As a requirement of running problems with a large number of degrees of freedom, PFLOTRAN allows reading input data that is too large to fit into the memory allotted to a single processor core. The current limitation to the problem size PFLOTRAN can handle is the limitation of the HDF5 file format used for parallel IO to 32-bit integers. Noting that 2^32 = 4,294,967,296, this gives an estimate of the maximum problem size that can currently be run with PFLOTRAN. Hopefully this limitation will be remedied in the near future.

  5. Evolution of normal stress and surface roughness in buckled thin films

    NARCIS (Netherlands)

    Palasantzas, G; De Hosson, JTM

    2003-01-01

    In this work we investigate buckling of compressed elastic thin films, which are bonded onto a viscous layer of finite thickness. It is found that the normal stress exerted by the viscous layer on the elastic film evolves with time showing a minimum at early buckling stages, while it increases at

  6. A detailed chemistry model for transient hydrogen and carbon monoxide catalytic recombination on parallel flat Pt surfaces implemented in an integral code

    International Nuclear Information System (INIS)

    Jimenez, Miguel A.; Martin-Valdepenas, Juan M.; Martin-Fuertes, Francisco; Fernandez, Jose A.

    2007-01-01

    A detailed chemistry model has been adapted and developed for surface chemistry, heat and mass transfer between H2/CO/air/steam/CO2 mixtures and vertical parallel Pt-coated surfaces. This model is based on a simplified Deutschmann reaction scheme for methane surface combustion and the analysis by Elenbaas for buoyancy-induced heat transfer between parallel plates. Mass transfer is treated by the heat and mass transfer analogy. The proposed model is able to simulate the H2/CO recombination phenomena characteristic of parallel-plate Passive Autocatalytic Recombiners (PARs), which have been proposed and implemented as a promising hydrogen-control strategy for the safety of nuclear power stations and other industries. The transient model is able to approach the warm-up phase of the PAR and its shut-down, as well as the dynamic changes within the surrounding atmosphere. The model has been implemented within the MELCOR code and assessed against results of the Battelle Model Containment tests of the Zx series. Results show accurate predictions and a better performance than traditional methods in integral codes, i.e. empirical correlations, which are also highly case-specific. The influence of CO present in the mixture on the PAR performance is also addressed in this paper

  7. Normalization in quantitative [18F]FDG PET imaging: the 'body surface area' may be a volume

    International Nuclear Information System (INIS)

    Laffon, Eric; Suarez, Kleydis; Berthoumieu, Yannick; Ducassou, Dominique; Marthan, Roger

    2006-01-01

    Non-invasive methods for quantifying [18F]FDG uptake in tumours often require normalization to either body weight or body surface area (BSA), as a surrogate for [18F]FDG distribution volume (DV). Whereas three dimensions are involved in DV and weight (assuming that weight is proportional to volume), only two dimensions are obviously involved in BSA. However, a fractal geometry interpretation, related to an allometric scaling, suggests that the so-called 'body surface area' may stand for DV. (note)
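
    The two normalizations discussed in this note are commonly written as standardized uptake values. The sketch below is a generic illustration (unit conventions vary between sites and are an assumption here): SUV_bw divides the measured activity concentration by injected dose per body weight, while SUV_bsa divides it by injected dose per Du Bois body surface area.

        def bsa_dubois(weight_kg, height_cm):
            """Du Bois body surface area estimate in m^2."""
            return 0.007184 * weight_kg**0.425 * height_cm**0.725

        def suv_bw(conc_kbq_per_ml, dose_mbq, weight_kg):
            """SUV normalized to body weight (dimensionless, assuming 1 g of tissue ~ 1 mL)."""
            return conc_kbq_per_ml * (weight_kg * 1e3) / (dose_mbq * 1e3)

        def suv_bsa(conc_kbq_per_ml, dose_mbq, weight_kg, height_cm):
            """SUV normalized to body surface area (conventionally quoted in cm^2/mL)."""
            return conc_kbq_per_ml * (bsa_dubois(weight_kg, height_cm) * 1e4) / (dose_mbq * 1e3)

        print(suv_bw(5.0, 350.0, 75.0), suv_bsa(5.0, 350.0, 75.0, 175.0))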

  8. Formation times of RbHe exciplexes on the surface of superfluid versus normal fluid helium nanodroplets

    International Nuclear Information System (INIS)

    Droppelmann, G.; Buenermann, O.; Stienkemeier, F.; Schulz, C.P.

    2004-01-01

    Nanodroplets of either superfluid 4He or normal fluid 3He are doped with Rb atoms that are bound to the surface of the droplets. The formation of RbHe exciplexes upon 5P3/2 excitation is monitored in real time by femtosecond pump-probe techniques. We find formation times of 8.5 and 11.6 ps for Rb on 4He and on 3He droplets, respectively. A comparison to calculations based on a tunneling model introduced for these systems by Reho et al. [J. Chem. Phys. 113, 9694 (2000)] shows that the proposed mechanism cannot account for our findings. Apparently, a different relaxation dynamics of the superfluid as opposed to the normal fluid surface is responsible for the observed formation times

  9. DEVELOPMENT AND USE OF A PARALLEL-PLATE FLOW CHAMBER FOR STUDYING CELLULAR ADHESION TO SOLID-SURFACES

    NARCIS (Netherlands)

    VANKOOTEN, TG; SCHAKENRAAD, JM; VANDERMEI, HC; BUSSCHER, HJ

    A parallel-plate flow chamber is developed in order to study cellular adhesion phenomena. An image analysis system is used to observe individual cells exposed to flow in situ and to determine area, perimeter, and shape of these cells as a function of time and shear stress. With this flow system the

  10. Surface profiling of normally responding and nonreleasing basophils by flow cytometry

    DEFF Research Database (Denmark)

    Kistrup, Kasper; Poulsen, Lars Kærgaard; Jensen, Bettina Margrethe

    …c, C3aR, C5aR, CCR3, FPR1, ST2, CRTH2 on anti-IgE responsive and nonreleasing basophils by flow cytometry, thereby generating a surface profile of the two phenotypes. Methods: Fresh buffy coat blood (… a maximum release, blood mononuclear cells were purified by density centrifugation and, using flow cytometry, basophils, defined as FcεRIα+ CD3- CD14- CD19- CD56-, were analysed for surface expression of relevant markers. All samples were compensated and analysed in logicle display. All gates…

  11. Air loads on a rigid plate oscillating normal to a fixed surface

    NARCIS (Netherlands)

    Beltman, W.M.; van der Hoogt, Peter; Spiering, R.M.E.J.; Tijdeman, H.

    1997-01-01

    This paper deals with the theoretical and experimental investigation of a rigid, rectangular plate oscillating in the proximity of a fixed surface. The plate is suspended by springs. The air loads generated by the oscillating motion of the plate are determined. Due to the fact that the plate is

  12. IDENTIFYING RECENT SURFACE MINING ACTIVITIES USING A NORMALIZED DIFFERENCE VEGETATION INDEX (NDVI) CHANGE DETECTION METHOD

    Science.gov (United States)

    Coal mining is a major resource extraction activity in the Appalachian Mountains. The increased size and frequency of a specific type of surface mining, known as mountaintop removal-valley fill, have in recent years raised various environmental concerns. During mountainto...
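
    The core quantity of the change-detection method named in the title is the NDVI, computed per pixel from the red and near-infrared bands; a sharp NDVI drop between two dates flags candidate disturbance. A minimal sketch with placeholder rasters and an illustrative threshold (neither taken from the study):

        import numpy as np

        def ndvi(nir, red, eps=1e-9):
            """Normalized Difference Vegetation Index."""
            return (nir - red) / (nir + red + eps)

        rng = np.random.default_rng(0)
        nir_t0, red_t0 = rng.uniform(0.05, 0.6, (2, 256, 256))   # placeholder reflectances, earlier date
        nir_t1, red_t1 = rng.uniform(0.05, 0.6, (2, 256, 256))   # placeholder reflectances, later date

        delta = ndvi(nir_t0, red_t0) - ndvi(nir_t1, red_t1)
        disturbed = delta > 0.3                                  # illustrative change threshold
        print("flagged pixels:", int(disturbed.sum()))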

  13. Estimating Subglottal Pressure from Neck-Surface Acceleration during Normal Voice Production

    Science.gov (United States)

    Fryd, Amanda S.; Van Stan, Jarrad H.; Hillman, Robert E.; Mehta, Daryush D.

    2016-01-01

    Purpose: The purpose of this study was to evaluate the potential for estimating subglottal air pressure using a neck-surface accelerometer and to compare the accuracy of predicting subglottal air pressure relative to predicting acoustic sound pressure level (SPL). Method: Indirect estimates of subglottal pressure (P_sg') were obtained…

  14. Navier-Stokes Computations of a Wing-Flap Model With Blowing Normal to the Flap Surface

    Science.gov (United States)

    Boyd, D. Douglas, Jr.

    2005-01-01

    A computational study of a generic wing with a half span flap shows the mean flow effects of several blown flap configurations. The effort compares and contrasts the thin-layer, Reynolds averaged, Navier-Stokes solutions of a baseline wing-flap configuration with configurations that have blowing normal to the flap surface through small slits near the flap side edge. Vorticity contours reveal a dual vortex structure at the flap side edge for all cases. The dual vortex merges into a single vortex at approximately the mid-flap chord location. Upper surface blowing reduces the strength of the merged vortex and moves the vortex away from the upper edge. Lower surface blowing thickens the lower shear layer and weakens the merged vortex, but not as much as upper surface blowing. Side surface blowing forces the lower surface vortex farther outboard of the flap edge by effectively increasing the aerodynamic span of the flap. It is seen that there is no global aerodynamic penalty or benefit from the particular blowing configurations examined.

  15. Surface and protein analyses of normal human cell attachment on PIII-modified chitosan membranes

    International Nuclear Information System (INIS)

    Saranwong, N.; Inthanon, K.; Wongkham, W.; Wanichapichart, P.; Suwannakachorn, D.; Yu, L.D.

    2012-01-01

    The surface of chitosan membranes was modified with argon (Ar) and nitrogen (N) plasma immersion ion implantation (PIII) for attachment of human skin fibroblast F1544 cells. The modified surfaces were characterized by Fourier transform infrared spectroscopy (FTIR) and atomic force microscopy (AFM). Cell attachment patterns were evaluated by scanning electron microscopy (SEM). The enzyme-linked immunosorbent assay (ELISA) was used to quantify levels of focal adhesion kinase (FAK). The results showed that Ar-PIII had an enhancement effect on cell attachment while N-PIII had an inhibition effect. Filopodial analysis revealed more microfilament cytoplasmic spreading on the edge of cells attached on the Ar-treated membranes than on the N-treated membranes. A higher level of FAK was found in Ar-treated membranes than in N-treated membranes.

  16. Normal loads program for aerodynamic lifting surface theory. [evaluation of spanwise and chordwise loading distributions

    Science.gov (United States)

    Medan, R. T.; Ray, K. S.

    1974-01-01

    A description of and users manual are presented for a U.S.A. FORTRAN 4 computer program which evaluates spanwise and chordwise loading distributions, lift coefficient, pitching moment coefficient, and other stability derivatives for thin wings in linearized, steady, subsonic flow. The program is based on a kernel function method lifting surface theory and is applicable to a large class of planforms including asymmetrical ones and ones with mixed straight and curved edges.

  17. How Can Polarization States of Reflected Light from Snow Surfaces Inform Us on Surface Normals and Ultimately Snow Grain Size Measurements?

    Science.gov (United States)

    Schneider, A. M.; Flanner, M.; Yang, P.; Yi, B.; Huang, X.; Feldman, D.

    2016-12-01

    The Snow Grain Size and Pollution (SGSP) algorithm is a method applied to Moderate Resolution Imaging Spectroradiometer data to estimate snow grain size from space-borne measurements. Previous studies validate and quantify potential sources of error in this method, but because it assumes flat snow surfaces, large scale variations in surface normals can cause biases in its estimates due to its dependence on solar and observation zenith angles. To address these variations, we apply the Monte Carlo method for photon transport using data containing the single scattering properties of different ice crystals to calculate polarization states of reflected monochromatic light at 1500 nm from modeled snow surfaces. We evaluate the dependence of these polarization states on solar and observation geometry at 1500 nm because multiple scattering is generally a mechanism for depolarization and the ice crystals are relatively absorptive at this wavelength. Using 1500 nm thus results in a higher number of reflected photons undergoing fewer scattering events, increasing the likelihood of reflected light having higher degrees of polarization. In evaluating the validity of the model, we find agreement with previous studies pertaining to near-infrared spectral directional hemispherical reflectance (i.e. black-sky albedo) and similarities in measured bidirectional reflectance factors, but few studies exist modeling polarization states of reflected light from snow surfaces. Here, we present novel results pertaining to calculated polarization states and compare dependences on solar and observation geometry for different idealized snow surfaces. If these dependencies are consistent across different ice particle shapes and sizes, then these findings could inform the SGSP algorithm by providing useful relationships between measurable physical quantities and solar and observation geometry to better understand variations in snow surface normals from remote sensing observations.
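
    In such Monte Carlo photon-transport runs the polarization state of the reflected light is usually summarized from accumulated Stokes parameters. The helper below is a generic post-processing sketch (not the authors' code) computing the degree of linear polarization, the total degree of polarization, and the angle of linear polarization for one angular bin.

        import numpy as np

        def polarization_metrics(stokes):
            """Polarization summary from accumulated Stokes parameters (I, Q, U, V)."""
            I, Q, U, V = stokes
            dolp = np.sqrt(Q**2 + U**2) / I            # degree of linear polarization
            dop = np.sqrt(Q**2 + U**2 + V**2) / I      # total degree of polarization
            aolp = 0.5 * np.degrees(np.arctan2(U, Q))  # angle of linear polarization, degrees
            return dolp, dop, aolp

        # example: one (zenith, azimuth) bin of tallied reflected photons
        print(polarization_metrics(np.array([1.0, 0.15, 0.05, 0.0])))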

  18. Normal emission photoelectron diffraction: a new technique for determining surface structure

    International Nuclear Information System (INIS)

    Kevan, S.D.

    1980-05-01

    One technique, photoelectron diffraction (PhD), is characterized. It has some promise in surmounting some of the problems of LEED. In PhD, the differential (angle-resolved) photoemission cross-section of a core level localized on an adsorbate atom is measured as a function of some final-state parameter. The photoemission final state consists of two components, one of which propagates directly to the detector and another which scatters off the surface and then propagates to the detector. These are added coherently, and interference between the two manifests itself as cross-section oscillations which are sensitive to the local structure around the absorbing atom. We have shown that PhD deals effectively with two- and probably also three-dimensionally disordered systems. Its non-damaging and localized, atom-specific nature gives PhD a good deal of promise in dealing with molecular overlayer systems. It is concluded that while PhD will never replace LEED, it may provide useful, complementary and possibly also more accurate surface structural information

  19. Transformation (normalization) of slope gradient and surface curvatures, automated for statistical analyses from DEMs

    Science.gov (United States)

    Csillik, O.; Evans, I. S.; Drăguţ, L.

    2015-03-01

    Automated procedures are developed to alleviate long tails in frequency distributions of morphometric variables. They minimize the skewness of slope gradient frequency distributions, and modify the kurtosis of profile and plan curvature distributions toward that of the Gaussian (normal) model. Box-Cox (for slope) and arctangent (for curvature) transformations are tested on nine digital elevation models (DEMs) of varying origin and resolution, and different landscapes, and shown to be effective. Resulting histograms are illustrated and show considerable improvements over those for previously recommended slope transformations (sine, square root of sine, and logarithm of tangent). Unlike previous approaches, the proposed method evaluates the frequency distribution of slope gradient values in a given area and applies the most appropriate transform if required. Sensitivity of the arctangent transformation is tested, showing that Gaussian-kurtosis transformations are acceptable also in terms of histogram shape. Cube root transformations of curvatures produced bimodal histograms. The transforms are applicable to morphometric variables and many others with skewed or long-tailed distributions. By avoiding long tails and outliers, they permit parametric statistics such as correlation, regression and principal component analyses to be applied, with greater confidence that requirements for linearity, additivity and even scatter of residuals (constancy of error variance) are likely to be met. It is suggested that such transformations should be routinely applied in all parametric analyses of long-tailed variables. Our Box-Cox and curvature automated transformations are based on a Python script, implemented as an easy-to-use script tool in ArcGIS.
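
    Since the workflow above is script-based, the two transformations are easy to sketch. The snippet below is an illustration consistent with the description, not the authors' ArcGIS script tool: the Box-Cox lambda for slope is chosen by maximum likelihood, and the arctangent scale for curvature is tuned so that the excess kurtosis of the transformed values approaches zero (Gaussian).

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize_scalar

        def transform_slope(slope_deg):
            """Box-Cox transform of slope gradient (values shifted to be strictly positive)."""
            transformed, lam = stats.boxcox(np.asarray(slope_deg, float) + 1e-6)
            return transformed, lam

        def tune_arctan_scale(curvature):
            """Arctangent scale that brings the excess kurtosis of arctan(scale*curvature) closest to 0."""
            curvature = np.asarray(curvature, float)
            f = lambda s: abs(stats.kurtosis(np.arctan(curvature * s)))
            return minimize_scalar(f, bounds=(1e-3, 1e3), method="bounded").x

        rng = np.random.default_rng(0)
        slope = rng.lognormal(mean=1.0, sigma=0.8, size=10000)       # skewed, long-tailed slope sample
        curv = rng.standard_t(df=3, size=10000)                      # heavy-tailed curvature sample
        _, lam = transform_slope(slope)
        print("Box-Cox lambda:", lam, " arctan scale:", tune_arctan_scale(curv))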

  20. Comparative study of normal and branched alkane monolayer films adsorbed on a solid surface. I. Structure

    DEFF Research Database (Denmark)

    Enevoldsen, Ann Dorrit; Hansen, Flemming Yssing; Diama, A.

    2007-01-01

    The structure of a monolayer film of the branched alkane squalane (C30H62) adsorbed on graphite has been studied by neutron diffraction and molecular dynamics (MD) simulations and compared with a similar study of the n-alkane tetracosane (n-C24H52). Both molecules have 24 carbon atoms along their backbone and squalane has, in addition, six methyl side groups. Upon adsorption, there are significant differences as well as similarities in the behavior of these molecular films. Both molecules form ordered structures at low temperatures; however, while the melting point of the two-dimensional (2D… temperature. The neutron diffraction data show that the translational order in the squalane monolayer is significantly less than in the tetracosane monolayer. The authors' MD simulations suggest that this is caused by a distortion of the squalane molecules upon adsorption on the graphite surface. When…

  1. MR findings of facial nerve on oblique sagittal MRI using TMJ surface coil: normal vs peripheral facial nerve palsy

    International Nuclear Information System (INIS)

    Park, Yong Ok; Lee, Myeong Jun; Lee, Chang Joon; Yoo, Jeong Hyun

    2000-01-01

    To evaluate the findings of the normal facial nerve, as seen on oblique sagittal MRI using a TMJ (temporomandibular joint) surface coil, and then to evaluate abnormal findings of peripheral facial nerve palsy. We retrospectively reviewed the MR findings of 20 patients with peripheral facial palsy and 50 normal facial nerves of 36 patients without facial palsy. All underwent oblique sagittal MRI using a TMJ surface coil. We analyzed the course, signal intensity, thickness, location, and degree of enhancement of the facial nerve. According to the angle made by the proximal parotid segment with the axis of the mastoid segment, the course was classified as anterior angulation (obtuse and acute, or buckling), straight, or posterior angulation. Among 50 normal facial nerves, 24 (48%) were straight and 23 (46%) demonstrated anterior angulation; 34 (68%) showed iso signal intensity on T1WI. In the patient group, the course on the affected side was either straight (40%) or showed anterior angulation (55%), and signal intensity in 80% of cases was isointense. These findings were similar to those in the normal group, but in patients with post-traumatic or post-operative facial palsy, buckling of the course appeared. In 12 of 18 facial palsy cases (66.6%) in which contrast material was administered, the normal facial nerve of the opposite facial canal showed mild enhancement in more than one segment, but on the affected side the facial nerve showed diffuse enhancement in all 14 patients with acute facial palsy. Eleven of these (79%) showed fair or marked enhancement in more than one segment, and in 12 (86%), mild enhancement of the proximal parotid segment was noted. Four of six chronic facial palsy cases (66.6%) showed atrophy of the facial nerve. When oblique sagittal MR images are obtained using a TMJ surface coil, enhancement of the proximal parotid segment of the facial nerve and fair or marked enhancement of at least one segment within the facial canal always suggests pathology of

  2. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  3. Comparative analysis of the surface exposed proteome of two canine osteosarcoma cell lines and normal canine osteoblasts.

    Science.gov (United States)

    Milovancev, Milan; Hilgart-Martiszus, Ian; McNamara, Michael J; Goodall, Cheri P; Seguin, Bernard; Bracha, Shay; Wickramasekara, Samanthi I

    2013-06-13

    Osteosarcoma (OSA) is the most common primary bone tumor of dogs and carries a poor prognosis despite aggressive treatment. An improved understanding of the biology of OSA is critically needed to allow for development of novel diagnostic, prognostic, and therapeutic tools. The surface-exposed proteome (SEP) of a cancerous cell includes a multifarious array of proteins critical to cellular processes such as proliferation, migration, adhesion, and inter-cellular communication. The specific aim of this study was to define a SEP profile of two validated canine OSA cell lines and a normal canine osteoblast cell line utilizing a biotinylation/streptavidin system to selectively label, purify, and identify surface-exposed proteins by mass spectrometry (MS) analysis. Additionally, we sought to validate a subset of our MS-based observations via quantitative real-time PCR, Western blot and semi-quantitative immunocytochemistry. Our hypothesis was that MS would detect differences in the SEP composition between the OSA and the normal osteoblast cells. Shotgun MS identified 133 putative surface proteins when output from all samples were combined, with good consistency between biological replicates. Eleven of the MS-detected proteins underwent analysis of gene expression by PCR, all of which were actively transcribed, but varied in expression level. Western blot of whole cell lysates from all three cell lines was effective for Thrombospondin-1, CYR61 and CD44, and indicated that all three proteins were present in each cell line. Semi-quantitative immunofluorescence indicated that CD44 was expressed at much higher levels on the surface of the OSA than the normal osteoblast cell lines. The results of the present study identified numerous differences, and similarities, in the SEP of canine OSA cell lines and normal canine osteoblasts. The PCR, Western blot, and immunocytochemistry results, for the subset of proteins evaluated, were generally supportive of the mass spectrometry data

  4. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  5. Synthesis, characterization, and evaluation of a superficially porous particle with unique, elongated pore channels normal to the surface.

    Science.gov (United States)

    Wei, Ta-Chen; Mack, Anne; Chen, Wu; Liu, Jia; Dittmann, Monika; Wang, Xiaoli; Barber, William E

    2016-04-01

    In recent years, superficially porous particles (SPPs) have drawn great interest because of their special particle characteristics and improvement in separation efficiency. Superficially porous particles are currently manufactured by adding silica nanoparticles onto solid cores using either a multistep multilayer process or one-step coacervation process. The pore size is mainly controlled by the size of the silica nanoparticles and the tortuous pore channel geometry is determined by how those nanoparticles randomly aggregate. Such tortuous pore structure is also similar to that of all totally porous particles used in HPLC today. In this article, we report on the development of a next generation superficially porous particle with a unique pore structure that includes a thinner shell thickness and ordered pore channels oriented normal to the particle surface. The method of making the new superficially porous particles is a process called pseudomorphic transformation (PMT), which is a form of micelle templating. Porosity is no longer controlled by randomly aggregated nanoparticles but rather by micelles that have an ordered liquid crystal structure. The new particle possesses many advantages such as a narrower particle size distribution, thinner porous layer with high surface area and, most importantly, highly ordered, non-tortuous pore channels oriented normal to the particle surface. This PMT process has been applied to make 1.8-5.1 μm SPPs with pore size controlled around 75 Å and surface area around 100 m^2/g. All particles with different sizes show the same unique pore structure with tunable pore size and shell thickness. The impact of the novel pore structure on the performance of these particles is characterized by measuring van Deemter curves and constructing kinetic plots. Reduced plate heights as low as 1.0 have been achieved on conventional LC instruments. This indicates higher efficiency of such particles compared to conventional totally porous and

  6. [Experimental studies on the diffusion of excitation on the right ventricular surface in the dog, during normal and stimulated beats].

    Science.gov (United States)

    Arisi, G; Macchi, E; Baruffi, S; Musso, E; Spaggiari, S; Stilli, D; Taccardi, B

    1982-01-01

    Previous work on the spread of excitation over the dog's ventricular surface enabled us to locate up to 30 breakthrough points (BKTPs) where excitation reaches the ventricular surface. In particular, the equipotential contour maps enabled us to detect 3 to 5 BKTPs on the anterior right ventricular surface, near the a-v groove, when a large part of the ventricular surface was still at rest. With a view to investigating the mechanism underlying the early excitation of these basal regions, we stimulated the heart at several right ventricular BKTPs and at other points located at a distance from the BKTPs. The instantaneous equipotential maps showed that after stimulation most right ventricular BKTPs remained in the same position as observed during normal beats. The early appearance of epicardial wavefronts in the basal region, and generally in other areas of the right ventricle, was attributed to the rapid propagation of excitation waves through the Purkinje network, probably associated with a short transmural crossing time due to a local thinness of the ventricular wall.

  7. Inflammatory Cytokine Tumor Necrosis Factor α Confers Precancerous Phenotype in an Organoid Model of Normal Human Ovarian Surface Epithelial Cells

    Directory of Open Access Journals (Sweden)

    Joseph Kwong

    2009-06-01

    In this study, we established an in vitro organoid model of normal human ovarian surface epithelial (HOSE) cells. The spheroids of these normal HOSE cells resembled epithelial inclusion cysts in the human ovarian cortex, which are the cells of origin of ovarian epithelial tumors. Because there are strong correlations between chronic inflammation and the incidence of ovarian cancer, we used the organoid model to test whether the protumor inflammatory cytokine tumor necrosis factor α would induce a malignant phenotype in normal HOSE cells. Prolonged treatment with tumor necrosis factor α induced phenotypic changes of the HOSE spheroids, which exhibited the characteristics of precancerous lesions of ovarian epithelial tumors, including reinitiation of cell proliferation, structural disorganization, epithelial stratification, loss of epithelial polarity, degradation of basement membrane, cell invasion, and overexpression of ovarian cancer markers. The results of this study provide not only evidence supporting the link between chronic inflammation and ovarian cancer formation but also a relevant and novel in vitro model for studying the early events of ovarian cancer.

  8. The distribution of deformation in parallel fault-related folds with migrating axial surfaces: comparison between fault-propagation and fault-bend folding

    Science.gov (United States)

    Salvini, Francesco; Storti, Fabrizio

    2001-01-01

    In fault-related folds that form by axial surface migration, rocks undergo deformation as they pass through axial surfaces. The distribution and intensity of deformation in these structures have been impacted by the history of axial surface migration. Upon fold initiation, unique dip panels develop, each with a characteristic deformation intensity, depending on their history. During fold growth, rocks that pass through axial surfaces are transported between dip panels and accumulate additional deformation. By tracking the pattern of axial surface migration in model folds, we predict the distribution of relative deformation intensity in simple-step, parallel fault-bend and fault-propagation anticlines. In both cases the deformation is partitioned into unique domains we call deformation panels. For a given rheology of the folded multilayer, deformation intensity will be homogeneously distributed in each deformation panel. Fold limbs are always deformed. The flat crests of fault-propagation anticlines are always undeformed. Two asymmetric deformation panels develop in fault-propagation folds above ramp angles exceeding 29°. For lower ramp angles, an additional, more intensely-deformed panel develops at the transition between the crest and the forelimb. Deformation in the flat crests of fault-bend anticlines occurs when fault displacement exceeds the length of the footwall ramp, but is never found immediately hinterland of the crest to forelimb transition. In environments dominated by brittle deformation, our models may serve as a first-order approximation of the distribution of fractures in fault-related folds.

  9. The relationship of chromophoric dissolved organic matter parallel factor analysis fluorescence and polycyclic aromatic hydrocarbons in natural surface waters.

    Science.gov (United States)

    Li, Sijia; Chen, Ya'nan; Zhang, Jiquan; Song, Kaishan; Mu, Guangyi; Sun, Caiyun; Ju, Hanyu; Ji, Meichen

    2018-01-01

    Polycyclic aromatic hydrocarbons (PAHs), a large group of persistent organic pollutants (POPs), have caused widespread environmental pollution and ecological effects. Chromophoric dissolved organic matter (CDOM), which consists of complex compounds, is often used as a proxy of water quality. An attempt was made to understand the relationships of CDOM absorption parameters and parallel factor analysis (PARAFAC) components with PAHs under seasonal variation in the riverine, reservoir, and urban waters of the Yinma River watershed in 2016. These different types of water bodies provided wide CDOM and PAH concentration ranges, with CDOM absorption coefficients at a wavelength of 350 nm (aCDOM(350)) of 1.17-20.74 m^-1 and total PAHs of 0-1829 ng/L. The CDOM excitation-emission matrix (EEM) presented two fluorescent components identified using PARAFAC, a terrestrial humic-like component (C1) and a tryptophan-like component (C2). Tryptophan-like (protein-like) fluorescence often dominates the EEM signatures of sewage samples. Our finding that seasonal CDOM EEM-PARAFAC components and PAH concentrations showed a consistent tendency indicates that PAHs are non-negligible pollutants. However, the disparities in seasonal CDOM-PAH relationships relate to the similar sources of CDOM and PAHs and to the proportion of PAHs in CDOM. Although overlooked and poorly appreciated, quantifying the relationship between CDOM and PAHs has important implications, because such results simplify ecological and health-based risk assessment of pollutants compared with traditional chemical measurements.

  10. Articular surface approximation in equivalent spatial parallel mechanism models of the human knee joint: an experiment-based assessment.

    Science.gov (United States)

    Ottoboni, A; Parenti-Castelli, V; Sancisi, N; Belvedere, C; Leardini, A

    2010-01-01

    In-depth comprehension of human joint function requires complex mathematical models, which are particularly necessary in applications of prosthesis design and surgical planning. Kinematic models of the knee joint, based on one-degree-of-freedom equivalent mechanisms, have been proposed to replicate the passive relative motion between the femur and tibia, i.e., the joint motion in virtually unloaded conditions. In the mechanisms analysed in the present work, some fibres within the anterior and posterior cruciate and medial collateral ligaments were taken as isometric during passive motion, and articulating surfaces as rigid. The shapes of these surfaces were described with increasing anatomical accuracy, i.e. from planar to spherical and general geometry, which consequently led to models with increasing complexity. Quantitative comparison of the results obtained from three models, featuring an increasingly accurate approximation of the articulating surfaces, was performed by using experimental measurements of joint motion and anatomical structure geometries of four lower-limb specimens. Corresponding computer simulations of joint motion were obtained from the different models. The results revealed a good replication of the original experimental motion by all models, although the simulations also showed that a limit exists beyond which description of the knee passive motion does not benefit considerably from further approximation of the articular surfaces.

  11. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  12. Identification of a developmental gene expression signature, including HOX genes, for the normal human colonic crypt stem cell niche: overexpression of the signature parallels stem cell overpopulation during colon tumorigenesis.

    Science.gov (United States)

    Bhatlekar, Seema; Addya, Sankar; Salunek, Moreh; Orr, Christopher R; Surrey, Saul; McKenzie, Steven; Fields, Jeremy Z; Boman, Bruce M

    2014-01-15

    Our goal was to identify a unique gene expression signature for human colonic stem cells (SCs). Accordingly, we determined the gene expression pattern for a known SC-enriched region--the crypt bottom. Colonic crypts and isolated crypt subsections (top, middle, and bottom) were purified from fresh, normal, human, surgical specimens. We then used an innovative strategy that used two-color microarrays (∼18,500 genes) to compare gene expression in the crypt bottom with expression in the other crypt subsections (middle or top). Array results were validated by PCR and immunostaining. About 25% of genes analyzed were expressed in crypts: 88 preferentially in the bottom, 68 in the middle, and 131 in the top. Among genes upregulated in the bottom, ∼30% were classified as growth and/or developmental genes including several in the PI3 kinase pathway, a six-transmembrane protein STAMP1, and two homeobox (HOXA4, HOXD10) genes. qPCR and immunostaining validated that HOXA4 and HOXD10 are selectively expressed in the normal crypt bottom and are overexpressed in colon carcinomas (CRCs). Immunostaining showed that HOXA4 and HOXD10 are co-expressed with the SC markers CD166 and ALDH1 in cells at the normal crypt bottom, and the number of these co-expressing cells is increased in CRCs. Thus, our findings show that these two HOX genes are selectively expressed in colonic SCs and that HOX overexpression in CRCs parallels the SC overpopulation that occurs during CRC development. Our study suggests that developmental genes play key roles in the maintenance of normal SCs and crypt renewal, and contribute to the SC overpopulation that drives colon tumorigenesis.

  13. M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU

    Science.gov (United States)

    Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.

    2018-04-01

    Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
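
    A rough illustration of the fitting strategy (not the authors' code): a log-normal profile in ln(a) is fitted to hypothetical point estimates of planet frequency versus orbital separation with a simple Metropolis sampler. The data values, priors, and proposal step sizes are placeholders.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical point estimates: separation a (AU), planet frequency f, 1-sigma error
        a_au  = np.array([0.1, 1.0, 3.0, 10.0, 50.0, 200.0])
        f_obs = np.array([0.02, 0.06, 0.08, 0.05, 0.02, 0.005])
        f_err = np.array([0.01, 0.02, 0.02, 0.02, 0.01, 0.004])

        def model(a, amp, mu, sigma):
            """Log-normal in ln(a): amp * exp(-(ln a - mu)^2 / (2 sigma^2))."""
            return amp * np.exp(-(np.log(a) - mu) ** 2 / (2.0 * sigma ** 2))

        def log_like(theta):
            amp, mu, sigma = theta
            if amp <= 0 or sigma <= 0:            # flat priors with physical bounds
                return -np.inf
            resid = (f_obs - model(a_au, amp, mu, sigma)) / f_err
            return -0.5 * np.sum(resid ** 2)

        # Metropolis random walk over (amplitude, mu, sigma)
        theta = np.array([0.05, 1.0, 1.5])
        samples, logp = [], log_like(theta)
        for _ in range(20000):
            prop = theta + rng.normal(scale=[0.01, 0.2, 0.2])
            logp_prop = log_like(prop)
            if np.log(rng.uniform()) < logp_prop - logp:
                theta, logp = prop, logp_prop
            samples.append(theta.copy())
        samples = np.array(samples[5000:])        # discard burn-in
        print("posterior medians (amp, mu, sigma):", np.median(samples, axis=0))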

  14. Studies on Impingement Effects of Low Density Jets on Surfaces — Determination of Shear Stress and Normal Pressure

    Science.gov (United States)

    Sathian, Sarith. P.; Kurian, Job

    2005-05-01

    This paper presents results from the Laser Reflection Method (LRM) for the determination of shear stress due to the impingement of low-density free jets on a flat plate. For a thin oil film moving under the action of an aerodynamic boundary layer, the shear stress at the air-oil interface is equal to the shear stress between the surface and the air. The oil film slope is measured directly and dynamically using a position sensing detector (PSD). The thinning rate of the oil film is measured directly, which is the major advantage of the LRM over the LISF method. From the oil film slope history, the shear stress is calculated directly using a three-point formula. Over the full range of experimental conditions, the Knudsen number varied up to the continuum limit of the transition regime. Shear stress values for low-density flows in the transition regime were thus obtained using the LRM, and the measured values show fair agreement with those obtained by other methods. Results of normal pressure measurements on a flat plate in low-density jets, using thermistors as pressure sensors, are also presented. The normal pressure profiles obtained show the characteristic features of Newtonian impact theory for hypersonic flows.
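
    A hedged sketch of the post-processing step: a three-point finite-difference formula differentiates the sampled film histories, and the shear stress is recovered from the slope history using the classical constant-shear thin-oil-film relation tau ~ mu / (t * dh/dx). Whether this is exactly the formula used by the authors is an assumption, and the viscosity and sampled values below are invented.

        import numpy as np

        def three_point_derivative(y, dt):
            """Three-point central differences in the interior, three-point one-sided at the ends."""
            dy = np.empty_like(y)
            dy[1:-1] = (y[2:] - y[:-2]) / (2.0 * dt)
            dy[0] = (-3.0 * y[0] + 4.0 * y[1] - y[2]) / (2.0 * dt)
            dy[-1] = (3.0 * y[-1] - 4.0 * y[-2] + y[-3]) / (2.0 * dt)
            return dy

        mu = 0.048                                  # assumed oil dynamic viscosity, Pa*s
        t = np.linspace(10.0, 60.0, 11)             # time since the start of the run, s
        dhdx = mu / (2.5 * t)                       # synthetic slope history for a 2.5 Pa shear stress

        # Classical constant-shear thin-oil-film estimate: tau = mu / (t * dh/dx)
        tau = mu / (t * dhdx)
        print("recovered shear stress (Pa):", np.round(tau, 3))

        # Thinning rate of a synthetic thickness history at one location, via the three-point formula
        h = 1.0e-4 / t                              # synthetic film thickness, m
        print("thinning rate dh/dt (m/s):", np.round(three_point_derivative(h, t[1] - t[0]), 8))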

  15. A New Quantitative Method for the Non-Invasive Documentation of Morphological Damage in Paintings Using RTI Surface Normals

    Directory of Open Access Journals (Sweden)

    Marcello Manfredi

    2014-07-01

    Full Text Available In this paper we propose a reliable surface imaging method for the non-invasive detection of morphological changes in paintings. Usually, the evaluation and quantification of changes and defects results mostly from an optical and subjective assessment, through the comparison of the previous and subsequent state of conservation and by means of condition reports. Using quantitative Reflectance Transformation Imaging (RTI) we obtain detailed information on the geometry and morphology of the painting surface with a fast, precise and non-invasive method. Accurate and quantitative measurements of deterioration were acquired after the painting experienced artificial damage. Morphological changes were documented using normal vector images while the intensity map succeeded in highlighting, quantifying and describing the physical changes. We estimate that the technique can detect morphological damage slightly smaller than 0.3 mm, which would be difficult to detect with the eye, considering the painting size. This non-invasive tool could be very useful, for example, to examine paintings and artwork before they travel on loan or during a restoration. The method lends itself to automated analysis of large images and datasets. Quantitative RTI thus eases the transition of extending human vision into the realm of measuring change over time.
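
    A hedged sketch (not the authors' pipeline) of how per-pixel surface normals from two RTI captures can be compared: the angle between corresponding unit normals highlights local morphological change, and thresholding it yields a damage map. The arrays and the threshold are illustrative.

        import numpy as np

        def angular_difference_deg(n_before, n_after):
            """Per-pixel angle (degrees) between two H x W x 3 unit-normal maps."""
            dot = np.clip(np.sum(n_before * n_after, axis=-1), -1.0, 1.0)
            return np.degrees(np.arccos(dot))

        # Synthetic 4 x 4 normal maps: mostly flat, one 'damaged' pixel tilted by ~20 degrees
        before = np.zeros((4, 4, 3)); before[..., 2] = 1.0
        after = before.copy()
        after[2, 2] = [np.sin(np.radians(20.0)), 0.0, np.cos(np.radians(20.0))]

        diff = angular_difference_deg(before, after)
        damage_mask = diff > 5.0                     # illustrative threshold in degrees
        print("max angular change: %.1f deg, damaged pixels: %d" % (diff.max(), damage_mask.sum()))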

  16. Loss of surface horizon of an irrigated soil detected by radiometric images of normalized difference vegetation index.

    Science.gov (United States)

    Fabian Sallesses, Leonardo; Aparicio, Virginia Carolina; Costa, Jose Luis

    2017-04-01

    Land use in the Humid Pampa of Argentina has changed since the mid-1990s from mixed agricultural-livestock production (which included pastures with direct grazing) to purely agricultural production. In addition, in recent years the area under central-pivot irrigation has increased to 150%. The waters used for irrigation are of the sodium-carbonate type. The combination of irrigation and rain increases the sodium adsorption ratio of the soil (SAR), consequently raising clay dispersion and reducing infiltration. This implies an increased risk of soil loss. A reduction in the development of a white clover crop (Trifolium repens L.) was observed at an irrigated plot during the 2015 campaign. The clover had been planted in order to reduce the impact of two maize (Zea mays L.) campaigns under irrigation, which had increased soil SAR and deteriorated the soil structure. SPOT-5 radiometric normalized difference vegetation index (NDVI) images were used to delineate two zones of high and low production. In each zone, four random points were selected for further geo-referenced field sampling. Two geo-referenced measurements of effective depth and surface soil sampling were carried out at each point. The texture of the soil samples was determined by the pipette method of sedimentation analysis. Exploratory data analysis showed that the low-production zone had a mean effective depth of 80 cm and a silty clay loam texture, while the high-production zone had a mean effective depth greater than 140 cm and a silt loam texture. The texture class of the low-production zone did not correspond to prior soil studies carried out by the INTA (National Institute of Agricultural Technology), which showed that those soils were silt loam at the surface and silty clay loam in the sub-surface. The loss of the A horizon is proposed as a possible explanation, but further research is required. The results also highlight the need for an updated soil cartography that integrates new satellite imaging technologies and geo-referenced measurements from soil sensors.
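
    The NDVI used to delineate the two production zones is the standard band ratio (NIR - Red)/(NIR + Red). The sketch below computes it from two reflectance bands and splits the field with a simple threshold; the band values and the threshold are placeholders, not the study's actual processing chain.

        import numpy as np

        def ndvi(nir, red, eps=1e-9):
            """Normalized difference vegetation index from reflectance bands."""
            return (nir - red) / (nir + red + eps)

        # Hypothetical 3 x 3 reflectance tiles for a SPOT-5 scene subset
        red = np.array([[0.08, 0.09, 0.20],
                        [0.07, 0.10, 0.22],
                        [0.08, 0.11, 0.25]])
        nir = np.array([[0.45, 0.44, 0.28],
                        [0.46, 0.42, 0.27],
                        [0.47, 0.40, 0.26]])

        v = ndvi(nir, red)
        low_production = v < 0.4                     # illustrative threshold separating the two zones
        print(np.round(v, 2))
        print("low-production pixels:", int(low_production.sum()))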

  17. What Are Normal Metal Ion Levels After Total Hip Arthroplasty? A Serologic Analysis of Four Bearing Surfaces.

    Science.gov (United States)

    Barlow, Brian T; Ortiz, Philippe A; Boles, John W; Lee, Yuo-Yu; Padgett, Douglas E; Westrich, Geoffrey H

    2017-05-01

    Recent experiences with adverse local tissue reactions have highlighted the need to establish normal serum levels of cobalt (Co), chromium (Cr), and titanium (Ti) after hip arthroplasty. Serum Co, Cr, and Ti levels were measured in 80 nonconsecutive patients with well-functioning unilateral total hip arthroplasty and compared among 4 bearing surfaces: ceramic-on-ceramic (CoC), ceramic-on-polyethylene (CoP), metal-on-polyethylene (MoP), and dual mobility (DM). The preoperative and most recent University of California, Los Angeles (UCLA) and Western Ontario and McMaster Universities Arthritis Index (WOMAC) scores were compared among the different bearing surfaces. No significant difference was found in serum Co and Cr levels among the 4 bearing surface groups (P = .0609 and P = .1577). Secondary analysis comparing metal and ceramic femoral heads demonstrated that the metal group (MoP, modular dual mobility (Stryker Orthopedics, Mahwah, NJ) [metal]) had significantly higher serum Co levels than the ceramic group (CoC, CoP, MDM [ceramic]) (1.05 mg/L ± 1.25 vs 0.59 mg/L ± 0.24; P = .0411). The Spearman coefficient identified no correlation between metal ion levels and patient-reported outcome scores. No serum metal ion level differences were found among well-functioning total hip arthroplasties with modern bearing couples. Significantly higher serum Co levels were seen when comparing metal vs ceramic femoral heads in this study, which warrants further investigation. Metal ion levels did not correlate with patient-reported outcome measures. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Modeling and experimental study of oil/water contact angle on biomimetic micro-parallel-patterned self-cleaning surfaces of selected alloys used in water industry

    Energy Technology Data Exchange (ETDEWEB)

    Nickelsen, Simin; Moghadam, Afsaneh Dorri, E-mail: afsaneh@uwm.edu; Ferguson, J.B.; Rohatgi, Pradeep

    2015-10-30

    Highlights: • The wetting behavior of four metallic materials as a function of surface roughness has been studied. • A model relating abrasive particle size to the water/oil contact angle is proposed. • The active wetting regime of each material is determined using the proposed model. - Abstract: In the present study, the wetting behavior of surfaces of several common metallic materials used in the water industry, namely C84400 brass, commercially pure aluminum (99.0% pure), a nickel–molybdenum alloy (Hastelloy C22), and 316 stainless steel, was investigated, and contact angles were measured after the surfaces were prepared by mechanical abrasion. A model to estimate the roughness factor, R_f, and the fraction of solid/oil interface, f_so, for surfaces prepared by mechanical abrasion is proposed, based on the assumption that abrasive particles acting on a metallic surface produce scratches parallel to each other, each with a semi-round cross-section. The model geometrically describes the relation between sandpaper particle size and the water/oil contact angle predicted by both the Wenzel and Cassie–Baxter contact types, which can then be compared with experimental data to find which regime is active. Results show that brass and Hastelloy followed Cassie–Baxter behavior, aluminum followed Wenzel behavior, and stainless steel exhibited a transition from Wenzel to Cassie–Baxter. Microstructural studies have also been carried out to rule out effects beyond the Wenzel and Cassie–Baxter theories, such as the size of structural details.
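
    The two wetting regimes compared above follow the standard relations cos(theta_W) = R_f * cos(theta_Y) (Wenzel) and cos(theta_CB) = f_so * (cos(theta_Y) + 1) - 1 (Cassie-Baxter for a composite interface). The sketch below, which is independent of the authors' specific groove-geometry model, shows how a measured contact angle can be compared against both predictions; the Young angle, roughness factor, solid fraction, and measured angle are placeholder values.

        import numpy as np

        def wenzel(theta_y_deg, r_f):
            """Wenzel: cos(theta_W) = r_f * cos(theta_Y)."""
            return np.degrees(np.arccos(np.clip(r_f * np.cos(np.radians(theta_y_deg)), -1.0, 1.0)))

        def cassie_baxter(theta_y_deg, f_so):
            """Cassie-Baxter (composite interface): cos(theta_CB) = f_so*(cos(theta_Y) + 1) - 1."""
            return np.degrees(np.arccos(np.clip(f_so * (np.cos(np.radians(theta_y_deg)) + 1.0) - 1.0,
                                                -1.0, 1.0)))

        theta_y = 95.0          # placeholder Young angle on the smooth alloy, degrees
        r_f, f_so = 1.6, 0.55   # placeholder roughness factor and solid-contact fraction
        measured = 122.0        # placeholder measured contact angle on the abraded surface

        print("Wenzel prediction:        %.1f deg" % wenzel(theta_y, r_f))
        print("Cassie-Baxter prediction: %.1f deg" % cassie_baxter(theta_y, f_so))
        print("closer regime:", "Wenzel" if abs(wenzel(theta_y, r_f) - measured)
              < abs(cassie_baxter(theta_y, f_so) - measured) else "Cassie-Baxter")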

  19. Modeling guided wave excitation in plates with surface mounted piezoelectric elements: coupled physics and normal mode expansion

    Science.gov (United States)

    Ren, Baiyang; Lissenden, Cliff J.

    2018-04-01

    Guided waves have been extensively studied and widely used for structural health monitoring because of their large volumetric coverage and good sensitivity to defects. Effectively and preferentially exciting a desired wave mode having good sensitivity to a certain defect is of great practical importance. Piezoelectric discs and plates are the most common types of surface-mounted transducers for guided wave excitation and reception. Their geometry strongly influences the proportioning between excited modes as well as the total power of the excited modes. It is highly desirable to predominantly excite the selected mode while the total transduction power is maximized. In this work, a fully coupled multi-physics finite element analysis, which incorporates the driving circuit, the piezoelectric element and the wave guide, is combined with the normal mode expansion method to study both the mode tuning and total wave power. The excitation of circular crested waves in an aluminum plate with circular piezoelectric discs is numerically studied for different disc and adhesive thicknesses. Additionally, the excitation of plane waves in an aluminum plate, using a stripe piezoelectric element is studied both numerically and experimentally. It is difficult to achieve predominant single mode excitation as well as maximum power transmission simultaneously, especially for higher order modes. However, guidelines for designing the geometry of piezoelectric elements for optimal mode excitation are recommended.

  20. Segmentation of Planar Surfaces from Laser Scanning Data Using the Magnitude of Normal Position Vector for Adaptive Neighborhoods.

    Science.gov (United States)

    Kim, Changjae; Habib, Ayman; Pyeon, Muwook; Kwon, Goo-rak; Jung, Jaehoon; Heo, Joon

    2016-01-22

    Diverse approaches to laser point segmentation have been proposed since the emergence of the laser scanning system. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser point simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the outcomes' high level of reliability. The proposed segmentation algorithm provided 96.89% overall correctness, 95.84% completeness, a 0.25 m overall mean value of centroid difference, and less than 1° of angle difference. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the evaluation of the sensitivity of the thresholds was carried out. In summary, this paper proposes a robust and efficient segmentation methodology for abstraction of an enormous number of laser points into plane information.
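
    The key attribute can be read as follows: for each point, fit a plane to its local neighborhood and take the magnitude of the normal position vector, i.e., the perpendicular distance from the origin to that plane (|n·p| for unit normal n). The sketch below illustrates this reading with a PCA plane fit; it is a simplified stand-in for the paper's adaptive, shape-based neighborhood definition, and the point values are invented.

        import numpy as np

        def local_plane_normal(neighbors):
            """Unit normal of a best-fit plane through a k x 3 neighborhood (PCA via SVD)."""
            centered = neighbors - neighbors.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            return vt[-1]                              # right singular vector of the smallest singular value

        def normal_position_magnitude(point, normal):
            """|n . p|: distance from the origin to the local tangent plane through the point."""
            return abs(float(np.dot(normal, point)))

        # Synthetic points on the plane z = 2 plus one outlier
        pts = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0], [0.0, 1.0, 2.0],
                        [1.0, 1.0, 2.0], [0.5, 0.5, 3.5]])
        n = local_plane_normal(pts[:4])                # normal of the coplanar subset
        attrs = [normal_position_magnitude(p, n) for p in pts]
        print(np.round(attrs, 2))                      # coplanar points share the attribute value 2.0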

  1. Segmentation of Planar Surfaces from Laser Scanning Data Using the Magnitude of Normal Position Vector for Adaptive Neighborhoods

    Directory of Open Access Journals (Sweden)

    Changjae Kim

    2016-01-01

    Full Text Available Diverse approaches to laser point segmentation have been proposed since the emergence of the laser scanning system. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser point simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the outcomes’ high level of reliability. The proposed segmentation algorithm provided 96.89% overall correctness, 95.84% completeness, a 0.25 m overall mean value of centroid difference, and less than 1° of angle difference. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the evaluation of the sensitivity of the thresholds was carried out. In summary, this paper proposes a robust and efficient segmentation methodology for abstraction of an enormous number of laser points into plane information.

  2. Poiseuille, thermal transpiration and Couette flows of a rarefied gas between plane parallel walls with nonuniform surface properties in the transverse direction and their reciprocity relations

    Science.gov (United States)

    Doi, Toshiyuki

    2018-04-01

    Slow flows of a rarefied gas between two plane parallel walls with nonuniform surface properties are studied based on kinetic theory. It is assumed that one wall is a diffuse reflection boundary and the other wall is a Maxwell-type boundary whose accommodation coefficient varies periodically in the direction perpendicular to the flow. The time-independent Poiseuille, thermal transpiration and Couette flows are considered. The flow behavior is numerically studied based on the linearized Bhatnagar-Gross-Krook-Welander model of the Boltzmann equation. The flow field, the mass and heat flow rates in the gas, and the tangential force acting on the wall surface are studied over a wide range of the gas rarefaction degree and the parameters characterizing the distribution of the accommodation coefficient. The locally convex velocity distribution is observed in Couette flow of a highly rarefied gas, similarly to Poiseuille flow and thermal transpiration. The reciprocity relations are numerically confirmed over a wide range of the flow parameters.

  3. How Parallel Are Excited State Potential Energy Surfaces from Time-Independent and Time-Dependent DFT? A BODIPY Dye Case Study.

    Science.gov (United States)

    Komoto, Keenan T; Kowalczyk, Tim

    2016-10-06

    To support the development and characterization of chromophores with targeted photophysical properties, excited-state electronic structure calculations should rapidly and accurately predict how derivatization of a chromophore will affect its excitation and emission energies. This paper examines whether a time-independent excited-state density functional theory (DFT) approach meets this need through a case study of BODIPY chromophore photophysics. A restricted open-shell Kohn-Sham (ROKS) treatment of the S1 excited state of BODIPY dyes is contrasted with linear-response time-dependent density functional theory (TDDFT). Vertical excitation energies predicted by the two approaches are remarkably different due to overestimation by TDDFT and underestimation by ROKS relative to experiment. Overall, ROKS with a standard hybrid functional provides the more accurate description of the S1 excited state of BODIPY dyes, but excitation energies computed by the two methods are strongly correlated. The two approaches also make similar predictions of shifts in the excitation energy upon functionalization of the chromophore. TDDFT and ROKS models of the S1 potential energy surface are then examined in detail for a representative BODIPY dye through molecular dynamics sampling on both model surfaces. We identify the most significant differences in the sampled surfaces and analyze these differences along selected normal modes. Differences between ROKS and TDDFT descriptions of the S1 potential energy surface for this BODIPY derivative highlight the continuing need for validation of widely used approximations in excited state DFT through experimental benchmarking and comparison to ab initio reference data.
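
    For readers unfamiliar with how the TDDFT side of such a comparison is set up in practice, the sketch below computes linear-response (Tamm-Dancoff) vertical excitation energies with the open-source PySCF package. It is purely illustrative: a small molecule and modest basis set stand in for the BODIPY dyes, and it does not reproduce the ROKS calculations or the functionals benchmarked in the paper.

        from pyscf import gto, dft, tddft

        # Small illustrative molecule; the paper's BODIPY dyes are far larger
        mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
                    basis="6-31g", verbose=0)

        mf = dft.RKS(mol)
        mf.xc = "b3lyp"                # a standard hybrid functional
        mf.kernel()                    # ground-state (S0) energy

        td = tddft.TDA(mf)             # Tamm-Dancoff linear response
        td.nstates = 3
        td.kernel()
        # td.e holds vertical excitation energies in Hartree; 27.2114 converts to eV
        print([round(e * 27.2114, 3) for e in td.e])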

  4. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    Work in the field of parallel processing has developed through research activities using several numerical Monte Carlo simulations related to basic or applied problems of nuclear and particle physics. For applications utilizing the GEANT code, development and improvement work was done on the parts simulating low-energy physical phenomena such as radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking in the range of low energies down to the thermal region has been developed; it is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other work in this field refers to simulations for nuclear medicine applications such as the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness), and methods for dosimetric calculations. In particular, these calculations are suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Further work in the same field refers to the simulation of electron channelling in crystals and the simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment.

  5. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product-unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  6. Surface Aggregation of Candida albicans on Glass in the Absence and Presence of Adhering Streptococcus gordonii in a Parallel-Plate Flow Chamber: A Surface Thermodynamical Analysis Based on Acid-Base Interactions.

    Science.gov (United States)

    Millsap; Bos; Busscher; van der Mei HC

    1999-04-15

    Adhesive interactions between yeasts and bacteria are important in the maintenance of infectious mixed biofilms on natural and biomaterial surfaces in the human body. In this study, the extended DLVO (Derjaguin-Landau-Verwey-Overbeek) approach has been applied to explain adhesive interactions between C. albicans ATCC 10261 and S. gordonii NCTC 7869 adhering on glass. Contact angles with different liquids and the zeta potentials of both the yeasts and bacteria were determined and their adhesive interactions were measured in a parallel-plate flow chamber. Streptococci were first allowed to adhere to the bottom glass plate of the flow chamber to different seeding densities, and subsequently deposition of yeasts was monitored with an image analysis system, yielding the degree of initial surface aggregation of the adhering yeasts and their spatial arrangement in a stationary end point. Irrespective of growth temperature, the yeast cells appeared uncharged in TNMC buffer, but yeasts grown at 37 degrees C were intrinsically more hydrophilic and had a stronger electron-donating character than cells grown at 30 degrees C. All yeasts showed surface aggregation due to attractive Lifshitz-van der Waals forces. In addition, acid-base interactions between yeasts, yeasts and the glass substratum, and yeasts and the streptococci were attractive for yeasts grown at 30 degrees C, but yeasts grown at 37 degrees C only had favorable acid-base interactions with the bacteria, explaining the positive relationship between the surface coverage of the glass by streptococci and the surface aggregation of the yeasts. Copyright 1999 Academic Press.

  7. FILMPAR: A parallel algorithm designed for the efficient and accurate computation of thin film flow on functional surfaces containing micro-structure

    Science.gov (United States)

    Lee, Y. C.; Thompson, H. M.; Gaskell, P. H.

    2009-12-01

    FILMPAR is a highly efficient and portable parallel multigrid algorithm for solving a discretised form of the lubrication approximation to three-dimensional, gravity-driven, continuous thin film free-surface flow over substrates containing micro-scale topography. While generally applicable to problems involving heterogeneous and distributed features, for illustrative purposes the algorithm is benchmarked on a distributed memory IBM BlueGene/P computing platform for the case of flow over a single trench topography, enabling direct comparison with complementary experimental data and existing serial multigrid solutions. Parallel performance is assessed as a function of the number of processors employed and shown to lead to super-linear behaviour for the production of mesh-independent solutions. In addition, the approach is used to solve for the case of flow over a complex inter-connected topographical feature, and a description is provided of how FILMPAR could be adapted relatively simply to solve for a wider class of related thin film flow problems.
    Program summary
    Program title: FILMPAR
    Catalogue identifier: AEEL_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEL_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 530 421
    No. of bytes in distributed program, including test data, etc.: 1 960 313
    Distribution format: tar.gz
    Programming language: C++ and MPI
    Computer: Desktop, server
    Operating system: Unix/Linux, Mac OS X
    Has the code been vectorised or parallelised?: Yes. Tested with up to 128 processors
    RAM: 512 MBytes
    Classification: 12
    External routines: GNU C/C++, MPI
    Nature of problem: Thin film flows over functional substrates containing well-defined single and complex topographical features are of enormous significance, having a wide variety of engineering

  8. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  9. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Moving mechanical systems with parallel structures are solid, fast, and accurate. Among parallel systems, Stewart platforms stand out as the oldest such systems, being fast, solid and precise. The work outlines a few main elements of Stewart platforms, beginning with the platform geometry and its kinematic elements, and then presenting a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile platform is then written using a rotation-matrix method. If a structural motoelement consists of two moving elements that translate relative to one another, it is more convenient, for the drive train and especially for the dynamics, to represent the motoelement as a single moving component. We thus have seven moving parts (the six motoelements, or legs, plus the mobile platform, body 7) and one fixed base.

  10. Morphological evolution of dissolving feldspar particles with anisotropic surface kinetics and implications for dissolution rate normalization and grain size dependence: A kinetic modeling study

    Science.gov (United States)

    Zhang, Li; Lüttge, Andreas

    2009-11-01

    With previous two-dimensional (2D) simulations based on surface-specific feldspar dissolution having succeeded in relating the macroscopic feldspar kinetics to the molecular-scale surface reactions of Si and Al atoms (Zhang and Lüttge, 2008, 2009), we extended our modeling effort to three-dimensional (3D) feldspar particle dissolution simulations. Resting on the same theoretical basis, the 3D feldspar particle dissolution simulations have verified the anisotropic surface kinetics observed in the 2D surface-specific simulations. The combined effect of saturation state, pH, and temperature on the surface kinetics anisotropy has subsequently been evaluated and found to offer diverse options for the morphological evolution of dissolving feldspar nanoparticles with varying grain sizes and starting shapes. Among the three primary faces on the simulated feldspar surface, the (1 0 0) face has the highest dissolution rate across a very wide saturation state range and thus acquires a higher percentage of the surface area upon dissolution. The slowest dissolution occurs on either the (0 0 1) or (0 1 0) face, depending on the bond energies of Si-(O)-Si (ΦSi-O-Si/kT) and Al-(O)-Si (ΦAl-O-Si/kT). When the ratio of ΦSi-O-Si/kT to ΦAl-O-Si/kT changes from 6:3 to 7:5, the dissolution rates of the three primary faces change from the trend (1 0 0) > (0 1 0) > (0 0 1) to the trend (1 0 0) > (0 0 1) > (0 1 0). The rate difference between faces becomes more distinct and accordingly edge rounding becomes more significant. Feldspar nanoparticles also experience an increasing degree of edge rounding from far-from-equilibrium to close-to-equilibrium. Furthermore, we assessed the connection between the continuous morphological modification and the variation in the bulk dissolution rate during the dissolution of a single feldspar particle. Different normalization treatments equivalent to the commonly used mass, cube assumption, sphere assumption, geometric surface area, and reactive

  11. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  12. Comparison of the aerodynamics of bridge cables with helical fillets and a pattern-indented surface in normal flow

    DEFF Research Database (Denmark)

    Kleissl, Kenneth; Georgakis, Christos

    2011-01-01

    Over the last two decades, several bridge cable manufacturers have introduced surface modifications on the high-density polyethylene (HDPE) sheathing that is often installed for the protection of inner strands. The main goal of this is rain rivulet impedance, leading to the suppression of rain-wind induced vibrations (RWIVs). The modifications are based on research undertaken predominantly in Europe and Japan, with two different systems prevailing: HDPE tubing fitted with helical surface fillets and HDPE tubing with pattern-indented surfaces. In the US and Europe, helical fillets dominate, whilst

  13. Normalized lift: an energy interpretation of the lift coefficient simplifies comparisons of the lifting ability of rotating and flapping surfaces.

    Directory of Open Access Journals (Sweden)

    Phillip Burgers

    Full Text Available For a century, researchers have used the standard lift coefficient C(L) to evaluate the lift, L, generated by fixed wings over an area S against dynamic pressure, ½ρv(2), where v is the effective velocity of the wing. Because the lift coefficient was developed initially for fixed wings in steady flow, its application to other lifting systems requires either simplifying assumptions or complex adjustments as is the case for flapping wings and rotating cylinders. This paper interprets the standard lift coefficient of a fixed wing slightly differently, as the work exerted by the wing on the surrounding flow field (L/ρ·S), compared against the total kinetic energy required for generating said lift, ½v(2). This reinterpreted coefficient, the normalized lift, is derived from the work-energy theorem and compares the lifting capabilities of dissimilar lift systems on a similar energy footing. The normalized lift is the same as the standard lift coefficient for fixed wings, but differs for wings with more complex motions; it also accounts for such complex motions explicitly and without complex modifications or adjustments. We compare the normalized lift with the previously-reported values of lift coefficient for a rotating cylinder in Magnus effect, a bat during hovering and forward flight, and a hovering dipteran. The maximum standard lift coefficient for a fixed wing without flaps in steady flow is around 1.5, yet for a rotating cylinder it may exceed 9.0, a value that implies that a rotating cylinder generates nearly 6 times the maximum lift of a wing. The maximum normalized lift for a rotating cylinder is 1.5. We suggest that the normalized lift can be used to evaluate propellers, rotors, flapping wings of animals and micro air vehicles, and underwater thrust-generating fins in the same way the lift coefficient is currently used to evaluate fixed wings.

  14. Normalized Lift: An Energy Interpretation of the Lift Coefficient Simplifies Comparisons of the Lifting Ability of Rotating and Flapping Surfaces

    Science.gov (United States)

    Burgers, Phillip; Alexander, David E.

    2012-01-01

    For a century, researchers have used the standard lift coefficient CL to evaluate the lift, L, generated by fixed wings over an area S against dynamic pressure, ½ρv2, where v is the effective velocity of the wing. Because the lift coefficient was developed initially for fixed wings in steady flow, its application to other lifting systems requires either simplifying assumptions or complex adjustments as is the case for flapping wings and rotating cylinders. This paper interprets the standard lift coefficient of a fixed wing slightly differently, as the work exerted by the wing on the surrounding flow field (L/ρ·S), compared against the total kinetic energy required for generating said lift, ½v2. This reinterpreted coefficient, the normalized lift, is derived from the work-energy theorem and compares the lifting capabilities of dissimilar lift systems on a similar energy footing. The normalized lift is the same as the standard lift coefficient for fixed wings, but differs for wings with more complex motions; it also accounts for such complex motions explicitly and without complex modifications or adjustments. We compare the normalized lift with the previously-reported values of lift coefficient for a rotating cylinder in Magnus effect, a bat during hovering and forward flight, and a hovering dipteran. The maximum standard lift coefficient for a fixed wing without flaps in steady flow is around 1.5, yet for a rotating cylinder it may exceed 9.0, a value that implies that a rotating cylinder generates nearly 6 times the maximum lift of a wing. The maximum normalized lift for a rotating cylinder is 1.5. We suggest that the normalized lift can be used to evaluate propellers, rotors, flapping wings of animals and micro air vehicles, and underwater thrust-generating fins in the same way the lift coefficient is currently used to evaluate fixed wings. PMID:22629326

  15. Normalized lift: an energy interpretation of the lift coefficient simplifies comparisons of the lifting ability of rotating and flapping surfaces.

    Science.gov (United States)

    Burgers, Phillip; Alexander, David E

    2012-01-01

    For a century, researchers have used the standard lift coefficient C(L) to evaluate the lift, L, generated by fixed wings over an area S against dynamic pressure, ½ρv(2), where v is the effective velocity of the wing. Because the lift coefficient was developed initially for fixed wings in steady flow, its application to other lifting systems requires either simplifying assumptions or complex adjustments as is the case for flapping wings and rotating cylinders.This paper interprets the standard lift coefficient of a fixed wing slightly differently, as the work exerted by the wing on the surrounding flow field (L/ρ·S), compared against the total kinetic energy required for generating said lift, ½v(2). This reinterpreted coefficient, the normalized lift, is derived from the work-energy theorem and compares the lifting capabilities of dissimilar lift systems on a similar energy footing. The normalized lift is the same as the standard lift coefficient for fixed wings, but differs for wings with more complex motions; it also accounts for such complex motions explicitly and without complex modifications or adjustments. We compare the normalized lift with the previously-reported values of lift coefficient for a rotating cylinder in Magnus effect, a bat during hovering and forward flight, and a hovering dipteran.The maximum standard lift coefficient for a fixed wing without flaps in steady flow is around 1.5, yet for a rotating cylinder it may exceed 9.0, a value that implies that a rotating cylinder generates nearly 6 times the maximum lift of a wing. The maximum normalized lift for a rotating cylinder is 1.5. We suggest that the normalized lift can be used to evaluate propellers, rotors, flapping wings of animals and micro air vehicles, and underwater thrust-generating fins in the same way the lift coefficient is currently used to evaluate fixed wings.
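
    Because the normalized lift reduces to the ratio L/(½ρv(2)·S), evaluating it for any lifting system is a one-line computation once the lift, effective velocity, and reference area are known; the numbers below are placeholder fixed-wing values, not data from the paper.

        def normalized_lift(lift_n, rho, v, area):
            """L / (0.5 * rho * v^2 * S): identical to C_L for a fixed wing in steady flow."""
            return lift_n / (0.5 * rho * v ** 2 * area)

        # Placeholder fixed-wing case: 130 N of lift at 12 m/s over 1.2 m^2 in sea-level air
        print(round(normalized_lift(130.0, 1.225, 12.0, 1.2), 2))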

  16. A Study on the Fatigue-Fractured Surface of Normalized SS41 Steel and M.E.F. Dual Phase Steel by an X-ray Diffraction Technique

    International Nuclear Information System (INIS)

    Oh, Sae Wook; Park, Young Chul; Park, Soo Young; Kim, Deug Jin; Hue, Sun Chul

    1996-01-01

    This study verified the relationship between fracture mechanics parameters and X-ray parameters for normalized SS41 steel, which has a homogeneous crystal structure, and for M.E.F. dual phase steel (martensite encapsulated islands of ferrite). Fatigue crack propagation tests were carried out, and an X-ray diffraction technique was applied to the fatigue-fractured surfaces. The changes in X-ray parameters (residual stress, half-value breadth) with depth below the fatigue-fractured surface were investigated. The depth of the maximum plastic zone, Wy, was determined on the basis of the distribution of the half-value breadth for normalized SS41 steel and of the residual stress for M.E.F. dual phase steel. Kmax could be estimated from the measurement of Wy.

  17. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  18. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.
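
    The decomposition described above, distributing the plane-wave components of each state across processors, can be sketched schematically with mpi4py (run with, e.g., mpiexec -n 4). The array length and the reduction shown are invented stand-ins for the much larger FLAPW data structures and BLAS-based kernels.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_pw = 4096                                     # invented number of plane-wave coefficients per state
        counts = [n_pw // size + (1 if r < n_pw % size else 0) for r in range(size)]

        # Each rank holds only its block of one state's coefficient vector ...
        coeffs = np.full(counts[rank], 1.0 / np.sqrt(n_pw))
        partial = float(np.dot(coeffs, coeffs))         # partial contribution to <psi|psi>

        # ... and block-wise results are recombined with a collective reduction
        total = comm.allreduce(partial, op=MPI.SUM)
        if rank == 0:
            print("plane waves per rank:", counts, " <psi|psi> =", round(total, 6))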

  19. Temperatures of the Ocular Surface, Lid, and Periorbital Regions of Sjögren's, Evaporative, and Aqueous-Deficient Dry Eyes Relative to Normals.

    Science.gov (United States)

    Abreau, Kerstin; Callan, Christine; Kottaiyan, Ranjini; Zhang, Aizhong; Yoon, Geunyoung; Aquavella, James V; Zavislan, James; Hindman, Holly B

    2016-01-01

    To compare the temperatures of the ocular surface, eyelid, and periorbital skin in normal eyes with those in Sjögren's syndrome (SS) eyes, evaporative dry eyes (EDE), and aqueous-deficient dry eyes (ADDE), 10 eyes were analyzed in each age-matched group (normal, SS, EDE, and ADDE). A noninvasive infrared thermal camera captured two-dimensional images in three regions of interest (ROI) in each of three areas: the ocular surface, the upper eyelid, and the periorbital skin, within a controlled environmental chamber. Mean temperatures in each ROI were calculated from the videos. Ocular surface time-segmented cooling rates were calculated over a 5-s blink interval. Relative to normal eyes, dry eyes had lower initial central OSTs (SS -0.71°C, EDE -0.55°C, ADDE -0.95°C; Kruskal-Wallis test), and dry eyes likewise showed lower central lid and periorbital temperatures relative to normals. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Formation of patterned arrays of Au nanoparticles on SiC surface by template confined dewetting of normal and oblique deposited nanoscale films

    Energy Technology Data Exchange (ETDEWEB)

    Ruffino, F., E-mail: francesco.ruffino@ct.infn.it; Grimaldi, M.G.

    2013-06-01

    We report on the formation of patterned arrays of Au nanoparticles (NPs) on the 6H SiC surface. To this end, we exploit the thermally induced dewetting of a template-confined deposited nanoscale Au film. In this approach, the order of the Au surface pattern on the SiC substrate is established by template-confined deposition using a micrometric template. Dewetting of the patterned Au film is then induced by thermal processes. We compare the patterns obtained for normally and obliquely deposited Au films, and show that normal and oblique depositions through the same template produce different patterns of the Au film. As a consequence of these different starting patterns, the thermal processes yield different patterns for the arrays of NPs formed by the dewetting mechanisms. For each fixed deposition angle α, the pattern evolution is analyzed by scanning electron microscopy as a function of the annealing time at 1173 K (900 °C). From these analyses, quantitative evaluations of the NP size evolution are drawn. - Highlights: • Micrometric template-confined nanoscale gold films are deposited on silicon carbide. • The dewetting process of template-confined gold films on silicon carbide is studied. • The dewetting of normally and obliquely deposited gold films is compared. • Patterned arrays of gold nanoparticles on the silicon carbide surface are produced.

  1. Formation of patterned arrays of Au nanoparticles on SiC surface by template confined dewetting of normal and oblique deposited nanoscale films

    International Nuclear Information System (INIS)

    Ruffino, F.; Grimaldi, M.G.

    2013-01-01

    We report on the formation of patterned arrays of Au nanoparticles (NPs) on the 6H SiC surface. To this end, we exploit the thermally induced dewetting of a template-confined deposited nanoscale Au film. In this approach, the order of the Au surface pattern on the SiC substrate is established by template-confined deposition using a micrometric template. Dewetting of the patterned Au film is then induced by thermal processes. We compare the patterns obtained for normally and obliquely deposited Au films, and show that normal and oblique depositions through the same template produce different patterns of the Au film. As a consequence of these different starting patterns, the thermal processes yield different patterns for the arrays of NPs formed by the dewetting mechanisms. For each fixed deposition angle α, the pattern evolution is analyzed by scanning electron microscopy as a function of the annealing time at 1173 K (900 °C). From these analyses, quantitative evaluations of the NP size evolution are drawn. - Highlights: • Micrometric template-confined nanoscale gold films are deposited on silicon carbide. • The dewetting process of template-confined gold films on silicon carbide is studied. • The dewetting of normally and obliquely deposited gold films is compared. • Patterned arrays of gold nanoparticles on the silicon carbide surface are produced.

  2. Energy flow of electric dipole radiation in between parallel mirrors

    Science.gov (United States)

    Xu, Zhangjin; Arnoldus, Henk F.

    2017-11-01

    We have studied the energy flow patterns of the radiation emitted by an electric dipole located in between parallel mirrors. It appears that the field lines of the Poynting vector (the flow lines of energy) can have very intricate structures, including many singularities and vortices. The flow line patterns depend on the distance between the mirrors, the distance of the dipole to one of the mirrors and the angle of oscillation of the dipole moment with respect to the normal of the mirror surfaces. Already for the simplest case of a dipole moment oscillating perpendicular to the mirrors, singularities appear at regular intervals along the direction of propagation (parallel to the mirrors). For a parallel dipole, vortices appear in the neighbourhood of the dipole. For a dipole oscillating at a finite angle with respect to the surface normal, the radiation tends to swirl around the dipole before travelling off parallel to the mirrors. For relatively large mirror separations, vortices appear in the pattern. When the dipole is off-centred with respect to the midway point between the mirrors, the flow line structure becomes even more complicated, with numerous vortices in the pattern, and tiny loops near the dipole. We have also investigated the locations of the vortices and singularities, and these can be found without any specific knowledge about the flow lines. This provides an independent means of studying the propagation of dipole radiation between mirrors.

  3. GPU Parallel Bundle Block Adjustment

    Directory of Open Access Journals (Sweden)

    ZHENG Maoteng

    2017-09-01

    Full Text Available To deal with massive data in photogrammetry, we introduce the GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton method are also applied to decrease the iteration times while solving the normal equation. A brand new workflow of bundle adjustment is developed to utilize GPU parallel computing technology. Our method can avoid the storage and inversion of the big normal matrix, and compute the normal matrix in real time. The proposed method can not only largely decrease the memory requirement of normal matrix, but also largely improve the efficiency of bundle adjustment. It also achieves the same accuracy as the conventional method. Preliminary experiment results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be done in only 1.5 minutes while achieving sub-pixel accuracy.
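
    The numerical core described above, solving the normal equations J^T·J·Δx = J^T·r with a preconditioned conjugate gradient so that the full normal matrix never has to be stored or inverted, can be sketched as follows. The Jacobi (diagonal) preconditioner and the tiny dense Jacobian are illustrative stand-ins; the paper's GPU implementation and inexact Newton strategy are not reproduced here.

        import numpy as np

        def pcg_normal_equations(J, r, iters=100, tol=1e-10):
            """Solve (J^T J) dx = J^T r with Jacobi-preconditioned conjugate gradients.
            J^T J is never formed explicitly; only matrix-vector products with J are used."""
            b = J.T @ r
            M_inv = 1.0 / np.sum(J * J, axis=0)        # inverse of diag(J^T J)
            x = np.zeros(J.shape[1])
            res = b - J.T @ (J @ x)
            z = M_inv * res
            p = z.copy()
            rz = res @ z
            for _ in range(iters):
                Ap = J.T @ (J @ p)
                alpha = rz / (p @ Ap)
                x += alpha * p
                res -= alpha * Ap
                if np.linalg.norm(res) < tol:
                    break
                z = M_inv * res
                rz_new = res @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        rng = np.random.default_rng(1)
        J = rng.standard_normal((40, 8))               # toy Jacobian: 40 residuals, 8 parameters
        r = rng.standard_normal(40)
        dx = pcg_normal_equations(J, r)
        print(np.allclose(J.T @ J @ dx, J.T @ r))      # True: matches the direct normal-equation solution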

  4. Normally Oriented Adhesion versus Friction Forces in Bacterial Adhesion to Polymer-Brush Functionalized Surfaces Under Fluid Flow

    NARCIS (Netherlands)

    Swartjes, Jan J. T. M.; Veeregowda, Deepak H.; van der Mei, Henny C.; Busscher, Henk J.; Sharma, Prashant K.

    2014-01-01

    Bacterial adhesion is problematic in many diverse applications. Coatings of hydrophilic polymer chains in a brush configuration reduce bacterial adhesion by orders of magnitude, but not to zero. Here, the mechanism by which polymer-brush functionalized surfaces reduce bacterial adhesion from a

  5. Tenskinmetric Evaluation of Surface Energy Changes in Adult Skin: Evidence from 834 Normal Subjects Monitored in Controlled Conditions

    Directory of Open Access Journals (Sweden)

    Camilla Dal Bosco

    2014-03-01

    Full Text Available To evaluate the influence of the critical level of skin aging on the epidermal functional state of adult skin, an improved analytical method based on skin surface energy measurement (TVS modeling) was developed. Tenskinmetric measurements were carried out non-invasively under controlled conditions by the contact angle method, using only a water drop as the reference standard liquid. Adult skin was monitored by the TVS Observatory according to a specific and controlled thermal protocol (the Camianta protocol) in use at the interconnected “Mamma Margherita Terme spa” of Terme Euganee. From June to November 2013, the surface free energy and the epidermal hydration level of adult skin were evaluated for 265 male and 569 female adult volunteers (51–90 years of age) on arrival and when they departed 2 weeks later. Measurements were carried out with a sensitivity of 0.1 mN/m. High test compliance was obtained (93.2% of all guests). The high sensitivity and discrimination power of tenskinmetry, combined with the thermal Camianta protocol, demonstrate the possibility of evaluating, at the baseline level, the surface energy changes and skin reactivity that occur in adult skin.

  6. Surface faulting along the inland Itozawa normal fault (eastern Japan) and relation to the 2011 Tohoku-oki megathrust earthquake

    Science.gov (United States)

    Ferry, Matthieu; Tsutsumi, Hiroyuki; Meghraoui, Mustapha; Toda, Shinji

    2013-04-01

    The 11 March 2011 Mw 9 Tohoku-oki earthquake ruptured ~500 km length of the Japan Trench along the coast of eastern Japan and significantly impacted the stress regime within the crust. The resulting change in seismicity over the Japan mainland was exhibited by the 11 April 2011 Mw 6.6 Iwaki earthquake that ruptured the Itozawa and Yunodake faults. Trending NNW and NW, respectively, these 70-80° W-dipping faults bound the Iwaki basin of Neogene age and have been reactivated simultaneously both along 15-km-long sections. Here, we present initial results from a paleoseismic excavation performed across the Itozawa fault within the Tsunagi Valley at the northern third of the observed surface rupture. At the Tsunagi site, the rupture affects a rice paddy, which provides an ideally horizontal initial state to collect detailed and accurate measurements. The surface break is composed of a continuous 30-to-40-cm-wide purely extensional crack that separates the uplifted block from a gently dipping 1-to-2-m-wide strip affected by right-stepping en-echelon cracks and locally bounded by a ~0.1-m-high reverse scarplet. Total station across-fault topographic profiles indicate the pre-earthquake ground surface was vertically deformed by ~0.6 m while direct field examinations reveal that well-defined rice paddy limits have been left-laterally offset by ~0.1 m. The 12-m-long, 3.5-m-deep trench exposes the 30-to-40-cm-thick cultivated soil overlaying a 1-m-thick red to yellow silt unit, a 2-m-thick alluvial gravel unit and a basal 0.1-1-m-thick organic-rich silt unit. Deformation associated to the 2011 rupture illustrates down-dip movement along a near-vertical fault with a well-expressed bending moment at the surface and generalized warping. On the north wall, the intermediate gravel unit displays a deformation pattern similar to granular flow with only minor discrete faulting and no splay to be continuously followed from the main fault to the surface. On the south wall, warping

  7. Development of the apparatus for measuring magnetic properties of electrical steel sheets in arbitrary directions under compressive stress normal to their surface

    Directory of Open Access Journals (Sweden)

    Yoshitaka Maeda

    2017-05-01

    Full Text Available In designing motors, one must grasp the magnetic properties of electrical steel sheets considering the actual conditions in motors. Especially important is grasping the stress dependence of magnetic power loss. This paper describes a newly developed apparatus to measure two-dimensional (2-D) magnetic properties (properties under arbitrary alternating and rotating flux conditions) of electrical steel sheets under compressive stress normal to the sheet surface. The apparatus has a 2-D magnetic excitation circuit to generate magnetic fields in arbitrary directions in the evaluation area. It also has a pressing unit to apply compressive stress normal to the sheet surface. During measurement, it is important to apply uniform stress throughout the evaluation area. Therefore, we have developed a new flux density sensor using the needle probe method. It is composed of thin copper foils sputtered on electrical steel sheets. By using this sensor, the stress can be applied to the surface of the specimen without interference from the sensor. This paper describes the details of the newly developed apparatus with this sensor, and measurement results of iron loss obtained using it are shown.

  8. Development of the apparatus for measuring magnetic properties of electrical steel sheets in arbitrary directions under compressive stress normal to their surface

    Science.gov (United States)

    Maeda, Yoshitaka; Urata, Shinya; Nakai, Hideo; Takeuchi, Yuuya; Yun, Kyyoul; Yanase, Shunji; Okazaki, Yasuo

    2017-05-01

In designing motors, one must grasp the magnetic properties of electrical steel sheets considering actual conditions in motors. Especially important is grasping the stress dependence of magnetic power loss. This paper describes a newly developed apparatus to measure two-dimensional (2-D) magnetic properties (properties under the arbitrary alternating and the rotating flux conditions) of electrical steel sheets under compressive stress normal to the sheet surface. The apparatus has a 2-D magnetic excitation circuit to generate magnetic fields in arbitrary directions in the evaluation area. It also has a pressing unit to apply compressive stress normal to the sheet surface. During measurement, it is important to apply uniform stress throughout the evaluation area. Therefore, we have developed a new flux density sensor using the needle probe method. It is composed of thin copper foils sputtered on electrical steel sheets. By using this sensor, the stress can be applied to the surface of the specimen without influence from the sensor. This paper describes the details of the newly developed apparatus with this sensor, and measurement results of iron loss obtained using it are shown.

  9. Silver nanoparticle based surface enhanced Raman scattering spectroscopy of diabetic and normal rat pancreatic tissue under near-infrared laser excitation

    International Nuclear Information System (INIS)

    Huang, H; Shi, H; Chen, W; Yu, Y; Lin, D; Xu, Q; Feng, S; Lin, J; Huang, Z; Li, Y; Chen, R

    2013-01-01

This paper presents the use of high spatial resolution silver nanoparticle based near-infrared surface enhanced Raman scattering (SERS) from rat pancreatic tissue to obtain biochemical information about the tissue. A high quality SERS signal from a mixture of pancreatic tissues and silver nanoparticles can be obtained within 10 s using a Renishaw micro-Raman system. Prominent SERS bands of pancreatic tissue were assigned to known molecular vibrations, such as the vibrations of DNA bases, RNA bases, proteins and lipids. Different tissue structures of diabetic and normal rat pancreatic tissues have characteristic features in SERS spectra. This exploratory study demonstrated great potential for using SERS imaging to distinguish diabetic and normal pancreatic tissues on frozen sections without using dye labeling of functionalized binding sites. (letter)

  10. Plane parallel radiance transport for global illumination in vegetation

    Energy Technology Data Exchange (ETDEWEB)

    Max, N.; Mobley, C.; Keating, B.; Wu, E.H.

    1997-01-05

    This paper applies plane parallel radiance transport techniques to scattering from vegetation. The leaves, stems, and branches are represented as a volume density of scattering surfaces, depending only on height and the vertical component of the surface normal. Ordinary differential equations are written for the multiply scattered radiance as a function of the height above the ground, with the sky radiance and ground reflectance as boundary conditions. They are solved using a two-pass integration scheme to unify the two-point boundary conditions, and Fourier series for the dependence on the azimuthal angle. The resulting radiance distribution is used to precompute diffuse and specular `ambient` shading tables, as a function of height and surface normal, to be used in rendering, together with a z-buffer shadow algorithm for direct solar illumination.
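
A toy version of the plane-parallel idea can help fix the picture. The sketch below is a two-stream simplification with isotropic scattering chosen for illustration (not the multi-angle Fourier-series scheme of the paper): the downward and upward diffuse fluxes obey coupled ODEs in optical depth, the sky radiance and ground reflectance supply the two boundary conditions, and alternating downward/upward integration passes couple them. All function names and parameters are assumptions of this example.

```python
import numpy as np

# Minimal two-stream sketch of plane-parallel radiance transport in a canopy.
# Assumptions (not from the paper): isotropic scattering, single-scattering
# albedo `omega`, optical depth tau increasing downward, Euler integration,
# and a simple source iteration to couple the two boundary conditions.

def two_stream(tau_max=3.0, n=300, omega=0.7, sky=1.0, ground_albedo=0.2,
               n_iter=200, tol=1e-10):
    dtau = tau_max / n
    e_down = np.zeros(n + 1)   # downward diffuse flux vs. optical depth
    e_up = np.zeros(n + 1)     # upward diffuse flux vs. optical depth
    for _ in range(n_iter):
        prev = e_up.copy()
        source = 0.5 * omega * (e_down + e_up)  # isotropic scattering source
        # Downward pass: top boundary condition is the sky radiance.
        e_down[0] = sky
        for i in range(n):
            e_down[i + 1] = e_down[i] + dtau * (-e_down[i] + source[i])
        source = 0.5 * omega * (e_down + e_up)
        # Upward pass: bottom boundary condition is ground reflection.
        e_up[n] = ground_albedo * e_down[n]
        for i in range(n - 1, -1, -1):
            e_up[i] = e_up[i + 1] - dtau * (e_up[i + 1] - source[i + 1])
        if np.max(np.abs(e_up - prev)) < tol:
            break
    return e_down, e_up

if __name__ == "__main__":
    down, up = two_stream()
    print(f"transmitted fraction: {down[-1]:.3f}, reflected fraction: {up[0]:.3f}")
```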

  11. nth roots of normal contractions

    International Nuclear Information System (INIS)

    Duggal, B.P.

    1992-07-01

Given a complex separable Hilbert space H and a contraction A on H such that A^n, for some integer n ≥ 2, is normal, it is shown that if the defect operator D_A = (1 - A*A)^(1/2) is of the Hilbert-Schmidt class, then A is similar to a normal contraction, either A or A^2 is normal, and if A^2 is normal (but A is not) then there is a normal contraction N and a positive definite contraction P of trace class such that ||A - N||_1 = 1/2 ||P + P||_1 (where ||·||_1 denotes the trace norm). If T is a compact contraction such that its characteristic function admits a scalar factor, if T = A^n for some integer n ≥ 2 and contraction A with simple eigenvalues, and if both T and A satisfy a ''reductive property'', then A is a compact normal contraction. (author). 16 refs

  12. Normalized Rotational Multiple Yield Surface Framework (NRMYSF) stress-strain curve prediction method based on small strain triaxial test data on undisturbed Auckland residual clay soils

    Science.gov (United States)

    Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.

    2018-04-01

Small-strain triaxial measurement is considered to be significantly more accurate than external strain measurement using the conventional method, owing to the systematic errors normally associated with the latter. Three submersible miniature linear variable differential transducers (LVDTs) were mounted on yokes clamped directly onto the soil sample, spaced equally at 120° from one another. The setup, using a 0.4 N resolution load cell and a 16-bit AD converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small-strain local measurement data was performed using the new Normalized Rotational Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small-strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent data prediction and a significant improvement toward more reliable prediction of soil strength that can reduce the cost and time of experimental laboratory testing.

  13. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  14. Histochemical evidence for the differential surface labeling, uptake, and intracellular transport of a colloidal gold-labeled insulin complex by normal human blood cells.

    Science.gov (United States)

    Ackerman, G A; Wolken, K W

    1981-10-01

    A colloidal gold-labeled insulin-bovine serum albumin (GIA) reagent has been developed for the ultrastructural visualization of insulin binding sites on the cell surface and for tracing the pathway of intracellular insulin translocation. When applied to normal human blood cells, it was demonstrated by both visual inspection and quantitative analysis that the extent of surface labeling, as well as the rate and degree of internalization of the insulin complex, was directly related to cell type. Further, the pathway of insulin (GIA) transport via round vesicles and by tubulo-vesicles and saccules and its subsequent fate in the hemic cells was also related to cell variety. Monocytes followed by neutrophils bound the greatest amount of labeled insulin. The majority of lymphocytes bound and internalized little GIA, however, between 5-10% of the lymphocytes were found to bind considerable quantities of GIA. Erythrocytes rarely bound the labeled insulin complex, while platelets were noted to sequester large quantities of the GIA within their extracellular canalicular system. GIA uptake by the various types of leukocytic cells appeared to occur primarily by micropinocytosis and by the direct opening of cytoplasmic tubulo-vesicles and saccules onto the cell surface in regions directly underlying surface-bound GIA. Control procedures, viz., competitive inhibition of GIA labeling using an excess of unlabeled insulin in the incubation medium, preincubation of the GIA reagent with an antibody directed toward porcine insulin, and the incorporation of 125I-insulin into the GIA reagent, indicated the specificity and selectivity of the GIA histochemical procedure for the localization of insulin binding sites.

  15. Histochemical evidence for the differential surface labeling, uptake, and intracellular transport of a colloidal gold-labeled insulin complex by normal human blood cells

    International Nuclear Information System (INIS)

    Ackerman, G.A.; Wolken, K.W.

    1981-01-01

    A colloidal gold-labeled insulin-bovine serum albumin (GIA) reagent has been developed for the ultrastructural visualization of insulin binding sites on the cell surface and for tracing the pathway of intracellular insulin translocation. When applied to normal human blood cells, it was demonstrated by both visual inspection and quantitative analysis that the extent of surface labeling, as well as the rate and degree of internalization of the insulin complex, was directly related to cell type. Further, the pathway of insulin (GIA) transport via round vesicles and by tubulo-vesicles and saccules and its subsequent fate in the hemic cells was also related to cell variety. Monocytes followed by neutrophils bound the greatest amount of labeled insulin. The majority of lymphocytes bound and internalized little GIA, however, between 5-10% of the lymphocytes were found to bind considerable quantities of GIA. Erythrocytes rarely bound the labeled insulin complex, while platelets were noted to sequester large quantities of the GIA within their extracellular canalicular system. GIA uptake by the various types of leukocytic cells appeared to occur primarily by micropinocytosis and by the direct opening of cytoplasmic tubulo-vesicles and saccules onto the cell surface in regions directly underlying surface-bound GIA. Control procedures, viz., competitive inhibition of GIA labeling using an excess of unlabeled insulin in the incubation medium, preincubation of the GIA reagent with an antibody directed toward porcine insulin, and the incorporation of 125I-insulin into the GIA reagent, indicated the specificity and selectivity of the GIA histochemical procedure for the localization of insulin binding sites

  16. Influence of surface-normal ground acceleration on the initiation of the Jih-Feng-Erh-Shan landslide during the 1999 Chi-Chi, Taiwan, earthquake

    Science.gov (United States)

    Huang, C.-C.; Lee, Y.-H.; Liu, Huaibao P.; Keefer, D.K.; Jibson, R.W.

    2001-01-01

    The 1999 Chi-Chi, Taiwan, earthquake triggered numerous landslides throughout a large area in the Central Range, to the east, southeast, and south of the fault rupture. Among them are two large rock avalanches, at Tsaoling and at Jih-Feng-Erh-Shan. At Jih-Feng-Erh-Shan, the entire thickness (30-50 m) of the Miocene Changhukeng Shale over an area of 1 km2 slid down its bedding plane for a distance of about 1 km. Initial movement of the landslide was nearly purely translational. We investigate the effect of surface-normal acceleration on the initiation of the Jih-Feng-Erh-Shan landslide using a block slide model. We show that this acceleration, currently not considered by dynamic slope-stability analysis methods, significantly influences the initiation of the landslide.
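
A hedged back-of-the-envelope block-slide criterion shows why the surface-normal component matters. The notation and the dry, cohesionless simplifications below are choices made for this illustration, not the authors' exact formulation.

```latex
% Rigid block on a plane dipping at angle \alpha, friction angle \phi, with
% slope-parallel ground acceleration a_p and surface-normal acceleration a_n
% (taken positive away from the slope); dry, cohesionless contact is assumed.
\[
  N = m\,(g\cos\alpha - a_n), \qquad T = m\,(g\sin\alpha + a_p).
\]
% Sliding initiates when the driving force reaches the available friction:
\[
  T \ge N\tan\phi
  \quad\Longleftrightarrow\quad
  g\sin\alpha + a_p \;\ge\; (g\cos\alpha - a_n)\tan\phi .
\]
% A positive surface-normal acceleration therefore reduces the frictional
% resistance and lowers the slope-parallel shaking needed to start the slide,
% an effect that analyses ignoring a_n would miss.
```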

  17. Growth and domain structure of YBa2Cu3Ox films on neodymium gallate substrates with deviation of surface normal from [110] NdGaO3

    International Nuclear Information System (INIS)

    Bdikin, I.K.; Mozhaev, P.B.; Ovsyannikov, G.A.; Komissinskij, F.V.; Kotelyanskij, I.M.; Raksha, E.I.

    2001-01-01

The growth, crystalline structure, and electrophysical properties of YBa2Cu3Ox (YBCO) epitaxial films grown on NdGaO3 (NGO) substrates, with the substrate surface normal deviated from [110] by 5-26.6 deg about [001], were investigated with and without a CeO2 epitaxial sublayer. The orientation of YBCO epitaxial films grown on these substrates is shown to be governed by the occurrence of symmetrically equivalent directions in the substrate and in the CeO2 layer, as well as by the film deposition rate. At high deposition rates, YBCO films on a CeO2 sublayer grow in the [001] orientation independently of the orientation of the substrate and sublayer. It was determined that, with increasing deviation angle of the substrate plane from (110) NGO, twinning of one or of both twin complexes in YBCO may be suppressed [ru]

  18. Constructing Fluorine-Free and Cost-Effective Superhydrophobic Surface with Normal-Alcohol-Modified Hydrophobic SiO2 Nanoparticles.

    Science.gov (United States)

    Ye, Hui; Zhu, Liqun; Li, Weiping; Liu, Huicong; Chen, Haining

    2017-01-11

Superhydrophobic coatings have drawn much attention in recent years for their wide potential applications. However, a simple, cost-effective, and environmentally friendly approach is still lacking. Herein, a promising approach using nonhazardous chemicals was proposed, in which multiple hydrophobic-functionalized silica nanoparticles (SiO2 NPs) were first prepared as the core component, through the efficient reaction between amino-group-containing SiO2 NPs and the isocyanate-containing hydrophobic surface modifiers synthesized from normal alcohols, followed by simply spraying onto various substrates for superhydrophobic functionalization. Furthermore, to further improve the mechanical durability, an organic-inorganic composite superhydrophobic coating was fabricated by incorporating a cross-linking agent (polyisocyanate) into the mixture of hydrophobic-functionalized SiO2 NPs and hydroxyl acrylic resin. The hybrid coating with cross-linked network structures is very stable, with excellent mechanical durability, self-cleaning properties, and corrosion resistance.

  19. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment. Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  20. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where an algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  1. Oral associated bacterial infection in horses: studies on the normal anaerobic flora from the pharyngeal tonsillar surface and its association with lower respiratory tract and paraoral infections.

    Science.gov (United States)

    Bailey, G D; Love, D N

    1991-02-15

    Two hundred and seventy bacterial isolates were obtained from the pharyngeal tonsillar surface of 12 normal horses and 98 obligatory anaerobic bacteria were characterised. Of these, 57 isolates belonging to 7 genera (Peptostreptococcus (1); Eubacterium (9); Clostridium (6); Veillonella (6); Megasphera (1); Bacteroides (28); Fusobacterium (6)) were identified, and 16 of these were identified to species level (P. anaerobius (1); E. fossor (9); C. villosum (1); B. fragilis (1); B. tectum (2); B. heparinolyticus (2)). Three hundred and twenty isolates were obtained from 23 samples from horses with lower respiratory tract (LRT) or paraoral (PO) bacterial infections. Of the 143 bacteria selected for detailed characterisation, obligate anaerobes accounted for 100 isolates, facultative anaerobes for 42 isolates and obligate aerobes for one isolate. Phenotypic characterisation separated 99 of the isolates into 14 genera. Among the obligately anaerobic species, Gram-positive cocci including P. anaerobius comprised 25% of isolates, E. fossor 11% and other Gram-positive rods (excluding Clostridium sp.) 18% of isolates. The Gram-negative rods comprised B. fragilis 5%, B. heparinolyticus 5%, asaccharolytic pigmented Bacteroides 3% and other Bacteroides 13%, while a so-far unnamed species of Fusobacterium (7%), and Gram-negative corroding rods (3%) were isolated. Among the facultatively anaerobic isolates, S. equi subsp. zooepidemicus accounted for 31% of isolates, followed by Pasteurella spp. 19%, Escherichia coli 17%, Actinomyces spp. 9%, Streptococcus spp. 9%. Incidental facultative isolates were Enterococcus spp. 2%, Enterobacter cloaceae 2%, Actinobacillus spp. 2% and Gram-negative corroding rods 5%. On the basis of the similarities (as determined by DNA hybridization data and/or phenotypic characteristics) of some of the bacterial species (e.g. E. fossor and B. heparinolyticus) isolated from both the normal pharyngeal tonsillar surfaces and LRT and PO diseases of horses, it

  2. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  3. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  4. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  5. Comparison of Placido disc and Scheimpflug image-derived topography-guided excimer laser surface normalization combined with higher fluence CXL: the Athens Protocol, in progressive keratoconus

    Directory of Open Access Journals (Sweden)

    Kanellopoulos AJ

    2013-07-01

Full Text Available Anastasios John Kanellopoulos,1,2 George Asimellis1 1Laservision.gr Eye Institute, Athens, Greece; 2New York University School of Medicine, Department of Ophthalmology, NY, NY, USA. Background: The purpose of this study was to compare the safety and efficacy of two alternative corneal topography data sources used in topography-guided excimer laser normalization, combined with corneal collagen cross-linking in the management of keratoconus using the Athens protocol, ie, a Placido disc imaging device and a Scheimpflug imaging device. Methods: A total of 181 consecutive patients with keratoconus who underwent the Athens protocol between 2008 and 2011 were studied preoperatively and at months 1, 3, 6, and 12 postoperatively for visual acuity, keratometry, and anterior surface corneal irregularity indices. Two groups were formed, depending on the primary source used for topoguided photoablation, ie, group A (Placido disc) and group B (Scheimpflug rotating camera). One-year changes in visual acuity, keratometry, and seven anterior surface corneal irregularity indices were studied in each group. Results: Changes in visual acuity, expressed as the difference between postoperative and preoperative corrected distance visual acuity, were +0.12 ± 0.20 (range +0.60 to -0.45) for group A and +0.19 ± 0.20 (range +0.75 to -0.30) for group B. In group A, K1 (flat keratometry) changed from 45.202 ± 3.782 D to 43.022 ± 3.819 D, indicating a flattening of -2.18 D, and K2 (steep keratometry) changed from 48.670 ± 4.066 D to 45.865 ± 4.794 D, indicating a flattening of -2.805 D. In group B, K1 (flat keratometry) changed from 46.213 ± 4.082 D to 43.190 ± 4.398 D, indicating a flattening of -3.023 D, and K2 (steep keratometry) changed from 50.774 ± 5.210 D to 46.380 ± 5.006 D, indicating a flattening of -4.394 D. For group A, the index of surface variance decreased to -5.07% and the index of height decentration to -26.81%. In group B, the index of surface variance

  6. Clarifying Normalization

    Science.gov (United States)

    Carpenter, Donald A.

    2008-01-01

    Confusion exists among database textbooks as to the goal of normalization as well as to which normal form a designer should aspire. This article discusses such discrepancies with the intention of simplifying normalization for both teacher and student. This author's industry and classroom experiences indicate such simplification yields quicker…

  7. An Algorithm for Parallel Sn Sweeps on Unstructured Meshes

    International Nuclear Information System (INIS)

    Pautz, Shawn D.

    2002-01-01

A new algorithm for performing parallel Sn sweeps on unstructured meshes is developed. The algorithm uses a low-complexity list ordering heuristic to determine a sweep ordering on any partitioned mesh. For typical problems and with 'normal' mesh partitionings, nearly linear speedups on up to 126 processors are observed. This is an important and desirable result, since although analyses of structured meshes indicate that parallel sweeps will not scale with normal partitioning approaches, no severe asymptotic degradation in the parallel efficiency is observed with modest (≤100) levels of parallelism. This result is a fundamental step in the development of efficient parallel Sn methods
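
The flavour of such a list-ordering heuristic can be sketched generically (this is an illustrative priority-driven topological ordering, not the specific heuristic of the paper, and all names are hypothetical): for a given sweep direction, each cell may be processed only after its upwind neighbours, and a cheap priority such as depth from the inflow boundary breaks ties.

```python
import heapq

# Hedged sketch: order cells for a transport sweep on a partitioned mesh.
# `downwind[c]` lists cells that receive information from cell c for the chosen
# sweep direction, and `n_upwind[c]` counts c's unprocessed upwind neighbours.
# A precomputed priority (here: depth from the inflow boundary) decides ties,
# which is the role a list-ordering heuristic plays in scheduling sweeps.

def sweep_order(downwind, priority):
    n_upwind = {c: 0 for c in downwind}
    for c, outs in downwind.items():
        for d in outs:
            n_upwind[d] += 1
    ready = [(priority[c], c) for c, k in n_upwind.items() if k == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, c = heapq.heappop(ready)
        order.append(c)
        for d in downwind[c]:
            n_upwind[d] -= 1
            if n_upwind[d] == 0:
                heapq.heappush(ready, (priority[d], d))
    return order

if __name__ == "__main__":
    # Tiny example: 4 cells, sweep information flows 0 -> {1, 2} -> 3.
    downwind = {0: [1, 2], 1: [3], 2: [3], 3: []}
    priority = {0: 0, 1: 1, 2: 1, 3: 2}
    print(sweep_order(downwind, priority))   # e.g. [0, 1, 2, 3]
```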

  8. Insights into the Hendra virus NTAIL-XD complex: Evidence for a parallel organization of the helical MoRE at the XD surface stabilized by a combination of hydrophobic and polar interactions.

    Science.gov (United States)

    Erales, Jenny; Beltrandi, Matilde; Roche, Jennifer; Maté, Maria; Longhi, Sonia

    2015-08-01

The Hendra virus is a member of the Henipavirus genus within the Paramyxoviridae family. The nucleoprotein, which consists of a structured core and of a C-terminal intrinsically disordered domain (N(TAIL)), encapsidates the viral genome within a helical nucleocapsid. N(TAIL) partly protrudes from the surface of the nucleocapsid, being thus capable of interacting with the C-terminal X domain (XD) of the viral phosphoprotein. Interaction with XD implies a molecular recognition element (MoRE) that is located within N(TAIL) residues 470-490, and that undergoes α-helical folding. The MoRE has been proposed to be embedded in the hydrophobic groove delimited by helices α2 and α3 of XD, although experimental data could not discriminate between a parallel and an antiparallel orientation of the MoRE. Previous studies also showed that if the binding interface is enriched in hydrophobic residues, charged residues located close to the interface might play a role in complex formation. Here, we targeted two acidic and two basic residues within XD and N(TAIL) for site-directed mutagenesis. ITC studies showed that electrostatics plays a crucial role in complex formation and pointed to a parallel orientation of the MoRE as more likely. Further support for a parallel orientation was afforded by SAXS studies that made use of two chimeric constructs in which XD and the MoRE were covalently linked to each other. Altogether, these studies unveiled the multiparametric nature of the interactions established within this complex and contribute to shedding light on the molecular features of protein interfaces involving intrinsically disordered regions. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.
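
Of the three molecular dynamics decompositions mentioned, the spatial decomposition is the simplest to sketch: each processor owns a slab of the simulation box and the atoms currently inside it. The toy partitioning below is illustrative only; real codes add ghost/halo regions, neighbour communication and load balancing, none of which the abstract details.

```python
import numpy as np

# Hedged sketch of a spatial decomposition for parallel molecular dynamics:
# the periodic box is cut into equal slabs along x, and each (hypothetical)
# rank owns the atoms whose coordinates currently fall inside its slab.

def spatial_decomposition(positions, box_length, n_ranks):
    slab = box_length / n_ranks
    owner = np.floor((positions[:, 0] % box_length) / slab).astype(int)
    owner = np.clip(owner, 0, n_ranks - 1)          # guard against rounding
    return {r: np.where(owner == r)[0] for r in range(n_ranks)}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 10.0, size=(1000, 3))    # 1000 atoms in a 10x10x10 box
    parts = spatial_decomposition(pos, box_length=10.0, n_ranks=4)
    print({r: len(idx) for r, idx in parts.items()})
```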

  10. Normal Raman and surface enhanced Raman spectroscopic experiments with thin layer chromatography spots of essential amino acids using different laser excitation sources

    Science.gov (United States)

    István, Krisztina; Keresztury, Gábor; Szép, Andrea

    2003-06-01

    A comparative study of the feasibility and efficiency of Raman spectroscopic detection of thin layer chromatography (TLC) spots of some weak Raman scatterers (essential amino acids, namely, glycine and L-forms of alanine, serine, valine, proline, hydroxyproline, and phenylalanine) was carried out using four different visible and near-infrared (NIR) laser radiations with wavelengths of 532, 633, 785, and 1064 nm. Three types of commercial TLC plates were tested and the possibility of inducing surface enhanced Raman scattering (SERS) by means of Ag-sol was also investigated. The spectra obtained from spotted analytes adsorbed on TLC plates were of very different quality strongly depending on the excitation wavelength, the wetness of the samples, and the compounds examined. The best results were obtained with the simple silica TLC plate, and it has been established that the longest wavelength (lowest energy) NIR excitation of a Nd:YAG laser is definitely more suitable for generating normal Raman scattering of analyte spots than any of the visible radiations. Concerning SERS with application of Ag-sol to the TLC spots, 1-3 orders of magnitude enhancement was observed with wet samples, the greatest with the 532 nm radiation and gradually smaller with the longer wavelength excitations. It is shown, however, that due to severe adsorption-induced spectral distortions and increased sensitivity to microscopic inhomogeneity of the sample, none of the SERS spectra obtained with the dispersive Raman microscope operating in the visible region were superior to the best NIR normal FT-Raman spectra, as far as sample identification is concerned.

  11. Radiographic evaluation of marginal bone levels adjacent to parallel-screw cylinder machined-neck implants and rough-surfaced microthreaded implants using digitized panoramic radiographs.

    Science.gov (United States)

    Nickenig, Hans-Joachim; Wichmann, Manfred; Schlegel, Karl Andreas; Nkenke, Emeka; Eitner, Stephan

    2009-06-01

The purpose of this split-mouth study was to compare macro- and microstructure implant surfaces at the marginal bone level during a stress-free healing period and under functional loading. From January to February 2006, 133 implants (70 rough-surfaced microthreaded implants and 63 machined-neck implants) were inserted in the mandible of 34 patients with Kennedy Class I residual dentitions and followed until February 2008. The marginal bone level was radiographically determined, using digitized panoramic radiographs, at four time points: at implant placement (baseline level), after the healing period, after 6 months of functional loading, and at the end of follow-up. The median follow-up time was 1.9 (range: 1.9-2.1) years. The machined-neck group had a mean crestal bone loss of 0.5 mm (range: 0-2.3) after the healing period, 0.8 mm after 6 months (range: 0-2.4), and 1.1 mm (range: 0-3) at the end of follow-up. The rough-surfaced microthreaded implant group had a mean bone loss of 0.1 mm (range: -0.4-2) after the healing period, 0.4 mm (range: 0-2.1) after 6 months, and 0.5 mm (range: 0-2.1) at the end of follow-up. The two implant types showed significant differences in marginal bone levels (healing period: P=0.01; end of follow-up: P …). … implants with the microthreaded design caused minimal changes in crestal bone levels during healing (stress-free) and under functional loading.

  12. Performance of iron–chromium–aluminum alloy surface coatings on Zircaloy 2 under high-temperature steam and normal BWR operating conditions

    Energy Technology Data Exchange (ETDEWEB)

    Zhong, Weicheng; Mouche, Peter A.; Han, Xiaochun [University of Illinois, Department of Nuclear, Radiological, and Plasma Engineering, Urbana, IL 61801 (United States); Heuser, Brent J., E-mail: bheuser@illinois.edu [University of Illinois, Department of Nuclear, Radiological, and Plasma Engineering, Urbana, IL 61801 (United States); Mandapaka, Kiran K.; Was, Gary S. [University of Michigan, Department of Nuclear Engineering and Radiological Sciences, Ann Arbor, MI 48109 (United States)

    2016-03-15

    Iron-chromium-aluminum (FeCrAl) coatings deposited on Zircaloy 2 (Zy2) and yttria-stabilized zirconia (YSZ) by magnetron sputtering have been tested with respect to oxidation weight gain in high-temperature steam. In addition, autoclave testing of FeCrAl-coated Zy2 coupons under pressure-temperature-dissolved oxygen coolant conditions representative of a boiling water reactor (BWR) environment has been performed. Four different FeCrAl compositions have been tested in 700 °C steam; compositions that promote alumina formation inhibited oxidation of the underlying Zy2. Parabolic growth kinetics of alumina on FeCrAl-coated Zy2 is quantified via elemental depth profiling. Autoclave testing under normal BWR operating conditions (288 °C, 9.5 MPa with normal water chemistry) up to 20 days demonstrates observable weight gain over uncoated Zy2 simultaneously exposed to the same environment. However, no FeCrAl film degradation was observed. The 900 °C eutectic in binary Fe–Zr is addressed with the FeCrAl-YSZ system. - Graphical abstract: Weight gain normalized to total sample surface area versus time during 700 °C steam exposure for FeCrAl samples with different composition (A) and Fe/Cr/Al:62/4/34 (B). In both cases, the responses of uncoated Zry2 (Zry2-13A and Zry2-19A) are shown for comparison. This uncoated Zry2 response shows the expected pre-transition quasi-cubic kinetic behavior and eventual breakaway (linear) kinetics. Highlights: • FeCrAl coatings deposited on Zy2 have been tested with respect to oxidation in high-temperature steam. • FeCrAl compositions promoting alumina formation inhibited oxidation of Zy2 and delay weight gain. • Autoclave testing to 20 days of coated Zy2 in a simulated BWR environment demonstrates minimal weight gain and no film degradation. • The 900 °C eutectic in binary Fe-Zr is addressed with the FeCrAl-YSZ system.

  13. Peculiarity of deuterium ions interaction with tungsten surface in the condition imitating combination of normal operation with plasma disruption in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Guseva, M.I. E-mail: martyn@nfi.kiae.ru; Vasiliev, V.I.; Gureev, V.M.; Danelyan, L.S.; Khirpunov, B.I.; Korshunov, S.N.; Kulikauskas, V.S.; Martynenko, Yu.V.; Petrov, V.B.; Strunnikov, V.N.; Stolyarova, V.G.; Zatekin, V.V.; Litnovsky, A.M

    2001-03-01

Tungsten is a candidate material for the ITER divertor. For the simulation of ITER normal operation conditions in combination with plasma disruptions, samples of various types of tungsten were exposed to both steady-state and high power pulsed deuterium plasmas. Tungsten samples were first exposed in a steady-state plasma with an ion current density ~10^21 m^-2 s^-1 up to a dose of 10^25 m^-2 at a temperature of 770 K. The energy of deuterium ions was 150 eV. The additional exposure of the samples to 10 pulses of deuterium plasma was performed in the electrodynamical plasma accelerator with an energy flux of 0.45 MJ/m^2 per pulse. Samples of four types of tungsten (W-1%La2O3, W-13I, monocrystalline W(1 1 1) and W-10%Re) were investigated. The least destruction of the surface was observed for W(1 1 1). The concentration of retained deuterium in tungsten decreased from 2.5x10^19 m^-2 to 1.07x10^19 m^-2 (for W(1 1 1)) as a result of the additional pulsed plasma irradiation. Investigation of the tungsten erosion products after the high power pulsed plasma shots was also carried out.

  14. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  15. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded ,Vandermonde ,Toeplitz ,and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  16. Birkhoff normalization

    NARCIS (Netherlands)

    Broer, H.; Hoveijn, I.; Lunter, G.; Vegter, G.

    2003-01-01

    The Birkhoff normal form procedure is a widely used tool for approximating a Hamiltonian systems by a simpler one. This chapter starts out with an introduction to Hamiltonian mechanics, followed by an explanation of the Birkhoff normal form procedure. Finally we discuss several algorithms for
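
As a hedged reminder of what the procedure produces (conventions differ between authors, and the details below are not taken from this chapter), the normal form is constructed order by order by solving a homological equation:

```latex
% Schematic of the Birkhoff normal form procedure. Near an elliptic equilibrium
% the Hamiltonian is expanded as
\[
  H = H_2 + H_3 + H_4 + \cdots , \qquad
  H_2 = \sum_j \frac{\omega_j}{2}\left(p_j^2 + q_j^2\right).
\]
% At each order k >= 3 one removes as much of H_k as possible by a canonical
% change of variables with generating function G_k solving the homological equation
\[
  \{H_2, G_k\} = H_k - Z_k ,
\]
% where Z_k retains only the resonant terms that Poisson-commute with H_2.
% The truncated Hamiltonian H_2 + Z_3 + \cdots + Z_N is the "simpler" system
% the approximation refers to.
```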

  17. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of ... in the optimal O(psortN + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and psortN is the parallel I/O complexity of sorting N elements using P processors....

  18. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
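
A SENSE-type unfolding can be made concrete with a small sketch: for twofold undersampling, each aliased pixel is a coil-sensitivity-weighted sum of two true pixels half a field of view apart, so a tiny least-squares solve per pixel pair recovers the image. The coil sensitivities, image size and noise-free forward model below are simplifying assumptions of the example, not a clinical reconstruction.

```python
import numpy as np

# Hedged sketch of SENSE unfolding for acceleration factor R = 2.
# For each aliased pixel the measured coil values are
#   a_c(y, x) = S_c(y, x) * f(y, x) + S_c(y + N/2, x) * f(y + N/2, x),
# so the two true pixel values follow from a per-pixel least-squares solve.

def sense_unfold_r2(aliased, sens):
    """aliased: (n_coils, N/2, N) aliased coil images; sens: (n_coils, N, N)."""
    n_coils, n_half, n_x = aliased.shape
    recon = np.zeros((2 * n_half, n_x), dtype=aliased.dtype)
    for y in range(n_half):
        for x in range(n_x):
            # Encoding matrix: columns are the two pixels folded onto (y, x).
            E = np.stack([sens[:, y, x], sens[:, y + n_half, x]], axis=1)
            sol, *_ = np.linalg.lstsq(E, aliased[:, y, x], rcond=None)
            recon[y, x], recon[y + n_half, x] = sol
    return recon

if __name__ == "__main__":
    N, n_coils = 64, 4
    rng = np.random.default_rng(1)
    truth = rng.random((N, N))
    sens = rng.random((n_coils, N, N)) + 0.1           # toy coil sensitivities
    aliased = sens[:, :N // 2, :] * truth[:N // 2, :] \
            + sens[:, N // 2:, :] * truth[N // 2:, :]  # fold the field of view
    recon = sense_unfold_r2(aliased, sens)
    print("max reconstruction error:", np.max(np.abs(recon - truth)))
```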

  19. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
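
Of the patterns named, the prefix scan is the easiest to show compactly. The sketch below walks through a Hillis-Steele-style inclusive scan in about log2(n) rounds, written serially; on a parallel machine every update within a round is independent and would execute concurrently. It illustrates the pattern rather than reproducing code from the presentation.

```python
import numpy as np

# Hedged sketch of the inclusive prefix-scan pattern (Hillis-Steele style).
# Each round adds, for every index i, the value sitting 2^k positions to the
# left; after ceil(log2(n)) rounds every prefix sum is complete. On a parallel
# machine all updates within one round run concurrently.

def inclusive_scan(values):
    x = np.asarray(values, dtype=float).copy()
    n = len(x)
    shift = 1
    while shift < n:
        prev = x.copy()                  # read the previous round's values
        x[shift:] += prev[:-shift]       # one "parallel" round of updates
        shift *= 2
    return x

if __name__ == "__main__":
    data = [3, 1, 4, 1, 5, 9, 2, 6]
    print(inclusive_scan(data))          # [3, 4, 8, 9, 14, 23, 25, 31]
    print(np.cumsum(data))               # reference result
```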

  20. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collections of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  1. An anisotropic shear velocity model of the Earth's mantle using normal modes, body waves, surface waves and long-period waveforms

    Science.gov (United States)

    Moulik, P.; Ekström, G.

    2014-12-01

We use normal-mode splitting functions in addition to surface wave phase anomalies, body wave traveltimes and long-period waveforms to construct a 3-D model of anisotropic shear wave velocity in the Earth's mantle. Our modelling approach inverts for mantle velocity and anisotropy as well as transition-zone discontinuity topographies, and incorporates new crustal corrections for the splitting functions that are consistent with the non-linear corrections we employ for the waveforms. Our preferred anisotropic model, S362ANI+M, is an update to the earlier model S362ANI, which did not include normal-mode splitting functions in its derivation. The new model has stronger isotropic velocity anomalies in the transition zone and slightly smaller anomalies in the lowermost mantle, as compared with S362ANI. The differences in the mid- to lowermost mantle are primarily restricted to features in the Southern Hemisphere. We compare the isotropic part of S362ANI+M with other recent global tomographic models and show that the level of agreement is higher now than in the earlier generation of models, especially in the transition zone and the lower mantle. The anisotropic part of S362ANI+M is restricted to the upper 300 km in the mantle and is similar to S362ANI. When radial anisotropy is allowed throughout the mantle, large-scale anisotropic patterns are observed in the lowermost mantle with vSV > vSH beneath Africa and South Pacific and vSH > vSV beneath several circum-Pacific regions. The transition zone exhibits localized anisotropic anomalies of ~3 per cent vSH > vSV beneath North America and the Northwest Pacific and ~2 per cent vSV > vSH beneath South America. However, small improvements in fits to the data on adding anisotropy at depth leave the question open on whether large-scale radial anisotropy is required in the transition zone and in the lower mantle. We demonstrate the potential of mode-splitting data in reducing the trade-offs between isotropic velocity and

  2. Lipoxin A4 stimulates calcium-activated chloride currents and increases airway surface liquid height in normal and cystic fibrosis airway epithelia.

    LENUS (Irish Health Repository)

    2012-01-01

Cystic Fibrosis (CF) is a genetic disease characterised by a deficit in epithelial Cl(-) secretion which in the lung leads to airway dehydration and a reduced Airway Surface Liquid (ASL) height. The endogenous lipoxin LXA(4) is a member of the newly identified eicosanoids playing a key role in ending the inflammatory process. Levels of LXA(4) are reported to be decreased in the airways of patients with CF. We have previously shown that in normal human bronchial epithelial cells, LXA(4) produced a rapid and transient increase in intracellular Ca(2+). We have investigated the effect of LXA(4) on Cl(-) secretion and the functional consequences on ASL generation in bronchial epithelial cells obtained from CF and non-CF patient biopsies and in bronchial epithelial cell lines. We found that LXA(4) stimulated a rapid intracellular Ca(2+) increase in all of the different CF bronchial epithelial cells tested. In non-CF and CF bronchial epithelia, LXA(4) stimulated whole-cell Cl(-) currents which were inhibited by NPPB (calcium-activated Cl(-) channel inhibitor), BAPTA-AM (chelator of intracellular Ca(2+)) but not by CFTRinh-172 (CFTR inhibitor). We found, using confocal imaging, that LXA(4) increased the ASL height in non-CF and in CF airway bronchial epithelia. The LXA(4) effect on ASL height was sensitive to bumetanide, an inhibitor of transepithelial Cl(-) secretion. The LXA(4) stimulation of intracellular Ca(2+), whole-cell Cl(-) currents, conductances and ASL height were inhibited by Boc-2, a specific antagonist of the ALX/FPR2 receptor. Our results provide, for the first time, evidence for a novel role of LXA(4) in the stimulation of intracellular Ca(2+) signalling leading to Ca(2+)-activated Cl(-) secretion and enhanced ASL height in non-CF and CF bronchial epithelia.

  3. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  4. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and relative to scalar calculations it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively on the CRAY C-90

  5. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  6. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

An account of the Caltech Concurrent Computation Program (C^3P), a five year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C^3P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C^3P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  7. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
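
The segmented flavour of a parallel sieve is easy to sketch (this is a generic worker-pool version, not the hypercube implementation described above): the base primes up to sqrt(N) are found serially, and each worker then strikes their multiples from its own block of the range independently.

```python
from math import isqrt
from multiprocessing import Pool

# Hedged sketch of a parallel segmented Sieve of Eratosthenes: base primes up
# to sqrt(N) are computed serially, then each worker sieves its own segment
# independently, which is what lets the decomposition scale across processors.

def base_primes(limit):
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, isqrt(limit) + 1):
        if flags[p]:
            flags[p * p::p] = bytearray(len(flags[p * p::p]))
    return [i for i, f in enumerate(flags) if f]

def sieve_segment(args):
    lo, hi, primes = args                      # half-open segment [lo, hi)
    flags = bytearray([1]) * (hi - lo)
    for p in primes:
        start = max(p * p, ((lo + p - 1) // p) * p)
        flags[start - lo::p] = bytearray(len(flags[start - lo::p]))
    return [lo + i for i, f in enumerate(flags) if f and lo + i > 1]

def parallel_sieve(n, workers=4):
    primes = base_primes(isqrt(n))
    step = (n - 1) // workers + 1
    segments = [(lo, min(lo + step, n + 1), primes) for lo in range(2, n + 1, step)]
    with Pool(workers) as pool:
        return [p for chunk in pool.map(sieve_segment, segments) for p in chunk]

if __name__ == "__main__":
    print(parallel_sieve(100))   # primes up to 100
```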

  8. Surface aggregation of Candida albicans on glass in the absence and presence of adhering Streptococcus gordonii in a parallel-plate flow chamber : A surface thermodynamical analysis based on acid-base interactions

    NARCIS (Netherlands)

    Millsap, KW; Busscher, HJ; van der Mei, HC; Bos, R.R.M.

    1999-01-01

    Adhesive interactions between yeasts and bacteria are important in the maintenance of infectious mixed biofilms on natural and biomaterial surfaces in the human body. In this study, the extended DLVO (Derjaguin-Landau-Verwey-Overbeek) approach has been applied to explain adhesive interactions

  9. General Rotational Surfaces in Pseudo-Euclidean 4-Space with Neutral Metric

    OpenAIRE

    Aleksieva, Yana; Milousheva, Velichka; Turgay, Nurettin Cenk

    2016-01-01

    We define general rotational surfaces of elliptic and hyperbolic type in the pseudo-Euclidean 4-space with neutral metric which are analogous to the general rotational surfaces of C. Moore in the Euclidean 4-space. We study Lorentz general rotational surfaces with plane meridian curves and give the complete classification of minimal general rotational surfaces of elliptic and hyperbolic type, general rotational surfaces with parallel normalized mean curvature vector field, flat general rotati...

  10. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  11. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  12. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)

  13. Malware Normalization

    OpenAIRE

    Christodorescu, Mihai; Kinder, Johannes; Jha, Somesh; Katzenbeisser, Stefan; Veith, Helmut

    2005-01-01

    Malware is code designed for a malicious purpose, such as obtaining root privilege on a host. A malware detector identifies malware and thus prevents it from adversely affecting a host. In order to evade detection by malware detectors, malware writers use various obfuscation techniques to transform their malware. There is strong evidence that commercial malware detectors are susceptible to these evasion tactics. In this paper, we describe the design and implementation of a malware normalizer ...

  14. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  15. Normal accidents

    International Nuclear Information System (INIS)

    Perrow, C.

    1989-01-01

    The author has chosen numerous concrete examples to illustrate the hazardousness inherent in high-risk technologies. Starting with the TMI reactor accident in 1979, he shows that it is not only the nuclear energy sector that bears the risk of 'normal accidents', but also quite a number of other technologies and industrial sectors, or research fields. The author refers to the petrochemical industry, shipping, air traffic, large dams, mining activities, and genetic engineering, showing that due to the complexity of the systems and their manifold, rapidly interacting processes, accidents happen that cannot be thoroughly calculated, and hence are unavoidable. (orig./HP) [de

  16. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
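
The communication step described here can be caricatured in a few lines: independent walkers sample with the current multicanonical weights, their energy histograms are pooled, and the weights are updated from the pooled histogram before the next round. The toy double-well model, the simple W(E) -> W(E)/H(E) update and all parameters below are generic choices for illustration, not the authors' GPU implementation.

```python
import numpy as np

# Hedged sketch of parallel multicanonical weight iteration on a toy model:
# independent walkers sample a double-well energy E(x) = (x^2 - 1)^2 with the
# current weights W(E) = exp(log_w), their histograms are pooled, and the
# weights are updated with the basic rule W(E) <- W(E) / H(E) each round.

def energy(x):
    return (x * x - 1.0) ** 2

def bin_of(e, bins):
    return int(np.clip(np.searchsorted(bins, e) - 1, 0, len(bins) - 2))

def run_walker(seed, log_w, bins, n_steps=20000, step=0.3):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.5, 1.5)
    hist = np.zeros(len(bins) - 1)
    b_old = bin_of(energy(x), bins)
    for _ in range(n_steps):
        x_new = x + rng.normal(0.0, step)
        if abs(x_new) > 2.0:               # hard wall keeps the toy model bounded
            hist[b_old] += 1
            continue
        b_new = bin_of(energy(x_new), bins)
        # Multicanonical acceptance: sample with probability proportional to W(E).
        if np.log(rng.random()) < log_w[b_new] - log_w[b_old]:
            x, b_old = x_new, b_new
        hist[b_old] += 1
    return hist

if __name__ == "__main__":
    bins = np.linspace(0.0, 9.0, 46)       # energy bins covering E on [-2, 2]
    log_w = np.zeros(len(bins) - 1)        # start from flat weights
    for rnd in range(5):
        # "Parallel" walkers (run serially here; each could be a GPU thread).
        hists = [run_walker(100 * rnd + s, log_w, bins) for s in range(8)]
        pooled = np.sum(hists, axis=0)
        visited = pooled > 0
        log_w[visited] -= np.log(pooled[visited])   # W(E) <- W(E) / H(E)
        log_w -= log_w.max()                        # fix arbitrary normalization
        flat = pooled[visited].min() / pooled[visited].max()
        print(f"round {rnd}: visited bins = {visited.sum()}, flatness = {flat:.3f}")
```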

  17. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin

  18. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.

  19. Reconstructing Normality

    DEFF Research Database (Denmark)

    Gildberg, Frederik Alkier; Bradley, Stephen K.; Fristed, Peter Billeskov

    2012-01-01

    Forensic psychiatry is an area of priority for the Danish Government. As the field expands, this calls for increased knowledge about mental health nursing practice, as this is part of the forensic psychiatry treatment offered. However, only sparse research exists in this area. The aim of this study...... was to investigate the characteristics of forensic mental health nursing staff interaction with forensic mental health inpatients and to explore how staff give meaning to these interactions. The project included 32 forensic mental health staff members, with over 307 hours of participant observations, 48 informal....... The intention is to establish a trusting relationship to form behaviour and perceptual-corrective care, which is characterized by staff's endeavours to change, halt, or support the patient's behaviour or perception in relation to staff's perception of normality. The intention is to support and teach the patient...

  20. Pursuing Normality

    DEFF Research Database (Denmark)

    Madsen, Louise Sofia; Handberg, Charlotte

    2018-01-01

    implying an influence on whether to participate in cancer survivorship care programs. Because of "pursuing normality," 8 of 9 participants opted out of cancer survivorship care programming due to prospects of "being cured" and perceptions of cancer survivorship care as "a continuation of the disease......BACKGROUND: The present study explored the reflections on cancer survivorship care of lymphoma survivors in active treatment. Lymphoma survivors have survivorship care needs, yet their participation in cancer survivorship care programs is still reported as low. OBJECTIVE: The aim of this study...... was to understand the reflections on cancer survivorship care of lymphoma survivors to aid the future planning of cancer survivorship care and overcome barriers to participation. METHODS: Data were generated in a hematological ward during 4 months of ethnographic fieldwork, including participant observation and 46...

  1. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  2. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  3. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were

  4. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  5. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  6. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  7. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
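
    The released code targets CUDA/Thrust, OpenMP and the Cray XMT; as a language-neutral illustration of where the parallelism lies, the sketch below parallelizes only the distance computations of the k-means++ seeding step with Python multiprocessing. The chunking scheme and function names are our own and are not part of the released package.

        import numpy as np
        from multiprocessing import Pool

        def _min_sq_dist(args):
            # Squared distance from each point in a chunk to its nearest chosen seed.
            chunk, seeds = args
            return ((chunk[:, None, :] - seeds[None, :, :]) ** 2).sum(-1).min(axis=1)

        def kmeanspp_seeds(X, k, n_workers=4, rng=None):
            rng = np.random.default_rng() if rng is None else rng
            seeds = [X[rng.integers(len(X))]]            # first seed: uniform at random
            chunks = np.array_split(X, n_workers)
            with Pool(n_workers) as pool:
                for _ in range(k - 1):
                    jobs = [(c, np.asarray(seeds)) for c in chunks]
                    # Parallel part: D^2(x) to the nearest seed, chunk by chunk.
                    d2 = np.concatenate(pool.map(_min_sq_dist, jobs))
                    seeds.append(X[rng.choice(len(X), p=d2 / d2.sum())])
            return np.asarray(seeds)

        if __name__ == "__main__":
            X = np.random.default_rng(1).standard_normal((10000, 2))
            print(kmeanspp_seeds(X, k=5).shape)          # (5, 2)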

  8. Parallel plate detectors

    International Nuclear Information System (INIS)

    Gardes, D.; Volkov, P.

    1981-01-01

    Two parallel plate avalanche counters (PPAC) are considered: a 5×3 cm² counter (timing only) and a 15×5 cm² counter (timing and position). The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters [fr]

  9. Normalization of satellite imagery

    Science.gov (United States)

    Kim, Hongsuk H.; Elman, Gregory C.

    1990-01-01

    Sets of Thematic Mapper (TM) imagery taken over the Washington, DC metropolitan area during the months of November, March and May were converted into a form of ground reflectance imagery. This conversion was accomplished by adjusting the incident sunlight and view angles and by applying a pixel-by-pixel correction for atmospheric effects. Seasonal color changes of the area can be better observed when such normalization is applied to space imagery taken in time series. In normalized imagery, the grey scale depicts variations in surface reflectance, and the tonal signature of multi-band color imagery can be directly interpreted for quantitative information about the target.
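
    A highly simplified illustration of the kind of normalization described (digital number to top-of-atmosphere reflectance with a sun-angle correction) is sketched below; the calibration constants are invented for the example, and the paper's full pixel-by-pixel atmospheric correction is not reproduced.

        import numpy as np

        def toa_reflectance(dn, gain, bias, esun, d_au, sun_elev_deg):
            # Digital numbers -> radiance -> top-of-atmosphere reflectance with a
            # sun-angle correction; a true pixel-by-pixel atmospheric correction
            # (as used in the study) is deliberately omitted here.
            radiance = gain * dn + bias
            cos_sz = np.cos(np.deg2rad(90.0 - sun_elev_deg))   # cosine of solar zenith
            return np.pi * radiance * d_au ** 2 / (esun * cos_sz)

        # Illustrative numbers only, not actual TM calibration constants.
        dn = np.random.default_rng(0).integers(10, 256, size=(100, 100))
        rho = toa_reflectance(dn, gain=0.8, bias=-2.0, esun=1550.0,
                              d_au=1.01, sun_elev_deg=35.0)
        print(float(rho.min()), float(rho.max()))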

  10. The effect of bridge exercise accompanied by the abdominal drawing-in maneuver on an unstable support surface on the lumbar stability of normal adults.

    Science.gov (United States)

    Gong, Wontae

    2015-01-01

    [Purpose] The present study sought to investigate the influence on static and dynamic lumbar stability of bridge exercise accompanied by an abdominal drawing-in maneuver (ADIM) performed on an uneven support surface. [Subjects] A total of 30 participants were divided into an experimental group (15 participants) and a control group (15 participants). [Methods] The experimental group performed bridge exercise on an unstable surface, whereas the control group performed bridge exercise on a stable surface. The respective bridge exercises were performed for 30 minutes, 3 times per week, for 6 weeks. The static lumbar stability (SLS) and dynamic lumbar stability (DLS) of both the experimental group and the control group were measured using a pressure biofeedback unit. [Results] In the comparison of the initial and final results of the experimental and control groups, only the SLS and DLS of the experimental group were found to be statistically significant. [Conclusion] The results of the present study show that when using bridge exercise to improve SLS and DLS, performing the bridge exercise accompanied by ADIM on an uneven surface is more effective than performing the exercise on a stable surface.

  11. Limbal Fibroblasts Maintain Normal Phenotype in 3D RAFT Tissue Equivalents Suggesting Potential for Safe Clinical Use in Treatment of Ocular Surface Failure.

    Science.gov (United States)

    Massie, Isobel; Dale, Sarah B; Daniels, Julie T

    2015-06-01

    Limbal epithelial stem cell deficiency can cause blindness, but transplantation of these cells on a carrier such as human amniotic membrane can restore vision. Unfortunately, clinical graft manufacture using amnion can be inconsistent. Therefore, we have developed an alternative substrate, Real Architecture for 3D Tissue (RAFT), which supports human limbal epithelial cell (hLE) expansion. Epithelial organization is improved when human limbal fibroblasts (hLF) are incorporated into RAFT tissue equivalent (TE). However, hLF have the potential to transdifferentiate into a pro-scarring cell type, which would be incompatible with therapeutic transplantation. The aim of this work was to assess the scarring phenotype of hLF in hLE+ and hLE- RAFT TEs and in non-airlifted and airlifted RAFT TEs. Diseased fibroblasts (dFib) isolated from the fibrotic conjunctivae of ocular mucous membrane pemphigoid (Oc-MMP) patients were used as a pro-scarring positive control against which hLF were compared using surrogate scarring parameters: matrix metalloproteinase (MMP) activity, de novo collagen synthesis, α-smooth muscle actin (α-SMA) expression, and transforming growth factor-β (TGF-β) secretion. Normal hLF and dFib maintained different phenotypes in RAFT TE. MMP-2 and -9 activity, de novo collagen synthesis, and α-SMA expression were all increased in dFib cf. normal hLF RAFT TEs, although TGF-β1 secretion did not differ between normal hLF and dFib RAFT TEs. Normal hLF do not progress toward a scarring-like phenotype during culture in RAFT TEs and, therefore, may be safe to include in therapeutic RAFT TE, where they can support hLE, although in vivo work is required to confirm this. dFib RAFT TEs (used in this study as a positive control) may be useful toward the development of an ex vivo disease model of Oc-MMP.

  12. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
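
    The sketch below is a toy one-dimensional rendition of the two-phase scheme described (first bind objects to grid portions, then let each processor populate its own portion), using Python multiprocessing; the slab decomposition and interval representation are our own simplifications, not the method as specified.

        from multiprocessing import Pool

        def find_portions(args):
            # Phase 1: for one distinct set of objects, determine which grid
            # portions (equal slabs along x) each object's extent touches.
            objects, n, xmax = args
            width = xmax / n
            hits = []
            for oid, (lo, hi) in objects:
                first, last = int(lo // width), min(int(hi // width), n - 1)
                hits.extend((p, oid) for p in range(first, last + 1))
            return hits

        def populate(args):
            # Phase 2: one processor gathers the objects bound to its portion.
            portion, pairs = args
            return portion, sorted(oid for p, oid in pairs if p == portion)

        if __name__ == "__main__":
            n, xmax = 4, 100.0
            objects = [(i, (10.0 * i, 10.0 * i + 15.0)) for i in range(9)]
            sets = [objects[i::n] for i in range(n)]        # n distinct object sets
            with Pool(n) as pool:
                hits = sum(pool.map(find_portions, [(s, n, xmax) for s in sets]), [])
                grid = dict(pool.map(populate, [(p, hits) for p in range(n)]))
            print(grid)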

  13. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  14. More parallel please

    DEFF Research Database (Denmark)

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    Abstract [en] More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give

  15. Fermi surface reconstruction in the normal state of a high-Tc superconductor: a study of electrical transport in intense magnetic fields

    Science.gov (United States)

    Le Boeuf, David

    Measurements of the longitudinal resistance and of the Hall resistance in intense transverse magnetic fields (perpendicular to the CuO2 planes) were performed on detwinned, ordered, high-purity single crystals of YBa2Cu3Oy (YBCO) in order to study the ground state of high-Tc superconductors in the underdoped regime. The study was carried out as a function of doping and of the orientation of the excitation current J with respect to the orthorhombic b-axis of the crystal structure. By suppressing superconductivity, the high-field measurements reveal magnetic oscillations of the longitudinal and Hall resistances in YBa2Cu3O6.51 and YBa2Cu4O8. The agreement of these quantum oscillations with the Lifshitz-Kosevich formalism proves the existence of a closed, quasi-2D Fermi surface hosting coherent quasiparticles that obey Fermi-Dirac statistics in the pseudogap phase of YBCO. The low frequency of the quantum oscillations, combined with a study of the monotonic part of the Hall resistance as a function of temperature, indicates that the Fermi surface of underdoped YBCO contains a small Fermi pocket occupied by negative charge carriers. This feature of the Fermi surface in the underdoped regime, incompatible with band-structure calculations, contrasts strongly with the electronic structure present in the overdoped regime. This observation thus implies the existence of a quantum critical point in the phase diagram of YBCO, in the vicinity of which the Fermi surface must undergo a reconstruction induced by a broken translational symmetry of the underlying crystal lattice. Finally, the doping dependence of the Hall and longitudinal resistances in intense magnetic fields suggests that a density-wave (DW) type of order is responsible for the Fermi surface reconstruction. The analogy of

  16. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  17. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  18. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  19. Time-dependent transport of a localized surface plasmon through a linear array of metal nanoparticles: Precursor and normal mode contributions

    Science.gov (United States)

    Compaijen, P. J.; Malyshev, V. A.; Knoester, J.

    2018-02-01

    We theoretically investigate the time-dependent transport of a localized surface plasmon excitation through a linear array of identical and equidistantly spaced metal nanoparticles. Two different signals propagating through the array are found: one traveling with the group velocity of the surface plasmon polaritons of the system and damped exponentially, and the other running with the speed of light and decaying in a power-law fashion, as x⁻¹ and x⁻² for the transversal and longitudinal polarizations, respectively. The latter resembles the Sommerfeld-Brillouin forerunner and has not been identified in previous studies. The contribution of this signal dominates the plasmon transport at large distances. In addition, even though this signal is spread in the propagation direction and has a lateral dimension larger than the wavelength, the field profile close to the chain axis does not change with distance, indicating that this part of the signal is confined to the array.

  20. Transmission line theory for long plasma production by radio frequency discharges between parallel-plate electrodes

    International Nuclear Information System (INIS)

    Nonaka, S.

    1991-01-01

    In order to seek an eigen-mode of radio frequency (RF) waves for producing a plasma between a pair of long dielectric-covered parallel-plate RF electrodes, this paper analyzed all normal modes propagating along the electrodes by solving Maxwell's equations. The result showed that only an odd surface wave mode will produce the plasma under usual experimental conditions, which provides a basic transmission line theory for the use of such long electrodes in the on-line mass-production of amorphous silicon solar cells

  1. Walking on a moving surface: energy-optimal walking motions on a shaky bridge and a shaking treadmill can reduce energy costs below normal.

    Science.gov (United States)

    Joshi, Varun; Srinivasan, Manoj

    2015-02-08

    Understanding how humans walk on a surface that can move might provide insights into, for instance, whether walking humans prioritize energy use or stability. Here, motivated by the famous human-driven oscillations observed in the London Millennium Bridge, we introduce a minimal mathematical model of a biped, walking on a platform (bridge or treadmill) capable of lateral movement. This biped model consists of a point-mass upper body with legs that can exert force and perform mechanical work on the upper body. Using numerical optimization, we obtain energy-optimal walking motions for this biped, deriving the periodic body and platform motions that minimize a simple metabolic energy cost. When the platform has an externally imposed sinusoidal displacement of appropriate frequency and amplitude, we predict that body motion entrained to platform motion consumes less energy than walking on a fixed surface. When the platform has finite inertia, a mass-spring-damper with similar parameters to the Millennium Bridge, we show that the optimal biped walking motion sustains a large lateral platform oscillation when sufficiently many people walk on the bridge. Here, the biped model reduces walking metabolic cost by storing and recovering energy from the platform, demonstrating energy benefits for two features observed for walking on the Millennium Bridge: crowd synchrony and large lateral oscillations.

  2. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
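
    The relation behind such tables is 1/R_total = 1/R_1 + 1/R_2 + ...; the short script below is our own illustration (not the article's tables) and enumerates integer resistor pairs whose parallel combination is itself a whole number.

        from fractions import Fraction

        def parallel(*rs):
            # Total resistance of resistors in parallel: 1/R = sum(1/R_i).
            return 1 / sum(Fraction(1, r) for r in rs)

        # Integer pairs whose parallel combination is a whole number of ohms.
        for r1 in range(1, 40):
            for r2 in range(r1, 40):
                total = parallel(r1, r2)
                if total.denominator == 1:
                    print(f"{r1} || {r2} = {total}")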

  3. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of research is tools to support the development of parallel programs in C/C++. Methods and software which automate the process of designing parallel applications are proposed.

  4. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts....

  5. New partially parallel acquisition technique in cerebral imaging: preliminary findings

    International Nuclear Information System (INIS)

    Tintera, Jaroslav; Gawehn, Joachim; Bauermann, Thomas; Vucurevic, Goran; Stoeter, Peter

    2004-01-01

    In MRI applications where short acquisition time is necessary, the increase of acquisition speed is often at the expense of image resolution and SNR. In such cases, the newly developed parallel acquisition techniques could provide images without the mentioned limitations and in reasonably shortened measurement time. A newly designed eight-channel head coil array (i-PAT coil) allowing for parallel acquisition of independently reconstructed images (GRAPPA mode) has been tested for its applicability in neuroradiology. Image homogeneity was tested in a standard phantom and in healthy volunteers. BOLD signal changes were studied in a group of six volunteers using finger tapping stimulation. Phantom studies revealed an important drop of signal, even after the use of a normalization filter, in the center of the image and an important increase of artifact power with reduction of measurement time, strongly depending on the combination of acceleration parameters. The additional application of a parallel acquisition technique such as GRAPPA decreases measurement time in the range of about 30%, but further reduction is often possible only at the expense of SNR. This technique performs best in conditions in which imaging speed is important, such as CE MRA, but time resolution still does not allow the acquisition of angiograms separating the arterial and venous phase. Significantly larger areas of BOLD activation were found using the i-PAT coil compared to the standard head coil. Since the i-PAT coil is an eight-channel surface coil array, peripheral cortical structures profit from its high SNR, as in high-resolution imaging of small cortical dysplasias and in the functional activation of cortical areas imaged by BOLD contrast. In BOLD contrast imaging, susceptibility artifacts are reduced, but only if an appropriate combination of acceleration parameters is used. (orig.)

  6. New partially parallel acquisition technique in cerebral imaging: preliminary findings

    Energy Technology Data Exchange (ETDEWEB)

    Tintera, Jaroslav [Institute for Clinical and Experimental Medicine, Prague (Czech Republic); Gawehn, Joachim; Bauermann, Thomas; Vucurevic, Goran; Stoeter, Peter [University Clinic Mainz, Institute of Neuroradiology, Mainz (Germany)

    2004-12-01

    In MRI applications where short acquisition time is necessary, the increase of acquisition speed is often at the expense of image resolution and SNR. In such cases, the newly developed parallel acquisition techniques could provide images without the mentioned limitations and in reasonably shortened measurement time. A newly designed eight-channel head coil array (i-PAT coil) allowing for parallel acquisition of independently reconstructed images (GRAPPA mode) has been tested for its applicability in neuroradiology. Image homogeneity was tested in a standard phantom and in healthy volunteers. BOLD signal changes were studied in a group of six volunteers using finger tapping stimulation. Phantom studies revealed an important drop of signal, even after the use of a normalization filter, in the center of the image and an important increase of artifact power with reduction of measurement time, strongly depending on the combination of acceleration parameters. The additional application of a parallel acquisition technique such as GRAPPA decreases measurement time in the range of about 30%, but further reduction is often possible only at the expense of SNR. This technique performs best in conditions in which imaging speed is important, such as CE MRA, but time resolution still does not allow the acquisition of angiograms separating the arterial and venous phase. Significantly larger areas of BOLD activation were found using the i-PAT coil compared to the standard head coil. Since the i-PAT coil is an eight-channel surface coil array, peripheral cortical structures profit from its high SNR, as in high-resolution imaging of small cortical dysplasias and in the functional activation of cortical areas imaged by BOLD contrast. In BOLD contrast imaging, susceptibility artifacts are reduced, but only if an appropriate combination of acceleration parameters is used. (orig.)

  7. Electron acceleration by surface plasma waves in double metal surface structure

    Science.gov (United States)

    Liu, C. S.; Kumar, Gagan; Singh, D. B.; Tripathi, V. K.

    2007-12-01

    Two parallel metal sheets, separated by a vacuum region, support a surface plasma wave whose amplitude is maximum on the two parallel interfaces and minimum in the middle. This mode can be excited by a laser using a glass prism. An electron beam launched into the middle region experiences a longitudinal ponderomotive force due to the surface plasma wave and gets accelerated to velocities of the order of the phase velocity of the surface wave. The scheme is viable for achieving beams of tens of keV energy. In the case of a surface plasma wave excited on a single metal-vacuum interface, the field gradient normal to the interface pushes the electrons away from the high field region, limiting the acceleration process. The acceleration energy thus achieved is in agreement with the experimental observations.

  8. Effect of Exercise-induced Sweating on facial sebum, stratum corneum hydration, and skin surface pH in normal population.

    Science.gov (United States)

    Wang, Siyu; Zhang, Guirong; Meng, Huimin; Li, Li

    2013-02-01

    Evidence has demonstrated that sweat is an important factor affecting skin physiological properties. We intended to assess the effects of exercise-induced sweating on the sebum, stratum corneum (SC) hydration and skin surface pH of facial skin. 102 subjects (aged 5-60, divided into five groups) were enrolled and measured with a combination device called 'Derma Unit SSC3' in their frontal and zygomatic regions when they were in a resting state (RS), at the beginning of sweating (BS), during excessive sweating (ES) and an hour after sweating (AS), respectively. Compared to the RS, SC hydration in both regions increased at the BS or during ES, and sebum increased at the BS but was lower during ES. Compared to ES, sebum increased in the AS but remained lower than in the RS. Compared to the RS, pH decreased in both regions at the BS in the majority of groups, and increased in the frontal region during ES and in the zygomatic region in the AS. There was an increase in pH in both regions during ES in the majority of groups compared to the BS, but a decrease in the AS compared to ES. The study implies that even in summer, after we sweat excessively, lipid products should be applied locally in order to maintain the stability of the barrier function of the SC. The study suggests that after a short term (1 h or less) of self-adjustment, excessive sweat from moderate exercise will not impair the primary acidic surface pH of the facial skin. Exercise-induced sweating significantly affected the skin physiological properties of the facial region. © 2012 John Wiley & Sons A/S.

  9. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Interactions between parallel channels are examined. Results of experimental research on non-stationary flow regimes in three parallel vertical channels are presented, together with an analysis of the phenomena and the mechanisms of parallel channel interaction under adiabatic conditions for single-phase fluid and two-phase mixture flow. (author)

  10. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  11. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
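
    The '(Equation Presented.)' placeholders mark expressions lost in text extraction. The discrete transform that butterfly algorithms evaluate is conventionally written, as a generic statement of the problem class rather than the paper's exact equation, as

        f(x) = \sum_{y \in Y} K(x, y)\, g(y), \qquad x \in X,

    with the oscillatory special case K(x, y) = exp(i\Phi(x, y)) handled by the DistButterfly library mentioned above.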

  12. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  13. Fast parallel event reconstruction

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    On-line processing of large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...

  14. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.

    2014-01-01

    A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Near 60% parallel efficiency is achieved for the maximum testing case with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified by simulating various testing cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low-order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires dramatically increasing the number of MC particles. 2014 Kun Zhou et al.
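
    A toy serial sketch of the operator-splitting structure described (a deterministic surface-growth sub-step alternating with a stochastic Marcus-Lushnikov coagulation sub-step) is given below; the constant kernel, linear growth law and time steps are illustrative, and the paper's MPI parallelization is not reproduced.

        import numpy as np

        def growth_step(v, dt, g=1.0):
            # Deterministic sub-step: linear surface growth dv/dt = g per particle.
            return v + g * dt

        def coagulation_step(v, dt, kernel, rng):
            # Stochastic sub-step (Marcus-Lushnikov): random pairs merge at a rate
            # set by the (here constant) coagulation kernel.
            v = list(v)
            n = len(v)
            t = 0.0
            while n > 1:
                rate = kernel * n * (n - 1) / 2
                t += rng.exponential(1.0 / rate)
                if t > dt:
                    break
                i, j = rng.choice(n, size=2, replace=False)
                v[i] += v[j]                     # merge particle j into particle i
                v.pop(j)
                n -= 1
            return np.array(v)

        # Operator splitting: alternate the deterministic and stochastic sub-steps.
        rng = np.random.default_rng(2)
        v = np.ones(1000)                        # Monte Carlo particle volumes
        for _ in range(20):
            v = growth_step(v, dt=0.01)
            v = coagulation_step(v, dt=0.01, kernel=1e-3, rng=rng)
        print(len(v), float(v.mean()))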

  15. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  16. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
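
    Numerically, the distinction drawn above is simply between a product and a weighted sum of polarization-element matrices; the sketch below illustrates it with Jones matrices and is not a model of the authors' digital-micromirror setup (the weights and elements are arbitrary).

        import numpy as np

        def rotator(theta):
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s], [s, c]])

        # Serial architecture: elements traversed one after another -> matrix product.
        serial = rotator(0.3) @ rotator(0.2) @ np.diag([1, 1j])

        # Parallel architecture: spatially separated, intensity-weighted components
        # recombined into one beam -> weighted matrix sum.
        weights = [0.5, 0.3, 0.2]
        elements = [np.diag([1, 1j]), rotator(0.4), np.diag([1, -1])]
        parallel = sum(w * m for w, m in zip(weights, elements))

        x_pol = np.array([1.0, 0.0])             # input Jones vector (x-polarized)
        print(serial @ x_pol)
        print(parallel @ x_pol)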

  17. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Massively parallel fabrication of repetitive nanostructures: nanolithography for nanoarrays

    International Nuclear Information System (INIS)

    Luttge, Regina

    2009-01-01

    This topical review provides an overview of nanolithographic techniques for nanoarrays. Using patterning techniques such as lithography, normally we aim for a higher order architecture similar to functional systems in nature. Inspired by the wealth of complexity in nature, these architectures are translated into technical devices, for example, as found in integrated circuitry or other systems in which structural elements work as discrete building blocks in microdevices. Ordered artificial nanostructures (arrays of pillars, holes and wires) have shown particular properties and bring about the opportunity to modify and tune the device operation. Moreover, these nanostructures deliver new applications, for example, the nanoscale control of spin direction within a nanomagnet. Subsequently, we can look for applications where this unique property of the smallest manufactured element is repetitively used such as, for example with respect to spin, in nanopatterned magnetic media for data storage. These nanostructures are generally called nanoarrays. Most of these applications require massively parallel produced nanopatterns which can be directly realized by laser interference (areas up to 4 cm² are easily achieved with a Lloyd's mirror set-up). In this topical review we will further highlight the application of laser interference as a tool for nanofabrication, its limitations and ultimate advantages towards a variety of devices including nanostructuring for photonic crystal devices, high resolution patterned media and surface modifications of medical implants. The unique properties of nanostructured surfaces have also found applications in biomedical nanoarrays used either for diagnostic or functional assays including catalytic reactions on chip. Bio-inspired templated nanoarrays will be presented in perspective to other massively parallel nanolithography techniques currently discussed in the scientific literature. (topical review)

  19. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  20. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not at the level of these efforts. This paper aims to make a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  1. Implementation of a parallel version of a regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Gerstengarbe, F.W. [ed.; Kuecken, M. [Potsdam-Institut fuer Klimafolgenforschung (PIK), Potsdam (Germany); Schaettler, U. [Deutscher Wetterdienst, Offenbach am Main (Germany). Geschaeftsbereich Forschung und Entwicklung

    1997-10-01

    A regional climate model developed by the Max Planck Institute for Meteorology and the German Climate Computing Centre in Hamburg, based on the 'Europa' and 'Deutschland' models of the German Weather Service, has been parallelized and implemented on the IBM RS/6000 SP computer system of the Potsdam Institute for Climate Impact Research, including parallel input/output processing, the explicit Eulerian time-step, the semi-implicit corrections, the normal-mode initialization and the physical parameterizations of the German Weather Service. The implementation utilizes Fortran 90 and the Message Passing Interface. The parallelization strategy used is a 2D domain decomposition. This report describes the parallelization strategy, the parallel I/O organization, the influence of different domain decomposition approaches on static and dynamic load imbalances, and first numerical results. (orig.)
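
    Independent of the report's Fortran 90/MPI code, the index arithmetic of a 2D domain decomposition can be sketched as follows; the grid and processor-mesh sizes are arbitrary examples.

        def block_bounds(n, parts, rank):
            # Split n grid points into `parts` nearly equal blocks and return the
            # half-open index range [start, end) owned by block `rank`.
            base, extra = divmod(n, parts)
            start = rank * base + min(rank, extra)
            return start, start + base + (1 if rank < extra else 0)

        def subdomain(nx, ny, px, py, rank):
            # Owned index ranges of processor `rank` on a px-by-py processor mesh.
            rx, ry = rank % px, rank // px
            return block_bounds(nx, px, rx), block_bounds(ny, py, ry)

        # Example: a 101 x 57 grid decomposed over a 4 x 3 processor mesh.
        for rank in range(12):
            print(rank, subdomain(101, 57, 4, 3, rank))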

  2. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes the work of an object oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and it should be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
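
    The cycle structure described above can be mimicked in a few lines; the sketch below is a generic master-slave skeleton in Python multiprocessing, with placeholder "work" and a placeholder state-update rule, and is not the framework's actual message-passing implementation.

        from multiprocessing import Pool

        def slave(args):
            # One "processing unit": does its share of a cycle and reports back.
            chunk, state = args
            return sum(x * state for x in chunk)      # stand-in for real work

        def master(data, n_slaves=4, cycles=5):
            # The master splits the work each cycle, gathers partial results and
            # updates the shared state before the next cycle begins.
            chunks = [data[i::n_slaves] for i in range(n_slaves)]
            state = 1.0
            with Pool(n_slaves) as pool:
                for _ in range(cycles):
                    partials = pool.map(slave, [(c, state) for c in chunks])
                    state = sum(partials) / len(data) # placeholder update rule
            return state

        if __name__ == "__main__":
            print(master(list(range(1, 101))))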

  3. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  4. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in case of nucleobases Y and Z a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one...... of the phosphate groups in Watson-Crick strand. Also, it was shown that the nucleobase Y made a good stacking and binding with the other nucleobases in the TFO and Watson-Crick duplex, respectively. In contrast, the nucleobase Z with LNA moiety was forced to twist out of plane of Watson-Crick base pair which......The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When, the anti...

  5. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.

  6. A Parallel Particle Swarm Optimizer

    National Research Council Canada - National Science Library

    Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D

    2003-01-01

    .... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...

  7. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  8. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    ...adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...

  9. Normal Pressure Hydrocephalus (NPH)

    Science.gov (United States)

    Normal pressure hydrocephalus is a brain disorder ... Normal pressure hydrocephalus occurs when excess cerebrospinal fluid ...

  10. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel import is an urgent question at present. Legalization of parallel import in Russia is expedient. This statement is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of this decision and to apply remedies to minimize them.

  11. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  12. Normalization: A Preprocessing Stage

    OpenAIRE

    Patro, S. Gopal Krishna; Sahu, Kishore Kumar

    2015-01-01

    As we know, normalization is a pre-processing stage for any type of problem statement. Normalization plays an especially important role in fields such as soft computing and cloud computing, where data are manipulated, for example scaled down or scaled up in range, before being used in a further stage. There are many normalization techniques, namely Min-Max normalization, Z-score normalization and Decimal scaling normalization. By referring to these normalization techniques we are ...
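
    As a companion to the three techniques named above, the following short sketch applies each of them to a small, made-up data vector; the numbers and variable names are illustrative only.

        # Sketch of Min-Max, Z-score and decimal scaling normalization (toy data).
        import numpy as np

        x = np.array([120.0, 45.0, 310.0, 78.0, 260.0])

        min_max = (x - x.min()) / (x.max() - x.min())      # rescales values into [0, 1]
        z_score = (x - x.mean()) / x.std()                 # zero mean, unit variance
        j = np.ceil(np.log10(np.abs(x).max()))             # here j = 3, since max |x| = 310
        decimal_scaled = x / 10 ** j                       # decimal scaling: largest |value| < 1

        print(min_max, z_score, decimal_scaled, sep="\n")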

  13. Power stability methods for parallel systems

    International Nuclear Information System (INIS)

    Wallach, Y.

    1988-01-01

    Parallel-processing systems are already commercially available. This paper shows that if one of them - the Alternating Sequential Parallel, or ASP, system - is applied to network stability calculations, it will lead to a higher speed of solution. The ASP system is first described and is then shown to be cheaper, more reliable and more available than other parallel systems. Also, no deadlock need be feared, and the speedup is normally very high. A number of ASP systems have already been assembled (the SMS systems, Topps, DIRMU, etc.). At present, an IBM Local Area Network is being modified so that it too can work in the ASP mode. Existing ASP systems were programmed in Fortran or assembly language; since newer systems (e.g. DIRMU) are programmed in Modula-2, this language can be used. Stability analysis is based on solving nonlinear differential and algebraic equations. The algorithm for solving the nonlinear differential equations on ASP is described and programmed in Modula-2. The speedup is computed and is shown to be almost optimal.

  14. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
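
    The per-cycle rendezvous the author describes can be made concrete with a toy sketch: each MPI rank tracks its own share of histories, and all ranks then synchronise to form the global fission-source size and the cycle estimate of k-effective before the next cycle starts. The "transport" below is a random stand-in, the code assumes mpi4py and NumPy, and all parameters are invented.

        # Toy mpi4py sketch of the per-cycle rendezvous in a criticality calculation.
        # Run with e.g.:  mpiexec -n 4 python toy_mc.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD

        def track_histories(source, rng):
            # stand-in for neutron transport: each history produces 0-3 fission sites
            n_new = rng.integers(0, 4, size=len(source))
            return np.repeat(source, n_new), int(n_new.sum())

        rng = np.random.default_rng(comm.rank)
        local_source = rng.random(1000)                 # this rank's share of the fission source

        for cycle in range(10):
            n_started = len(local_source)
            local_source, local_fissions = track_histories(local_source, rng)
            # rendezvous point: every rank must stop here so the global fission source
            # and the cycle estimate of k-eff can be formed before the next cycle
            tot_started = comm.allreduce(n_started, op=MPI.SUM)
            tot_fissions = comm.allreduce(local_fissions, op=MPI.SUM)
            k_eff = tot_fissions / tot_started
            # population control: keep the global source near its nominal size
            keep = min(1.0, 1000 * comm.size / max(tot_fissions, 1))
            local_source = local_source[rng.random(len(local_source)) < keep]
            if comm.rank == 0:
                print(f"cycle {cycle}: k-eff estimate {k_eff:.3f}")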

  15. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  16. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
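
    The barrel-sort idea (bin keys by destination key range, exchange the bins in one collective step, then sort locally) can be sketched as follows. This is only a loose analogue of the published algorithm, written with mpi4py and NumPy; the key range, data sizes and the single all-to-all exchange are illustrative assumptions.

        # Rough barrel-sort analogue: bin by owner, one all-to-all exchange, local sort.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        P = comm.size
        rng = np.random.default_rng(comm.rank)
        keys = rng.integers(0, 2 ** 16, size=10000)

        W = 2 ** 16 // P                                   # width of each rank's key range ("barrel")
        dest = np.minimum(keys // W, P - 1)                # owner rank of every key
        outgoing = [keys[dest == p].tolist() for p in range(P)]

        # a single collective exchange keeps the message count low, which is the point
        # of barrel-sort on machines with high message-passing overhead
        incoming = comm.alltoall(outgoing)
        mine = np.sort(np.concatenate([np.asarray(part, dtype=keys.dtype) for part in incoming]))
        print(f"rank {comm.rank}: {len(mine)} keys, locally sorted; ranks hold disjoint, ordered ranges")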

  17. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
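
    The rsync-like comparison against a stored template that the abstract describes can be illustrated with a small sketch: split the checkpoint into fixed-size blocks, checksum each block, and keep (compressed) only the blocks whose checksums differ from the template. The block size, file contents and function names below are all hypothetical, not the patented implementation.

        # Sketch of a template-based delta checkpoint (illustrative, not the patented method).
        import hashlib
        import zlib

        BLOCK = 64 * 1024

        def block_digests(data: bytes):
            return [hashlib.sha1(data[i:i + BLOCK]).digest() for i in range(0, len(data), BLOCK)]

        def delta_checkpoint(current: bytes, template: bytes):
            tmpl = block_digests(template)
            delta = {}
            for i, digest in enumerate(block_digests(current)):
                if i >= len(tmpl) or digest != tmpl[i]:
                    # only changed blocks are kept, compressed with a non-lossy codec
                    delta[i] = zlib.compress(current[i * BLOCK:(i + 1) * BLOCK])
            return delta

        template = bytes(1_000_000)                    # stand-in for the stored template checkpoint
        current = bytearray(template)
        current[300_000:300_010] = b"X" * 10           # one small dirty region in node memory
        n_blocks = -(-len(current) // BLOCK)
        print(f"{len(delta_checkpoint(bytes(current), template))} of {n_blocks} blocks need to be saved")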

  18. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  19. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests

  20. Recent progress in 3D EM/EM-PIC simulation with ARGUS and parallel ARGUS

    International Nuclear Information System (INIS)

    Mankofsky, A.; Petillo, J.; Krueger, W.; Mondelli, A.; McNamara, B.; Philp, R.

    1994-01-01

    ARGUS is an integrated, 3-D, volumetric simulation model for systems involving electric and magnetic fields and charged particles, including materials embedded in the simulation region. The code offers the capability to carry out time domain and frequency domain electromagnetic simulations of complex physical systems. ARGUS offers a boolean solid model structure input capability that can include essentially arbitrary structures on the computational domain, and a modular architecture that allows multiple physics packages to access the same data structure and to share common code utilities. Physics modules are in place to compute electrostatic and electromagnetic fields, the normal modes of RF structures, and self-consistent particle-in-cell (PIC) simulation in either a time dependent mode or a steady state mode. The PIC modules include multiple particle species, the Lorentz equations of motion, and algorithms for the creation of particles by emission from material surfaces, injection onto the grid, and ionization. In this paper, we present an updated overview of ARGUS, with particular emphasis given to recent algorithmic and computational advances. These include a completely rewritten frequency domain solver which efficiently treats lossy materials and periodic structures, a parallel version of ARGUS with support for both shared memory parallel vector (i.e. CRAY) machines and distributed memory massively parallel MIMD systems, and numerous new applications of the code

  1. Parallel Libraries to support High-Level Programming

    DEFF Research Database (Denmark)

    Larsen, Morten Nørgaard

    and the Microsoft .NET framework. Normally, one would not directly think of the .NET framework when talking about scientific applications, but Microsoft has in the last couple of versions of .NET introduced a number of tools for writing parallel and high-performance code. The first section examines how programmers can...

  2. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    Full Text Available As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, small self-weight/load ratio, good dynamic behavior and easy control; hence its range of application has been extended. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limitation of link lengths has been introduced. This paper analyses the position workspace and orientation workspace of a parallel robot with six degrees of freedom. The results show that changing the length of the branches of the parallel mechanism is the main means of increasing or decreasing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but will change its position.
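
    The inverse-kinematics / link-length test that the boundary-searching algorithm relies on can be sketched for a generic Stewart-platform-like 6-DOF parallel robot. The joint geometry, stroke limits and the scanned slice below are invented numbers, not those of the robot studied in the paper.

        # Sketch: scan a slice of the position workspace via inverse kinematics (toy geometry).
        import numpy as np

        # joint positions on the fixed base and on the moving platform (one row per leg)
        ang_b = np.deg2rad([0, 60, 120, 180, 240, 300])
        ang_p = ang_b + np.deg2rad(30)
        base = np.c_[2.0 * np.cos(ang_b), 2.0 * np.sin(ang_b), np.zeros(6)]
        plat = np.c_[1.0 * np.cos(ang_p), 1.0 * np.sin(ang_p), np.zeros(6)]
        L_MIN, L_MAX = 2.0, 3.5                       # stroke limits of each leg

        def feasible(x, y, z):
            # inverse kinematics: the pose is inside the workspace if every required
            # leg length lies within the actuator's stroke limits
            legs = np.linalg.norm(plat + np.array([x, y, z]) - base, axis=1)
            return bool(np.all((legs >= L_MIN) & (legs <= L_MAX)))

        xs = ys = np.linspace(-2.0, 2.0, 81)
        inside = [(x, y) for x in xs for y in ys if feasible(x, y, 2.5)]
        print(f"{len(inside)} of {len(xs) * len(ys)} grid points reachable at z = 2.5")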

  3. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
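
    For reference alongside the classroom activity, the combination rules themselves are one-liners: series resistances add, while parallel resistances add as reciprocals. The resistor values below are arbitrary examples.

        # Series and parallel combination of resistances (example values).
        def series(*rs):
            return sum(rs)

        def parallel(*rs):
            return 1.0 / sum(1.0 / r for r in rs)

        print(series(100, 220, 330))                 # 650 ohms
        print(round(parallel(100, 220, 330), 1))     # about 56.9 ohms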

  4. Parallel encoders for pixel detectors

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1991-01-01

    A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs

  5. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  6. Event monitoring of parallel computations

    Directory of Open Access Journals (Sweden)

    Gruzlikov Alexander M.

    2015-06-01

    Full Text Available The paper considers the monitoring of parallel computations for detection of abnormal events. It is assumed that computations are organized according to an event model, and monitoring is based on specific test sequences

  7. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  8. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  9. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  10. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  11. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  12. New Parallel Algorithms for Landscape Evolution Model

    Science.gov (United States)

    Jin, Y.; Zhang, H.; Shi, Y.

    2017-12-01

    Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transportation of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a huge amount of communication if run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEM with a stream net. One algorithm handles the partitioning of the grid with traditional methods and applies an efficient global reduction algorithm to compute the drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which partitions the nodes in catchments between processes first, and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that they are both adequate to handle large scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.

  13. A vibrating wire parallel to a high temperature superconducting slab. Vol. 2

    Energy Technology Data Exchange (ETDEWEB)

    Saif, A G; El-sabagh, M A [Department of Mathematic and Theoretical physics, Nuclear Research Center, Atomic Energy Authority, Cairo (Egypt)

    1996-03-01

    The power loss problem for an idealized high temperature type II superconducting system of simple geometry is studied. This system is composed of a vibrating normal conducting wire (two wires) carrying a direct current parallel to a uniaxial anisotropic type II superconducting slab (moving slab). First, the electromagnetic equation governing the dynamics of this system and its solutions are obtained. Secondly, a modified anisotropic London equation is developed to study these systems in the case of the moving slab. Thirdly, it is found that the power losses depend on the frequency, the London penetration depth, the permeability, the conductivity, the velocity, and the distance between the normal conductors and the surfaces of the superconducting slab. Moreover, the power losses decrease as the distance between the normal conductors and the surface of the superconducting slab decreases, and increase as the frequency, the London penetration depth, the permeability, the conductivity, and the velocity are increased. The losses along the versor of the anisotropy axis increase as λ_|| increases; moreover, they are greater than the power losses along the crystal symmetry direction. In the isotropic case, as well as when the slab thickness tends to infinity, agreement with previous results is obtained. 2 figs.

  14. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  15. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSC library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSC during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSC framework.
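
    For readers unfamiliar with the Newton-Krylov part of the NKS family, the following sketch solves a small 1-D nonlinear boundary-value problem with SciPy's matrix-free newton_krylov solver (the Krylov iteration only ever needs Jacobian-vector products). The Schwarz preconditioning and the parallel domain decomposition are not shown, and the model problem and grid size are invented for illustration.

        # Jacobian-free Newton-Krylov on a toy 1-D problem: -u'' + u**3 = 1, u(0) = u(1) = 0.
        import numpy as np
        from scipy.optimize import newton_krylov

        N = 100
        h = 1.0 / (N + 1)

        def residual(u):
            # true residual on the interior grid; Dirichlet boundary values are zero
            d2 = np.roll(u, -1) - 2 * u + np.roll(u, 1)
            d2[0] = u[1] - 2 * u[0]
            d2[-1] = u[-2] - 2 * u[-1]
            return -d2 / h ** 2 + u ** 3 - 1.0

        u = newton_krylov(residual, np.zeros(N), method="lgmres")
        print("max |residual| at solution:", np.abs(residual(u)).max())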

  16. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

    Second derivative parallel block backward differentiation type formulas for Stiff ODEs. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...

  17. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of the achieved coding and decoding times and the effectiveness of parallelization.

  18. A 5-year prospective radiographic evaluation of marginal bone levels adjacent to parallel-screw cylinder machined-neck implants and rough-surfaced microthreaded implants using digitized panoramic radiographs.

    Science.gov (United States)

    Nickenig, Hans-Joachim; Wichmann, Manfred; Happe, Arndt; Zöller, Joachim E; Eitner, Stephan

    2013-10-01

    The purpose of this split-mouth study was to compare macro- and microstructure implant surfaces at the marginal bone level over five years of functional loading. From January to February 2006, 133 implants (70 rough-surfaced microthreaded implants and 63 machined-neck implants) were inserted in the mandible of 34 patients with Kennedy Class I residual dentitions and followed until December 2011. Marginal bone level was radiographically determined at six time points: implant placement (baseline), after the healing period, after six months, and at two years, three years, and five years follow-up. Median follow-up time was 5.2 years (range: 5.1-5.4). The machined-neck group had a mean crestal bone loss of 0.5 mm (0.0-2.3) after the healing period, 1.1 mm (0.0-3.0) at two years follow-up, and 1.4 mm (0.0-2.9) at five years follow-up. The rough-surfaced microthreaded implant group had a mean bone loss of 0.1 mm (-0.4 to 2.0) after the healing period, 0.5 mm (0.0-2.1) at two years follow-up, and 0.7 mm (0.0-2.3) at five years follow-up. The two implant types showed significant differences in marginal bone levels. Rough-surfaced microthreaded design caused significantly less loss of crestal bone levels under long-term functional loading in the mandible when compared to machined-neck implants. Copyright © 2012 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  19. A high performance parallel approach to medical imaging

    International Nuclear Information System (INIS)

    Frieder, G.; Frieder, O.; Stytz, M.R.

    1988-01-01

    Research into medical imaging using general purpose parallel processing architectures is described and a review of the performance of previous medical imaging machines is provided. Results demonstrating that general purpose parallel architectures can achieve performance comparable to other, specialized, medical imaging machine architectures is presented. A new back-to-front hidden-surface removal algorithm is described. Results demonstrating the computational savings obtained by using the modified back-to-front hidden-surface removal algorithm are presented. Performance figures for forming a full-scale medical image on a mesh interconnected multiprocessor are presented
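
    The back-to-front ("painter's") idea that the modified hidden-surface removal algorithm builds on can be shown in a few lines: primitives are drawn from farthest to nearest, so nearer ones simply overwrite farther ones and no explicit depth test is needed. The toy rectangles below stand in for medical-image primitives and are not from the paper.

        # Back-to-front (painter's) compositing on toy axis-aligned rectangles.
        import numpy as np

        image = np.zeros((64, 64), dtype=np.uint8)
        # each primitive: (depth, row0, row1, col0, col1, intensity)
        rects = [(5.0, 10, 50, 10, 50, 80),
                 (2.0, 30, 60, 30, 60, 200),
                 (8.0, 0, 20, 0, 20, 40)]

        for depth, r0, r1, c0, c1, val in sorted(rects, key=lambda r: -r[0]):
            image[r0:r1, c0:c1] = val        # nearer primitives overwrite farther ones

        print("intensity at (35, 35):", image[35, 35])   # 200: the nearest rectangle wins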

  20. Parallel fabrication of macroporous scaffolds.

    Science.gov (United States)

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  1. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  2. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented which indicates under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.

  3. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  4. Thin-Section Diffusion-Weighted Magnetic Resonance Imaging of the Brain with Parallel Imaging

    International Nuclear Information System (INIS)

    Oner, A.Y.; Celik, H.; Tali, T.; Akpek, S.; Tokgoz, N.

    2007-01-01

    Background: Thin-section diffusion-weighted imaging (DWI) is known to improve lesion detectability, with long imaging time as a drawback. Parallel imaging (PI) is a technique that takes advantage of spatial sensitivity information inherent in an array of multiple-receiver surface coils to partially replace time-consuming spatial encoding and reduce imaging time. Purpose: To prospectively evaluate a 3-mm-thin-section DWI technique combined with PI by means of qualitative and quantitative measurements. Material and Methods: 30 patients underwent conventional echo-planar (EPI) DWI (5-mm section thickness, 1-mm intersection gap) without parallel imaging, and thin-section EPI-DWI with PI (3-mm section thickness, 0-mm intersection gap) for a b value of 1000 s/mm^2, with an imaging time of 40 and 80 s, respectively. Signal-to-noise ratio (SNR), relative signal intensity (rSI), and apparent diffusion coefficient (ADC) values were measured over a lesion-free cerebral region on both series by two radiologists. A quality score was assigned for each set of images to assess the image quality. When a brain lesion was present, contrast-to-noise ratio (CNR) and corresponding ADC were also measured. Student t-tests were used for statistical analysis. Results: Mean SNR values of the normal brain were 33.61±4.35 and 32.98±7.19 for conventional and thin-slice DWI (P>0.05), respectively. Relative signal intensities were significantly higher on thin-section DWI (P < 0.05). Quality scores and overall lesion CNR were found to be higher in thin-section DWI with parallel imaging. Conclusion: A thin-section technique combined with PI improves rSI, CNR, and image quality without compromising SNR and ADC measurements in an acceptable imaging time. Keywords: Brain; DWI; parallel imaging; thin section

  5. Normalized modes at selected points without normalization

    Science.gov (United States)

    Kausel, Eduardo

    2018-04-01

    As every textbook on linear algebra demonstrates, the eigenvectors for the general eigenvalue problem | K - λM | = 0 involving two real, symmetric, positive definite matrices K, M satisfy some well-defined orthogonality conditions. Equally well-known is the fact that those eigenvectors can be normalized so that their modal mass μ = ϕ^T M ϕ is unity: it suffices to divide each unscaled mode by the square root of the modal mass. Thus, the normalization is the result of an explicit calculation applied to the modes after they were obtained by some means. However, we show herein that the normalized modes are not merely convenient forms of scaling, but that they are actually intrinsic properties of the pair of matrices K, M; that is, the matrices already "know" about normalization even before the modes have been obtained. This means that we can obtain individual components of the normalized modes directly from the eigenvalue problem, and without needing to obtain either all of the modes or, for that matter, any one complete mode. These results are achieved by means of the residue theorem of operational calculus, a finding that is rather remarkable inasmuch as the residues themselves do not make use of any orthogonality conditions or normalization in the first place. It appears that this obscure property connecting the general eigenvalue problem of modal analysis with the residue theorem of operational calculus may have been overlooked up until now, but which has in turn interesting theoretical implications.
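
    As a numerical companion to the abstract's starting point, the sketch below sets up a small generalized eigenproblem and checks that the eigenvectors returned by SciPy are already mass-normalized (ϕ^T M ϕ = 1), and that an arbitrarily scaled mode is recovered by dividing by the square root of its modal mass. The matrices are random symmetric positive definite examples, not taken from the paper.

        # Mass-normalized modes of the generalized eigenproblem K x = lambda M x.
        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(0)
        A = rng.standard_normal((5, 5))
        B = rng.standard_normal((5, 5))
        K = A @ A.T + 5 * np.eye(5)              # stiffness-like SPD matrix
        M = B @ B.T + 5 * np.eye(5)              # mass-like SPD matrix

        lam, phi = eigh(K, M)                    # eigh returns M-normalized eigenvectors
        print(np.allclose(np.diag(phi.T @ M @ phi), 1.0))   # True: modal masses are unity

        # the explicit scaling the text refers to: divide an unscaled mode by sqrt(mu)
        psi = 3.7 * phi[:, 2]                    # some arbitrarily scaled mode
        psi = psi / np.sqrt(psi @ M @ psi)
        print(np.allclose(psi, phi[:, 2]))       # True: the normalized mode is recovered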

  6. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  7. The Acoustic and Perceptual Effects of Series and Parallel Processing

    Directory of Open Access Journals (Sweden)

    Melinda C. Anderson

    2009-01-01

    Full Text Available Temporal envelope (TE) cues provide a great deal of speech information. This paper explores how spectral subtraction and dynamic-range compression gain modifications affect TE fluctuations for parallel and series configurations. In parallel processing, algorithms compute gains based on the same input signal, and the gains in dB are summed. In series processing, output from the first algorithm forms the input to the second algorithm. Acoustic measurements show that the parallel arrangement produces more gain fluctuations, introducing more changes to the TE than the series configurations. Intelligibility tests for normal-hearing (NH) and hearing-impaired (HI) listeners show (1) parallel processing gives significantly poorer speech understanding than an unprocessed (UNP) signal and the series arrangement and (2) series processing and UNP yield similar results. Speech quality tests show that UNP is preferred to both parallel and series arrangements, although spectral subtraction is the most preferred. No significant differences exist in sound quality between the series and parallel arrangements, or between the NH group and the HI group. These results indicate that gain modifications affect intelligibility and sound quality differently. Listeners appear to have a higher tolerance for gain modifications with regard to intelligibility, while judgments for sound quality appear to be more affected by smaller amounts of gain modification.
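
    The difference between the two configurations compared in the study can be sketched in a few lines: in the parallel arrangement both algorithms compute their gains from the same input and the dB gains are summed before being applied, while in the series arrangement the second algorithm operates on the first one's output. The two "algorithms" below are simple stand-ins, not the actual spectral-subtraction and compression processors used in the paper.

        # Parallel vs. series combination of two gain-modification stages (toy gains).
        import numpy as np

        def gain_a_db(x):                    # stand-in for a spectral-subtraction-like gain
            return -6.0 * (np.abs(x) < 0.1)

        def gain_b_db(x):                    # stand-in for a compression-like gain
            return -3.0 * np.log10(np.maximum(np.abs(x), 1e-3))

        x = np.sin(np.linspace(0, 20 * np.pi, 1000)) * np.linspace(0.01, 1.0, 1000)

        # parallel: gains computed from the same input, summed in dB, applied once
        g_par = gain_a_db(x) + gain_b_db(x)
        y_par = x * 10 ** (g_par / 20)

        # series: the first stage's output becomes the second stage's input
        y_1 = x * 10 ** (gain_a_db(x) / 20)
        y_ser = y_1 * 10 ** (gain_b_db(y_1) / 20)

        print("rms parallel:", np.sqrt(np.mean(y_par ** 2)),
              "rms series:", np.sqrt(np.mean(y_ser ** 2)))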

  8. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.

  9. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  10. Development and application of efficient strategies for parallel magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Breuer, F.

    2006-07-01

    Virtually all existing MRI applications require both a high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required, which are technically challenging to provide. In addition to significant hardware costs, peripheral neuro-stimulations and the surpassing of admissible acoustic noise levels may occur. Today's whole body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990's. Within the last years, parallel imaging methods have become commercially available, and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding which normally is performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved given a specific imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image

  11. Development and application of efficient strategies for parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Breuer, F.

    2006-01-01

    Virtually all existing MRI applications require both a high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required, which are technically challenging to provide. In addition to significant hardware costs, peripheral neuro-stimulations and the surpassing of admissible acoustic noise levels may occur. Today's whole body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990's. Within the last years, parallel imaging methods have become commercially available, and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding which normally is performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved given a specific imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image artifacts

  12. Normal foot and ankle

    International Nuclear Information System (INIS)

    Weissman, S.D.

    1989-01-01

    The foot may be thought of as a bag of bones tied tightly together and functioning as a unit. The bones are expected to maintain their alignment without causing symptomatology to the patient. The author discusses a normal radiograph. The bones must have normal shape and normal alignment. The density of the soft tissues should be normal and there should be no fractures, tumors, or foreign bodies

  13. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed and the report shows how these classes can be used...
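
    A row-block distribution is the usual starting point for a parallel sparse matrix-vector product of the kind benchmarked in the report. The original code is C++ with MPI/OpenMP; the sketch below only illustrates the data distribution in Python with mpi4py and SciPy, using an invented 1-D Laplacian-like matrix and a replicated input vector.

        # Row-block parallel sparse matrix-vector product (illustrative mpi4py/SciPy sketch).
        import numpy as np
        import scipy.sparse as sp
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, P = comm.rank, comm.size
        N = 4000
        r0, r1 = rank * N // P, (rank + 1) * N // P        # this rank's block of rows

        data, I, J = [], [], []
        for i in range(r0, r1):                            # local rows of a tridiagonal matrix
            for j, v in ((i - 1, -1.0), (i, 2.0), (i + 1, -1.0)):
                if 0 <= j < N:
                    I.append(i - r0); J.append(j); data.append(v)
        A_local = sp.csr_matrix((data, (I, J)), shape=(r1 - r0, N))

        x = np.ones(N)                                     # in this sketch every rank holds all of x
        y_local = A_local @ x                              # local rows of y = A x, no communication
        y = np.concatenate(comm.allgather(y_local))        # assemble the distributed result
        if rank == 0:
            print("||A x||_inf =", np.abs(y).max())        # 1.0 for the 1-D Laplacian with x = ones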

  14. [Falsified medicines in parallel trade].

    Science.gov (United States)

    Muckenfuß, Heide

    2017-11-01

    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e. g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  15. The parallel adult education system

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne

    2015-01-01

    for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...

  16. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  17. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x^2 - y^2 shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.

  18. Default Parallels Plesk Panel Page

    Science.gov (United States)

    services that small businesses want and need. Our software includes key building blocks of cloud service virtualized servers Service Provider Products Parallels® Automation Hosting, SaaS, and cloud computing , the leading hosting automation software. You see this page because there is no Web site at this

  19. Parallel plate transmission line transformer

    NARCIS (Netherlands)

    Voeten, S.J.; Brussaard, G.J.H.; Pemen, A.J.M.

    2011-01-01

    A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the

  20. Matpar: Parallel Extensions for MATLAB

    Science.gov (United States)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  1. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray

  2. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  3. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  4. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  5. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers both with shared memory and with distributed memory are discussed. By analyzing the inherent law of the mathematics and physics model of photon transport according to the structure feature of parallel computers, using the strategy of 'to divide and conquer', adjusting the algorithm structure of the program, dissolving the data relationship, finding ingredients amenable to parallelism and creating large grain parallel subtasks, the sequential computing of photon transport is efficiently transformed into parallel and vector computing. The program was run on various HP parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP) and very good parallel speedup has been obtained

  6. Pseudo--Normals for Signed Distance Computation

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Bærentzen, Jakob Andreas

    2003-01-01

    the relation of a point to a mesh. At the vertices and edges of a triangle mesh, the surface is not $C^1$ continuous. Hence, the normal is undefined at these loci. Thürmer and Wüthrich proposed the angle weighted pseudo-normal as a way to deal with this problem. In this paper, we...

  7. Precaval retropancreatic space: Normal anatomy

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yeon Hee; Kim, Ki Whang; Kim, Myung Jin; Yoo, Hyung Sik; Lee, Jong Tae [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    1992-07-15

    The authors defined precaval retropancreatic space as the space between pancreatic head with portal vein and IVC and analyzed the CT findings of this space to know the normal structures and size in this space. We evaluated 100 cases of normal abdominal CT scan to find out normal anatomic structures of precaval retropancreatic space retrospectively. We also measured the distance between these structures and calculated the minimum, maximum and mean values. At the splenoportal confluence level, normal structures between portal vein and IVC were vessel (21%), lymph node (19%), and caudate lobe of liver (2%) in order of frequency. The maximum AP diameter of portocaval lymph node was 4 mm. Common bile duct (CBD) was seen in 44% and the diameter was mean 3 mm and maximum 11 mm. CBD was located in extrapancreatic (75%) and lateral (60.6%) to pancreatic head. At IVC-left renal vein level, the maximum distance between CBD and IVC was 5 mm and the structure between posterior pancreatic surface and IVC was only fat tissue. Knowledge of these normal structures and measurement will be helpful in differentiating pancreatic mass with retropancreatic mass such as lymphadenopathy.

  8. Surface energy and surface stress on vicinals by revisiting the Shuttleworth relation

    Science.gov (United States)

    Hecquet, Pascal

    2018-04-01

    In 1998 [Surf. Sci. 412/413, 639 (1998)], we showed that the step stress on vicinals varies as 1/L, L being the distance between steps, while the inter-step interaction energy primarily follows the law as 1/L2 from the well-known Marchenko-Parshin model. In this paper, we give a better understanding of the interaction term of the step stress. The step stress is calculated with respect to the nominal surface stress. Consequently, we calculate the diagonal surface stresses in both the vicinal system (x, y, z) where z is normal to the vicinal and the projected system (x, b, c) where b is normal to the nominal terrace. Moreover, we calculate the surface stresses by using two methods: the first called the 'Zero' method, from the surface pressure forces and the second called the 'One' method, by homogeneously deforming the vicinal in the parallel direction, x or y, and by calculating the surface energy excess proportional to the deformation. By using the 'One' method on the vicinal Cu(0 1 M), we find that the step deformations, due to the applied deformation, vary as 1/L by the same factor for the tensor directions bb and cb, and by twice the same factor for the parallel direction yy. Due to the vanishing of the surface stress normal to the vicinal, the variation of the step stress in the direction yy is better described by using only the step deformation in the same direction. We revisit the Shuttleworth formula, for while the variation of the step stress in the direction xx is the same between the two methods, the variation in the direction yy is higher by 76% for the 'Zero' method with respect to the 'One' method. In addition to the step energy, we confirm that the variation of the step stress must be taken into account for the understanding of the equilibrium of vicinals when they are not deformed.
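
    For readers skimming the scaling arguments, the two laws referred to above can be summarised schematically as follows; the coefficients A and B are placeholders, not values from the paper.

    ```latex
    % Schematic form of the scaling laws discussed above (A, B are placeholder coefficients):
    % inter-step interaction energy (Marchenko--Parshin) and the 1/L step-stress correction.
    \begin{align}
      E_{\mathrm{int}}(L) &\simeq \frac{A}{L^{2}}, &
      \sigma_{\mathrm{step}}(L) &\simeq \sigma_{\infty} + \frac{B}{L}.
    \end{align}
    ```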

  9. Surface spectra of Weyl semimetals through self-adjoint extensions

    Science.gov (United States)

    Seradjeh, Babak; Vennettilli, Michael

    2018-02-01

    We apply the method of self-adjoint extensions of Hermitian operators to the low-energy, continuum Hamiltonians of Weyl semimetals in bounded geometries and derive the spectrum of the surface states on the boundary. This allows for the full characterization of boundary conditions and the surface spectra on surfaces both normal to the Weyl node separation as well as parallel to it. We show that the boundary conditions for quadratic bulk dispersions are, in general, specified by a U (2 ) matrix relating the wave function and its derivatives normal to the surface. We give a general procedure to obtain the surface spectra from these boundary conditions and derive them in specific cases of bulk dispersion. We consider the role of global symmetries in the boundary conditions and their effect on the surface spectrum. We point out several interesting features of the surface spectra for different choices of boundary conditions, such as a Mexican-hat shaped dispersion on the surface normal to Weyl node separation. We find that the existence of bound states, Fermi arcs, and the shape of their dispersion, depend on the choice of boundary conditions. This illustrates the importance of the physics at and near the boundaries in the general statement of bulk-boundary correspondence.

  10. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors has come to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks along with Graphical Processing Units have broadened the scope for parallelism. Several compilers have been updated to address the resulting challenges in synchronization and threading. Appropriate classification of programs and algorithms can greatly help software engineers identify opportunities for effective parallelization. In the present work we investigate current species for the classification of algorithms; related work on classification is discussed along with a comparison of the issues that challenge it. A set of algorithms is chosen whose structure matches different issues and performs a given task. We have tested these algorithms utilizing existing automatic species extraction tools along with the Bones compiler. We have added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant data which is not captured by the original species of algorithms. We implemented these extensions in the tool, enabling automatic characterization of program code.

  11. Effect of parallel electric fields on the whistler mode wave propagation in the magnetosphere

    International Nuclear Information System (INIS)

    Gupta, G.P.; Singh, R.N.

    1975-01-01

    The effect of parallel electric fields on whistler mode wave propagation has been studied. To account for the parallel electric fields, the dispersion equation has been analyzed, and refractive index surfaces for magnetospheric plasma have been constructed. The presence of parallel electric fields deforms the refractive index surfaces which diffuse the energy flow and produce defocusing of the whistler mode waves. The parallel electric field induces an instability in the whistler mode waves propagating through the magnetosphere. The growth or decay of whistler mode instability depends on the direction of parallel electric fields. It is concluded that the analyses of whistler wave records received on the ground should account for the role of parallel electric fields

  12. Structural synthesis of parallel robots

    CERN Document Server

    Gogu, Grigore

    This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators.  This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1.  Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

  13. A tandem parallel plate analyzer

    International Nuclear Information System (INIS)

    Hamada, Y.; Fujisawa, A.; Iguchi, H.; Nishizawa, A.; Kawasumi, Y.

    1996-11-01

    By a new modification of a parallel plate analyzer the second-order focus is obtained at an arbitrary injection angle. This kind of analyzer with a small injection angle will have the advantage of a small operational voltage, compared to the Proca and Green analyzer where the injection angle is 30 degrees. Thus, the newly proposed analyzer will be very useful for the precise energy measurement of high energy particles in the MeV range. (author)

  14. High-speed parallel counter

    International Nuclear Information System (INIS)

    Gus'kov, B.N.; Kalinnikov, V.A.; Krastev, V.R.; Maksimov, A.N.; Nikityuk, N.M.

    1985-01-01

    This paper describes a high-speed parallel counter that contains 31 inputs and 15 outputs and is implemented by integrated circuits of series 500. The counter is designed for fast sampling of events according to the number of particles that pass simultaneously through the hodoscopic plane of the detector. The minimum delay of the output signals relative to the input is 43 nsec. The duration of the output signals can be varied from 75 to 120 nsec

  15. An anthropologist in parallel structure

    Directory of Open Access Journals (Sweden)

    Noelle Molé Liston

    2016-08-01

    Full Text Available The essay examines the parallels between Molé Liston’s studies on labor and precarity in Italy and the United States’ anthropology job market. Probing the way economic shift reshaped the field of anthropology of Europe in the late 2000s, the piece explores how the neoliberalization of the American academy increased the value in studying the hardships and daily lives of non-western populations in Europe.

  16. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman

    2010-01-01

    Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,

  17. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives we use Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.

  18. Wakefield calculations on parallel computers

    International Nuclear Information System (INIS)

    Schoessow, P.

    1990-01-01

    The use of parallelism in the solution of wakefield problems is illustrated for two different computer architectures (SIMD and MIMD). Results are given for finite difference codes which have been implemented on a Connection Machine and an Alliant FX/8 and which are used to compute wakefields in dielectric loaded structures. Benchmarks on code performance are presented for both cases. 4 refs., 3 figs., 2 tabs

  19. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  20. Parallel processing of genomics data

    Science.gov (United States)

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

    The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per single experiment, thus the analysis of this enormous flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To address these issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the parallel preprocessing and statistical analysis of genomics data, able to handle high-dimensional data with good response times. The proposed system is able to find statistically significant biological markers able to discriminate classes of patients that respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.
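
    As a rough illustration of the pattern described — chunk the feature matrix, preprocess and test each chunk in a pool of workers, then merge the per-feature statistics — here is a minimal Python sketch. It is not the authors' algorithm: the log transform, the two-group t-test (via SciPy, assumed available) and all sizes are stand-ins.

    ```python
    import numpy as np
    from multiprocessing import Pool
    from scipy import stats

    def analyse_chunk(args):
        """Preprocess one chunk of features and test responders vs. non-responders."""
        chunk, labels = args
        chunk = np.log2(chunk + 1.0)                  # toy preprocessing step
        _, pvals = stats.ttest_ind(chunk[:, labels == 1], chunk[:, labels == 0], axis=1)
        return pvals

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        data = rng.poisson(20, size=(10_000, 40)).astype(float)   # features x samples
        labels = np.array([0] * 20 + [1] * 20)

        chunks = np.array_split(data, 8, axis=0)                  # one chunk per task
        with Pool(processes=4) as pool:
            pvals = np.concatenate(pool.map(analyse_chunk, [(c, labels) for c in chunks]))
        print("candidate markers with p < 1e-4:", int(np.sum(pvals < 1e-4)))
    ```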

  1. Contact-impact algorithms on parallel computers

    International Nuclear Information System (INIS)

    Zhong Zhihua; Nilsson, Larsgunnar

    1994-01-01

    Contact-impact algorithms on parallel computers are discussed within the context of explicit finite element analysis. The algorithms concerned include a contact searching algorithm and an algorithm for contact force calculations. The contact searching algorithm is based on the territory concept of the general HITA algorithm. However, no distinction is made between different contact bodies, or between different contact surfaces. All contact segments from contact boundaries are taken as a single set. Hierarchy territories and contact territories are expanded. A three-dimensional bucket sort algorithm is used to sort contact nodes. The defence node algorithm is used in the calculation of contact forces. Both the contact searching algorithm and the defence node algorithm are implemented on the connection machine CM-200. The performance of the algorithms is examined under different circumstances, and numerical results are presented. ((orig.))
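
    The bucket-sort idea used for contact searching — hash nodes into 3-D cells so that candidate contacts are only sought among nodes in neighbouring cells — can be sketched as follows. This is a generic illustration, not the HITA or defence-node implementation, and the cell size is an arbitrary assumption.

    ```python
    import numpy as np
    from collections import defaultdict

    def bucket_candidates(nodes, cell_size):
        """Hash nodes into 3-D buckets; contact checks are restricted to nodes
        sharing a bucket with one of the 27 neighbouring cells."""
        buckets = defaultdict(list)
        keys = np.floor(nodes / cell_size).astype(int)
        for i, key in enumerate(map(tuple, keys)):
            buckets[key].append(i)

        pairs = set()
        for i, (kx, ky, kz) in enumerate(keys):
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        for j in buckets.get((kx + dx, ky + dy, kz + dz), ()):
                            if j > i:
                                pairs.add((i, j))
        return pairs

    nodes = np.random.default_rng(1).uniform(0.0, 10.0, size=(500, 3))
    print(len(bucket_candidates(nodes, cell_size=1.0)),
          "candidate pairs instead of", 500 * 499 // 2)
    ```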

  2. Optimising a parallel conjugate gradient solver

    Energy Technology Data Exchange (ETDEWEB)

    Field, M.R. [O'Reilly Institute, Dublin (Ireland)

    1996-12-31

    This work arises from the introduction of a parallel iterative solver to a large structural analysis finite element code. The code is called FEX and it was developed at Hitachi's Mechanical Engineering Laboratory. The FEX package can deal with a large range of structural analysis problems using a large number of finite element techniques. FEX can solve either stress or thermal analysis problems of a range of different types from plane stress to a full three-dimensional model. These problems can consist of a number of different materials which can be modelled by a range of material models. The structure being modelled can have the load applied at either a point or a surface, or by a pressure, a centrifugal force or just gravity. Alternatively a thermal load can be applied with a given initial temperature. The displacement of the structure can be constrained by having a fixed boundary or by prescribing the displacement at a boundary.
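
    The kernel such a solver parallelises is the conjugate gradient iteration itself, whose dominant costs are matrix-vector products and dot products. Below is a plain, sequential textbook CG sketch for a symmetric positive-definite system; it is not FEX code, and the tiny test matrix is only for illustration.

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Textbook CG for symmetric positive-definite A; the matrix-vector
        product and the dot products are the operations a parallel solver distributes."""
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    # Small SPD test problem
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))   # ~[0.0909, 0.6364]
    ```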

  3. Multipactor saturation in parallel-plate waveguides

    International Nuclear Information System (INIS)

    Sorolla, E.; Mattes, M.

    2012-01-01

    The saturation stage of a multipactor discharge is considered of interest, since it can guide towards a criterion to assess the multipactor onset. The electron cloud under multipactor regime within a parallel-plate waveguide is modeled by a thin continuous distribution of charge and the equations of motion are calculated taking into account the space charge effects. The saturation is identified by the interaction of the electron cloud with its image charge. The stability of the electron population growth is analyzed and two mechanisms of saturation to explain the steady-state multipactor for voltages near above the threshold onset are identified. The impact energy in the collision against the metal plates decreases during the electron population growth due to the attraction of the electron sheet on the image through the initial plate. When this growth remains stable till the impact energy reaches the first cross-over point, the electron surface density tends to a constant value. When the stability is broken before reaching the first cross-over point the surface charge density oscillates chaotically bounded within a certain range. In this case, an expression to calculate the maximum electron surface charge density is found whose predictions agree with the simulations when the voltage is not too high.

  4. Badlands: A parallel basin and landscape dynamics model

    Directory of Open Access Journals (Sweden)

    T. Salles

    2016-01-01

    Full Text Available Over more than three decades, a number of numerical landscape evolution models (LEMs have been developed to study the combined effects of climate, sea-level, tectonics and sediments on Earth surface dynamics. Most of them are written in efficient programming languages, but often cannot be used on parallel architectures. Here, I present a LEM which ports a common core of accepted physical principles governing landscape evolution into a distributed memory parallel environment. Badlands (acronym for BAsin anD LANdscape DynamicS is an open-source, flexible, TIN-based landscape evolution model, built to simulate topography development at various space and time scales.

  5. Elementary Excitations Near the Interface Between a Normal Metal and a Superconducting Metal (Excitations Élémentaires au Voisinage de la Surface de Séparation d'un Métal Normal et d'un Métal Supraconducteur)

    Science.gov (United States)

    Saint-James, D.

    The excitation spectrum of a layer of normal metal (N) deposited on a superconducting substrate (S) is discussed. It is shown that if the electron-electron attractive interaction is negligibly small in (N) there is no energy gap in the excitation spectrum even if the thickness of the layer (N) is small. A similar study, with equivalent conclusions, has been carried out for two superconductors and for normal metal spheres embedded in a superconductor. The effect may possibly explain some peculiar results of tunnelling experiments on hard superconductors.

  6. Biharmonic Submanifolds with Parallel Mean Curvature Vector in Pseudo-Euclidean Spaces

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Yu, E-mail: yufudufe@gmail.com [Dongbei University of Finance and Economics, School of Mathematics and Quantitative Economics (China)

    2013-12-15

    In this paper, we investigate biharmonic submanifolds in pseudo-Euclidean spaces with arbitrary index and dimension. We give a complete classification of biharmonic spacelike submanifolds with parallel mean curvature vector in pseudo-Euclidean spaces. We also determine all biharmonic Lorentzian surfaces with parallel mean curvature vector field in pseudo-Euclidean spaces.

  7. Biharmonic Submanifolds with Parallel Mean Curvature Vector in Pseudo-Euclidean Spaces

    International Nuclear Information System (INIS)

    Fu, Yu

    2013-01-01

    In this paper, we investigate biharmonic submanifolds in pseudo-Euclidean spaces with arbitrary index and dimension. We give a complete classification of biharmonic spacelike submanifolds with parallel mean curvature vector in pseudo-Euclidean spaces. We also determine all biharmonic Lorentzian surfaces with parallel mean curvature vector field in pseudo-Euclidean spaces
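
    For orientation, the biharmonic condition studied in these records can be stated schematically as follows, using Beltrami's formula relating the immersion x to the mean curvature vector H; this is a standard formulation, not text quoted from the papers.

    ```latex
    % Biharmonic condition for an isometric immersion x with mean curvature vector H,
    % where \Delta is the Laplace operator of the induced metric:
    \Delta^{2} x = 0 \quad\Longleftrightarrow\quad \Delta \vec{H} = 0 .
    ```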

  8. Baby Poop: What's Normal?

    Science.gov (United States)

    ... I'm breast-feeding my newborn and her bowel movements are yellow and mushy. Is this normal for baby poop? Answers from Jay L. Hoecker, M.D. Yellow, mushy bowel movements are perfectly normal for breast-fed babies. Still, ...

  9. Visual Memories Bypass Normalization.

    Science.gov (United States)

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores-neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.

  10. Overview of the Force Scientific Parallel Language

    Directory of Open Access Journals (Sweden)

    Gita Alaghband

    1994-01-01

    Full Text Available The Force parallel programming language designed for large-scale shared-memory multiprocessors is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that have been ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples to illustrate some parallel programming approaches using the Force are also presented.

  11. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities...... for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler’s ability to generate loop-parallel code. We use this compilation system to modify two sequential...... benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...

  12. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics- Type, Kinematics, and Optimal Design presents the results of 15 year's research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, which is also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity model taking into account motion and force transmissibility, and others.   This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  13. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; NA NA NA Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

  14. Making nuclear 'normal'

    International Nuclear Information System (INIS)

    Haehlen, Peter; Elmiger, Bruno

    2000-01-01

    The mechanics of the Swiss NPPs' 'come and see' programme 1995-1999 were illustrated in our contributions to all PIME workshops since 1996. Now, after four annual 'waves', all the country has been covered by the NPPs' invitation to dialogue. This makes PIME 2000 the right time to shed some light on one particular objective of this initiative: making nuclear 'normal'. The principal aim of the 'come and see' programme, namely to give the Swiss NPPs 'a voice of their own' by the end of the nuclear moratorium 1990-2000, has clearly been attained and was commented on during earlier PIMEs. It is, however, equally important that Swiss nuclear energy not only made progress in terms of public 'presence', but also in terms of being perceived as a normal part of industry, as a normal branch of the economy. The message that Swiss nuclear energy is nothing but a normal business involving normal people, was stressed by several components of the multi-prong campaign: - The speakers in the TV ads were real - 'normal' - visitors' guides and not actors; - The testimonials in the print ads were all real NPP visitors - 'normal' people - and not models; - The mailings inviting a very large number of associations to 'come and see' activated a typical channel of 'normal' Swiss social life; - Spending money on ads (a new activity for Swiss NPPs) appears to have resulted in being perceived by the media as a normal branch of the economy. Today we feel that the 'normality' message has well been received by the media. In the controversy dealing with antinuclear arguments brought forward by environmental organisations journalists nowadays as a rule give nuclear energy a voice - a normal right to be heard. As in a 'normal' controversy, the media again actively ask themselves questions about specific antinuclear claims, much more than before 1990 when the moratorium started. The result is that in many cases such arguments are discarded by journalists, because they are, e.g., found to be

  15. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  16. Parallel computation of rotating flows

    DEFF Research Database (Denmark)

    Lundin, Lars Kristian; Barker, Vincent A.; Sørensen, Jens Nørkær

    1999-01-01

    This paper deals with the simulation of 3‐D rotating flows based on the velocity‐vorticity formulation of the Navier‐Stokes equations in cylindrical coordinates. The governing equations are discretized by a finite difference method. The solution is advanced to a new time level by a two‐step process... is that of solving a singular, large, sparse, over‐determined linear system of equations, and the iterative method CGLS is applied for this purpose. We discuss some of the mathematical and numerical aspects of this procedure and report on the performance of our software on a wide range of parallel computers.
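
    The iterative method named above, CGLS, is conjugate gradients applied implicitly to the normal equations of the over-determined system. A minimal dense sketch is given below for illustration only; the actual system is large, sparse and singular, and the small test problem here is an assumption.

    ```python
    import numpy as np

    def cgls(A, b, iters=50, tol=1e-12):
        """CGLS: conjugate gradients on the normal equations A^T A x = A^T b,
        applied without ever forming A^T A explicitly."""
        x = np.zeros(A.shape[1])
        r = b - A @ x
        s = A.T @ r
        p = s.copy()
        gamma = s @ s
        for _ in range(iters):
            q = A @ p
            alpha = gamma / (q @ q)
            x += alpha * p
            r -= alpha * q
            s = A.T @ r
            gamma_new = s @ s
            if np.sqrt(gamma_new) < tol:
                break
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
        return x

    # Over-determined least-squares test: 100 equations, 3 unknowns
    rng = np.random.default_rng(0)
    A = rng.normal(size=(100, 3))
    x_true = np.array([1.0, -2.0, 0.5])
    b = A @ x_true + 0.01 * rng.normal(size=100)
    print(cgls(A, b))   # close to x_true
    ```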

  17. Trampoline motions in Xe-graphite(0 0 0 1) surface scattering

    Science.gov (United States)

    Watanabe, Yoshimasa; Yamaguchi, Hiroki; Hashinokuchi, Michihiro; Sawabe, Kyoichi; Maruyama, Shigeo; Matsumoto, Yoichiro; Shobatake, Kosuke

    2005-09-01

    We have investigated Xe scattering from the graphite(0 0 0 1) surface at hyperthermal incident energies using a molecular beam-surface scattering technique and molecular dynamics simulations. For all incident conditions, the incident Xe atom conserves the momentum parallel to the surface and loses approximately 80% of the normal incident energy. The weak interlayer potential of graphite disperses the deformation over the wide range of a graphene sheet. The dynamic corrugation induced by the collision is smooth even at hyperthermal incident energy; the graphene sheet moves like a trampoline net and the Xe atom like a trampoliner.

  18. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....

  19. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....

  20. A Parallel Approach to Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Lubomir Dedera

    2004-01-01

    Full Text Available The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding time and the effectiveness of parallelization.

  1. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  2. An Introduction to Parallel Computation R

    Indian Academy of Sciences (India)

    How are they programmed? This article provides an introduction. A parallel computer is a network of processors built for ... and have been used to solve problems much faster than a single ... in parallel computer design is to select an organization which ..... The most ambitious approach to parallel computing is to develop.

  3. Comparison of parallel viscosity with neoclassical theory

    International Nuclear Information System (INIS)

    Ida, K.; Nakajima, N.

    1996-04-01

    Toroidal rotation profiles are measured with charge exchange spectroscopy for the plasma heated with tangential NBI in the CHS heliotron/torsatron device to estimate parallel viscosity. The parallel viscosity derived from the toroidal rotation velocity shows good agreement with the neoclassical parallel viscosity plus the perpendicular viscosity (μ⊥ = 2 m²/s). (author)

  4. Parallel Fabrication and Optoelectronic Characterization of Nanostructured Surfaces

    National Research Council Canada - National Science Library

    Douglas, Kenneth

    2002-01-01

    .... This has been performed without the need for silicon nitride layers or multi-layered resists. (2) We have conducted experiments using a closed-loop MM to measure the coefficient of thermal expansion...

  5. Minimal surfaces in symmetric spaces with parallel second ...

    Indian Academy of Sciences (India)

    Xiaoxiang Jiao

    2017-07-31

    Jul 31, 2017 ... space and its non-compact dual by totally real, totally complex, and invariant immersions. ... frame fields, let $\theta_1, \theta_2$ and $\omega_1, \ldots, \omega_n$ be their dual frames. ... where $\tilde{\nabla}$ is the induced connection of the pull-back bundle $f^{-1}T(N)$, which is defined by $\tilde{\nabla}_X W = \bar{\nabla}_{f_* X} W$ for $W \in f^{-1}T(N)$ and $X \in T(M)$. Let $f_*(e_i)$ ...

  6. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n²), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...
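
    A minimal example of the kind of randomized algorithm discussed is quicksort with a uniformly random pivot, which achieves expected O(n log n) comparisons on every input; the snippet below is a generic illustration, not taken from the book.

    ```python
    import random

    def randomized_quicksort(a):
        """Quicksort with a uniformly random pivot: expected O(n log n) comparisons
        on every input, whereas a fixed-pivot version is Theta(n^2) on adversarial inputs."""
        if len(a) <= 1:
            return a
        pivot = random.choice(a)
        less = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return randomized_quicksort(less) + equal + randomized_quicksort(greater)

    print(randomized_quicksort([5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]
    ```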

  7. Xyce parallel electronic simulator design.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, it is to be expected on long term projects for there to be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

  8. Normal Pressure Hydrocephalus

    Science.gov (United States)

    ... improves the chance of a good recovery. Without treatment, symptoms may worsen and cause death. What research is being done? The NINDS conducts and supports research on neurological disorders, including normal pressure hydrocephalus. Research on disorders such ...

  9. Normality in Analytical Psychology

    Science.gov (United States)

    Myers, Steve

    2013-01-01

    Although C.G. Jung’s interest in normality wavered throughout his career, it was one of the areas he identified in later life as worthy of further research. He began his career using a definition of normality which would have been the target of Foucault’s criticism, had Foucault chosen to review Jung’s work. However, Jung then evolved his thinking to a standpoint that was more aligned to Foucault’s own. Thereafter, the post Jungian concept of normality has remained relatively undeveloped by comparison with psychoanalysis and mainstream psychology. Jung’s disjecta membra on the subject suggest that, in contemporary analytical psychology, too much focus is placed on the process of individuation to the neglect of applications that consider collective processes. Also, there is potential for useful research and development into the nature of conflict between individuals and societies, and how normal people typically develop in relation to the spectrum between individuation and collectivity. PMID:25379262

  10. Normal pressure hydrocephalus

    Science.gov (United States)

    Hydrocephalus - occult; Hydrocephalus - idiopathic; Hydrocephalus - adult; Hydrocephalus - communicating; Dementia - hydrocephalus; NPH ... Ferri FF. Normal pressure hydrocephalus. In: Ferri FF, ed. ... Elsevier; 2016:chap 648. Rosenberg GA. Brain edema and disorders ...

  11. Normal Functioning Family

    Science.gov (United States)

    Is there any way ...

  12. Normal growth and development

    Science.gov (United States)

    ... page: //medlineplus.gov/ency/article/002456.htm Normal growth and development. A child's growth and development can be divided into four periods: ...

  13. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements high-performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
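
    PDDP itself is a Fortran preprocessor, but the whole-array, data-parallel style it supports (FORALL-like elementwise updates and WHERE-like masked assignment) can be suggested with a rough NumPy analogue; the snippet below is an analogy only, not PDDP syntax.

    ```python
    import numpy as np

    # Rough analogue (not PDDP/Fortran) of whole-array, data-parallel style:
    # a FORALL-like elementwise update followed by a WHERE-like masked assignment.
    n = 8
    a = np.arange(n, dtype=float)
    b = np.empty(n)

    b[:] = 2.0 * a + 1.0                 # FORALL (i=1:n) b(i) = 2*a(i) + 1
    b = np.where(a > 3.0, b, 0.0)        # WHERE (a > 3) keep b, ELSEWHERE set 0
    print(b)
    ```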

  14. Parallelization of quantum molecular dynamics simulation code

    International Nuclear Information System (INIS)

    Kato, Kaori; Kunugi, Tomoaki; Shibahara, Masahiko; Kotake, Susumu

    1998-02-01

    A quantum molecular dynamics simulation code has been developed for the analysis of the thermalization of photon energies in molecules or materials at the Kansai Research Establishment. The simulation code is parallelized for both a scalar massively parallel computer (Intel Paragon XP/S75) and a vector parallel computer (Fujitsu VPP300/12). Scalable speed-up has been obtained on both parallel computers by distributing particle groups across processor units. By distributing work to processor units not only by particle group but also by the fine-grained per-particle calculations, high parallel performance is achieved on the Intel Paragon XP/S75. (author)

  15. Implementation and performance of parallelized elegant

    International Nuclear Information System (INIS)

    Wang, Y.; Borland, M.

    2008-01-01

    The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

  16. Parallelization of 2-D lattice Boltzmann codes

    International Nuclear Information System (INIS)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two dimensional fluid flow are developed on vector parallel computer Fujitsu VPP500 and scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code to be vectorized along with the axis perpendicular to the direction of the decomposition. High parallel efficiency of 95.1% by the vector parallel calculation on 16 processors with 1152x1152 grid and 88.6% by the scalar parallel calculation on 100 processors with 800x800 grid are obtained. The performance models are developed to analyze the performance of the LB codes. It is shown by our performance models that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors up to 100 processors. We also analyze the scalability in keeping the available memory size of one processor element at maximum. Our performance model predicts that the execution time of the vector parallel code increases about 3% on 500 processors. Although the 1-D domain decomposition method has in general a drawback in the interprocessor communication, the vector parallel LB code is still suitable for the large scale and/or high resolution simulations. (author)

  17. Parallelization of 2-D lattice Boltzmann codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two dimensional fluid flow are developed on vector parallel computer Fujitsu VPP500 and scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code to be vectorized along with the axis perpendicular to the direction of the decomposition. High parallel efficiency of 95.1% by the vector parallel calculation on 16 processors with 1152x1152 grid and 88.6% by the scalar parallel calculation on 100 processors with 800x800 grid are obtained. The performance models are developed to analyze the performance of the LB codes. It is shown by our performance models that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors up to 100 processors. We also analyze the scalability in keeping the available memory size of one processor element at maximum. Our performance model predicts that the execution time of the vector parallel code increases about 3% on 500 processors. Although the 1-D domain decomposition method has in general a drawback in the interprocessor communication, the vector parallel LB code is still suitable for the large scale and/or high resolution simulations. (author).
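
    The 1-D domain decomposition described in these records amounts to giving each processor a slab of rows plus ghost rows that are exchanged with the neighbouring processors each step. The sketch below illustrates that exchange using mpi4py, which is an assumption for illustration and not the codes discussed above; the grid size is taken from the abstract only as an example, and the slab count is assumed to divide it evenly.

    ```python
    # Sketch of a 1-D slab decomposition with ghost-row exchange
    # (run e.g. `mpirun -n 4 python halo.py`; mpi4py is assumed).
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    nx_global, ny = 1152, 1152
    nx_local = nx_global // size                       # rows owned by this rank
    grid = np.full((nx_local + 2, ny), float(rank))    # +2 ghost rows

    up = rank - 1 if rank > 0 else MPI.PROC_NULL
    down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # Send my first interior row up and receive the lower ghost row from below;
    # then send my last interior row down and receive the upper ghost row from above.
    comm.Sendrecv(sendbuf=grid[1, :].copy(), dest=up,
                  recvbuf=grid[nx_local + 1, :], source=down)
    comm.Sendrecv(sendbuf=grid[nx_local, :].copy(), dest=down,
                  recvbuf=grid[0, :], source=up)

    # After the exchange each rank can update its interior rows independently.
    ```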

  18. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  19. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    Full Text Available To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  20. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...
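
    The space problem described above — naive flattening materialises all n³ scalar products at once — can be contrasted with a deliberately simple streamed (blocked) evaluation, in which only a block of intermediates is live at a time. The snippet is a NumPy analogy for intuition, not NESL or Accelerate code.

    ```python
    import numpy as np

    def blocked_matmul(A, B, block=64):
        """Row-blocked product: only a block x n slice of intermediates is live
        at once, instead of materializing all n^3 independent scalar products."""
        n, m = A.shape[0], B.shape[1]
        C = np.empty((n, m))
        for i in range(0, n, block):
            C[i:i + block] = A[i:i + block] @ B
        return C

    rng = np.random.default_rng(0)
    A, B = rng.normal(size=(256, 256)), rng.normal(size=(256, 256))
    print(np.allclose(blocked_matmul(A, B), A @ B))   # True
    ```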

  1. Parallel magnetic field perturbations in gyrokinetic simulations

    International Nuclear Information System (INIS)

    Joiner, N.; Hirose, A.; Dorland, W.

    2010-01-01

    At low β it is common to neglect parallel magnetic field perturbations on the basis that they are of order β². This is only true if effects of order β are canceled by a term in the ∇B drift also of order β [H. L. Berk and R. R. Dominguez, J. Plasma Phys. 18, 31 (1977)]. To our knowledge this has not been rigorously tested with modern gyrokinetic codes. In this work we use the gyrokinetic code GS2 [Kotschenreuther et al., Comput. Phys. Commun. 88, 128 (1995)] to investigate whether the compressional magnetic field perturbation B∥ is required for accurate gyrokinetic simulations at low β for microinstabilities commonly found in tokamaks. The kinetic ballooning mode (KBM) demonstrates the principle described by Berk and Dominguez strongly, as does the trapped electron mode, in a less dramatic way. The ion and electron temperature gradient (ETG) driven modes do not typically exhibit this behavior; the effects of B∥ are found to depend on the pressure gradients. The terms which are seen to cancel at long wavelength in KBM calculations can be cumulative in the ion temperature gradient case and increase with η_e. The effect of B∥ on the ETG instability is shown to depend on the normalized pressure gradient β′ at constant β.

  2. Bianchi surfaces: integrability in an arbitrary parametrization

    International Nuclear Information System (INIS)

    Nieszporski, Maciej; Sym, Antoni

    2009-01-01

    We discuss integrability of normal field equations of arbitrarily parametrized Bianchi surfaces. A geometric definition of the Bianchi surfaces is presented as well as the Baecklund transformation for the normal field equations in an arbitrarily chosen surface parametrization.

  3. Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis

    Science.gov (United States)

    Che, E.; Olsen, M. J.

    2017-09-01

    Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common procedure of post-processing to group the point cloud into a number of clusters to simplify the data for the sequential modelling and analysis needed for most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing the normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data from most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which limits the errors in normal estimation propagating to segmentation. Both an indoor and outdoor scene are used for an experiment to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
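
    The abstract describes the pipeline only qualitatively. As an illustration of the underlying idea, and not of the authors' implementation, the following Python/NumPy sketch flags edge cells on an organized (gridded) scan by the variation of locally estimated normals and then groups the remaining cells into smooth regions; the array layout, the 15-degree threshold and all function names are assumptions, and the region-growing step is approximated by connected-component labelling.

        import numpy as np
        from scipy import ndimage

        def segment_gridded_scan(points, edge_thresh_deg=15.0):
            """Segment an organized (H x W x 3) scan into smooth regions.

            points: one XYZ point per scan cell, following the gridded scan pattern.
            Returns (labels, edge_mask); label 0 marks edge/unassigned cells.
            """
            # Difference vectors along the two grid directions of the scan pattern.
            du = np.zeros_like(points)
            dv = np.zeros_like(points)
            du[:, :-1] = points[:, 1:] - points[:, :-1]
            dv[:-1, :] = points[1:, :] - points[:-1, :]

            # Per-cell normal from the cross product of the grid-direction vectors.
            normals = np.cross(du, dv)
            norms = np.linalg.norm(normals, axis=2, keepdims=True)
            normals = np.divide(normals, norms, out=np.zeros_like(normals), where=norms > 0)

            # Normal variation: largest angle between a cell's (undirected) normal and
            # its 4-connected neighbours; large variation flags silhouette/intersection edges.
            max_angle = np.zeros(points.shape[:2])
            for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                neigh = np.roll(normals, shift, axis=(0, 1))
                cosang = np.clip(np.abs(np.sum(normals * neigh, axis=2)), 0.0, 1.0)
                max_angle = np.maximum(max_angle, np.degrees(np.arccos(cosang)))

            edge_mask = max_angle > edge_thresh_deg
            # Region growing reduces here to connected-component labelling of the
            # non-edge cells; each label corresponds to one smooth surface.
            labels, n_regions = ndimage.label(~edge_mask)
            return labels, edge_mask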

  4. Smooth quantile normalization.

    Science.gov (United States)

    Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada

    2018-04-01

    Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
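
    As a rough illustration of the assumption stated above, the sketch below performs quantile normalization separately within each biological group, so distributions are forced to match only within groups; the published qsmooth method goes further and shrinks each group reference toward the global reference with a data-driven weight per quantile, which this simplified version omits. All names are illustrative.

        import numpy as np

        def groupwise_quantile_normalize(X, groups):
            """Simplified, group-aware quantile normalization.

            X: (features x samples) matrix; groups: one label per sample column.
            Each sample is mapped onto the mean quantile distribution of its own
            biological group, so distributions are equalized only within groups.
            """
            X = np.asarray(X, dtype=float)
            groups = np.asarray(groups)
            order = np.argsort(X, axis=0)            # per-sample sort order
            ranks = np.argsort(order, axis=0)        # rank of each value in its sample
            X_sorted = np.take_along_axis(X, order, axis=0)

            X_out = np.empty_like(X)
            for g in np.unique(groups):
                cols = np.flatnonzero(groups == g)
                ref = X_sorted[:, cols].mean(axis=1)  # group reference quantiles
                for j in cols:
                    X_out[:, j] = ref[ranks[:, j]]    # reassign values by rank
            return X_out

        # Toy example: 1000 features, two groups of three samples each.
        X = np.random.default_rng(1).gamma(2.0, size=(1000, 6))
        normalized = groupwise_quantile_normalize(X, ["A", "A", "A", "B", "B", "B"])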

  5. Massively parallel diffuse optical tomography

    Energy Technology Data Exchange (ETDEWEB)

    Sandusky, John V.; Pitts, Todd A.

    2017-09-05

    Diffuse optical tomography systems and methods are described herein. In a general embodiment, the diffuse optical tomography system comprises a plurality of sensor heads, the plurality of sensor heads comprising respective optical emitter systems and respective sensor systems. A sensor head in the plurality of sensor heads is caused to act as an illuminator, such that its optical emitter system transmits a transillumination beam towards a portion of a sample. Other sensor heads in the plurality of sensor heads act as observers, detecting portions of the transillumination beam that radiate from the sample in the fields of view of the respective sensor systems of the other sensor heads. Thus, sensor heads in the plurality of sensor heads generate sensor data in parallel.

  6. Embodied and Distributed Parallel DJing.

    Science.gov (United States)

    Cappelen, Birgitta; Andersson, Anders-Petter

    2016-01-01

    Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids, etc., or to groups of persons with special needs performing in traditional ways. The latter might be people with disabilities who are musicians playing traditional instruments, or actors performing theatre. In this paper we focus on the innovative potential of including people with special needs when creating new cultural activities. In our project RHYME our goal was to create health-promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contribution, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments Empowering Multi-Sensorial Things.

  7. Device for balancing parallel strings

    Science.gov (United States)

    Mashikian, Matthew S.

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

  8. Linear parallel processing machines I

    Energy Technology Data Exchange (ETDEWEB)

    Von Kunze, M

    1984-01-01

    As is well known, non-context-free grammars for generating formal languages have a certain intrinsic computational power that presents serious difficulties for efficient parsing algorithms as well as for the development of an algebraic theory of context-sensitive languages. In this paper a framework is given for the investigation of the computational power of formal grammars, in order to start a thorough analysis of grammars consisting of derivation rules of the form aB → A_1 ... A_n b_1 ... b_m. These grammars may be thought of as automata by means of parallel processing, if one considers the variables as operators acting on the terminals while reading them right-to-left. This kind of automata and their 2-dimensional programming language prove to be useful by allowing a concise linear-time algorithm for integer multiplication. Linear parallel processing machines (LP-machines), which are in their general form equivalent to Turing machines, include finite automata and pushdown automata (with states encoded) as special cases. Bounded LP-machines yield deterministic accepting automata for nondeterministic context-free languages, and they define an interesting class of context-sensitive languages. A characterization of this class in terms of generating grammars is established by using derivation trees with crossings as a helpful tool. From the algebraic point of view, deterministic LP-machines are effectively represented semigroups with distinguished subsets. Concerning the dualism between generating and accepting devices of formal languages within the algebraic setting, the concept of accepting automata turns out to reduce essentially to embeddability in an effectively represented extension monoid, even in the classical cases.

  9. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  10. Electron transfer in gas surface collisions

    International Nuclear Information System (INIS)

    Wunnik, J.N.M. van.

    1983-01-01

    In this thesis, electron transfer between atoms and metal surfaces is discussed in general, and the negative ionization of hydrogen by scattering protons at a cesiated crystalline tungsten (110) surface in particular. Experimental results and a novel theoretical analysis are presented. In Chapter I a theoretical overview of resonant electron transitions between atoms and metals is given. In the first part of Chapter II atom-metal electron transitions at a fixed atom-metal distance are described on the basis of a model developed by Gadzuk. In the second part the influence of the motion of the atom on the atomic charge state is incorporated. Measurements presented in Chapter III show a strong dependence of the fraction of negatively charged H atoms scattered at cesiated tungsten on the normal as well as the parallel velocity component. In Chapter IV the proposed mechanism for the parallel velocity effect is incorporated in the amplitude method. The scattering process of protons incident under grazing angles on a cesium-covered surface is studied in Chapter V. (Auth.)

  11. Monitoring the normal body

    DEFF Research Database (Denmark)

    Nissen, Nina Konstantin; Holm, Lotte; Baarts, Charlotte

    2015-01-01

    of practices for monitoring their bodies based on different kinds of calculations of weight and body size, observations of body shape, and measurements of bodily firmness. Biometric measurements are familiar to them as are health authorities' recommendations. Despite not belonging to an extreme BMI category...... provides us with knowledge about how to prevent future overweight or obesity. This paper investigates body size ideals and monitoring practices among normal-weight and moderately overweight people. Methods : The study is based on in-depth interviews combined with observations. 24 participants were...... recruited by strategic sampling based on self-reported BMI 18.5-29.9 kg/m2 and socio-demographic factors. Inductive analysis was conducted. Results : Normal-weight and moderately overweight people have clear ideals for their body size. Despite being normal weight or close to this, they construct a variety...

  12. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    Full Text Available There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with equational programming language (EPL. Our approach is based on a program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  13. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  14. Normal modified stable processes

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    Gaussian (NGIG) laws. The wider framework thus established provides, in particular, for added flexibility in the modelling of the dynamics of financial time series, of importance especially as regards OU based stochastic volatility models for equities. In the special case of the tempered stable OU process......This paper discusses two classes of distributions, and stochastic processes derived from them: modified stable (MS) laws and normal modified stable (NMS) laws. This extends corresponding results for the generalised inverse Gaussian (GIG) and generalised hyperbolic (GH) or normal generalised inverse...

  15. The normal holonomy group

    International Nuclear Information System (INIS)

    Olmos, C.

    1990-05-01

    The restricted holonomy group of a Riemannian manifold is a compact Lie group and its representation on the tangent space is a product of irreducible representations and a trivial one. Each one of the non-trivial factors is either an orthogonal representation of a connected compact Lie group which acts transitively on the unit sphere or it is the isotropy representation of a single Riemannian symmetric space of rank ≥ 2. We prove that, all these properties are also true for the representation on the normal space of the restricted normal holonomy group of any submanifold of a space of constant curvature. 4 refs

  16. State of the art of parallel scientific visualization applications on PC clusters; Etat de l'art des applications de visualisation scientifique paralleles sur grappes de PC

    Energy Technology Data Exchange (ETDEWEB)

    Juliachs, M

    2004-07-01

    In this state of the art on parallel scientific visualization applications on PC clusters, we deal with both surface and volume rendering approaches. We first analyze available PC cluster configurations and existing parallel rendering software components for parallel graphics rendering. CEA/DIF has been studying cluster visualization since 2001. This report is part of a study to set up a new visualization research platform. This platform consisting of an eight-node PC cluster under Linux and a tiled display was installed in collaboration with Versailles-Saint-Quentin University in August 2003. (author)

  18. Fast Evaluation of Segmentation Quality with Parallel Computing

    Directory of Open Access Journals (Sweden)

    Henry Cruz

    2017-01-01

    Full Text Available In digital image processing and computer vision, a fairly frequent task is the performance comparison of different algorithms on enormous image databases. This task is usually time-consuming and tedious, such that any kind of tool to simplify this work is welcome. To achieve an efficient and more practical handling of a normally tedious evaluation, we implemented an automatic detection system with the help of MATLAB®’s Parallel Computing Toolbox™. The key parts of the system have been parallelized to achieve simultaneous execution and analysis of segmentation algorithms on the one hand and the evaluation of detection accuracy for the nonforested regions, as a study case, on the other hand. As a positive side effect, CPU usage was reduced and processing time was significantly decreased by 68.54% compared to sequential processing (i.e., executing the system with each algorithm one by one).
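
    The study itself was implemented with MATLAB's Parallel Computing Toolbox; the sketch below merely illustrates the same pattern, evaluating many (algorithm, image) pairs concurrently, using Python's multiprocessing module. The IoU metric, the job list and all names are illustrative assumptions.

        from multiprocessing import Pool

        def intersection_over_union(pred, truth):
            """Simple segmentation-quality score for one image (binary masks)."""
            inter = sum(p and t for p, t in zip(pred, truth))
            union = sum(p or t for p, t in zip(pred, truth))
            return inter / union if union else 1.0

        def evaluate(job):
            algorithm, image_id, pred_mask, truth_mask = job
            return algorithm, image_id, intersection_over_union(pred_mask, truth_mask)

        if __name__ == "__main__":
            # Hypothetical job list: (algorithm name, image id, predicted mask, ground truth).
            jobs = [
                ("otsu",   1, [1, 1, 0, 0], [1, 0, 0, 0]),
                ("kmeans", 1, [1, 1, 1, 0], [1, 0, 0, 0]),
            ]
            with Pool() as pool:                      # one worker per CPU core by default
                for name, img, score in pool.map(evaluate, jobs):
                    print(f"{name} on image {img}: IoU = {score:.2f}")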

  19. Sharing of nonlinear load in parallel-connected three-phase converters

    DEFF Research Database (Denmark)

    Borup, Uffe; Blaabjerg, Frede; Enjeti, Prasad N.

    2001-01-01

    In this paper, a new control method is presented which enables equal sharing of linear and nonlinear loads in three-phase power converters connected in parallel, without communication between the converters. The paper focuses on solving the problem that arises when two converters with harmonic compensation are connected in parallel. Without the new solution, they are normally not able to distinguish the harmonic currents that flow to the load and harmonic currents that circulate between the converters. Analysis and experimental results on two 90-kVA 400-Hz converters in parallel are presented. The results show that both linear and nonlinear loads can be shared equally by the proposed concept.

  20. Normality in Analytical Psychology

    Directory of Open Access Journals (Sweden)

    Steve Myers

    2013-11-01

    Full Text Available Although C.G. Jung’s interest in normality wavered throughout his career, it was one of the areas he identified in later life as worthy of further research. He began his career using a definition of normality which would have been the target of Foucault’s criticism, had Foucault chosen to review Jung’s work. However, Jung then evolved his thinking to a standpoint that was more aligned to Foucault’s own. Thereafter, the post Jungian concept of normality has remained relatively undeveloped by comparison with psychoanalysis and mainstream psychology. Jung’s disjecta membra on the subject suggest that, in contemporary analytical psychology, too much focus is placed on the process of individuation to the neglect of applications that consider collective processes. Also, there is potential for useful research and development into the nature of conflict between individuals and societies, and how normal people typically develop in relation to the spectrum between individuation and collectivity.

  1. Medically-enhanced normality

    DEFF Research Database (Denmark)

    Møldrup, Claus; Traulsen, Janine Morgall; Almarsdóttir, Anna Birna

    2003-01-01

    Objective: To consider public perspectives on the use of medicines for non-medical purposes, a usage called medically-enhanced normality (MEN). Method: Examples from the literature were combined with empirical data derived from two Danish research projects: a Delphi internet study and a Telebus...

  2. The Normal Fetal Pancreas.

    Science.gov (United States)

    Kivilevitch, Zvi; Achiron, Reuven; Perlman, Sharon; Gilboa, Yinon

    2017-10-01

    The aim of the study was to assess the sonographic feasibility of measuring the fetal pancreas and its normal development throughout pregnancy. We conducted a cross-sectional prospective study between 19 and 36 weeks' gestation. The study included singleton pregnancies with normal pregnancy follow-up. The pancreas circumference was measured. The first 90 cases were tested to assess feasibility. Two hundred ninety-seven fetuses of nondiabetic mothers were recruited during a 3-year period. The overall satisfactory visualization rate was 61.6%. The intraobserver and interobserver variability had high interclass correlation coefficients of 0.964 and 0.967, respectively. A cubic polynomial regression best described the correlation of pancreas circumference with gestational age (r = 0.744). Pancreas circumference percentiles for each week of gestation were calculated. During the study period, we detected 2 cases with overgrowth syndrome and 1 case with an annular pancreas. In this study, we assessed the feasibility of sonography for measuring the fetal pancreas and established a normal reference range for the fetal pancreas circumference throughout pregnancy. This database can be helpful when investigating fetomaternal disorders that can involve its normal development. © 2017 by the American Institute of Ultrasound in Medicine.

  3. Parallel processing for fluid dynamics applications

    International Nuclear Information System (INIS)

    Johnson, G.M.

    1989-01-01

    The impact of parallel processing on computational science and, in particular, on computational fluid dynamics is growing rapidly. In this paper, particular emphasis is given to developments which have occurred within the past two years. Parallel processing is defined and the reasons for its importance in high-performance computing are reviewed. Parallel computer architectures are classified according to the number and power of their processing units, their memory, and the nature of their connection scheme. Architectures which show promise for fluid dynamics applications are emphasized. Fluid dynamics problems are examined for parallelism inherent at the physical level. CFD algorithms and their mappings onto parallel architectures are discussed. Several examples are presented to document the performance of fluid dynamics applications on present-generation parallel processing devices.

  4. Design considerations for parallel graphics libraries

    Science.gov (United States)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  5. About the geometry of the Earth geodetic reference surfaces

    Science.gov (United States)

    Husár, Ladislav; Švaral, Peter; Janák, Juraj

    2017-10-01

    The paper focuses on the comparison of the metrics of the three most common reference surfaces of the Earth used in geodesy (excluding the plane, which also belongs to the reference surfaces used in geodesy when dealing with small areas): a sphere, an ellipsoid of revolution and a triaxial ellipsoid. The latter two surfaces are treated in a more detailed way. First, the mathematical form of the metric tensors using three types of coordinates is derived and the lengths of meridian and parallel arcs between the two types of ellipsoids are compared. Three kinds of parallels, according to the type of latitude, can be defined on a triaxial ellipsoid. We show that two types of parallels are spatial curves and one is represented by ellipses. The differences in curvature of both kinds of ellipsoid are analysed using the normal curvature radii. The advantage of the chosen triaxial ellipsoid is documented by its better fit with respect to the high-degree geoid model EIGEN6c4 computed up to degree and order 2160.
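
    As a concrete illustration of the normal curvature radii mentioned above, the sketch below evaluates the standard meridian and prime-vertical radii of curvature of an ellipsoid of revolution and combines them with Euler's theorem for the radius of a normal section in an arbitrary azimuth; the triaxial case treated in the paper is more involved and is not reproduced here. The GRS80 parameters and function names are illustrative choices.

        import math

        def curvature_radii(a, f, lat_deg):
            """Meridian (M) and prime-vertical (N) radii of curvature of an
            ellipsoid of revolution at geodetic latitude lat_deg."""
            e2 = f * (2.0 - f)                       # first eccentricity squared
            s2 = math.sin(math.radians(lat_deg)) ** 2
            w = math.sqrt(1.0 - e2 * s2)
            M = a * (1.0 - e2) / w**3
            N = a / w
            return M, N

        def normal_section_radius(M, N, azimuth_deg):
            """Euler's theorem: radius of the normal section in a given azimuth."""
            az = math.radians(azimuth_deg)
            return 1.0 / (math.cos(az) ** 2 / M + math.sin(az) ** 2 / N)

        # GRS80 ellipsoid of revolution (a in metres, f flattening) at latitude 45 N.
        M, N = curvature_radii(6378137.0, 1.0 / 298.257222101, 45.0)
        print(M, N, normal_section_radius(M, N, 30.0))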

  6. Synchronization Techniques in Parallel Discrete Event Simulation

    OpenAIRE

    Lindén, Jonatan

    2018-01-01

    Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...

  7. Parallel processing from applications to systems

    CERN Document Server

    Moldovan, Dan I

    1993-01-01

    This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well balanced across diverse topics in this concise presentation. For exceptional cla

  8. Parallel processing for artificial intelligence 1

    CERN Document Server

    Kanal, LN; Kumar, V; Suttner, CB

    1994-01-01

    Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, data filtering and data mining. The publication is divided into six sections. The first addresses parallel computing for processing and understanding images. The second discus

  9. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.

  10. Refinement of Parallel and Reactive Programs

    OpenAIRE

    Back, R. J. R.

    1992-01-01

    We show how to apply the refinement calculus to stepwise refinement of parallel and reactive programs. We use action systems as our basic program model. Action systems are sequential programs which can be implemented in a parallel fashion. Hence refinement calculus methods, originally developed for sequential programs, carry over to the derivation of parallel programs. Refinement of reactive programs is handled by data refinement techniques originally developed for the sequential refinement c...

  11. Possible origin and significance of extension-parallel drainages in Arizona's metamorphic core complexes

    Science.gov (United States)

    Spencer, J.E.

    2000-01-01

    The corrugated form of the Harcuvar, South Mountains, and Catalina metamorphic core complexes in Arizona reflects the shape of the middle Tertiary extensional detachment fault that projects over each complex. Corrugation axes are approximately parallel to the fault-displacement direction and to the footwall mylonitic lineation. The core complexes are locally incised by enigmatic, linear drainages that parallel corrugation axes and the inferred extension direction and are especially conspicuous on the crests of antiformal corrugations. These drainages have been attributed to erosional incision on a freshly denuded, planar, inclined fault ramp followed by folding that elevated and preserved some drainages on the crests of rising antiforms. According to this hypothesis, corrugations were produced by folding after subaerial exposure of detachment-fault footwalls. An alternative hypothesis, proposed here, is as follows. In a setting where preexisting drainages cross an active normal fault, each fault-slip event will cut each drainage into two segments separated by a freshly denuded fault ramp. The upper and lower drainage segments will remain hydraulically linked after each fault-slip event if the drainage in the hanging-wall block is incised, even if the stream is on the flank of an antiformal corrugation and there is a large component of strike-slip fault movement. Maintenance of hydraulic linkage during sequential fault-slip events will guide the lengthening stream down the fault ramp as the ramp is uncovered, and stream incision will form a progressively lengthening, extension-parallel, linear drainage segment. This mechanism for linear drainage genesis is compatible with corrugations as original irregularities of the detachment fault, and does not require folding after early to middle Miocene footwall exhumations. This is desirable because many drainages are incised into nonmylonitic crystalline footwall rocks that were probably not folded under low

  12. Parallel Prediction of Stock Volatility

    Directory of Open Access Journals (Sweden)

    Priscilla Jenq

    2017-10-01

    Full Text Available Volatility is a measurement of the risk of financial products. A stock will hit new highs and lows over time, and if these highs and lows fluctuate wildly, then it is considered a highly volatile stock. Such a stock is considered riskier than a stock whose volatility is low. Although highly volatile stocks are riskier, the returns that they generate for investors can be quite high. Of course, with a riskier stock also comes the chance of losing money and yielding negative returns. In this project, we will use historic stock data to help us forecast volatility. Since the financial industry usually uses the S&P 500 as the indicator of the market, we will use the S&P 500 as a benchmark to compute the risk. We will also use artificial neural networks as a tool to predict volatilities for a specific time frame that will be set when we configure this neural network. There have been reports that neural networks with different numbers of layers and different numbers of hidden nodes may generate varying results. In fact, we may be able to find the best configuration of a neural network to compute volatilities. We will implement this system using the parallel approach. The system can be used as a tool for investors for allocating and hedging assets.
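
    As a sketch of the data-preparation step implied above, and not of the project's neural-network system, the following computes an annualized rolling volatility for a stock and for a benchmark series and forms their ratio as a simple relative-risk measure; the window length, synthetic price series and function names are assumptions.

        import numpy as np

        def rolling_volatility(prices, window=21, periods_per_year=252):
            """Annualized rolling volatility from a 1-D array of closing prices."""
            prices = np.asarray(prices, dtype=float)
            log_returns = np.diff(np.log(prices))
            vols = []
            for end in range(window, len(log_returns) + 1):
                vols.append(log_returns[end - window:end].std(ddof=1))
            return np.array(vols) * np.sqrt(periods_per_year)

        # Hypothetical price series standing in for a stock and the S&P 500 benchmark.
        rng = np.random.default_rng(0)
        stock = 100 * np.exp(np.cumsum(rng.normal(0, 0.02, 300)))
        sp500 = 4000 * np.exp(np.cumsum(rng.normal(0, 0.01, 300)))

        stock_vol = rolling_volatility(stock)
        bench_vol = rolling_volatility(sp500)
        relative_risk = stock_vol / bench_vol       # > 1 means riskier than the benchmark
        print(relative_risk[-5:])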

  13. Vectoring of parallel synthetic jets

    Science.gov (United States)

    Berk, Tim; Ganapathisubramani, Bharathram; Gomit, Guillaume

    2015-11-01

    A pair of parallel synthetic jets can be vectored by applying a phase difference between the two driving signals. The resulting jet can be merged or bifurcated and either vectored towards the actuator leading in phase or the actuator lagging in phase. In the present study, the influence of phase difference and Strouhal number on the vectoring behaviour is examined experimentally. Phase-locked vorticity fields, measured using Particle Image Velocimetry (PIV), are used to track vortex pairs. The physical mechanisms that explain the diversity in vectoring behaviour are observed based on the vortex trajectories. For a fixed phase difference, the vectoring behaviour is shown to be primarily influenced by pinch-off time of vortex rings generated by the synthetic jets. Beyond a certain formation number, the pinch-off timescale becomes invariant. In this region, the vectoring behaviour is determined by the distance between subsequent vortex rings. We acknowledge the financial support from the European Research Council (ERC grant agreement no. 277472).

  14. A Soft Parallel Kinematic Mechanism.

    Science.gov (United States)

    White, Edward L; Case, Jennifer C; Kramer-Bottiglio, Rebecca

    2018-02-01

    In this article, we describe a novel holonomic soft robotic structure based on a parallel kinematic mechanism. The design is based on the Stewart platform, which uses six sensors and actuators to achieve full six-degree-of-freedom motion. Our design is much less complex than a traditional platform, since it replaces the 12 spherical and universal joints found in a traditional Stewart platform with a single highly deformable elastomer body and flexible actuators. This reduces the total number of parts in the system and simplifies the assembly process. Actuation is achieved through coiled-shape memory alloy actuators. State observation and feedback is accomplished through the use of capacitive elastomer strain gauges. The main structural element is an elastomer joint that provides antagonistic force. We report the response of the actuators and sensors individually, then report the response of the complete assembly. We show that the completed robotic system is able to achieve full position control, and we discuss the limitations associated with using responsive material actuators. We believe that control demonstrated on a single body in this work could be extended to chains of such bodies to create complex soft robots.

  15. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

    Full Text Available We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  16. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  17. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2001-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  18. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2002-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  19. Parallel auto-correlative statistics with VTK.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10] which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by the means of C++ code snippets and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the autocorrelative statistics engine.

  20. Conformal pure radiation with parallel rays

    International Nuclear Information System (INIS)

    Leistner, Thomas; Paweł Nurowski

    2012-01-01

    We define pure radiation metrics with parallel rays to be n-dimensional pseudo-Riemannian metrics that admit a parallel null line bundle K and whose Ricci tensor vanishes on vectors that are orthogonal to K. We give necessary conditions in terms of the Weyl, Cotton and Bach tensors for a pseudo-Riemannian metric to be conformal to a pure radiation metric with parallel rays. Then, we derive conditions in terms of the tractor calculus that are equivalent to the existence of a pure radiation metric with parallel rays in a conformal class. We also give analogous results for n-dimensional pseudo-Riemannian pp-waves. (paper)

  1. Compiling Scientific Programs for Scalable Parallel Systems

    National Research Council Canada - National Science Library

    Kennedy, Ken

    2001-01-01

    ...). The research performed in this project included new techniques for recognizing implicit parallelism in sequential programs, a powerful and precise set-based framework for analysis and transformation...

  2. Parallel thermal radiation transport in two dimensions

    International Nuclear Information System (INIS)

    Smedley-Stevenson, R.P.; Ball, S.R.

    2003-01-01

    This paper describes the distributed memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state of the art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 Tera-Flop MPP (massively parallel processing) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  3. Parallel Algorithms for the Exascale Era

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory

    2016-10-19

    New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
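
    As one illustration of the reproducible-global-sum problem mentioned above (a sketch, not the algorithms developed in the cited student projects), compensated summation greatly reduces the dependence of a floating-point sum on the order in which terms are combined, which is what changes when a parallel reduction regroups its operands.

        import random

        def kahan_sum(values):
            """Compensated (Kahan) summation: the running compensation recovers
            low-order bits lost when a small term is added to a large total,
            making the result far less sensitive to summation order."""
            total = 0.0
            comp = 0.0
            for v in values:
                y = v - comp
                t = total + y
                comp = (t - total) - y
                total = t
            return total

        values = [random.uniform(-1e8, 1e8) for _ in range(100000)]
        shuffled = values[:]
        random.shuffle(shuffled)

        # A naive sum typically changes when the order changes (as it does when a
        # parallel reduction regroups terms); the compensated sum is far more stable.
        print(sum(values) - sum(shuffled))
        print(kahan_sum(values) - kahan_sum(shuffled))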

  4. Parallel thermal radiation transport in two dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Smedley-Stevenson, R.P.; Ball, S.R. [AWE Aldermaston (United Kingdom)

    2003-07-01

    This paper describes the distributed memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state of the art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 Tera-Flop MPP (massively parallel processing) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  5. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  6. Idiopathic Normal Pressure Hydrocephalus

    Directory of Open Access Journals (Sweden)

    Basant R. Nassar BS

    2016-04-01

    Full Text Available Idiopathic normal pressure hydrocephalus (iNPH is a potentially reversible neurodegenerative disease commonly characterized by a triad of dementia, gait, and urinary disturbance. Advancements in diagnosis and treatment have aided in properly identifying and improving symptoms in patients. However, a large proportion of iNPH patients remain either undiagnosed or misdiagnosed. Using PubMed search engine of keywords “normal pressure hydrocephalus,” “diagnosis,” “shunt treatment,” “biomarkers,” “gait disturbances,” “cognitive function,” “neuropsychology,” “imaging,” and “pathogenesis,” articles were obtained for this review. The majority of the articles were retrieved from the past 10 years. The purpose of this review article is to aid general practitioners in further understanding current findings on the pathogenesis, diagnosis, and treatment of iNPH.

  7. Normal Weight Dyslipidemia

    DEFF Research Database (Denmark)

    Ipsen, David Hojland; Tveden-Nyborg, Pernille; Lykkesfeldt, Jens

    2016-01-01

    Objective: The liver coordinates lipid metabolism and may play a vital role in the development of dyslipidemia, even in the absence of obesity. Normal weight dyslipidemia (NWD) and patients with nonalcoholic fatty liver disease (NAFLD) who do not have obesity constitute a unique subset...... of individuals characterized by dyslipidemia and metabolic deterioration. This review examined the available literature on the role of the liver in dyslipidemia and the metabolic characteristics of patients with NAFLD who do not have obesity. Methods: PubMed was searched using the following keywords: nonobese......, dyslipidemia, NAFLD, NWD, liver, and metabolically obese/unhealthy normal weight. Additionally, article bibliographies were screened, and relevant citations were retrieved. Studies were excluded if they had not measured relevant biomarkers of dyslipidemia. Results: NWD and NAFLD without obesity share a similar...

  8. A parallel form of the Gudjonsson Suggestibility Scale.

    Science.gov (United States)

    Gudjonsson, G H

    1987-09-01

    The purpose of this study is twofold: (1) to present a parallel form of the Gudjonsson Suggestibility Scale (GSS, Form 1); (2) to study test-retest reliabilities of interrogative suggestibility. Three groups of subjects were administered the two suggestibility scales in a counterbalanced order. Group 1 (28 normal subjects) and Group 2 (32 'forensic' patients) completed both scales within the same testing session, whereas Group 3 (30 'forensic' patients) completed the two scales between one week and eight months apart. All the correlations were highly significant, giving support for high 'temporal consistency' of interrogative suggestibility.

  9. Parallelization of pressure equation solver for incompressible N-S equations

    International Nuclear Information System (INIS)

    Ichihara, Kiyoshi; Yokokawa, Mitsuo; Kaburaki, Hideo.

    1996-03-01

    A pressure equation solver in a code for 3-dimensional incompressible flow analysis has been parallelized by using the red-black SOR method and the PCG method on the Fujitsu VPP500, a vector parallel computer with distributed memory. For comparison of scalability, the solver using the red-black SOR method has also been parallelized on the Intel Paragon, a scalar parallel computer with distributed memory. The scalability of the red-black SOR method on both the VPP500 and the Paragon was lost when the number of processor elements was increased. The reason for the non-scalability on both systems is the increasing communication time between processor elements. In addition, parallelization by DO-loop division lowers the vectorization efficiency on the VPP500. For an effective implementation on the VPP500, a large-scale problem which holds very long vectorized DO-loops in the parallel program should be solved. The PCG method with the red-black SOR method applied to incomplete LU factorization (red-black PCG) needs more iteration steps than the normal PCG method with forward and backward substitution, in spite of the same number of floating-point operations in a DO-loop of the incomplete LU factorization. The parallelized red-black PCG method has fewer merits than the parallelized red-black SOR method when the computational region has fewer grids, because of the low vectorization efficiency obtained in the red-black PCG method. (author)
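
    For readers unfamiliar with the red-black ordering referred to above, the sketch below applies red-black SOR to a small 2-D Poisson problem; it is a minimal illustration of why the chequerboard colouring makes each sweep vectorizable and easy to distribute, not the 3-D incompressible-flow pressure solver described in the report. Grid size, relaxation factor and function names are assumptions.

        import numpy as np

        def red_black_sor(rhs, h, omega=1.8, iters=200):
            """Red-black SOR for the 2-D Poisson equation  -lap(p) = rhs  with
            homogeneous Dirichlet boundaries on a uniform grid of spacing h.

            Chequerboard colouring makes all points of one colour independent, so a
            sweep over one colour can be vectorized or distributed across processors."""
            p = np.zeros_like(rhs, dtype=float)
            ii, jj = np.indices(rhs.shape)
            interior = (ii > 0) & (ii < rhs.shape[0] - 1) & (jj > 0) & (jj < rhs.shape[1] - 1)
            for _ in range(iters):
                for colour in (0, 1):                      # red sweep, then black sweep
                    mask = interior & ((ii + jj) % 2 == colour)
                    gauss_seidel = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
                                           np.roll(p, 1, 1) + np.roll(p, -1, 1) +
                                           h * h * rhs)
                    p[mask] = (1.0 - omega) * p[mask] + omega * gauss_seidel[mask]
            return p

        # Example: point source in the middle of a 65 x 65 grid.
        rhs = np.zeros((65, 65)); rhs[32, 32] = 1.0
        pressure = red_black_sor(rhs, h=1.0 / 64)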

  10. Ethics and "normal birth".

    Science.gov (United States)

    Lyerly, Anne Drapkin

    2012-12-01

    The concept of "normal birth" has been promoted as ideal by several international organizations, although debate about its meaning is ongoing. In this article, I examine the concept of normalcy to explore its ethical implications and raise a trio of concerns. First, in its emphasis on nonuse of technology as a goal, the concept of normalcy may marginalize women for whom medical intervention is necessary or beneficial. Second, in its emphasis on birth as a socially meaningful event, the mantra of normalcy may unintentionally avert attention to meaning in medically complicated births. Third, the emphasis on birth as a normal and healthy event may be a contributor to the long-standing tolerance for the dearth of evidence guiding the treatment of illness during pregnancy and the failure to responsibly and productively engage pregnant women in health research. Given these concerns, it is worth debating not just what "normal birth" means, but whether the term as an ideal earns its keep. © 2012, Copyright the Authors Journal compilation © 2012, Wiley Periodicals, Inc.

  11. Parallel Computing for Brain Simulation.

    Science.gov (United States)

    Pastur-Romay, L A; Porto-Pazos, A B; Cedron, F; Pazos, A

    2017-01-01

    The human brain is the most complex system in the known universe; it is therefore one of the greatest mysteries. It provides human beings with extraordinary abilities. However, it is still not understood how and why most of these abilities are produced. For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data in a more efficient way than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have made it possible to create the first simulation with a number of neurons similar to that of a human brain. This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital models, analog models and hybrid models. This review includes the current applications of these works, as well as future trends. It is focused on various works that look for advanced progress in Neuroscience and still others which seek new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: Computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  12. Comparative ultrasound measurement of normal thyroid gland ...

    African Journals Online (AJOL)

    2011-08-31

    The normal thyroid gland has a homogeneous, increased, medium-level echo texture. The childhood thyroid gland dimension correlates linearly with age and body surface area, unlike in adults. [14] Triiodothyronine (T3) and thyroxine (T4) are thyroid hormones which function to control the basal metabolic rate (BMR).

  13. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  14. Transport of radiolabelled glycoprotein to cell surface and lysosome-like bodies of absorptive cells in cultured small-intestinal tissue from normal subjects and patients with a lysosomal storage disease

    International Nuclear Information System (INIS)

    Ginsel, L.A.; Onderwater, J.J.M.; Daems, W.T.

    1979-01-01

    The transport of ³H-fucose- and ³H-glucosamine-labelled glycoproteins in the absorptive cells of cultured human small-intestinal tissue was investigated with light- and electron-microscopical autoradiography. The findings showed that these glycoproteins were completed in the Golgi apparatus and transported in small vesicular structures to the apical cytoplasm of these cells. Since this material arrived in the cell coat on the microvilli and in the lysosome-like bodies simultaneously, a crinophagic function of these organelles in the regulation of the transport or secretion of cell-coat material was supported. In the absorptive cells of patients with fucosidosis or Hunter's type of lysosomal storage disease, a similar transport of cell-coat material to the lysosome-like bodies and a congenital defect of a lysosomal hydrolase normally involved in the degradation of cell-coat material can explain the accumulation of this material in the dense bodies. (orig.) [de]

  15. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  16. Marginal Assessment of Crowns by the Aid of Parallel Radiography

    Directory of Open Access Journals (Sweden)

    Farnaz Fattahi

    2015-03-01

    Full Text Available Introduction: Marginal adaptation is the most critical item in the long-term prognosis of single crowns. This study aimed to assess the marginal quality as well as the discrepancies in marginal integrity of some PFM single crowns of posterior teeth by employing parallel radiography in Shiraz Dental School, Shiraz, Iran. Methods: In this descriptive study, parallel radiographs were taken of 200 fabricated PFM single crowns of posterior teeth after cementation and before discharging the patient. To calculate the magnification of the images, a metallic sphere with a thickness of 4 mm was placed in the direction of the crown margin on the occlusal surface. Thereafter, the horizontal and vertical space between the crown margins and the margins of the preparations, and also the vertical space between the crown margin and the bone crest, were measured by using digital radiological software. Results: Analysis of data by descriptive statistics revealed that 75.5% and 60% of the cases had more than the acceptable space (50 µm) in the vertical (130±20 µm) and horizontal (90±15 µm) dimensions, respectively. Moreover, 85% of patients were found to have either a horizontal or a vertical gap. In 77% of cases, the crown margins invaded the biologic width on the mesial surfaces and in 70% on the distal surfaces. Conclusion: Parallel radiography can be expedient at the stage of framework try-in to yield some important information that cannot be obtained by routine clinical evaluations and may improve the treatment prognosis.
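
    The magnification correction described above amounts to a simple proportionality: the known 4 mm sphere diameter and its measured image size give a scale factor that converts radiographic measurements to true distances. The sketch below, with hypothetical readings, illustrates the arithmetic.

        def true_distance(measured_mm, sphere_image_mm, sphere_true_mm=4.0):
            """Correct a distance measured on a parallel radiograph for image
            magnification, using a reference sphere of known diameter placed at the
            level of the crown margin."""
            magnification = sphere_image_mm / sphere_true_mm
            return measured_mm / magnification

        # Hypothetical reading: the 4 mm sphere images at 4.4 mm (10% magnification)
        # and a marginal gap measures 0.11 mm on the radiograph.
        gap_mm = true_distance(0.11, 4.4)
        print(f"true marginal gap = {gap_mm * 1000:.0f} micrometres")   # ~100 µm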

  17. Parallel time domain solvers for electrically large transient scattering problems

    KAUST Repository

    Liu, Yang

    2014-09-26

Marching on in time (MOT)-based integral equation solvers represent an increasingly appealing avenue for analyzing transient electromagnetic interactions with large and complex structures. MOT integral equation solvers for analyzing electromagnetic scattering from perfect electrically conducting objects are obtained by enforcing electric field boundary conditions and implicitly time-advancing electric surface current densities by iteratively solving sparse systems of equations at all time steps. Contrary to finite difference and finite element competitors, these solvers apply to nonlinear and multi-scale structures comprising geometrically intricate and deep sub-wavelength features residing atop electrically large platforms. Moreover, they are high-order accurate, stable in the low- and high-frequency limits, and applicable to conducting and penetrable structures represented by highly irregular meshes. This presentation reviews some recent advances in the parallel implementations of time domain integral equation solvers, specifically those that leverage the multilevel plane-wave time-domain (PWTD) algorithm on modern manycore computer architectures including graphics processing units (GPUs) and distributed memory supercomputers. The GPU-based implementation achieves at least one order of magnitude speedup compared to serial implementations, while the distributed parallel implementation is highly scalable to thousands of compute nodes. A distributed parallel PWTD kernel has been adopted to solve time domain surface/volume integral equations (TDSIE/TDVIE) for analyzing transient scattering from large and complex-shaped perfectly electrically conducting (PEC)/dielectric objects involving ten million/tens of millions of spatial unknowns.

  18. The island dynamics model on parallel quadtree grids

    Science.gov (United States)

    Mistani, Pouria; Guittet, Arthur; Bochkov, Daniil; Schneider, Joshua; Margetis, Dionisios; Ratsch, Christian; Gibou, Frederic

    2018-05-01

    We introduce an approach for simulating epitaxial growth by use of an island dynamics model on a forest of quadtree grids, and in a parallel environment. To this end, we use a parallel framework introduced in the context of the level-set method. This framework utilizes: discretizations that achieve a second-order accurate level-set method on non-graded adaptive Cartesian grids for solving the associated free boundary value problem for surface diffusion; and an established library for the partitioning of the grid. We consider the cases with: irreversible aggregation, which amounts to applying Dirichlet boundary conditions at the island boundary; and an asymmetric (Ehrlich-Schwoebel) energy barrier for attachment/detachment of atoms at the island boundary, which entails the use of a Robin boundary condition. We provide the scaling analyses performed on the Stampede supercomputer and numerical examples that illustrate the capability of our methodology to efficiently simulate different aspects of epitaxial growth. The combination of adaptivity and parallelism in our approach enables simulations that are several orders of magnitude faster than those reported in the recent literature and, thus, provides a viable framework for the systematic study of mound formation on crystal surfaces.
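
To make the boundary-condition distinction above concrete, here is a minimal 1D finite-difference stand-in (not the paper's 2D level-set solver on adaptive quadtrees); the equation, boundary treatment and every parameter value are illustrative assumptions only.

```python
import numpy as np

# Toy 1D stand-in for the two boundary conditions discussed above:
# steady adatom diffusion  -D u'' = F  on (0, L), with
#   Dirichlet u(L) = 0            (irreversible aggregation), and
#   Robin     D u'(0) = k u(0)    (finite attachment/detachment kinetics,
#                                  an Ehrlich-Schwoebel-type barrier).

D, F, k, L, N = 1.0, 1.0, 0.5, 1.0, 100   # hypothetical parameters
h = L / N

A = np.zeros((N + 1, N + 1))
b = np.full(N + 1, F)

# Interior nodes: -D (u[i-1] - 2 u[i] + u[i+1]) / h^2 = F
for i in range(1, N):
    A[i, i - 1] = -D / h**2
    A[i, i]     = 2 * D / h**2
    A[i, i + 1] = -D / h**2

# Robin boundary at x = 0 (first-order one-sided derivative):
#   D (u[1] - u[0]) / h = k u[0]
A[0, 0] = D / h + k
A[0, 1] = -D / h
b[0] = 0.0

# Dirichlet boundary at x = L
A[N, N] = 1.0
b[N] = 0.0

u = np.linalg.solve(A, b)
print(f"adatom density at the island edge u(0) = {u[0]:.4f}")
```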

  19. Parallel Boltzmann machines : a mathematical model

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.

    1991-01-01

    A mathematical model is presented for the description of parallel Boltzmann machines. The framework is based on the theory of Markov chains and combines a number of previously known results into one generic model. It is argued that parallel Boltzmann machines maximize a function consisting of a

  20. The convergence of parallel Boltzmann machines

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.; Eckmiller, R.; Hartmann, G.; Hauske, G.

    1990-01-01

    We discuss the main results obtained in a study of a mathematical model of synchronously parallel Boltzmann machines. We present supporting evidence for the conjecture that a synchronously parallel Boltzmann machine maximizes a consensus function that consists of a weighted sum of the regular

  1. Customizable Memory Schemes for Data Parallel Architectures

    NARCIS (Netherlands)

    Gou, C.

    2011-01-01

    Memory system efficiency is crucial for any processor to achieve high performance, especially in the case of data parallel machines. Processing capabilities of parallel lanes will be wasted, when data requests are not accomplished in a sustainable and timely manner. Irregular vector memory accesses

  2. Parallel Narrative Structure in Paul Harding's "Tinkers"

    Science.gov (United States)

    Çirakli, Mustafa Zeki

    2014-01-01

The present paper explores the implications of parallel narrative structure in Paul Harding's "Tinkers" (2009). Besides primarily recounting the two sets of parallel narratives, "Tinkers" also comprises seemingly unrelated fragments such as excerpts from clock repair manuals and diaries. The main stories, however, told…

  3. Streaming nested data parallelism on multicores

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2016-01-01

    The paradigm of nested data parallelism (NDP) allows a variety of semi-regular computation tasks to be mapped onto SIMD-style hardware, including GPUs and vector units. However, some care is needed to keep down space consumption in situations where the available parallelism may vastly exceed...

  4. Bayer image parallel decoding based on GPU

    Science.gov (United States)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

In the photoelectrical tracking system, the Bayer image is decompressed by a traditional, CPU-based method. However, this is too slow when the images become large, for example 2K×2K×16 bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) that support the CUDA architecture. The decoding procedure can be divided into three parts: a serial part, a task-parallel part, and a data-parallel part comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce execution time, the task-parallel part is optimized with OpenMP techniques. The data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared-memory access optimization, coalesced global memory access, and texture memory optimization. In particular, the IDWT is significantly accelerated by rewriting the 2D (two-dimensional) serial IDWT as 1D parallel IDWTs. In experiments with a 1K×1K×16 bit Bayer image, the data-parallel part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the serial CPU method.
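
The key property exploited when mapping the 2D IDWT onto GPU threads is separability: every row (and then every column) can be reconstructed independently. The sketch below illustrates this with a single-level inverse Haar transform in NumPy; it is not the paper's CUDA kernel, and the Haar wavelet is only a stand-in for whatever filter bank the codec actually uses.

```python
import numpy as np

# Separable single-level inverse Haar transform: each 1D pass is independent
# per row/column, which is what allows the 2D IDWT to be rewritten as batches
# of 1D transforms and executed in parallel on a GPU.

def ihaar_1d(coeffs: np.ndarray) -> np.ndarray:
    """Inverse of one orthonormal Haar step along the last axis.
    The first half of `coeffs` holds approximation values, the second half details."""
    n = coeffs.shape[-1]
    a, d = coeffs[..., : n // 2], coeffs[..., n // 2 :]
    out = np.empty_like(coeffs)
    out[..., 0::2] = (a + d) / np.sqrt(2.0)
    out[..., 1::2] = (a - d) / np.sqrt(2.0)
    return out

def ihaar_2d(coeffs: np.ndarray) -> np.ndarray:
    """Separable 2D inverse: independent 1D transforms along columns, then rows."""
    tmp = ihaar_1d(coeffs.T).T
    return ihaar_1d(tmp)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 2**16, size=(1024, 1024)).astype(np.float64)

    # Forward Haar (for the round-trip check): scaled sums/differences of pixel pairs.
    def haar_1d(x):
        a = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2.0)
        d = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2.0)
        return np.concatenate([a, d], axis=-1)

    coeffs = haar_1d(haar_1d(img.T).T)
    print(np.allclose(ihaar_2d(coeffs), img))   # True: lossless round trip
```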

  5. Parallelization of TMVA Machine Learning Algorithms

    CERN Document Server

    Hajili, Mammad

    2017-01-01

This report reflects my work on the Parallelization of TMVA Machine Learning Algorithms integrated into the ROOT Data Analysis Framework during a summer internship at CERN. The report consists of four important parts: the data set used in training and validation, the algorithms to which multiprocessing was applied, the parallelization techniques, and the resulting changes in execution time as the number of workers varies.

  6. 17 CFR 12.24 - Parallel proceedings.

    Science.gov (United States)

    2010-04-01

    ...) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration proceeding... the receivership includes the resolution of claims made by customers; or (3) A petition filed under... any of the foregoing with knowledge of a parallel proceeding shall promptly notify the Commission, by...

  7. Parallel S/sub n/ iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (S/sub n/) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional S/sub n/ transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial S/sub n/ algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial

  8. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  9. Parallel fuzzy connected image segmentation on GPU

    OpenAIRE

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm impleme...

  10. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.
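
A minimal sketch of the CG SENSE idea reviewed above: the coil-combined normal equations are solved with conjugate gradients. For brevity the non-Cartesian NUFFT encoding is replaced by a masked Cartesian FFT, so this is only the algorithmic skeleton; the coil sensitivities, sampling mask and phantom are synthetic assumptions, not material from the review.

```python
import numpy as np

def make_problem(n=64, ncoil=4, accel=2):
    """Synthetic multi-coil, undersampled (Cartesian stand-in) acquisition."""
    x_true = np.zeros((n, n))
    x_true[n // 4 : 3 * n // 4, n // 4 : 3 * n // 4] = 1.0          # square phantom
    yy, xx = np.mgrid[0:n, 0:n] / n
    centres = [(0, 0), (0, 1), (1, 0), (1, 1)][:ncoil]
    sens = np.stack([np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 0.3)
                     for cx, cy in centres])                         # smooth coil maps
    mask = np.zeros((n, n))
    mask[:, ::accel] = 1.0                                           # keep every accel-th line
    data = mask * np.fft.fft2(sens * x_true)                         # simulated k-space per coil
    return sens, mask, data, x_true

def normal_op(x, sens, mask):
    """Apply E^H E for the (Cartesian stand-in) SENSE encoding operator E."""
    return np.sum(np.conj(sens) * np.fft.ifft2(mask * np.fft.fft2(sens * x)), axis=0)

def cg_sense(sens, mask, data, n_iter=30):
    """Solve E^H E x = E^H y with plain conjugate gradients."""
    b = np.sum(np.conj(sens) * np.fft.ifft2(mask * data), axis=0)    # E^H y
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = normal_op(p, sens, mask)
        alpha = rs / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

if __name__ == "__main__":
    sens, mask, data, x_true = make_problem()
    recon = cg_sense(sens, mask, data)
    err = np.linalg.norm(recon.real - x_true) / np.linalg.norm(x_true)
    print(f"relative reconstruction error: {err:.3f}")
```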

  11. Parallel Algorithms for Groebner-Basis Reduction

    Science.gov (United States)

    1987-09-25

Technical report: Parallel Algorithms for Groebner-Basis Reduction, produced under the "Productivity Engineering in the UNIX Environment" project; only report-documentation-page fragments are available in this record.

  12. Parallel knock-out schemes in networks

    NARCIS (Netherlands)

    Broersma, H.J.; Fomin, F.V.; Woeginger, G.J.

    2004-01-01

We consider parallel knock-out schemes, a procedure on graphs introduced by Lampert and Slater in 1997 in which each vertex eliminates exactly one of its neighbors in each round. We are considering cases in which after a finite number of rounds, where the minimum number is called the parallel

  13. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte- and exabyte scale. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine in reasonable time and fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the class room. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.

  14. Broadcasting a message in a parallel computer

    Science.gov (United States)

    Berg, Jeremy E [Rochester, MN; Faraj, Ahmad A [Rochester, MN

    2011-08-02

Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
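
A hedged sketch of the broadcast scheme described above: build a Hamiltonian "snake" path through one plane of a mesh and forward the message from the logical root hop by hop along that path. The mesh shape, node naming and the dictionary standing in for point-to-point sends are illustrative assumptions, not the patent's interfaces.

```python
def snake_path(rows: int, cols: int):
    """Boustrophedon ordering: visits every node of a rows x cols mesh exactly
    once, and consecutive nodes are always mesh neighbours."""
    path = []
    for r in range(rows):
        cols_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cols_order)
    return path

def broadcast_along_path(message: str, rows: int, cols: int) -> dict:
    """The logical root (taken here as the first node on the path) sends the
    message to its successor, which forwards it on, until every node has a copy."""
    path = snake_path(rows, cols)
    delivered = {path[0]: message}           # the logical root already holds the message
    for prev, nxt in zip(path, path[1:]):
        delivered[nxt] = delivered[prev]     # stand-in for a single-hop point-to-point send
    return delivered

if __name__ == "__main__":
    got = broadcast_along_path("payload", 4, 8)
    print(len(got) == 4 * 8)                 # True: every compute node received the message
```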

  15. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  16. Differences Between Distributed and Parallel Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brightwell, R.; Maccabe, A.B.; Rissen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  17. Parallel-In-Time For Moving Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Southworth, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  18. Parallel programming with Easy Java Simulations

    Science.gov (United States)

    Esquembre, F.; Christian, W.; Belloni, M.

    2018-01-01

Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.

  19. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    Science.gov (United States)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-01-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.

  1. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  2. Parallel Readout of Optical Disks

    Science.gov (United States)

    1992-08-01

r(x,y) is the apparent reflectance function of the disk surface including the phase error. The illuminating optics should be chosen so that Er(x,y)... of the light uniformly illuminating the chip, Ap = 474 μm² is the area of the photodiode, and rs is the time required to switch the synapses. Figure... reference beam that is incident from the right. Once the hologram is recorded, the input is blocked and the disk is illuminated. Lens L1 takes the

  3. 2D-RBUC for efficient parallel compression of residuals

    Science.gov (United States)

    Đurđević, Đorđe M.; Tartalja, Igor I.

    2018-02-01

In this paper, we present a method for lossless compression of residuals with efficient SIMD parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent models of terrains. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed the data decompression algorithm for modern GPU architectures already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height-field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small degradation in efficiency for a non-negligible gain in compression ratio (measured at up to 91%).

  4. HVI Ballistic Performance Characterization of Non-Parallel Walls

    Science.gov (United States)

    Bohl, William; Miller, Joshua; Christiansen, Eric

    2012-01-01

    The Double-Wall, "Whipple" Shield [1] has been the subject of many hypervelocity impact studies and has proven to be an effective shield system for Micro-Meteoroid and Orbital Debris (MMOD) impacts for spacecraft. The US modules of the International Space Station (ISS), with their "bumper shields" offset from their pressure holding rear walls provide good examples of effective on-orbit use of the double wall shield. The concentric cylinder shield configuration with its large radius of curvature relative to separation distance is easily and effectively represented for testing and analysis as a system of two parallel plates. The parallel plate double wall configuration has been heavily tested and characterized for shield performance for normal and oblique impacts for the ISS and other programs. The double wall shield and principally similar Stuffed Whipple Shield are very common shield types for MMOD protection. However, in some locations with many spacecraft designs, the rear wall cannot be modeled as being parallel or concentric with the outer bumper wall. As represented in Figure 1, there is an included angle between the two walls. And, with a cylindrical outer wall, the effective included angle constantly changes. This complicates assessment of critical spacecraft components located within outer spacecraft walls when using software tools such as NASA's BumperII. In addition, the validity of the risk assessment comes into question when using the standard double wall shield equations, especially since verification testing of every set of double wall included angles is impossible.

  5. Arcing and surface damage in DITE

    International Nuclear Information System (INIS)

    Goodall, D.H.J.; McCracken, G.M.

    1977-11-01

    An investigation into the arcing damage on surfaces exposed to plasmas in the DITE tokamak is described. It has been found that arcing occurs on the fixed limiters, on probes inserted into the plasma and on parts of the torus structure. For surfaces parallel to the toroidal field most of the arcs run across the surface orthogonal to the field direction. Observations in the scanning electron microscope show that the arc tracks are formed by a series of melted craters characteristic of cathode arc spots. The amount of metal removed from the surface is consistent with the concentration of metal observed in the plasma. In plasmas with hydrogen gas puffing during the discharge or with injection of low Z impurities, the arc tracks are observed to be much shallower than in normal low density discharges. Several types of surface damage other than arc tracks have also been observed on probes. These phenomena occur less frequently than arcing and appear to be associated with abnormal discharge conditions. (author)

  6. MR imaging of the ankle: Normal variants

    International Nuclear Information System (INIS)

    Noto, A.M.; Cheung, Y.; Rosenberg, Z.S.; Norman, A.; Leeds, N.E.

    1987-01-01

Thirty asymptomatic ankles were studied with high-resolution surface coil MR imaging. The thirty ankles were reviewed for identification of normal structures. The MR appearance of the deltoid and posterior talo-fibular ligaments, the peroneus brevis and longus tendons, and the posterior aspect of the tibial-talar joint demonstrated several normal variants not previously described. These should not be misinterpreted as pathologic processes. The specific findings included (1) cortical irregularity of the posterior tibial-talar joint in 27 of 30 cases, which should not be mistaken for osteonecrosis; (2) a normal posterior talo-fibular ligament with irregular and frayed inhomogeneity, which represents a normal variant, in seven of ten cases; and (3) fluid in the shared peroneal tendon sheath, which may be confused with a longitudinal tendon tear, in three of 30 cases. Ankle imaging with the use of MR is still a relatively new procedure. Further investigation is needed to better define normal anatomy as well as normal variants. The authors described several structures that normally present with variable MR imaging appearances. This is clinically significant in order to maintain a high sensitivity and specificity in MR imaging interpretation.

  7. Molecular cloning of complementary DNAs encoding the heavy chain of the human 4F2 cell-surface antigen: a type II membrane glycoprotein involved in normal and neoplastic cell growth

    International Nuclear Information System (INIS)

    Quackenbush, E.; Clabby, M.; Gottesdiener, K.M.; Barbosa, J.; Jones, N.H.; Strominger, J.L.; Speck, S.; Leiden, J.M.

    1987-01-01

Complementary DNA (cDNA) clones encoding the heavy chain of the heterodimeric human membrane glycoprotein 4F2 have been isolated by immunoscreening of a λgt11 expression library. The identity of these clones has been confirmed by hybridization to RNA and DNA prepared from mouse L-cell transfectants, which were produced by whole cell gene transfer and selected for cell-surface expression of the human 4F2 heavy chain. DNA sequence analysis suggests that the 4F2 heavy-chain cDNAs encode an approximately 526-amino acid type II membrane glycoprotein, which is composed of a large C-terminal extracellular domain, a single potential transmembrane region, and a 50-81 amino acid N-terminal intracytoplasmic domain. Southern blotting experiments have shown that the 4F2 heavy-chain cDNAs are derived from a single-copy gene that has been highly conserved during mammalian evolution.

  8. A study of objective functions for organs with parallel and serial architecture

    International Nuclear Information System (INIS)

    Stavrev, P.V.; Stavreva, N.A.; Round, W.H.

    1997-01-01

An analysis of objective functions when target volumes are deliberately enlarged to account for tumour mobility, and the consequent uncertainty in tumour position, in external beam radiotherapy has been carried out. The dose distribution inside the tumour is assumed to have a logarithmic dependence on the tumour cell density, which assures an iso-local tumour control probability. The normal tissue immediately surrounding the tumour is irradiated homogeneously at a dose level equal to the dose D(R) delivered at the edge of the tumour. The normal tissue in the high dose field is modelled as being organized in identical functional subunits (FSUs) composed of a relatively large number of cells. Two types of organs, having serial and parallel architecture, are considered. Implicit averaging over intrapatient normal tissue radiosensitivity variations is done. A function describing the normal tissue survival probability S0 is constructed. The objective function is given as the product of the total tumour control probability (TCP) and the normal tissue survival probability S0. The values of the dose D(R) which result in a maximum of the objective function are obtained for different combinations of tumour and normal tissue parameters, such as tumour and normal tissue radiosensitivities, the number of cells constituting a normal tissue functional subunit, the total number of normal cells under high dose (D(R)) exposure, and the functional reserve for organs having parallel architecture. The corresponding TCP and S0 values are computed and discussed. (authors)
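
An illustrative sketch of this kind of objective function: a Poisson TCP multiplied by a simple FSU-based normal-tissue survival probability, maximized over the edge dose D(R). The functional forms and every parameter value below are assumptions made for the toy, not the models or numbers used in the paper, and the tumour is treated as uniformly irradiated at D(R) for simplicity.

```python
import numpy as np

def surviving_fraction(dose, alpha):
    """Single-hit exponential cell survival (fractionation effects ignored)."""
    return np.exp(-alpha * dose)

def tcp(dose, alpha_t=0.35, n_clonogens=1e7):
    """Poisson tumour control probability."""
    return np.exp(-n_clonogens * surviving_fraction(dose, alpha_t))

def fsu_survival(dose, alpha_n=0.08, cells_per_fsu=1000):
    """A functional subunit survives if at least one of its cells survives."""
    return 1.0 - (1.0 - surviving_fraction(dose, alpha_n)) ** cells_per_fsu

def s0_serial(dose, n_fsu=1e4):
    """Serial organ: function is lost if any single FSU is killed."""
    return fsu_survival(dose) ** n_fsu

def s0_parallel(dose, functional_reserve=0.7):
    """Parallel organ (crude mean-field view): function is retained as long as the
    expected fraction of surviving FSUs stays above the functional reserve."""
    return 1.0 if fsu_survival(dose) >= functional_reserve else 0.0

if __name__ == "__main__":
    doses = np.linspace(40.0, 70.0, 301)
    obj_serial = tcp(doses) * np.array([s0_serial(d) for d in doses])
    obj_parallel = tcp(doses) * np.array([s0_parallel(d) for d in doses])
    print(f"edge dose maximizing TCP*S0, serial organ:   {doses[np.argmax(obj_serial)]:.1f} Gy")
    print(f"edge dose maximizing TCP*S0, parallel organ: {doses[np.argmax(obj_parallel)]:.1f} Gy")
```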

  9. Portable parallel programming in a Fortran environment

    International Nuclear Information System (INIS)

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network based MIMD processors. The parallelism is implemented using standard UNIX (tm) tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at from 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a ''nearly realistic'' lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs

  10. Performance of the Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  11. State of the art of parallel scientific visualization applications on PC clusters

    International Nuclear Information System (INIS)

    Juliachs, M.

    2004-01-01

    In this state of the art on parallel scientific visualization applications on PC clusters, we deal with both surface and volume rendering approaches. We first analyze available PC cluster configurations and existing parallel rendering software components for parallel graphics rendering. CEA/DIF has been studying cluster visualization since 2001. This report is part of a study to set up a new visualization research platform. This platform consisting of an eight-node PC cluster under Linux and a tiled display was installed in collaboration with Versailles-Saint-Quentin University in August 2003. (author)

  12. Momentum-energy transport from turbulence driven by parallel flow shear

    International Nuclear Information System (INIS)

    Dong, J.Q.; Horton, W.; Bengtson, R.D.; Li, G.X.

    1994-04-01

    The low frequency E x B turbulence driven by the shear in the mass flow velocity parallel to the magnetic field is studied using the fluid theory in a slab configuration with magnetic shear. Ion temperature gradient effects are taken into account. The eigenfunctions of the linear instability are asymmetric about the mode rational surfaces. Quasilinear Reynolds stress induced by such asymmetric fluctuations produces momentum and energy transport across the magnetic field. Analytic formulas for the parallel and perpendicular Reynolds stress, viscosity and energy transport coefficients are given. Experimental observations of the parallel and poloidal plasma flows on TEXT-U are presented and compared with the theoretical models
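
For orientation, the quasilinear momentum flux referred to above is conventionally written in slab geometry as shown below; this is a textbook form, and the paper's own expressions and notation may differ.

```latex
% Conventional quasilinear form of the radial flux of parallel momentum carried
% by ExB fluctuations in slab geometry (background only, not the paper's result).
\[
  \Pi_{x\parallel} \;=\; \bigl\langle \tilde{v}_x\,\tilde{v}_\parallel \bigr\rangle ,
  \qquad
  \tilde{v}_x \;=\; -\,\frac{1}{B}\,\frac{\partial \tilde{\phi}}{\partial y},
\]
% so a finite flux requires a phase correlation between the potential and the
% parallel-velocity fluctuations, which the asymmetric eigenfunctions provide.
```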

  13. Normalization for Implied Volatility

    OpenAIRE

    Fukasawa, Masaaki

    2010-01-01

    We study specific nonlinear transformations of the Black-Scholes implied volatility to show remarkable properties of the volatility surface. Model-free bounds on the implied volatility skew are given. Pricing formulas for the European options which are written in terms of the implied volatility are given. In particular, we prove elegant formulas for the fair strikes of the variance swap and the gamma swap.
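
As background for the variance-swap fair strike mentioned above, a standard model-free statement (not the paper's specific normalization or proof) is reproduced below, where F_0 is the forward price and P, C are today's out-of-the-money put and call prices.

```latex
% Standard log-contract replication of the variance-swap fair strike
% (background only; notation and normalization may differ from the paper).
\[
  K_{\mathrm{var}}
  \;=\;
  \frac{2\,e^{rT}}{T}
  \left[
    \int_{0}^{F_0} \frac{P(K)}{K^{2}}\,\mathrm{d}K
    \;+\;
    \int_{F_0}^{\infty} \frac{C(K)}{K^{2}}\,\mathrm{d}K
  \right].
\]
```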

  14. Position Analysis of a Hybrid Serial-Parallel Manipulator in Immersion Lithography

    Directory of Open Access Journals (Sweden)

    Jie-jie Shao

    2015-01-01

Full Text Available This paper proposes a novel hybrid serial-parallel mechanism with 6 degrees of freedom. The new mechanism combines two different parallel modules in a serial form. The 3-P̲(PH) parallel module is an architecture with 3 degrees of freedom based on higher joints and specializes in describing the relative pose of two planes. The 3-P̲SP parallel module is a typical architecture which has been widely investigated in recent research. In this paper, the direct and inverse position problems of the 3-P̲SP parallel module in the coupled mixed-type mode are analyzed in detail, and the solutions are obtained in an analytical form. Furthermore, the solutions for the direct and inverse position problems of the novel hybrid serial-parallel mechanism are also derived and obtained in analytical form. The proposed hybrid serial-parallel mechanism is applied to regulate the immersion hood's pose in an immersion lithography system. By measuring and regulating the pose of the immersion hood with respect to the wafer surface simultaneously, the immersion hood can track the wafer surface's pose in real time and the gap status is stabilized. This is another exploration of the application of hybrid serial-parallel mechanisms.

  15. The kpx, a program analyzer for parallelization

    International Nuclear Information System (INIS)

    Matsuyama, Yuji; Orii, Shigeo; Ota, Toshiro; Kume, Etsuo; Aikawa, Hiroshi.

    1997-03-01

The kpx is a program analyzer, developed as a common technological basis for promoting parallel processing. The kpx consists of three tools. The first is ktool, that shows how much execution time is spent in program segments. The second is ptool, that shows parallelization overhead on the Paragon system. The last is xtool, that shows parallelization overhead on the VPP system. The kpx, designed to work for any FORTRAN code on any UNIX computer, is confirmed to work well after testing on Paragon, SP2, SR2201, VPP500, VPP300, Monte-4, SX-4 and T90. (author)

  16. Synchronization Of Parallel Discrete Event Simulations

    Science.gov (United States)

    Steinman, Jeffrey S.

    1992-01-01

    Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Algorithm processes events optimistically in time cycles adapting while simulation in progress. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.

  17. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

Here, a new time-averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter could be considered as a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  18. Implementations of BLAST for parallel computers.

    Science.gov (United States)

    Jülich, A

    1995-02-01

    The BLAST sequence comparison programs have been ported to a variety of parallel computers-the shared memory machine Cray Y-MP 8/864 and the distributed memory architectures Intel iPSC/860 and nCUBE. Additionally, the programs were ported to run on workstation clusters. We explain the parallelization techniques and consider the pros and cons of these methods. The BLAST programs are very well suited for parallelization for a moderate number of processors. We illustrate our results using the program blastp as an example. As input data for blastp, a 799 residue protein query sequence and the protein database PIR were used.

  19. Speedup predictions on large scientific parallel programs

    International Nuclear Information System (INIS)

    Williams, E.; Bobrowicz, F.

    1985-01-01

How much speedup can we expect for large scientific parallel programs running on supercomputers? For insight into this problem we extend the parallel processing environment currently existing on the Cray X-MP (a shared memory multiprocessor with at most four processors) to a simulated N-processor environment, where N is greater than or equal to 1. Several large scientific parallel programs from Los Alamos National Laboratory were run in this simulated environment, and speedups were predicted. A speedup of 14.4 on 16 processors was measured for one of the three most used codes at the Laboratory.
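
Predictions of this kind are bounded by Amdahl's law; the sketch below is a generic illustration of that bound, and the serial fractions chosen are hypothetical rather than taken from the Los Alamos codes.

```python
# Amdahl's-law upper bound on speedup: a generic aid for interpreting numbers
# such as "14.4 on 16 processors"; the serial fractions below are hypothetical.

def amdahl_speedup(n_proc: int, serial_fraction: float) -> float:
    """Ideal speedup on n_proc processors when `serial_fraction` of the work
    cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_proc)

if __name__ == "__main__":
    # A serial fraction of about 0.7% already caps 16-processor speedup near 14.4.
    for f in (0.0, 0.007, 0.05):
        print(f"serial fraction {f:.3f}: "
              f"speedup on 16 procs = {amdahl_speedup(16, f):.1f}")
```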

  20. Language constructs for modular parallel programs

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.

    1996-03-01

    We describe programming language constructs that facilitate the application of modular design techniques in parallel programming. These constructs allow us to isolate resource management and processor scheduling decisions from the specification of individual modules, which can themselves encapsulate design decisions concerned with concurrence, communication, process mapping, and data distribution. This approach permits development of libraries of reusable parallel program components and the reuse of these components in different contexts. In particular, alternative mapping strategies can be explored without modifying other aspects of program logic. We describe how these constructs are incorporated in two practical parallel programming languages, PCN and Fortran M. Compilers have been developed for both languages, allowing experimentation in substantial applications.

  1. Distributed parallel messaging for multiprocessor systems

    Science.gov (United States)

    Chen, Dong; Heidelberger, Philip; Salapura, Valentina; Senger, Robert M; Steinmacher-Burrow, Burhard; Sugawara, Yutaka

    2013-06-04

A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into and multiple reception from a network, in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network.

  2. Massively parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Krasheninnikov, S.I.; Craddock, G.G.; Djordjevic, V.

    1996-01-01

The Fokker-Planck code ALLA, recently developed for workstations, simulates the temporal evolution of 1V, 2V and 1D2V collisional edge plasmas. In this work we present the results of parallelizing the code on the CRI T3D massively parallel platform (the ALLAp version). Simultaneously we benchmark the 1D2V parallel version against an analytic self-similar solution of the collisional kinetic equation. This test is not trivial as it demands a very strong spatial temperature and density variation within the simulation domain. (orig.)

  3. Multilayer Relaxation and Surface Energies of Metallic Surfaces

    Science.gov (United States)

    Bozzolo, Guillermo; Rodriguez, Agustin M.; Ferrante, John

    1994-01-01

The perpendicular and parallel multilayer relaxations of fcc (210) surfaces are studied using equivalent crystal theory (ECT). A comparison with experimental and theoretical results is made for Al(210). The effect of uncertainties in the input parameters on the magnitudes and ordering of surface relaxations for this semiempirical method is estimated. A new measure of surface roughness is proposed. Predictions for the multilayer relaxations and surface energies of the (210) face of Cu and Ni are also included.

  4. Theory of normal metals

    International Nuclear Information System (INIS)

    Mahan, G.D.

    1992-01-01

The organizers requested that I give eight lectures on the theory of normal metals, ''with an eye on superconductivity.'' My job was to cover the general properties of metals. The topics were selected according to what the students would need to know for the following lectures on superconductivity. My role was to prepare the ground work for the later lectures. The problem is that there is not yet a widely accepted theory for the mechanism which pairs the electrons. Many mechanisms have been proposed, with those of phonons and spin fluctuations having the most followers. So I tried to discuss both topics. I also introduced the tight-binding model for metals, which forms the basis for most of the work on the cuprate superconductors.

  5. Xe ion beam induced rippled structures on differently oriented single-crystalline Si surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Hanisch, Antje; Grenzer, Joerg; Facsko, Stefan [Forschungszentrum Dresden-Rossendorf, Institut fuer Ionenstrahlphysik und Materialforschung, PO Box 510119, 01314 Dresden (Germany); Biermanns, Andreas; Pietsch, Ullrich, E-mail: A.Hanisch@fzd.d [Universitaet Siegen, Festkoerperphysik, 57068 Siegen (Germany)

    2010-03-24

We report on Xe⁺ induced ripple formation at medium energy on single-crystalline silicon surfaces of different orientations using substrates with an intentional miscut from the [0 0 1] direction and a [1 1 1] oriented wafer. The ion beam incidence angle with respect to the surface normal was kept fixed at 65° and the ion beam projection was parallel or perpendicular to the [1 1 0] direction. By a combination of atomic force microscopy, x-ray diffraction and high-resolution transmission electron microscopy we found that the features of the surface and subsurface rippled structures, such as ripple wavelength and amplitude and the degree of order, do not depend on the surface orientation as assumed in recent models of pattern formation for semiconductor surfaces. (fast track communication)

  6. Anisotropic behaviour of transmission through thin superconducting NbN film in parallel magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Šindler, M., E-mail: sindler@fzu.cz [Institute of Physics ASCR, v. v. i., Cukrovarnická 10, CZ-162 53 Praha 6 (Czech Republic); Tesař, R. [Institute of Physics ASCR, v. v. i., Cukrovarnická 10, CZ-162 53 Praha 6 (Czech Republic); Faculty of Mathematics and Physics, Charles University, Ke Karlovu 3, CZ-121 16 Praha (Czech Republic); Koláček, J. [Institute of Physics ASCR, v. v. i., Cukrovarnická 10, CZ-162 53 Praha 6 (Czech Republic); Skrbek, L. [Faculty of Mathematics and Physics, Charles University, Ke Karlovu 3, CZ-121 16 Praha (Czech Republic)

    2017-02-15

    Highlights: • Transmission through thin NbN film in parallel magnetic field exhibits strong anisotropic behaviour in the terahertz range. • Response for a polarisation parallel with the applied field is given as weighted sum of superconducting and normal state contributions. • Effective medium approach fails to describe response for linear polarisation perpendicular to the applied magnetic field. - Abstract: Transmission of terahertz waves through a thin layer of the superconductor NbN deposited on an anisotropic R-cut sapphire substrate is studied as a function of temperature in a magnetic field oriented parallel with the sample. A significant difference is found between transmitted intensities of beams linearly polarised parallel with and perpendicular to the direction of applied magnetic field.

  7. F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming

    Science.gov (United States)

    DiNucci, David C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Parallel programming is still being based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).

  8. Implementation of QR up- and downdating on a massively parallel |computer

    DEFF Research Database (Denmark)

    Bendtsen, Claus; Hansen, Per Christian; Madsen, Kaj

    1995-01-01

    We describe an implementation of QR up- and downdating on a massively parallel computer (the Connection Machine CM-200) and show that the algorithm maps well onto the computer. In particular, we show how the use of corrected semi-normal equations for downdating can be efficiently implemented. We...... also illustrate the use of our algorithms in a new LP algorithm....
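
As background for the up- and downdating discussed above, the sketch below shows a serial Givens-rotation row update of an existing QR factorization (appending one row to the data matrix). It is only a reference implementation for the concept; it is not the massively parallel CM-200 algorithm, nor the corrected semi-normal-equations downdating.

```python
import numpy as np

def qr_add_row(R: np.ndarray, new_row: np.ndarray) -> np.ndarray:
    """Update the triangular factor R of A = QR when a row is appended to A,
    by chasing the new row to zero with Givens rotations.  Returns the updated
    n x n factor; Q is not tracked in this sketch."""
    n = R.shape[1]
    W = np.vstack([R, new_row.astype(float)])      # (n+1) x n working array
    for j in range(n):
        a, b = W[j, j], W[n, j]
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r
        # rotate rows j and n to annihilate the entry W[n, j]
        Wj, Wn = W[j, j:].copy(), W[n, j:].copy()
        W[j, j:] = c * Wj + s * Wn
        W[n, j:] = -s * Wj + c * Wn
    return W[:n, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 4))
    new = rng.standard_normal(4)
    R = np.linalg.qr(A, mode="r")
    R_updated = qr_add_row(R, new)
    R_direct = np.linalg.qr(np.vstack([A, new]), mode="r")
    # R is unique only up to row signs, so compare through the normal matrix.
    print(np.allclose(R_updated.T @ R_updated, R_direct.T @ R_direct))
```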

  9. Friction of hydrogels with controlled surface roughness on solid flat substrates.

    Science.gov (United States)

    Yashima, Shintaro; Takase, Natsuko; Kurokawa, Takayuki; Gong, Jian Ping

    2014-05-14

    This study investigated the effect of hydrogel surface roughness on its sliding friction against a solid substrate having modestly adhesive interaction with hydrogels under small normal pressure in water. The friction test was performed between bulk polyacrylamide hydrogels of varied surface roughness and a smooth glass substrate by using a strain-controlled rheometer with parallel-plates geometry. At small pressure (normal strain 1.4-3.6%), the flat surface gel showed a poor reproducibility in friction. In contrast, the gels with a surface roughness of 1-10 μm order showed well reproducible friction behaviors and their frictional stress was larger than that of the flat surface hydrogel. Furthermore, the flat gel showed an elasto-hydrodynamic transition while the rough gels showed a monotonous decrease of friction with velocity. The difference between the flat surface and the rough surface diminished with the increase of the normal pressure. These phenomena are associated with the different contact behaviors of these soft hydrogels in liquid, as revealed by the observation of the interface using a confocal laser microscope.

  10. Massively Parallel Computing: A Sandia Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant breakthroughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  11. Parallel generation of architecture on the GPU

    KAUST Repository

    Steinberger, Markus; Kenzel, Michael; Kainz, Bernhard K.; Mü ller, Jö rg; Wonka, Peter; Schmalstieg, Dieter

    2014-01-01

    they can take advantage of, or both, our method supports state of the art procedural modeling including stochasticity and context-sensitivity. To increase parallelism, we explicitly express independence in the grammar, reduce inter-rule dependencies

  12. New high voltage parallel plate analyzer

    International Nuclear Information System (INIS)

    Hamada, Y.; Kawasumi, Y.; Masai, K.; Iguchi, H.; Fujisawa, A.; Abe, Y.

    1992-01-01

A new modification of the parallel plate analyzer for 500 keV heavy ions, intended to eliminate the effect of the intense UV and visible radiation, is successfully conducted. Its principle and results are discussed. (author)

  13. Parallel data encryption with RSA algorithm

    OpenAIRE

    Неретин, А. А.

    2016-01-01

In this paper a parallel RSA algorithm with preliminary shuffling of the source text is presented. The dependence of encryption speed on the number of encryption nodes is analysed. The proposed algorithm was implemented in C#.
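
A toy illustration of block-parallel RSA (the paper's implementation is in C#, and its shuffling step is omitted here): the blocks are independent, so they can be encrypted and decrypted in parallel worker processes. Textbook RSA with a hypothetical tiny key, for illustrating the parallel structure only, never for real cryptography.

```python
from multiprocessing import Pool

N = 3233          # toy modulus = 61 * 53
E = 17            # public exponent
D = 2753          # private exponent (17 * 2753 = 1 mod 3120)

def encrypt_block(m: int) -> int:
    return pow(m, E, N)

def decrypt_block(c: int) -> int:
    return pow(c, D, N)

if __name__ == "__main__":
    message = b"PARALLEL RSA DEMO"
    blocks = list(message)                      # one byte per block (each value < N)
    with Pool() as pool:                        # blocks are independent -> data parallel
        cipher = pool.map(encrypt_block, blocks)
        plain = pool.map(decrypt_block, cipher)
    print(bytes(plain) == message)              # True
```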

  14. Data parallel sorting for particle simulation

    Science.gov (United States)

    Dagum, Leonardo

    1992-01-01

Sorting on a parallel architecture is a communications intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O(N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
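
A sketch of why integer sorting in a particle simulation can reach the O(N) bound: particle cell indices are bounded integers, so a counting sort orders N particles in linear time. This is a serial stand-in for intuition only, not the Connection Machine data-parallel merge described above.

```python
import numpy as np

def sort_particles_by_cell(cell_ids: np.ndarray) -> np.ndarray:
    """Return a permutation that groups particles by their (integer) cell index,
    using a counting sort: one histogram pass plus one placement pass."""
    counts = np.bincount(cell_ids)
    starts = np.concatenate(([0], np.cumsum(counts)[:-1]))   # first slot of each cell
    order = np.empty(cell_ids.size, dtype=np.int64)
    next_slot = starts.copy()
    for i, c in enumerate(cell_ids):                          # single linear pass
        order[next_slot[c]] = i
        next_slot[c] += 1
    return order

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cells = rng.integers(0, 100, size=10_000)
    order = sort_particles_by_cell(cells)
    print(np.all(np.diff(cells[order]) >= 0))   # True: particles grouped by cell
```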

  15. Parallel debt in the Serbian finance law

    Directory of Open Access Journals (Sweden)

    Kuzman Miloš

    2014-01-01

    The purpose of this paper is to present the mechanism of parallel debt in Serbian financial law. In considering whether the mechanism of parallel debt exists under Serbian law, the Anglo-Saxon mechanism of trust is first described, and it is explained why trust is not permitted under Serbian law. The mechanism of parallel debt is then introduced, together with a discussion of the permissibility of its cause under Serbian law; comparative legal arguments on the issue are also presented. The author concludes that, on the basis of the arguments put forward in this paper, the parallel debt mechanism should be declared admissible if it is ever considered by the Serbian courts.

  16. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.; He, Z.; Xiao, M.; Zhang, Z.

    2014-01-01

    is simulated with a stochastic method (the Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts of the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI).
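
    A serial toy version of the splitting scheme sketched in the (truncated) abstract: a stochastic, Marcus-Lushnikov-style coagulation sub-step alternates with a deterministic growth sub-step. The constant kernel, the linear growth law, and the omission of the MPI parallelisation are all simplifying assumptions.

```python
import random

def coagulation_kernel(v, w):
    # Toy constant kernel (assumption; real kernels depend on particle size).
    return 1.0e-3

def stochastic_step(volumes, dt):
    """Stochastic coagulation sub-step: random pairs merge at a rate
    given by the coagulation kernel (Gillespie-style event selection)."""
    t = 0.0
    while len(volumes) > 1:
        pairs = [(i, j) for i in range(len(volumes))
                        for j in range(i + 1, len(volumes))]
        rates = [coagulation_kernel(volumes[i], volumes[j]) for i, j in pairs]
        t += random.expovariate(sum(rates))
        if t > dt:
            break
        i, j = random.choices(pairs, weights=rates)[0]
        volumes[i] += volumes[j]          # merge particle j into particle i
        del volumes[j]
    return volumes

def deterministic_step(volumes, dt, growth_rate=1.0e-3):
    # Deterministic condensational growth, handled separately by
    # operator splitting (toy linear growth law).
    return [v + growth_rate * v * dt for v in volumes]

def simulate(n_particles=200, steps=50, dt=0.1):
    volumes = [1.0] * n_particles
    for _ in range(steps):
        volumes = stochastic_step(volumes, dt)       # stochastic half-step
        volumes = deterministic_step(volumes, dt)    # deterministic half-step
    return volumes

if __name__ == "__main__":
    final = simulate()
    print(len(final), sum(final))
```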

  17. Stranger than fiction: parallel universes beguile science

    CERN Multimedia

    2007-01-01

    We may not be able - at least not yet - to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination. (1/2 page)

  18. Parallel computation of nondeterministic algorithms in VLSI

    Energy Technology Data Exchange (ETDEWEB)

    Hortensius, P D

    1987-01-01

    This work examines parallel VLSI implementations of nondeterministic algorithms. It is demonstrated that conventional pseudorandom number generators are unsuitable for highly parallel applications. Efficient parallel pseudorandom sequence generation can be accomplished using certain classes of elementary one-dimensional cellular automata. The pseudorandom numbers appear in parallel on each clock cycle. The properties of these new pseudorandom number generators are studied extensively using standard empirical random number tests, cycle-length tests, and implementation considerations. Furthermore, it is shown that these particular cellular automata can form the basis of efficient VLSI architectures for computations involved in the Monte Carlo simulation of both the percolation and Ising models from statistical mechanics. Finally, a variation on a Built-In Self-Test technique based upon cellular automata is presented. These Cellular Automata-Logic-Block-Observation (CALBO) circuits improve upon conventional design-for-testability circuitry.
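
    As an illustration of the kind of generator discussed above, the sketch below steps a one-dimensional elementary cellular automaton (Rule 30 is used here as a commonly cited example; the thesis' specific rule choices may differ), so that one pseudorandom bit appears at every cell position on each clock cycle.

```python
def ca_step(state, rule=30):
    """Advance a 1-D elementary cellular automaton one clock cycle.
    Every cell updates simultaneously, so one pseudorandom bit appears
    in parallel at each cell position per step."""
    n = len(state)
    return [
        (rule >> ((state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n])) & 1
        for i in range(n)
    ]

def ca_random_bits(width=64, steps=32, seed_cell=None):
    state = [0] * width
    state[seed_cell if seed_cell is not None else width // 2] = 1  # single-seed start
    stream = []
    for _ in range(steps):
        state = ca_step(state)
        stream.append(state[width // 2])   # tap one cell for a serial bit stream
    return stream

if __name__ == "__main__":
    print("".join(map(str, ca_random_bits())))
```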

  19. Adapting algorithms to massively parallel hardware

    CERN Document Server

    Sioulas, Panagiotis

    2016-01-01

    In recent years, the trend in computing has shifted from delivering processors with faster clock speeds to increasing the number of cores per processor. This marks a paradigm shift towards parallel programming, in which applications are written to exploit the power provided by multi-core processors, usually with gains in time-to-solution and memory footprint. Specifically, this trend has sparked interest in massively parallel systems that can provide a large number of processors, and possibly computing nodes, such as GPUs and MPPAs (Massively Parallel Processor Arrays). In this project, the focus was on two distinct computing problems: k-d tree searches and track-seeding cellular automata. The goal was to adapt the algorithms to parallel systems and evaluate their performance in different cases.
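
    A small sketch of the first problem mentioned above: many k-d tree queries are mutually independent, so a batch of nearest-neighbour searches maps naturally onto parallel hardware. The hand-rolled 2-D tree and the thread-pool dispatch are illustrative assumptions, not the project's GPU implementation.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def build_kdtree(points, depth=0):
    """Recursively build a 2-D k-d tree as nested (point, left, right) tuples."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, query, depth=0, best=None):
    """Standard nearest-neighbour search with branch pruning."""
    if node is None:
        return best
    point, left, right = node
    if best is None or dist2(point, query) < dist2(best, query):
        best = point
    axis = depth % 2
    near, far = (left, right) if query[axis] < point[axis] else (right, left)
    best = nearest(near, query, depth + 1, best)
    if (query[axis] - point[axis]) ** 2 < dist2(best, query):
        best = nearest(far, query, depth + 1, best)
    return best

if __name__ == "__main__":
    points = [(random.random(), random.random()) for _ in range(10_000)]
    queries = [(random.random(), random.random()) for _ in range(1_000)]
    tree = build_kdtree(points)
    # Queries are independent, so the batch can be dispatched in parallel.
    with ThreadPoolExecutor(max_workers=8) as pool:
        answers = list(pool.map(lambda q: nearest(tree, q), queries))
    print(answers[0])
```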

  20. Implementing Shared Memory Parallelism in MCBEND

    Directory of Open Access Journals (Sweden)

    Bird Adam

    2017-01-01

    MCBEND is a general-purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared-memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND, and assesses the performance of the parallel method implemented in MCBEND.
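
    OpenMP itself is a directive-based API for C, C++ and Fortran; purely as an analogue of the shared-memory idea, the Python sketch below has several threads tracking batches of toy particle histories against a single shared, read-only model that is held in memory only once, with each thread writing to a private slot that is reduced after the join. This illustrates the memory-saving motivation described in the abstract, not MCBEND's actual implementation (and Python's GIL means no real speedup is expected here).

```python
import random
import threading

# Large, read-only model data shared by every worker thread: the point of
# shared-memory parallelism is that this is held in memory once, unlike
# the run-many-copies approach.
SHARED_MODEL = {"absorption_prob": 0.002, "max_steps": 500}

def run_histories(n_histories, out, index):
    """Track a batch of toy particle histories and record the batch tally."""
    rng = random.Random(index)      # per-thread generator
    absorbed = 0
    for _ in range(n_histories):
        for _ in range(SHARED_MODEL["max_steps"]):
            if rng.random() < SHARED_MODEL["absorption_prob"]:
                absorbed += 1
                break
    out[index] = absorbed           # private slot per thread, reduced after the join

if __name__ == "__main__":
    n_threads, per_thread = 4, 10_000
    results = [0] * n_threads
    threads = [threading.Thread(target=run_histories, args=(per_thread, results, i))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("absorbed fraction:", sum(results) / (n_threads * per_thread))
```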