WorldWideScience

Sample records for surface normal parallel

  1. Inductively Modeling Parallel, Normal, and Frictional Forces

    Science.gov (United States)

    Wyrembeck, Edward P.

    2005-02-01

    This year, instead of resolving the weight mg of an object resting on an incline into force components parallel and perpendicular to the surface of the incline, I asked my students to actually measure these forces at various angles of inclination and graph the data. I wanted my students to inductively discover mg sin θ and mg cos θ, and to use these graphs to confront the passive nature of the static frictional force. I believe the graphs themselves are very powerful conceptual tools that are often never discovered and used by students who only learn to use equations at specific angles to solve specific quantitative problems.
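The decomposition the students were asked to discover can be tabulated directly. This is an illustrative sketch only (the function name and the 1 kg mass are assumptions, not from the article):

```python
import math

def incline_forces(m, theta_deg, g=9.81):
    """Decompose the weight of a mass m on an incline into components
    parallel and perpendicular (normal) to the surface."""
    theta = math.radians(theta_deg)
    f_parallel = m * g * math.sin(theta)   # pulls the object down the slope
    f_normal = m * g * math.cos(theta)     # presses the object into the surface
    return f_parallel, f_normal

# Tabulate both components for a 1 kg mass over a range of inclinations,
# the kind of data students would graph as mg*sin(theta) and mg*cos(theta).
for angle in (0, 15, 30, 45, 60, 90):
    fp, fn = incline_forces(1.0, angle)
    print(f"{angle:3d} deg  parallel = {fp:5.2f} N  normal = {fn:5.2f} N")
```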

  2. The role of bed-parallel slip in the development of complex normal fault zones

    Science.gov (United States)

    Delogkos, Efstratios; Childs, Conrad; Manzocchi, Tom; Walsh, John J.; Pavlides, Spyros

    2017-04-01

    Normal faults exposed in Kardia lignite mine, Ptolemais Basin, NW Greece formed at the same time as bed-parallel slip-surfaces, so that while the normal faults grew they were intermittently offset by bed-parallel slip. Following offset by a bed-parallel slip-surface, further fault growth is accommodated by reactivation on one or both of the offset fault segments. Where one fault is reactivated the site of bed-parallel slip is a bypassed asperity. Where both faults are reactivated, they propagate past each other to form a volume between overlapping fault segments that displays many of the characteristics of relay zones, including elevated strains and transfer of displacement between segments. Unlike conventional relay zones, however, these structures contain either a repeated or a missing section of stratigraphy which has a thickness equal to the throw of the fault at the time of the bed-parallel slip event, and the displacement profiles along the relay-bounding fault segments have discrete steps at their intersections with bed-parallel slip-surfaces. With further increase in displacement, the overlapping fault segments connect to form a fault-bound lens. Conventional relay zones form during initial fault propagation, but with coeval bed-parallel slip, relay-like structures can form later in the growth of a fault. Geometrical restoration of cross-sections through selected faults shows that repeated bed-parallel slip events during fault growth can lead to complex internal fault zone structure that masks its origin. Bed-parallel slip, in this case, is attributed to flexural-slip arising from hanging-wall rollover associated with a basin-bounding fault outside the study area.

  3. Parallel Surfaces of Spacelike Ruled Weingarten Surfaces in Minkowski 3-space

    Directory of Open Access Journals (Sweden)

    Yasin Ünlütürk

    2013-03-01

    In this work, it is shown that parallel surfaces of spacelike ruled surfaces which are developable are spacelike ruled Weingarten surfaces. It is also shown that parallel surfaces of non-developable ruled Weingarten surfaces are again Weingarten surfaces. Finally, some properties of such parallel surfaces in Minkowski 3-space are obtained.

  4. Euler characteristic and quadrilaterals of normal surfaces

    Indian Academy of Sciences (India)

    In particular, if F is an oriented, closed and connected normal surface of genus g, then g ≤ 7Q/2. DEFINITION 1.2. Let F be a normal surface in M. Let t be a normal triangle of F that lies in a tetrahedron Δ. The triangle t is said to link a vertex v of Δ if t separates ∂Δ into two disks such that the disk containing v has no other vertices of Δ.

  5. Parallel H.263 Encoder in Normal Coding Mode

    OpenAIRE

    Cosmas, J; Paker, Y; Pearmain, A

    1998-01-01

    A parallel H.263 video encoder, which utilises spatial parallelism, has been modelled using a multi-threaded program. Spatial parallelism is a technique where an image is subdivided into equal parts (as far as physically possible) and each part is processed by a separate processor, computing motion and texture coding with each processor acting on a different part of the image. This method leads to a performance increase, which is roughly in proportion to the number ...
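As a rough illustration of spatial parallelism (not the paper's H.263 encoder; `encode_strip` is a hypothetical stand-in for the per-processor motion and texture coding):

```python
from concurrent.futures import ThreadPoolExecutor

def encode_strip(strip):
    """Stand-in for motion estimation + texture coding of one image strip.
    Here we just sum pixel values to emulate per-strip work."""
    return sum(sum(row) for row in strip)

def parallel_encode(image, n_workers):
    """Split the image into n_workers horizontal strips of (nearly) equal
    height and process each strip on a separate worker."""
    rows = len(image)
    step = -(-rows // n_workers)  # ceiling division
    strips = [image[i:i + step] for i in range(0, rows, step)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(encode_strip, strips))

image = [[x + y for x in range(8)] for y in range(8)]  # toy 8x8 "frame"
print(parallel_encode(image, 4))  # → [64, 96, 128, 160]
```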

  6. Surface tree languages and parallel derivation trees

    NARCIS (Netherlands)

    Engelfriet, Joost

    1976-01-01

    The surface tree languages obtained by top-down finite state transformation of monadic trees are exactly the frontier-preserving homomorphic images of sets of derivation trees of ETOL systems. The corresponding class of tree transformation languages is therefore equal to the class of ETOL languages.

  7. Parallel optical trap assisted nanopatterning on rough surfaces

    International Nuclear Information System (INIS)

    Tsai, Y-C; Fardel, R; Arnold, C B; Leitz, K-H; Schmidt, M; Otto, A

    2012-01-01

    There exist many optical lithography techniques for generating nanostructures on hard, flat surfaces over large areas. However, few techniques are able to create such patterns on soft materials or surfaces with pre-existing structure. To address this need, we demonstrate the use of parallel optical trap assisted nanopatterning (OTAN) to provide an efficient and robust direct-write method of producing nanoscale features without the need for focal plane adjustment. Parallel patterning on model surfaces of polyimide with vertical steps greater than 1.5 µm shows a feature size uncertainty better than 4% across the step and lateral positional accuracy of 25 nm. A Brownian motion model is used to describe the positional accuracy enabling one to predict how variation in system parameters will affect the nanopatterning results. These combined results suggest that OTAN is a viable technique for massively parallel direct-write nanolithography on non-traditional surfaces. (paper)

  8. Minimal surfaces in symmetric spaces with parallel second ...

    Indian Academy of Sciences (India)

    Xiaoxiang Jiao

    2017-07-31

    Abstract. In this paper, we study the geometry of isometric minimal immersions of Riemannian surfaces in a symmetric space by moving frames and prove that the Gaussian curvature must be constant if the immersion has parallel second fundamental form. In particular, when the surface is S², we discuss the ...

  9. A curvature theory for discrete surfaces based on mesh parallelity

    KAUST Repository

    Bobenko, Alexander Ivanovich

    2009-12-18

    We consider a general theory of curvatures of discrete surfaces equipped with edgewise parallel Gauss images, where mean and Gaussian curvatures of faces are derived from the faces' areas and mixed areas. Remarkably, these notions are capable of unifying notable previously defined classes of surfaces, such as discrete isothermic minimal surfaces and surfaces of constant mean curvature. We discuss various types of natural Gauss images, the existence of principal curvatures, constant curvature surfaces, Christoffel duality, Koenigs nets, contact element nets, s-isothermic nets, and interesting special cases such as discrete Delaunay surfaces derived from elliptic billiards. © 2009 Springer-Verlag.
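A hedged sketch of the curvature notions involved: in this mesh-parallel setting, one common convention derives the face curvatures from a discrete Steiner-type area formula, with A(f) the face area, A(s) the area of its parallel Gauss-image face, and A(f, s) their mixed area (sign conventions vary between presentations):

```latex
A(f + t\,s) = \bigl(1 - 2tH + t^{2}K\bigr)\,A(f),
\qquad
H = -\frac{A(f,s)}{A(f)},
\qquad
K = \frac{A(s)}{A(f)}.
```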

  10. Surface topography of parallel grinding process for nonaxisymmetric aspheric lens

    International Nuclear Information System (INIS)

    Zhang Ningning; Wang Zhenzhong; Pan Ri; Wang Chunjin; Guo Yinbiao

    2012-01-01

    Workpiece surface profile, texture, and roughness can be predicted by modeling the topography of the wheel surface and the kinematics of the grinding process, which together form an important part of precision grinding process theory. Parallel grinding is an important method for machining nonaxisymmetric aspheric lenses, but there are few reports on relevant simulation. In this paper, a simulation method based on parallel grinding for precision machining of aspheric lenses is proposed. The method combines modeling of the random surface of the wheel with modeling of the single-grain track based on arc wheel contact points. A mathematical algorithm for surface topography is then proposed and applied under different machining parameters. The consistency between the results of simulation and experiment proves that the algorithm is correct and efficient. (authors)

  11. Normal Incidence for Graded Index Surfaces

    Science.gov (United States)

    Khankhoje, Uday K.; Van Zyl, Jakob

    2011-01-01

    A plane wave is incident normally from vacuum (η₀ = 1) onto a smooth surface. The substrate has three layers; the topmost layer has thickness d₁ and permittivity ε₁. The corresponding quantities for the next layer are d₂ and ε₂, while the third layer, which is semi-infinite, has index η₃. The Hallikainen model [1] is used to relate volumetric soil moisture to the permittivity. Here, we consider the relation for the real part of the permittivity for a typical loam soil: ε′(mv) = 2.8571 + 3.9678 × mv + 118.85 × mv².
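The quoted loam-soil relation (with the last coefficient read as 118.85; the abstract's "118:85" is a misprint) can be evaluated directly. The function name below is illustrative, not from the paper:

```python
def real_permittivity_loam(mv):
    """Real part of the dielectric permittivity of a typical loam soil,
    as a quadratic in volumetric soil moisture mv (Hallikainen-style
    empirical fit quoted in the abstract)."""
    return 2.8571 + 3.9678 * mv + 118.85 * mv ** 2

# Permittivity rises steeply with soil moisture:
for mv in (0.0, 0.1, 0.2, 0.3):
    print(f"mv = {mv:.1f}  eps' = {real_permittivity_loam(mv):.2f}")
```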

  12. Pair-breaking effects by parallel magnetic field in electric-field-induced surface superconductivity

    Science.gov (United States)

    Nabeta, Masahiro; Tanaka, Kenta K.; Onari, Seiichiro; Ichioka, Masanori

    2016-11-01

    We study paramagnetic pair-breaking in electric-field-induced surface superconductivity, when magnetic field is applied parallel to the surface. The calculation is performed by Bogoliubov-de Gennes theory with s-wave pairing, including the screening effect of electric fields by the induced carriers near the surface. Due to the Zeeman shift by applied fields, electronic states at higher-level sub-bands become normal-state-like. Therefore, the magnetic field dependence of Fermi-energy density of states reflects the multi-gap structure in the surface superconductivity.

  13. Stability analysis of rough surfaces in adhesive normal contact

    Science.gov (United States)

    Rey, Valentine; Bleyer, Jeremy

    2018-03-01

    This paper deals with adhesive frictionless normal contact between one elastic flat solid and one stiff solid with a rough surface. After computing the equilibrium solution of the energy minimization principle subject to the contact constraints, we study the stability of this equilibrium solution. The study of stability implies solving an eigenvalue problem with inequality constraints. To achieve this goal, we propose a proximal algorithm which qualifies the solution as stable or unstable and gives the instability modes. The method has a low computational cost since no linear system inversion is required, and it is also suitable for parallel implementation. Illustrations are given for Hertzian contact and for rough contact.

  14. Mechanics of curved surfaces, with application to surface-parallel cracks

    Science.gov (United States)

    Martel, Stephen J.

    2011-10-01

    The surfaces of many bodies are weakened by shallow enigmatic cracks that parallel the surface. A re-formulation of the static equilibrium equations in a curvilinear reference frame shows that a tension perpendicular to a traction-free surface can arise at shallow depths even under the influence of gravity. This condition occurs if σ₁₁k₁ + σ₂₂k₂ > ρg cos β, where k₁ and k₂ are the principal curvatures (negative if convex) at the surface, σ₁₁ and σ₂₂ are tensile (positive) or compressive (negative) stresses parallel to the respective principal curvature arcs, ρ is material density, g is gravitational acceleration, and β is the surface slope. The curvature terms do not appear in the equilibrium equations in a Cartesian reference frame. Compression parallel to a convex surface thus can cause subsurface cracks to open. A quantitative test of the relationship above accounts for where sheeting joints (prominent shallow surface-parallel fractures in rock) are abundant and for where they are scarce or absent in the varied topography of Yosemite National Park, resolving key aspects of a classic problem in geology: the formation of sheeting joints. Moreover, since the equilibrium equations are independent of rheology, the relationship above can be applied to delamination or spalling caused by surface-parallel cracks in many materials.
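The inequality can be checked numerically. The sketch below uses hypothetical values in the spirit of the Yosemite discussion (roughly 10 MPa of surface-parallel compression on a convex surface, granite-like density); the function name, curvatures, and slope are assumptions, not from the paper:

```python
import math

def surface_normal_tension_rate(s11, s22, k1, k2, rho, beta_deg, g=9.81):
    """Left side minus right side of the condition s11*k1 + s22*k2 > rho*g*cos(beta).
    A positive value means tension develops just below a traction-free surface.
    Signs follow the abstract: curvatures negative if convex, stresses
    negative in compression."""
    beta = math.radians(beta_deg)
    return s11 * k1 + s22 * k2 - rho * g * math.cos(beta)

# ~10 MPa compression along both principal arcs of a gently convex dome:
rate = surface_normal_tension_rate(s11=-10e6, s22=-10e6,
                                   k1=-0.005, k2=-0.002,  # 1/m, convex
                                   rho=2700.0, beta_deg=20.0)
print(rate > 0)  # condition for subsurface cracks to open
```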

  15. Pair-breaking effects by parallel magnetic field in electric-field-induced surface superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Nabeta, Masahiro, E-mail: nabeta@mp.okayama-u.ac.jp; Tanaka, Kenta K.; Onari, Seiichiro; Ichioka, Masanori

    2016-11-15

    Highlights: • Zeeman effect shifts superconducting gaps of sub-band system, towards pair-breaking. • Higher-level sub-bands become normal-state-like electronic states by magnetic fields. • Magnetic field dependence of zero-energy DOS reflects multi-gap superconductivity. - Abstract: We study paramagnetic pair-breaking in electric-field-induced surface superconductivity, when magnetic field is applied parallel to the surface. The calculation is performed by Bogoliubov-de Gennes theory with s-wave pairing, including the screening effect of electric fields by the induced carriers near the surface. Due to the Zeeman shift by applied fields, electronic states at higher-level sub-bands become normal-state-like. Therefore, the magnetic field dependence of Fermi-energy density of states reflects the multi-gap structure in the surface superconductivity.

  16. Pair-breaking effects by parallel magnetic field in electric-field-induced surface superconductivity

    International Nuclear Information System (INIS)

    Nabeta, Masahiro; Tanaka, Kenta K.; Onari, Seiichiro; Ichioka, Masanori

    2016-01-01

    Highlights: • Zeeman effect shifts superconducting gaps of sub-band system, towards pair-breaking. • Higher-level sub-bands become normal-state-like electronic states by magnetic fields. • Magnetic field dependence of zero-energy DOS reflects multi-gap superconductivity. - Abstract: We study paramagnetic pair-breaking in electric-field-induced surface superconductivity, when magnetic field is applied parallel to the surface. The calculation is performed by Bogoliubov-de Gennes theory with s-wave pairing, including the screening effect of electric fields by the induced carriers near the surface. Due to the Zeeman shift by applied fields, electronic states at higher-level sub-bands become normal-state-like. Therefore, the magnetic field dependence of Fermi-energy density of states reflects the multi-gap structure in the surface superconductivity.

  17. Symmetric and asymmetric capillary bridges between a rough surface and a parallel surface.

    Science.gov (United States)

    Wang, Yongxin; Michielsen, Stephen; Lee, Hoon Joo

    2013-09-03

    Although the formation of a capillary bridge between two parallel surfaces has been extensively studied, the majority of research has described only symmetric capillary bridges between two smooth surfaces. In this work, an instrument was built to form a capillary bridge by squeezing a liquid drop on one surface with another surface. An analytical solution that describes the shape of symmetric capillary bridges joining two smooth surfaces has been extended to bridges that are asymmetric about the midplane and to rough surfaces. The solution, given by elliptical integrals of the first and second kind, is consistent with a constant Laplace pressure over the entire surface and has been verified for water, Kaydol, and dodecane drops forming symmetric and asymmetric bridges between parallel smooth surfaces. This solution has been applied to asymmetric capillary bridges between a smooth surface and a rough fabric surface as well as symmetric bridges between two rough surfaces. These solutions have been experimentally verified, and good agreement has been found between predicted and experimental profiles for small drops where the effect of gravity is negligible. Finally, a protocol for determining the profile from the volume and height of the capillary bridge has been developed and experimentally verified.

  18. Experimental analysis of surface finish in normal conducting cavities

    Science.gov (United States)

    Zarrebini-Esfahani, A.; Aslaninejad, M.; Ristic, M.; Long, K.

    2017-10-01

    A normal conducting 805 MHz test cavity with a built-in, button-shaped sample is used to conduct a series of surface treatment experiments. The button enhances the local fields and influences the likelihood of an RF breakdown event. Because of its smaller size compared to the whole cavity surface, the button allows practical investigation of the effects of cavity surface preparation on RF breakdown. Manufacturing techniques and steps for preparing the buttons to improve the surface quality are described in detail. It was observed that even after the final stage of surface treatment, defects could still be found on the surface of the cavities.

  19. Seasonality in onshore normalized wind profiles above the surface layer

    DEFF Research Database (Denmark)

    Nissen, Jesper Nielsen; Gryning, Sven-Erik

    2010-01-01

    This work aims to study the seasonal difference in normalized wind speed above the surface layer as observed at the 160 m high mast at the coastal site Høvsøre for winds from the sea (westerly). Normalized and stability-averaged wind speeds above the surface layer are observed to be 20 to 50% larger in the winter/spring seasons compared to the summer/autumn seasons for winds from the west within the same atmospheric stability class. A method combining the mesoscale model COAMPS and observations of the surface stability of the marine boundary layer is presented. The objective of the method is to reconstruct the seasonal signal in normalized wind speed and identify the physical process behind it. The method proved reasonably successful in capturing the relative difference in wind speed between seasons, indicating that the simulated physical processes are likely candidates for the observed seasonal signal.

  20. RPE cell surface proteins in normal and dystrophic rats

    International Nuclear Information System (INIS)

    Clark, V.M.; Hall, M.O.

    1986-01-01

    Membrane-bound proteins in plasma membrane enriched fractions from cultured rat RPE were analyzed by two-dimensional gel electrophoresis. Membrane proteins were characterized on three increasingly specific levels. Total protein was visualized by silver staining. A maximum of 102 separate proteins were counted in silver-stained gels. Glycoproteins were labeled with 3H-glucosamine or 3H-fucose and detected by autoradiography. Thirty-eight fucose-labeled and 61-71 glucosamine-labeled proteins were identified. All of the fucose-labeled proteins were labeled with glucosamine-derived radioactivity. Proteins exposed at the cell surface were labeled by lactoperoxidase-catalyzed radioiodination prior to preparation of membranes for two-dimensional analysis. Forty separate 125I-labeled surface proteins were resolved by two-dimensional electrophoresis/autoradiography. Comparison with the glycoprotein map showed that a number of these surface labeled proteins were glycoproteins. Two-dimensional maps of total protein, fucose-labeled, and glucosamine-labeled glycoproteins, and 125I-labeled surface proteins of membranes from dystrophic (RCS rdy-p+) and normal (Long Evans or RCS rdy+p+) RPE were compared. No differences in the total protein or surface-labeled proteins were observed. However, the results suggest that a 183K glycoprotein is more heavily glycosylated with glucosamine and fucose in normal RPE membranes as compared to membranes from dystrophic RPE

  1. Formation of Sheeting Joints as a Result of Compression Parallel to Convex Surfaces, With Examples from Yosemite National Park, California

    Science.gov (United States)

    Martel, S. J.

    2008-12-01

    The formation of sheeting joints has been an outstanding problem in geology. New observations and analyses indicate that sheeting joints develop in response to a near-surface tension induced by compressive stresses parallel to a convex slope (hypothesis 1) rather than by removal of overburden by erosion, as conventionally assumed (hypothesis 2). Opening-mode displacements across the joints, together with the absence of mineral precipitates within the joints, mean that sheeting joints open in response to a near-surface tension normal to the surface rather than a pressurized fluid. A plot of this tensile stress as a function of depth normal to the surface reveals that a true tension must arise in the shallow subsurface if the rate of change of that tensile stress with depth is positive at the surface. Static equilibrium requires this rate (derivative) to equal P₂₂k₂ + P₃₃k₃ − ρg cos β, where k₂ and k₃ are the principal curvatures of the surface, P₂₂ and P₃₃ are the respective surface-parallel normal stresses along the principal curvatures, ρ is the material density, g is gravitational acceleration, and β is the slope. This derivative will be positive, and sheeting joints can open, if at least one principal curvature is sufficiently convex (negative) and the surface-parallel stresses are sufficiently compressive (negative). At several sites with sheeting joints (e.g., Yosemite National Park in California), the measured topographic curvatures and the measured surface-parallel stresses of about -10 MPa combine to meet this condition. In apparent violation of hypothesis 1, sheeting joints occur locally at the bottom of Tenaya Canyon, one of the deepest glaciated, U-shaped (concave) canyons in the park. The canyon-bottom sheeting joints only occur, however, where the canyon is convex downstream, a direction that nearly coincides with the direction of the most compressive stress measured in the vicinity. The most compressive stress acting along the convex ...

  2. Image reconstruction method for electrical capacitance tomography based on the combined series and parallel normalization model

    International Nuclear Information System (INIS)

    Dong, Xiangyuan; Guo, Shuqing

    2008-01-01

    In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on a combined series and parallel capacitance model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem, and the adaptive coefficient of the combined model is derived by numerical optimization. Simulation results indicate that the method produces higher-quality images than algorithms based on the parallel or series model alone for the cases tested in this paper, providing a new algorithm for ECT applications.
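The series and parallel normalization models are standard in the ECT literature: normalized capacitance is linear in C for the parallel model and linear in 1/C for the series model. A minimal sketch of their convex combination, with `alpha` standing in for the paper's adaptive coefficient (the function names are illustrative, not from the paper):

```python
def normalize_parallel(c, c_low, c_high):
    """Parallel capacitance model: normalized capacitance linear in C.
    c_low/c_high are the calibration capacitances (empty/full pipe)."""
    return (c - c_low) / (c_high - c_low)

def normalize_series(c, c_low, c_high):
    """Series capacitance model: normalized capacitance linear in 1/C."""
    return (1.0 / c - 1.0 / c_low) / (1.0 / c_high - 1.0 / c_low)

def normalize_combined(c, c_low, c_high, alpha):
    """Convex combination of the two models; alpha plays the role of the
    adaptive coefficient obtained by numerical optimization in the paper."""
    return (alpha * normalize_parallel(c, c_low, c_high)
            + (1.0 - alpha) * normalize_series(c, c_low, c_high))

print(normalize_combined(1.5, 1.0, 2.0, alpha=0.5))
```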

  3. Growth of contact area between rough surfaces under normal stress

    Science.gov (United States)

    Stesky, R. M.; Hannan, S. S.

    1987-05-01

    The contact area between deforming rough surfaces in marble, alabaster, and quartz was measured from thin sections of surfaces bonded under load with a low-viscosity epoxy resin. The marble and alabaster samples had contact areas that increased with stress at an accelerating rate. This result suggests that the strength of the asperity contacts decreased progressively during the deformation, following some form of strain-weakening relationship. This conclusion is supported by petrographic observation of the thin sections, which indicates that much of the deformation was cataclastic, with minor twinning of calcite and kinking of gypsum. In the case of the quartz, the observed contact area was small and increased approximately linearly with normal stress. Only the irreversible cataclastic deformation was observed; however, strain-induced birefringence and cracking of the epoxy, not observed with the other rocks, suggest that significant elastic deformation occurred but recovered during unloading.

  4. Numerical analysis of surface subsidence in asymmetric parallel highway tunnels

    Directory of Open Access Journals (Sweden)

    Ratan Das

    2017-02-01

    Tunnelling-related hazards are very common in the Himalayan terrain, and a number of such instances have been reported. Several twin tunnels are being planned for transportation purposes, which will require a good understanding of tunnel deformation and surface settlement prediction during the engineering life of the structure. The deformational behaviour, design of sequential excavation, and support of any jointed rock mass are challenging during underground construction. We raise several commonly assumed issues in performing stability analysis of underground openings at shallow depth. For this purpose, the Kainchi-mod Nerchowck twin tunnels (Himachal Pradesh, India) are taken for an in-depth analysis of the stability of two asymmetric tunnels, to address the influence of topography, twin-tunnel dimension, and geometry. The host rock encountered during excavation is composed mainly of moderately to highly jointed grey sandstone, maroon sandstone, and siltstones. In contrast to equidimensional tunnels, where the maximum subsidence is observed vertically above the centreline of the tunnel, the results of the present study show a shifting of the maximum subsidence away from the tunnel centreline. The maximum subsidence of 0.99 mm is observed 4.54 m to the left of the escape tunnel centreline, whereas the maximum subsidence of 3.14 mm is observed 8.89 m to the right of the main tunnel centreline. This shifting clearly indicates the influence of the undulating topography and the non-equidimensional, noncircular tunnel.

  5. Parallel Simulation of Three-Dimensional Free Surface Fluid Flow Problems

    International Nuclear Information System (INIS)

    BAER, THOMAS A.; SACKINGER, PHILIP A.; SUBIA, SAMUEL R.

    1999-01-01

    Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high-performance computing. We describe parallel computation of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations, and a ''pseudo-solid'' mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Also discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulation include problem decomposition to distribute computational work equally across an SPMD computer, and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel computing environment due to the difficulty of solving large-scale systems. Parallel computations are demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three-dimensional free surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speed-ups for fixed problem size, a class of problems of immediate practical importance

  6. Milestone Completion Report WBS 1.3.5.05 ECP/VTK-m FY17Q3 [MS-17/02] Faceted Surface Normals STDA05-3.

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    The FY17Q3 milestone of the ECP/VTK-m project includes the completion of a VTK-m filter that computes normal vectors for surfaces. Normal vectors are those that point perpendicular to the surface and are an important direction when rendering the surface. The implementation includes the parallel algorithm itself, a filter module to simplify integrating it into other software, and documentation in the VTK-m Users’ Guide. With the completion of this milestone, we are able to provide rendering systems with the information necessary for appropriate shading of surfaces. This milestone also feeds into subsequent milestones that progressively improve the approximation of surface direction.
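VTK-m's filter is implemented in data-parallel C++; as a language-agnostic illustration of what a faceted surface-normals computation does, the per-facet normal is simply the normalized cross product of two edge vectors (a hypothetical Python sketch, not the VTK-m API):

```python
def facet_normal(p0, p1, p2):
    """Unit normal of a triangular facet via the cross product of two edge
    vectors, the standard computation behind a faceted surface-normals filter.
    Points are 3-tuples; the winding order p0 -> p1 -> p2 fixes the sign."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = sum(x * x for x in n) ** 0.5
    return [x / length for x in n]

print(facet_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # → [0.0, 0.0, 1.0]
```

In a real data-parallel filter this kernel runs independently over every cell of the mesh, which is what makes it a natural fit for VTK-m's execution model.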

  7. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Loring, Burlen; Karimabadi, Homa; Rortershteyn, Vadim

    2014-07-01

    The surface line integral convolution (LIC) visualization technique produces dense visualizations of vector fields on arbitrary surfaces. We present a screen-space surface LIC algorithm for use in distributed-memory data-parallel sort-last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL, we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen-space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high-performance computing systems using data from turbulent plasma simulations.

  8. Large area nanoscale patterning of silicon surfaces by parallel local oxidation

    Energy Technology Data Exchange (ETDEWEB)

    Losilla, N S; Martinez, J; Garcia, R [Instituto de Microelectronica de Madrid, CSIC, Isaac Newton 8, 28760 Tres Cantos, Madrid (Spain)

    2009-11-25

    The homogeneity and the reproducibility of parallel local oxidation have been improved by introducing a thin film of polymethylmethacrylate (PMMA) between the stamp and the silicon surface. The flexibility of the polymer film enables a homogeneous contact of the stamp with the silicon surface to be achieved. The oxides obtained yield better aspect ratios compared with the ones created with no PMMA layer. The pattern is formed when a bias voltage is applied between the stamp and the silicon surface for 1 min. The patterning can be done by a step and repeat technique and is reproducible across a centimetre length scale. Once the oxide nanostructures have been created, the polymer is removed by etching in acetone. Finally, parallel local oxidation is applied to fabricate silicon nanostructures and templates for the growth of organic molecules.

  9. A massively parallel GPU-accelerated model for analysis of fully nonlinear free surface waves

    DEFF Research Database (Denmark)

    Engsig-Karup, Allan Peter; Madsen, Morten G.; Glimberg, Stefan Lemvig

    2011-01-01

    We implement and evaluate a massively parallel and scalable algorithm based on a multigrid preconditioned Defect Correction method for the simulation of fully nonlinear free surface flows. The simulations are based on a potential model that describes wave propagation over uneven bottoms in three space dimensions and is useful for fast analysis and prediction purposes in coastal and offshore engineering. A dedicated numerical model based on the proposed algorithm is executed in parallel by utilizing an affordable modern special-purpose graphics processing unit (GPU). The model is based on a low-storage flexible-order accurate finite difference method that is known to be efficient and scalable on a CPU core (single thread). To achieve parallel performance of the relatively complex numerical model, we investigate a new trend in high-performance computing where many-core GPUs are utilized as high ...

  10. Normalization.

    Science.gov (United States)

    Cuevas, Eduardo J.

    1997-01-01

    Discusses cornerstone of Montessori theory, normalization, which asserts that if a child is placed in an optimum prepared environment where inner impulses match external opportunities, the undeviated self emerges, a being totally in harmony with its surroundings. Makes distinctions regarding normalization, normalized, and normality, indicating how…

  11. Memory effect on energy losses of charged particles moving parallel to solid surface

    International Nuclear Information System (INIS)

    Kwei, C.M.; Tu, Y.H.; Hsu, Y.H.; Tung, C.J.

    2006-01-01

    Theoretical derivations were made for the induced potential and the stopping power of a charged particle moving close and parallel to the surface of a solid. It was shown that the induced potential produced by the interaction of the particle and the solid depends not only on the current velocity but also on the velocity of the particle before its last inelastic interaction. In other words, the particle keeps a memory of its previous velocity, v′, in determining the stopping power for the particle of velocity v. Based on dielectric response theory, formulas were derived for the induced potential and the stopping power with memory effect. An extended Drude dielectric function with spatial dispersion was used in applying these formulas to a proton moving parallel to a Si surface. It was found that the induced potential with memory effect lies between the induced potentials without memory effect for constant velocities v′ and v. The memory effect becomes manifest as the proton changes its velocity in the previous inelastic interaction, and it also reduces the stopping power of the proton. The formulas derived in the present work can be applied to any solid surface and to a charged particle moving with an arbitrary parallel trajectory either inside or outside the solid

  12. From Massively Parallel Algorithms and Fluctuating Time Horizons to Nonequilibrium Surface Growth

    International Nuclear Information System (INIS)

    Korniss, G.; Toroczkai, Z.; Novotny, M. A.; Rikvold, P. A.

    2000-01-01

    We study the asymptotic scaling properties of a massively parallel algorithm for discrete-event simulations where the discrete events are Poisson arrivals. The evolution of the simulated time horizon is analogous to a nonequilibrium surface. Monte Carlo simulations and a coarse-grained approximation indicate that the macroscopic landscape in the steady state is governed by the Edwards-Wilkinson Hamiltonian. Since the efficiency of the algorithm corresponds to the density of local minima in the associated surface, our results imply that the algorithm is asymptotically scalable. (c) 2000 The American Physical Society
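    In the conservative scheme analyzed above, a processing element advances its local simulated time only when that time does not exceed its neighbors', i.e., only sites at a local minimum of the time surface update. A toy sketch of this rule on a ring (the exponential increments reflect Poisson arrivals; the ring size and sweep count are arbitrary choices):

```python
import random

random.seed(1)

L = 1000                      # number of processing elements on a ring
tau = [0.0] * L               # local simulated times (the "surface")

def sweep(tau):
    """One parallel step: only local minima may advance; returns utilization."""
    updated = 0
    old = tau[:]
    for i in range(L):
        left, right = old[(i - 1) % L], old[(i + 1) % L]
        if old[i] <= left and old[i] <= right:   # local minimum: safe to update
            tau[i] += random.expovariate(1.0)    # Poisson arrivals -> exp. waits
            updated += 1
    return updated / L

for _ in range(200):          # let the time surface reach steady state
    u = sweep(tau)
print(0.15 < u < 0.40)        # steady-state density of local minima stays finite
```

The nonzero steady-state density of local minima is exactly the sense in which the algorithm is asymptotically scalable: a finite fraction of processors does useful work per step regardless of system size.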

  13. Determination of Optimum Viewing Angles for the Angular Normalization of Land Surface Temperature over Vegetated Surface

    Directory of Open Access Journals (Sweden)

    Huazhong Ren

    2015-03-01

    Multi-angular observation of land-surface thermal radiation is considered a promising method for performing the angular normalization of land surface temperature (LST) retrieved from remote sensing data. This paper investigates the minimum set of viewing angles required to perform such normalization of LST. The kernel-driven bidirectional reflectance distribution function (BRDF) model is first extended to the thermal infrared (TIR) domain as a TIR-BRDF model, and its uncertainty is shown to be less than 0.3 K when used to fit hemispheric directional thermal radiation. A local optimum three-angle combination is found and verified using the TIR-BRDF model based on two patterns: the single-point pattern and the linear-array pattern. The TIR-BRDF model is applied to an airborne multi-angular dataset to retrieve the nadir LST (Te-nadir) from different viewing directions, and the results show that the model can obtain reliable Te-nadir from 3-4 directional observations with large angle intervals, and thus large temperature angular variations. Te-nadir is generally larger than the temperature in slant directions, with a difference of approximately 0.5~2.0 K for vegetated pixels and up to several kelvins for non-vegetated pixels. These findings will facilitate the future development of multi-angular thermal infrared sensors.
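    A kernel-driven model of this kind is linear in its coefficients, so fitting it to a handful of directional observations and evaluating it at nadir reduces to least squares. The kernels and the synthetic directional temperatures below are placeholders for illustration, not the paper's actual TIR kernels or data:

```python
import numpy as np

def fit_kernel_model(theta, obs):
    """Fit obs(theta) ≈ f0 + f1*k1(theta) + f2*k2(theta) by least squares."""
    k1 = np.cos(theta)            # placeholder "volumetric" kernel
    k2 = np.sin(theta) ** 2       # placeholder "geometric" kernel
    A = np.column_stack([np.ones_like(theta), k1, k2])
    coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
    return coef

# 4 viewing directions with large angle intervals, as the paper recommends
theta = np.radians([0.0, 20.0, 40.0, 55.0])
obs = np.array([300.0, 299.2, 297.1, 294.8])   # synthetic directional temps (K)
coef = fit_kernel_model(theta, obs)
T_nadir = coef @ np.array([1.0, np.cos(0.0), 0.0])  # model evaluated at nadir
print(round(float(T_nadir), 1))  # → 300.0
```

With three unknown coefficients, at least three well-separated angles are needed, which is the counting argument behind the 3-4 direction requirement.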

  14. Fluid jet-array parallel machining of optical microstructure array surfaces.

    Science.gov (United States)

    Wang, Chunjin; Cheung, Chi Fai; Liu, Mingyu; Lee, Wing Bun

    2017-09-18

    Optical microstructure array surfaces, such as micro-lens array and micro-groove array surfaces, are used in a growing number of optical products owing to their ability to deliver unique optical performance. The geometrical complexity of optical microstructure array surfaces makes them difficult to fabricate. In this paper, a novel method named fluid jet-array parallel machining (FJAPM) is proposed to provide a new way to generate microstructure array surfaces with high productivity. In this process, an array of abrasive water jets is pumped out of a nozzle, and each fluid jet simultaneously impinges on the target surface to remove material independently. The jet-array nozzle was first optimized to diminish the effect of jet interference, based on an experimental investigation of two-jet nozzles with different jet intervals. Material removal and surface generation models were built and validated by comparing simulated and experimental results for the generation of several kinds of microstructure array surfaces. The effects of several process factors were then discussed, including fluid pressure, nozzle geometry, tool path, and dwell time. The experimental results and analysis prove that the FJAPM process is an effective way to fabricate optical microstructure array surfaces with high productivity.

  15. Pros and cons of rotating ground motion records to fault-normal/parallel directions for response history analysis of buildings

    Science.gov (United States)

    Kalkan, Erol; Kwong, Neal S.

    2014-01-01

    According to the regulatory building codes in the United States (e.g., 2010 California Building Code), at least two horizontal ground motion components are required for three-dimensional (3D) response history analysis (RHA) of building structures. For sites within 5 km of an active fault, these records should be rotated to fault-normal/fault-parallel (FN/FP) directions, and two RHAs should be performed separately (when FN and then FP are aligned with the transverse direction of the structural axes). It is assumed that this approach will lead to two sets of responses that envelope the range of possible responses over all nonredundant rotation angles. This assumption is examined here, for the first time, using a 3D computer model of a six-story reinforced-concrete instrumented building subjected to an ensemble of bidirectional near-fault ground motions. Peak values of engineering demand parameters (EDPs) were computed for rotation angles ranging from 0 through 180° to quantify the difference between peak values of EDPs over all rotation angles and those due to FN/FP direction rotated motions. It is demonstrated that rotating ground motions to FN/FP directions (1) does not always lead to the maximum responses over all angles, (2) does not always envelope the range of possible responses, and (3) does not provide maximum responses for all EDPs simultaneously even if it provides a maximum response for a specific EDP.
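    Rotating a pair of recorded horizontal components into fault-normal/fault-parallel directions is a plane rotation, and the study's comparison amounts to scanning that rotation over all angles. A minimal sketch with synthetic histories (the 35° rotation angle is an arbitrary illustration, not tied to any particular fault geometry):

```python
import numpy as np

def rotate_components(a1, a2, angle_deg):
    """Rotate two orthogonal horizontal acceleration histories by angle_deg."""
    th = np.radians(angle_deg)
    c, s = np.cos(th), np.sin(th)
    return c * a1 + s * a2, -s * a1 + c * a2   # (fault-normal, fault-parallel)

t = np.linspace(0.0, 10.0, 1001)
a_ew = np.sin(2 * np.pi * 1.0 * t)             # synthetic east-west component
a_ns = 0.5 * np.cos(2 * np.pi * 1.5 * t)       # synthetic north-south component

fn, fp = rotate_components(a_ew, a_ns, 35.0)   # hypothetical FN/FP orientation
# scan all nonredundant angles, as the study does for the demand parameters
peaks = [np.abs(rotate_components(a_ew, a_ns, a)[0]).max() for a in range(180)]
print(max(peaks) >= np.abs(fn).max())  # → True: FN/FP need not give the maximum
```

The scan illustrates the paper's point: the peak response at the FN/FP angle is a sample from, not the envelope of, the responses over all rotation angles.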

  16. Parallel tempering Monte Carlo simulations of lysozyme orientation on charged surfaces

    Science.gov (United States)

    Xie, Yun; Zhou, Jian; Jiang, Shaoyi

    2010-02-01

    In this work, the parallel tempering Monte Carlo (PTMC) algorithm is applied to accurately and efficiently identify the global-minimum-energy orientation of a protein adsorbed on a surface in a single simulation. When the PTMC method is applied to simulate lysozyme orientation on charged surfaces, it is found that lysozyme is easily adsorbed on negatively charged surfaces with "side-on" and "back-on" orientations. When driven by dominant electrostatic interactions, lysozyme tends to adsorb on negatively charged surfaces in the side-on orientation, for which the active site of lysozyme faces sideways. The side-on orientation agrees well with experimental results in which the adsorbed orientation of lysozyme is determined by electrostatic interactions. As the contribution from van der Waals interactions gradually dominates, the back-on orientation becomes the preferred one; for this orientation, the active site of lysozyme faces outward, conforming to experimental results in which the orientation of adsorbed lysozyme is co-determined by electrostatic and van der Waals interactions. It is also found that, despite its net positive charge, lysozyme can be adsorbed on positively charged surfaces with both "end-on" and back-on orientations, owing to the nonuniform charge distribution over the lysozyme surface and the screening effect of ions in solution. The PTMC simulation method provides a way to determine the preferred orientation of proteins on surfaces for biosensor and biomaterial applications.
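    Parallel tempering runs replicas at several temperatures, makes Metropolis moves within each, and periodically proposes swaps between neighboring temperatures so that cold replicas escape local minima. A generic sketch on a toy one-dimensional double-well energy (the energy function and temperature ladder are illustrative, not the lysozyme-surface force field):

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # toy double-well landscape standing in for orientation-dependent energy
    return (x**2 - 1.0) ** 2

def parallel_tempering(n_sweeps=3000, temps=(0.05, 0.2, 0.5, 1.5)):
    betas = [1.0 / T for T in temps]
    xs = [rng.uniform(-2.0, 2.0) for _ in temps]
    best_x, best_e = xs[0], energy(xs[0])
    for _ in range(n_sweeps):
        for i, beta in enumerate(betas):        # Metropolis move per replica
            prop = xs[i] + rng.normal(0.0, 0.3)
            dE = energy(prop) - energy(xs[i])
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                xs[i] = prop
            if energy(xs[i]) < best_e:
                best_x, best_e = xs[i], energy(xs[i])
        j = int(rng.integers(len(betas) - 1))   # one neighbor-swap attempt
        d = (betas[j] - betas[j + 1]) * (energy(xs[j]) - energy(xs[j + 1]))
        if rng.random() < np.exp(min(0.0, d)):
            xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return best_x, best_e

x_best, e_best = parallel_tempering()
print(abs(abs(x_best) - 1.0) < 0.1)  # → True: global minima sit at x = ±1
```

The hot replicas explore broadly while the cold ones refine, which is why a single PTMC run can locate the global-minimum-energy orientation.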

  17. Preliminary surface analysis of etched, bleached, and normal bovine enamel

    International Nuclear Information System (INIS)

    Ruse, N.D.; Smith, D.C.; Torneck, C.D.; Titley, K.C.

    1990-01-01

    X-ray photoelectron spectroscopic (XPS) and secondary ion-mass spectroscopic (SIMS) analyses were performed on unground un-pumiced, unground pumiced, and ground labial enamel surfaces of young bovine incisors exposed to four different treatments: (1) immersion in 35% H2O2 for 60 min; (2) immersion in 37% H3PO4 for 60 s; (3) immersion in 35% H2O2 for 60 min, in distilled water for two min, and in 37% H3PO4 for 60 s; (4) immersion in 37% H3PO4 for 60 s, in distilled water for two min, and in 35% H2O2 for 60 min. Untreated unground un-pumiced, unground pumiced, and ground enamel surfaces, as well as synthetic hydroxyapatite surfaces, served as controls for intra-tooth evaluations of the effects of different treatments. The analyses indicated that exposure to 35% H2O2 alone, besides increasing the nitrogen content, produced no other significant change in the elemental composition of any of the enamel surfaces investigated. Exposure to 37% H3PO4, however, produced a marked decrease in calcium (Ca) and phosphorus (P) concentrations and an increase in carbon (C) and nitrogen (N) concentrations in unground un-pumiced specimens only, and a decrease in C concentration in ground specimens. These results suggest that the reported decrease in the adhesive bond strength of resin to 35% H2O2-treated enamel is not caused by a change in the elemental composition of treated enamel surfaces. They also suggest that an organic-rich layer, unaffected by acid-etching, may be present on the unground un-pumiced surface of young bovine incisors. This layer can be removed by thorough pumicing or by grinding. An awareness of its presence is important when young bovine teeth are used in a model system for evaluation of resin adhesiveness

  18. Measurement of tendon reflexes by surface electromyography in normal subjects

    NARCIS (Netherlands)

    Stam, J.; van Crevel, H.

    1989-01-01

    A simple method for measuring the tendon reflexes was developed. A manually operated, electronic reflex hammer was applied that enabled measurement of the strength of tendon taps. Reflex responses were recorded by surface electromyography. Stimulus-response relations and latencies of tendon reflexes…

  19. Dynamical image potential and induced forces for charged particles moving parallel to a solid surface

    International Nuclear Information System (INIS)

    Arista, N.R.

    1994-01-01

    The dynamical image potential and ensuing forces induced by a charged particle moving parallel to a solid surface are investigated by using a dielectric formulation for semi-infinite dispersive media. The adiabatic behavior of the field in the asymptotic range is discussed in a general way using a multipole expansion. Several calculations illustrate the behavior of the field using both a simple model, where the surface response is approximated by a single plasma resonance, and a more realistic representation of the medium based upon the empirical information on the optical constants for various solids (Al, Cu, Ag, and Au). The model parameters may be adjusted to provide very good agreement with the optical-data integrations of the stopping and lateral forces on the moving charge. On the other hand, important differences in the description of the wake potential using either the simple plasma resonance model, or the optical-data representation, are obtained for Cu, Ag, and Au

  20. Lining cells on normal human vertebral bone surfaces

    International Nuclear Information System (INIS)

    Henning, C.B.; Lloyd, E.L.

    1982-01-01

    Thoracic vertebrae from two individuals with no bone disease were studied with the electron microscope to determine cell morphology in relation to bone mineral. The work was undertaken to determine if cell morphology or spatial relationships between the bone lining cells and bone mineral could account for the relative infrequency of bone tumors which arise at this site following radium intake, when compared with other sites, such as the head of the femur. Cells lining the vertebral mineral were found to be generally rounded in appearance with varied numbers of cytoplasmic granules, and they appeared to have a high density per unit of surface area. These features contrasted with the single layer of flattened cells characteristic of the bone lining cells of the femur. A tentative discussion of the reasons for the relative infrequency of tumors in the vertebrae following radium acquisition is presented

  1. Solving very large scattering problems using a parallel PWTD-enhanced surface integral equation solver

    KAUST Repository

    Liu, Yang

    2013-07-01

    The computational complexity and memory requirements of multilevel plane wave time domain (PWTD)-accelerated marching-on-in-time (MOT)-based surface integral equation (SIE) solvers scale as O(N_t N_s log^2 N_s) and O(N_s^1.5); here N_t and N_s denote the numbers of temporal and spatial basis functions discretizing the current [Shanker et al., IEEE Trans. Antennas Propag., 51, 628-641, 2003]. In the past, serial versions of these solvers have been successfully applied to the analysis of scattering from perfect electrically conducting as well as homogeneous penetrable targets involving up to N_s ≈ 0.5 × 10^6 and N_t ≈ 10^3. To solve larger problems, parallel PWTD-enhanced MOT solvers are called for. Even though a simple parallelization strategy was demonstrated in the context of electromagnetic compatibility analysis [M. Lu et al., in Proc. IEEE Int. Symp. AP-S, 4, 4212-4215, 2004], by and large, progress in this area has been slow. The lack of progress can be attributed wholesale to difficulties associated with the construction of a scalable PWTD kernel. © 2013 IEEE.

  2. The Tool Path Planning for Ring Torus Optical Surface Diamond Turning with Parallel 2DOF Fast Tool Servo

    Directory of Open Access Journals (Sweden)

    Hao Aiyun

    2017-01-01

    FTS (fast tool servo) machining has always been an important method for manufacturing non-axisymmetric optical surfaces. In this paper, a novel tool path planning method is presented that plans the tool path in the two coordinate directions of a parallel-structure 2-DOF FTS simultaneously. Compared with a single-DOF FTS, this method significantly improves the ability to produce non-axisymmetric optical surfaces, such as ring torus surfaces.

  3. The need for surface-parallel sensor orientation to address energy balance closure on mountain slopes

    Science.gov (United States)

    Serrano-Ortiz, Penelope; Sánchez-Cañete, Enrique P.; Pérez-Priego, Óscar; Carrara, Arnaud; Metzger, Stefan; Kowalski, Andrew S.

    2014-05-01

    Measurements of turbulent fluxes in varying environments are one of the tools scientists and decision makers rely on for assessing and forecasting global warming. Thus, in the last two decades eddy-covariance (EC) towers have proliferated around the globe. Yet, ideal sites are rarely found, and there is a great need to extend the EC method and its theoretical underpinning to more complex terrain. In particular, several principal challenges are aggravated by sloping terrain. Nevertheless, various studies have concluded that the EC method is a useful tool to determine ecosystem energy and CO2/H2O fluxes on mountain slopes. Following the first law of thermodynamics, the validity of EC measurements is often evaluated in terms of their ability to close the balance of energy entering [net radiation minus the soil heat flux] and leaving [sum of the latent and sensible heat, measured by EC] an ecosystem. In sloping terrain, this criterion is applied with results comparable to sites located in more ideal terrain. Arguably, fluxes perpendicular to the surface are needed to assess the energy budget. However, even in sloping terrain instrument installations are frequently referenced perpendicular to the geo-potential (e.g. using a bubble level). Here, we demonstrate several advantages of installing the net radiometer and soil heat flux instruments parallel to a 16% slope with a southwest orientation. Our results reveal a diurnal hysteresis in the energy balance closure as large as 30% when net radiometer and soil heat flux instruments are installed perpendicular to the geo-potential. Installing the net radiometer and soil heat flux instruments slope-parallel mitigates this discrepancy.
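    The closure criterion discussed above compares the available energy with the measured turbulent fluxes. A minimal sketch of the closure ratio with made-up half-hourly values, plus the tilt implied by the 16% slope that distinguishes a level-mounted from a slope-parallel radiometer:

```python
import math

def closure_ratio(Rn, G, H, LE):
    """Energy balance closure: (H + LE) / (Rn - G); 1.0 means perfect closure."""
    return (H + LE) / (Rn - G)

# hypothetical midday half-hour fluxes, W/m^2
Rn, G, H, LE = 520.0, 60.0, 250.0, 180.0
print(round(closure_ratio(Rn, G, H, LE), 3))  # → 0.935

# a 16% slope corresponds to a tilt of atan(0.16) from the horizontal, so a
# radiometer leveled to the geo-potential is misaligned with the surface
# normal by about this angle
tilt = math.degrees(math.atan(0.16))
print(round(tilt, 1))  # → 9.1
```

Even a ~9° misalignment shifts the measured net radiation enough, over a diurnal cycle, to produce the hysteresis in closure that the study reports.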

  4. A Parallel and Optimization Approach for Land-Surface Temperature Retrieval on a Windows-Based PC Cluster

    Directory of Open Access Journals (Sweden)

    Bo Tie

    2018-02-01

    Land-surface temperature (LST) is a very important parameter in the geosciences. Conventional LST retrieval is based on large-scale remote-sensing (RS) images, where split-window algorithms are usually employed in a traditional stand-alone manner. When the ENVI (Environment for Visualizing Images) software is used to carry out LST retrieval on large time-series datasets of infrared RS images, the processing time on a traditional stand-alone server becomes untenable. To address this shortcoming, cluster-based parallel computing is an ideal solution. However, traditional parallel computing is mostly based on the Linux environment, while the LST algorithm developed in ENVI's Interactive Data Language (IDL) can only be run under Windows in our project. To address this problem, we combine the characteristics of LST algorithms with parallel computing and propose the design and implementation of a parallel LST retrieval algorithm using the message-passing interface (MPI) programming model on a Windows-based PC cluster platform. Furthermore, we present our solutions to the performance bottlenecks and fault-tolerance problems encountered during deployment. Our results show that, by improving the parallel environment of the storage system and network, one can effectively solve the stability issues of the parallel environment for large-scale RS data processing.
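    The parallelization is data-parallel at the scene level: each worker retrieves LST for its share of images independently. A sketch of that pattern using Python's standard-library `multiprocessing` in place of MPI, with a toy split-window form whose coefficients are placeholders, not a calibrated retrieval algorithm:

```python
from multiprocessing import Pool

def split_window_lst(bt11, bt12):
    """Toy split-window form LST = a0 + a1*T11 + a2*(T11 - T12).
    The coefficients are illustrative placeholders only."""
    a0, a1, a2 = 1.0, 1.0, 2.5
    return a0 + a1 * bt11 + a2 * (bt11 - bt12)

def process_scene(scene):
    """Worker task: retrieve LST for one scene (id, 11-um and 12-um temps)."""
    sid, bt11, bt12 = scene
    return sid, split_window_lst(bt11, bt12)

if __name__ == "__main__":
    # synthetic brightness temperatures (K) for 8 scenes
    scenes = [(i, 295.0 + i, 293.5 + i) for i in range(8)]
    with Pool(4) as pool:               # 4 workers, like 4 cluster nodes
        results = dict(pool.map(process_scene, scenes))
    print(results[0])  # → 299.75
```

An MPI version distributes the same per-scene tasks across ranks; the fault-tolerance work in the paper then deals with workers or I/O failing mid-batch.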

  5. LEMming: A Linear Error Model to Normalize Parallel Quantitative Real-Time PCR (qPCR Data as an Alternative to Reference Gene Based Methods.

    Directory of Open Access Journals (Sweden)

    Ronny Feuer

    Gene expression analysis is an essential part of biological and medical investigations. Quantitative real-time PCR (qPCR) is characterized by excellent sensitivity, dynamic range, and reproducibility, and is still regarded as the gold standard for quantifying transcript abundance. Parallelization of qPCR, such as on the microfluidic TaqMan Fluidigm Biomark platform, enables evaluation of multiple transcripts in samples treated under various conditions. Despite these advanced technologies, correct evaluation of the measurements remains challenging. The most widely used methods for evaluating or calculating gene expression data include geNorm and ΔΔCt, respectively. They rely on one or several stable reference genes (RGs) for normalization, thus potentially causing biased results. We therefore applied multivariable regression with a tailored error model to overcome the necessity of stable RGs. We developed an RG-independent data normalization approach based on a tailored linear error model for parallel qPCR data, called LEMming. It uses the assumption that the mean Ct values within samples of similarly treated groups are equal. The performance of LEMming was evaluated in three data sets with different RG stability patterns and compared to the results of geNorm normalization. Data set 1 showed that both methods give similar results if stable RGs are available. Data set 2 included RGs which are stable according to the geNorm criteria but became differentially expressed in normalized data evaluated by a t-test; geNorm-normalized data showed an effect of a shifted mean per gene per condition, whereas LEMming-normalized data did not. Comparing the decrease in standard deviation from the raw data achieved by geNorm and by LEMming, the latter was superior. In data set 3, stable RGs were available according to the geNorm average expression stability and pairwise variation, but t-tests of the raw data contradicted this. Normalization with RGs resulted in distorted data contradicting…
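    LEMming's core assumption, equal mean Ct within similarly treated samples, suggests a reference-gene-free normalization in which each sample's Ct values are centered on that sample's mean. The sketch below shows only that centering idea, not the full linear error model:

```python
import numpy as np

def mean_center_normalize(ct):
    """ct: samples x genes matrix of Ct values.
    Remove per-sample technical offsets by subtracting each sample's mean Ct,
    under the assumption that group-wise mean expression is equal."""
    return ct - ct.mean(axis=1, keepdims=True)

# two replicate samples measuring the same 4 genes;
# sample 2 carries a +1.5 Ct loading/efficiency offset
ct = np.array([[20.0, 25.0, 18.0, 30.0],
               [21.5, 26.5, 19.5, 31.5]])
norm = mean_center_normalize(ct)
print(np.allclose(norm[0], norm[1]))  # → True: the sample offset is removed
```

Unlike ΔΔCt, no gene is singled out as a reference, so an unstable "reference" gene cannot bias all the others.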

  6. Influence of the plain-parallel electrode surface dimensions on the type A measurement uncertainty of GM counter

    Directory of Open Access Journals (Sweden)

    Stanković Koviljka Đ.

    2011-01-01

    This paper investigates, through theory and experiment, the influence of changes in the plain-parallel electrode surface dimensions on the type A measurement uncertainty of a GM counter. The possibility of applying these results to practical structures is examined using the methods of mathematical statistics. Special attention is devoted to the influence of electrode surface enlargement on the statistical behavior of the pulse-number random variable, expressed in the form of an enlargement law. In the theoretical part of the paper, the general surface enlargement law is derived. Comparison of experimental results with those predicted by the surface enlargement law proved its validity for expressing the type A measurement uncertainty of GM counters constructed with a plain-parallel electrode configuration and a homogeneous electric field.

  7. Air bubble-induced detachment of polystyrene particles with different sizes from collector surfaces in a parallel plate flow chamber

    NARCIS (Netherlands)

    Gomez-Suarez, C; van der Mei, HC; Busscher, HJ

    2001-01-01

    Particle size was found to be an important factor in air bubble-induced detachment of colloidal particles from collector surfaces in a parallel plate flow chamber, and generally polystyrene particles with a diameter of 806 nm detached less than particles with a diameter of 1400 nm. Particle…

  8. Laser pulse transient method for measuring the normal spectral emissivity of samples with arbitrary surface quality

    Science.gov (United States)

    Jeromen, A.; Grabec, I.; Govekar, E.

    2008-09-01

    A laser pulse transient method for measuring normal spectral emissivity is described. In this method, a laser pulse ( λ=1064 nm) irradiates the top surface of a flat specimen. A two-dimensional temperature response of the bottom surface is measured with a calibrated thermographic camera. By solving an axisymmetric boundary value heat conduction problem, the normal spectral emissivity at 1064 nm is determined by using an iterative nonlinear least-squares estimation procedure. The method can be applied to arbitrary sample surface quality. The method is tested on a nickel specimen and used to determine the normal spectral emissivity of AISI 304 stainless steel. The expanded combined uncertainty of the method has been estimated to be 18%.

  9. Detachment of polystyrene particles from collector surfaces by surface tension forces induced by air-bubble passage through a parallel plate flow chamber

    NARCIS (Netherlands)

    Wit, PJ; vanderMei, HC; Busscher, HJ

    1997-01-01

    By allowing an air-bubble to pass through a parallel plate flow chamber with negatively charged, colloidal polystyrene particles adhering to the bottom collector plate of the chamber, the detachment of adhering particles stimulated by surface tension forces induced by the passage of a liquid-air…

  10. Cell and fiber attachment to demineralized dentin from normal root surfaces.

    Science.gov (United States)

    Hanes, P J; Polson, A M; Ladenheim, S

    1985-12-01

    The study assessed connective tissue and epithelial responses to dentin specimens (obtained from normal roots of human teeth) after surface demineralization. Rectangular dental specimens with opposite faces of root and pulpal dentin were prepared from beneath root surfaces covered by periodontal ligament. One-half of the specimens were treated with citric acid, pH 1, for 3 minutes, while the remainder served as untreated control specimens. Specimens were implanted vertically into incisional wounds on the dorsal surface of rats with one end of the implant protruding through the skin. Four specimens in each group were available 1, 3, 5 and 10 days after implantation. Histologic and histometric analyses included counts of adhering cells, evaluation of connective tissue fiber relationships and assessment of epithelial migration. Analyses within each group comparing root and pulpal surfaces showed no differences between any of the parameters. Comparisons between experimental and control groups showed that demineralized surfaces had a greater number of cells attached, fiber attachment occurred and epithelial downgrowth was inhibited. The fiber attachment to experimental specimens differed morphologically from fiber attachment to normal root surfaces: the number of fibers attached per unit length and the diameter of attached fibers were significantly less on experimental specimens. Demineralized specimens at 10 days had a distinct eosinophilic surface zone. Surface demineralization of dentin predisposed toward a cell and fiber attachment system which inhibited migration of epithelium.

  11. Selecting the induction heating for normalization of deposited surfaces of cylindrical parts

    Directory of Open Access Journals (Sweden)

    Олена Валеріївна Бережна

    2017-07-01

    Machine parts restored by electric contact surfacing with a metal strip are characterized by high loading of the surface layer, which has a significant impact on their performance; the operational stability of fast-wearing machine parts therefore needs to be improved through combined treatment technologies. Not the whole work-piece but only the worn zones are subjected to recovery by electric contact surfacing, the tape thickness and the depth of the heat-affected zone being no more than a few millimetres. The most suitable option in this case is therefore local surface heating by high-frequency currents, which is also economical because there is no need to heat the entire work-piece. An induction heating mode at constant power density has been proposed and analytically investigated. Relations are given that make it possible to determine the main heating parameters, namely specific power, frequency, and warm-up time, and thus to design the inductor for normalizing the restored surfaces of cylindrical parts. The proposed induction heating mode is intermediate between quenching and through heating, and makes it possible to obtain the required temperatures simultaneously at the surface and at a predetermined depth of the heated layer of cylindrical parts whose surfaces have been restored by electric contact surfacing

  12. Normal Contacts of Lubricated Fractal Rough Surfaces at the Atomic Scale

    NARCIS (Netherlands)

    Solhjoo, Soheil; Vakis, Antonis I.

    The friction of contacting interfaces is a function of surface roughness and applied normal load. Under boundary lubrication, this frictional behavior changes as a function of lubricant wettability, viscosity, and density, by practically decreasing the possibility of dry contact. Many studies on…

  13. Surface structures of normal paraffins and cyclohexane monolayers and thin crystals grown on the (111) crystal face of platinum. A low-energy electron diffraction study

    International Nuclear Information System (INIS)

    Firment, L.E.; Somorjai, G.A.

    1977-01-01

    The surfaces of the normal paraffins (C3-C8) and cyclohexane have been studied using low-energy electron diffraction (LEED). The samples were prepared by vapor deposition on the (111) face of a platinum single crystal in ultrahigh vacuum, and were studied both as thick films and as adsorbed monolayers. These molecules form ordered monolayers on the clean metal surface in the temperature range 100-220 K and at a vapor flux corresponding to 10^-7 Torr. In the adsorbed monolayers of the normal paraffins (C4-C8), the molecules lie with their chain axes parallel to the Pt surface and to Pt[110]. The paraffin monolayer structures undergo order-disorder transitions as a function of temperature. Multilayers condensed upon the ordered monolayers maintain the same orientation and packing as found in the monolayers. The surface structures of the growing organic crystals do not correspond to planes in their reported bulk crystal structures and are evidence for epitaxial growth of pseudomorphic crystal forms. Multilayers of n-octane and n-heptane condensed upon disordered monolayers have also grown with the (001) plane of the triclinic bulk crystal structures parallel to the surface. n-Butane has three monolayer structures on Pt(111), one of which is maintained during growth of the crystal. Cyclohexane forms an ordered monolayer, upon which a multilayer of cyclohexane grows exhibiting the (001) surface orientation of the monoclinic bulk crystal structure. Surface structures of saturated hydrocarbons are found to be very susceptible to electron-beam-induced damage. Surface charging interferes with LEED only at sample thicknesses greater than 200 Å

  14. Standard Test Methods for Total Normal Emittance of Surfaces Using Inspection-Meter Techniques

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1971-01-01

    1.1 These test methods cover determination of the total normal emittance (Note 1) of surfaces by means of portable, inspection-meter instruments. Note 1 — Total normal emittance (εN) is defined as the ratio of the normal radiance of a specimen to that of a blackbody radiator at the same temperature. The equation relating εN to wavelength and spectral normal emittance εN(λ) is εN = ∫₀^∞ εN(λ) L_b(λ, T) dλ / ∫₀^∞ L_b(λ, T) dλ, where: L_b(λ, T) = Planck's blackbody radiation function = c1 π^-1 λ^-5 (e^(c2/λT) − 1)^-1, c1 = 3.7415 × 10^-16 W·m^2, c2 = 1.4388 × 10^-2 m·K, T = absolute temperature, K, λ = wavelength, m, ∫₀^∞ L_b(λ, T) dλ = σ π^-1 T^4, and σ = Stefan-Boltzmann constant = 5.66961 × 10^-8 W·m^-2·K^-4. 1.2 These test methods are intended for measurements on large surfaces when rapid measurements must be made and where a nondestructive test is desired. They are particularly useful for production control tests. 1.3 The values stated in SI units are to be regarded as standard. No other units of measu...
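    The Planck-weighted average defining total normal emittance can be evaluated numerically. A sketch using the constants quoted in the standard and a made-up spectral emissivity curve (the integration limits and the emissivity function are illustrative assumptions):

```python
import numpy as np

c1 = 3.7415e-16      # W·m^2
c2 = 1.4388e-2       # m·K

def planck(lam, T):
    """Blackbody spectral radiance L_b(lambda, T)."""
    return (c1 / np.pi) * lam**-5 / np.expm1(c2 / (lam * T))

def integrate(y, x):
    """Trapezoidal rule on a uniform grid."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

def total_normal_emittance(eps_fn, T, lam_min=0.5e-6, lam_max=100e-6, n=20000):
    """eps_N = ∫ eps_N(λ) L_b dλ / ∫ L_b dλ over a finite wavelength window."""
    lam = np.linspace(lam_min, lam_max, n)
    Lb = planck(lam, T)
    return integrate(eps_fn(lam) * Lb, lam) / integrate(Lb, lam)

# hypothetical surface: emissivity drifting from 0.9 to 0.8 across the window
eps = lambda lam: 0.9 - 0.1 * (lam - 0.5e-6) / (100e-6 - 0.5e-6)
eN = total_normal_emittance(eps, 300.0)
print(round(eN, 2))
```

At 300 K most of the blackbody radiance falls between roughly 5 and 50 µm, so the weighted result lands near the emissivity in that band rather than at the simple average of the curve's endpoints.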

  15. Large enhancement of thermoelectric effects in a tunneling-coupled parallel DQD-AB ring attached to one normal and one superconducting lead

    Science.gov (United States)

    Yao, Hui; Zhang, Chao; Li, Zhi-Jian; Nie, Yi-Hang; Niu, Peng-bin

    2018-05-01

    We theoretically investigate the thermoelectric properties of a tunneling-coupled parallel DQD-AB ring attached to one normal and one superconducting lead. The role of intrinsic and extrinsic parameters in improving the thermoelectric properties is discussed. The peak value of the figure of merit near the gap edges increases as the asymmetry parameter decreases; in particular, when the asymmetry parameter is less than 0.5, the figure of merit near the gap edges rises rapidly. When the interdot coupling strength is less than the superconducting gap, the thermopower spectrum presents a single-platform structure, while when the interdot coupling strength is larger than the gap, a double-platform structure appears in the thermopower spectrum. Outside the gap the peak values of the figure of merit can reach the order of 10². On the basis of optimizing the internal parameters, the thermoelectric conversion efficiency of the device can be further improved by appropriately matching the total magnetic flux and the flux difference between the two subrings.
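For context, the figure of merit quoted above is the standard dimensionless combination ZT = S²GT/κ of thermopower S, electrical conductance G, temperature T, and thermal conductance κ. The sketch below simply evaluates it and the corresponding Carnot-fraction efficiency bound; all numerical values are hypothetical, not taken from the paper.

```python
def figure_of_merit(S, G, T, kappa):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * G * T / kappa."""
    return S**2 * G * T / kappa

def max_efficiency_fraction(ZT):
    """Maximum conversion efficiency as a fraction of the Carnot limit,
    in the small temperature-difference limit: (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + 1)."""
    r = (1.0 + ZT) ** 0.5
    return (r - 1.0) / (r + 1.0)

# Hypothetical device values: S in V/K, G in siemens, kappa in W/K, T in K.
ZT = figure_of_merit(S=2e-4, G=8e-5, T=0.3, kappa=1e-12)
eta = max_efficiency_fraction(ZT)
```

The efficiency fraction saturates toward 1 only as ZT grows without bound, which is why peak ZT values of order 10² outside the gap are significant.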

  16. Renal function maturation in children: is normalization to surface area valid?

    International Nuclear Information System (INIS)

    Rutland, M.D.; Hassan, I.M.; Que, L.

    1999-01-01

    Full text: Gamma camera DTPA renograms were analysed to measure renal function from the rate at which the kidneys took up tracer from the blood. This was expressed either directly as the fractional uptake rate (FUR), which is not related to body size, or it was converted to a camera-based GFR by the formula GFR = blood volume × FUR, and this GFR was normalized to a body surface area of 1.73 m². Most of the patients studied had one completely normal kidney, and one kidney with reflux but normal function and no large scars. The completely normal kidneys contributed, on average, 50% of the total renal function. The results were considered in age bands, to display the effect of age on renal function. The camera-GFR measurements showed the conventional results of poor renal function in early childhood, with a slow rise to near-adult values by the age of 2 years, and somewhat low values throughout childhood. The uptake values showed a different pattern, with renal function rising to adult-equivalent values by the age of 4 months, and with children having better renal function than adults throughout most of their childhood. The standard deviations expressed as coefficients of variation (CV) were smaller for the FUR technique than for the GFR (Wilcoxon rank test, P < 0.01). These results resemble recent published measurements of absolute DMSA uptake, which are also unrelated to body size and show early renal maturation. The results also suggest that the reason children have lower serum creatinine levels than adults is that they have better renal function. If this were confirmed, it would raise doubts about the usefulness of normalizing renal function to body surface area in children
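The two quantities compared in the study are related by the formula given above, GFR = blood volume × FUR, with the camera GFR then scaled to the conventional 1.73 m² reference surface area. A minimal sketch of that arithmetic, with all patient numbers hypothetical:

```python
def camera_gfr(blood_volume_ml, fur_per_min):
    """Camera-based GFR (mL/min) = blood volume (mL) x fractional uptake rate (1/min)."""
    return blood_volume_ml * fur_per_min

def normalize_to_bsa(gfr, bsa_m2):
    """Scale a GFR to the conventional 1.73 m^2 reference body surface area."""
    return gfr * 1.73 / bsa_m2

# Hypothetical child: 1.2 L blood volume, FUR of 0.075 min^-1, BSA of 0.8 m^2.
gfr = camera_gfr(1200.0, 0.075)        # 90 mL/min
gfr_norm = normalize_to_bsa(gfr, 0.8)  # mL/min per 1.73 m^2
```

The normalization step is exactly the convention the article questions: FUR skips it entirely, while camera GFR inherits any bias it introduces in small children.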

  17. Entropy Generation Analysis of Open Parallel Microchannels Embedded Within a Permeable Continuous Moving Surface: Application to Magnetohydrodynamics (MHD)

    Directory of Open Access Journals (Sweden)

    Mohammad H. Yazdi

    2011-12-01

    Full Text Available This paper presents a new design of open parallel microchannels embedded within a permeable continuous moving surface for reducing exergy losses in magnetohydrodynamic (MHD) flow at a prescribed surface temperature (PST). The entropy generation number is formulated by an integral of the local rate of entropy generation along the width of the surface, based on an equal number of microchannels and no-slip gaps interspersed between those microchannels. The velocity, the temperature, the velocity gradient and the temperature gradient adjacent to the wall are substituted into this equation, the momentum and energy equations having been solved numerically by an explicit Runge-Kutta (4,5) formula (the Dormand-Prince pair) and the shooting method. The entropy generation number, as well as the Bejan number, is presented and discussed in detail for various values of the parameters of the problem.

  18. Reducing Entropy Generation in MHD Fluid Flow over Open Parallel Microchannels Embedded in a Micropatterned Permeable Surface

    Directory of Open Access Journals (Sweden)

    Ishak Hashim

    2013-11-01

    Full Text Available The present study examines embedded open parallel microchannels within a micropatterned permeable surface for reducing entropy generation in MHD fluid flow in microscale systems. A local similarity solution for the transformed governing equations is obtained. The governing partial differential equations along with the boundary conditions are first cast into a dimensionless form and then the reduced ordinary differential equations are solved numerically via the Dormand-Prince pair and shooting method. The dimensionless entropy generation number is formulated by an integral of the local rate of entropy generation along the width of the surface based on an equal number of microchannels and no-slip gaps interspersed between those microchannels. Finally, the entropy generation numbers, as well as the Bejan number, are investigated. It is seen that surface-embedded microchannels can successfully reduce entropy generation in the presence of an applied magnetic field.
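Both studies report a width-averaged entropy generation number and the Bejan number. A minimal sketch of those two bookkeeping steps, assuming the local entropy generation rates over the slip (microchannel) and no-slip segments have already been obtained from the solved boundary-layer profiles:

```python
def entropy_generation_number(ns_slip, ns_noslip, channel_fraction):
    """Width-average of the local entropy generation rate over alternating
    open microchannels (slip) and no-slip gaps; channel_fraction is the
    fraction of the surface width occupied by microchannels."""
    return channel_fraction * ns_slip + (1.0 - channel_fraction) * ns_noslip

def bejan_number(ns_heat, ns_friction):
    """Be = heat-transfer entropy generation / total entropy generation."""
    return ns_heat / (ns_heat + ns_friction)

# Hypothetical local rates: slip regions generate less entropy than no-slip gaps,
# so embedding microchannels lowers the width-averaged entropy generation number.
ns_avg = entropy_generation_number(ns_slip=0.4, ns_noslip=1.0, channel_fraction=0.5)
be = bejan_number(ns_heat=0.5, ns_friction=0.2)
```

Be close to 1 indicates heat-transfer-dominated irreversibility; Be near 0 indicates fluid-friction and magnetic dominance.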

  19. Using parallel computing methods to improve log surface defect detection methods

    Science.gov (United States)

    R. Edward Thomas; Liya. Thomas

    2013-01-01

    Determining the size and location of surface defects is crucial to evaluating the potential yield and value of hardwood logs. Recently a surface defect detection algorithm was developed using the Java language. This algorithm was developed around an earlier laser scanning system that had poor resolution along the length of the log (15 scan lines per foot). A newer...

  20. Trajectories of cortical surface area and cortical volume maturation in normal brain development

    Directory of Open Access Journals (Sweden)

    Simon Ducharme

    2015-12-01

    Full Text Available This is a report of developmental trajectories of cortical surface area and cortical volume in the NIH MRI Study of Normal Brain Development. The quality-controlled sample included 384 individual typically-developing subjects with repeated scanning (1–3 scans per subject; total scans n=753) from 4.9 to 22.3 years of age. The best-fit model (cubic, quadratic, or first-order linear) was identified at each vertex using mixed-effects models, with statistical correction for multiple comparisons using random field theory. Analyses were performed with and without controlling for total brain volume. These data are provided for reference and comparison with other databases. Further discussion and interpretation of cortical developmental trajectories can be found in the associated article by Ducharme et al., “Trajectories of cortical thickness maturation in normal brain development – the importance of quality control procedures” (Ducharme et al., 2015) [1].

  1. Internal structure of normal maize starch granules revealed by chemical surface gelatinization.

    Science.gov (United States)

    Pan, D D; Jane, J I

    2000-01-01

    Normal maize starch was fractionated into two sizes: large granules with diameters more than 5 microns and small granules with diameters less than 5 microns. The large granules were surface gelatinized by treating them with an aqueous LiCl solution (13 M) at 22-23 degrees C. Surface-gelatinized remaining granules were obtained by mechanical blending, and gelatinized surface starch was obtained by grinding with a mortar and a pestle. Starches of different granular sizes and radial locations, obtained after different degrees of surface gelatinization, were subjected to scanning electron microscopy, iodine potentiometric titration, gel-permeation chromatography, and amylopectin branch chain length analysis. Results showed that the remaining granules had a rough surface with a lamella structure. Amylose was more concentrated at the periphery than at the core of the granule. Amylopectin had longer long B-chains at the core than at the periphery of the granule. Greater proportions of the long B-chains were present at the core than at the periphery of the granule.

  2. Normal appearance of the prostate and seminal tract: MR imaging using an endorectal surface coil

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myeong Jin; Lee, Jong Tae; Lee, Moo Sang; Choi, Pil Sik; Hong, Sung Joon; Lee, Yeon Hee; Choi, Hak Yong [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    1994-06-15

    To assess the ability of MR imaging with an endorectal surface coil to depict the normal anatomical structures of the prostate and its adjacent organs, MR imaging using an endorectal surface coil was performed in 23 male patients (age 20-75) to evaluate various prostatic and vasovesicular disorders, i.e., 14 cases of ejaculatory problems, 3 cases of hypogonadism, 4 cases of prostatic cancer and 2 cases of benign prostatic hyperplasia. MR images were obtained as axial, sagittal and coronal fast spin echo long TR/TE images and axial spin echo short TR/TE images. The field of view was 10-12 cm and the scan thickness was 3-5 mm. Depiction of normal anatomical structures was excellent in all cases. On T2WI, the zonal anatomy of the prostate and the prostatic urethra, urethral crest, and ejaculatory ducts were clearly visualized. On T1WI, the periprostatic fat plane was more clearly visualized. On transverse images, periprostatic structures were well visualized on T1WI, and on T2WI the anterior fibromuscular stroma, transition zone and peripheral zone could be readily differentiated. Coronal images were more helpful for visualizing both the central and peripheral zones; the vas deferens, ejaculatory ducts and verumontanum were also more easily defined on these images. Sagittal images were helpful for depicting the anterior fibromuscular stroma, central zone and peripheral zone together with the prostatic urethra and ejaculatory duct in a single plane. High-resolution MR imaging with an endorectal surface coil can readily visualize the normal anatomy of the prostate and its related structures and may be useful in the evaluation of various diseases of the prostate and vasovesicular system.

  3. On the discrepancy in measurement of Q using surface waves and normal modes

    Science.gov (United States)

    Meschede, M.; Romanowicz, B. A.

    2012-12-01

    We revisit the decade-old unsolved problem of why measurements of the quality factor (Q) for fundamental-mode propagating Rayleigh waves differ by up to 20% from those obtained using normal modes, in the frequency band where both approaches are possible. Surface wave measurements consistently yield lower Q values than modes. Since it is unclear which measurement is more accurate, this is currently a limitation on the resolution of 1D average Q profiles in the Earth, compounded by the fact that the measurement bias may affect not only the region of the spectrum where both methods are available but every Q measurement based on one or the other of these techniques. We investigate the effect of elastic focussing and defocussing on long time series using a spectral element method that we have shown to be accurate enough for the relevant period ranges and the necessarily long time series. While previous investigations were based on approximate methods that are only valid for smooth 3D models and weak heterogeneities, the SEM allows us to estimate the effect of more realistic distributions of heterogeneities on amplitude measurements, and therefore on Q. Our investigations show a bias towards lower Q in the first-arriving surface wave trains and a bias towards higher Q in later arrivals, which could explain the mode/surface-wave discrepancy. Heuristically this can be explained by the fact that energy scattered off the great-circle path is brought back onto the great circle after multiple orbits, leading to increased amplitude in late arrivals. Further, we reinvestigate the effects of noise, which predominantly influences the later part of the seismogram, the effect of post-processing, as well as mode amplitude modulations that could potentially bias the measurements. We plan to present preliminary results on applying our insights to debias real data and reduce the error bounds on 1D Q models from normal modes and surface waves.
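The quantity at stake can be illustrated with the standard amplitude-decay relation A(t) = A₀·exp(−π f t / Q): Q is estimated by comparing the amplitudes of successive surface-wave trains, so any focussing-induced amplitude perturbation maps directly into a Q bias. The numbers below are illustrative, not from the study.

```python
import math

def q_from_amplitudes(a1, a2, dt, freq):
    """Q from amplitude decay A(t) = A0 * exp(-pi * f * t / Q) between two
    wave trains observed a time dt apart: Q = -pi * f * dt / ln(a2 / a1)."""
    return -math.pi * freq * dt / math.log(a2 / a1)

# Hypothetical 5 mHz Rayleigh wave whose amplitude halves over ~3 hours (one orbit):
Q_true = q_from_amplitudes(1.0, 0.5, dt=10800.0, freq=0.005)

# Scattered energy returning to the great circle inflates the later amplitude
# by 10%, biasing the apparent Q high -- the sense of bias described above.
Q_biased = q_from_amplitudes(1.0, 0.55, dt=10800.0, freq=0.005)
```

A ~10% amplitude perturbation produces a Q shift of the same order as the reported 20% mode/surface-wave discrepancy.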

  4. Identification of surface species by vibrational normal mode analysis. A DFT study

    Science.gov (United States)

    Zhao, Zhi-Jian; Genest, Alexander; Rösch, Notker

    2017-10-01

    Infrared spectroscopy is an important experimental tool for identifying molecular species adsorbed on a metal surface that can be used in situ. Often, vibrational modes in such IR spectra of surface species are assigned and identified by comparison with vibrational spectra of related (molecular) compounds of known structure, e.g., an organometallic cluster analogue. To check the validity of this strategy, we carried out a computational study in which we compared the normal modes of three C2Hx species (x = 3, 4) in two types of systems: as adsorbates on the Pt(111) surface and as ligands in an organometallic cluster compound. The results of our DFT calculations reproduce the experimentally observed frequencies with deviations of at most 50 cm⁻¹. However, the frequencies of the C2Hx species in both types of systems have to be interpreted with due caution if the coordination mode is unknown. The comparative identification strategy works satisfactorily when the coordination mode of the molecular species (ethylidyne) is similar on the surface and in the metal cluster. However, large shifts are encountered when the molecular species (vinyl) exhibits different coordination modes on the two types of substrates.

  5. Distribution of Different Sized Ocular Surface Vessels in Diabetics and Normal Individuals.

    Science.gov (United States)

    Banaee, Touka; Pourreza, Hamidreza; Doosti, Hassan; Abrishami, Mojtaba; Ehsaei, Asieh; Basiry, Mohsen; Pourreza, Reza

    2017-01-01

    To compare the distribution of different sized vessels using digital photographs of the ocular surface of diabetic and normal individuals. In this cross-sectional study, red-free conjunctival photographs of diabetic and normal individuals, aged 30-60 years, were taken under defined conditions and analyzed using a Radon transform-based algorithm for vascular segmentation. The image areas occupied by vessels (AOV) of different diameters were calculated. The main outcome measure was the distribution curve of mean AOV of different sized vessels. Secondary outcome measures included total AOV and standard deviation (SD) of AOV of different sized vessels. Two hundred and sixty-eight diabetic patients and 297 normal (control) individuals were included, differing in age (45.50 ± 5.19 vs. 40.38 ± 6.19 years). The distribution curves of mean AOV differed between patients and controls, with smaller AOV for larger vessels in patients and a shifted distribution curve of vessels compared to controls. Presence of diabetes mellitus is associated with contraction of larger vessels in the conjunctiva. Smaller vessels dilate with diabetic retinopathy. These findings may be useful in the photographic screening of diabetes mellitus and retinopathy.

  6. Surface Reconstruction from Parallel Curves with Application to Parietal Bone Fracture Reconstruction.

    Directory of Open Access Journals (Sweden)

    Abdul Majeed

    Full Text Available Maxillofacial trauma is common, secondary to road traffic accidents, sports injuries and falls, and requires sophisticated radiological imaging for precise diagnosis. Direct surgical reconstruction is complex and requires clinical expertise. Bio-modelling helps in reconstructing a surface model from 2D contours. In this manuscript we construct a 3D surface using 2D Computerized Tomography (CT) scan contours. The fractured part of the cranial vault is reconstructed using a GC1 rational cubic Ball curve with three free parameters; the 2D contours are then flipped into 3D with an equidistant z component. The constructed surface is represented by a contour-blending interpolant. At the end of this manuscript a case report of a parietal bone fracture is also illustrated, applying this method with a Graphical User Interface (GUI).
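The contour-stacking step described above (2D slices flipped into 3D with an equidistant z component, then blended into a surface) can be sketched as a simple loft. The per-slice GC1 rational Ball-curve fitting is assumed already done; this sketch only stacks the fitted slices and triangulates between them, assuming each slice is sampled with the same number of points.

```python
import numpy as np

def loft_contours(contours, dz):
    """Stack parallel 2D contours (each an (n, 2) sequence of x,y points) into a
    3D point cloud, giving slice k the height z = k * dz, and build triangles
    between consecutive slices (simple lofting; equal point counts assumed)."""
    points, triangles = [], []
    n = len(contours[0])
    for k, c in enumerate(contours):
        z = np.full((n, 1), k * dz)
        points.append(np.hstack([np.asarray(c, float), z]))
    for k in range(len(contours) - 1):
        base = k * n
        for i in range(n):
            j = (i + 1) % n
            # two triangles per quad between slice k and slice k+1
            triangles.append((base + i, base + j, base + n + i))
            triangles.append((base + j, base + n + j, base + n + i))
    return np.vstack(points), triangles
```

For c contours of n points each, this yields c·n vertices and 2n(c−1) triangles, which a viewer or mesh library can render directly.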

  7. A Case Study of a Hybrid Parallel 3D Surface Rendering Graphics Architecture

    DEFF Research Database (Denmark)

    Holten-Lund, Hans Erik; Madsen, Jan; Pedersen, Steen

    1997-01-01

    This paper presents a case study in the design strategy used in building a graphics computer for drawing very complex 3D geometric surfaces. The goal is to build a PC-based computer system capable of handling surfaces built from about 2 million triangles, and to be able to render a perspective view ... the clock frequency as well as the parallelism of the system. This paper focuses on the back-end graphics pipeline, which is responsible for rasterizing triangles. ... with a practically linear increase in performance. A pure software implementation of the proposed architecture is currently able to process 300...

  8. Surface plasmon resonance biosensor for parallelized detection of protein biomarkers in diluted blood plasma

    Czech Academy of Sciences Publication Activity Database

    Piliarik, Marek; Bocková, Markéta; Homola, Jiří

    2010-01-01

    Roč. 26, č. 4 (2010), s. 1656-1661 ISSN 0956-5663 R&D Projects: GA AV ČR KAN200670701 Institutional research plan: CEZ:AV0Z20670512 Keywords : Surface plasmon resonance * Protein array * Cancer marker Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 5.361, year: 2010

  9. Evaluation of fault-normal/fault-parallel directions rotated ground motions for response history analysis of an instrumented six-story building

    Science.gov (United States)

    Kalkan, Erol; Kwong, Neal S.

    2012-01-01

    According to regulatory building codes in the United States (for example, the 2010 California Building Code), at least two horizontal ground-motion components are required for three-dimensional (3D) response history analysis (RHA) of buildings. For sites within 5 km of an active fault, these records should be rotated to fault-normal/fault-parallel (FN/FP) directions, and two RHAs should be performed separately (with FN and then FP aligned with the transverse direction of the structural axes). It is assumed that this approach will lead to two sets of responses that envelope the range of possible responses over all nonredundant rotation angles. This assumption is examined here using a 3D computer model of a six-story reinforced-concrete instrumented building subjected to an ensemble of bidirectional near-fault ground motions. Peak responses of engineering demand parameters (EDPs) were obtained for rotation angles ranging from 0° through 180° to evaluate the FN/FP directions. It is demonstrated that rotating ground motions to FN/FP directions (1) does not always lead to the maximum responses over all angles, (2) does not always envelope the range of possible responses, and (3) does not provide maximum responses for all EDPs simultaneously even if it provides a maximum response for a specific EDP.
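The FN/FP rotation itself is a plane rotation of the two recorded horizontal components by the fault strike angle. A minimal sketch, assuming the records are oriented east and north and the strike is measured clockwise from north (conventions that must be verified against the actual record metadata):

```python
import numpy as np

def to_fault_normal_parallel(acc_x, acc_y, strike_deg):
    """Rotate horizontal components (x = east, y = north assumed) into
    fault-parallel (along strike) and fault-normal components, with the
    strike measured clockwise from north."""
    s = np.radians(strike_deg)
    fp = np.sin(s) * np.asarray(acc_x) + np.cos(s) * np.asarray(acc_y)  # along strike
    fn = np.cos(s) * np.asarray(acc_x) - np.sin(s) * np.asarray(acc_y)  # perpendicular
    return fn, fp
```

Sweeping `strike_deg` through 0°–180° and recomputing peak EDPs reproduces the kind of rotation-angle study the paper performs; the rotation preserves total horizontal energy at every sample.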

  10. Lateral conductance parallel to membrane surfaces: effects of anesthetics and electrolytes at pre-transition.

    Science.gov (United States)

    Yoshino, A; Yoshida, T; Okabayashi, H; Kamaya, H; Ueda, I

    1992-06-11

    The effects of dilute salts and anesthetics were studied on the impedance dispersion in dipalmitoylphosphatidylcholine (DPPC) liposomes. Below the pre-transition temperature, the apparent activation energy for conductance in DPPC-H₂O without salts was equivalent to that of pure water, 18.2 kJ mol⁻¹. This suggests that the mobile ions (H₃O⁺ and OH⁻) interact negligibly with the lipid surface below the pre-transition temperature. At the pre-transition temperature, the apparent activation energy of the conductance decreased with increasing DPPC concentration. The effects of various salts (LiCl, NaCl, KCl, KBr, and KI) on the apparent activation energy of the conductance were studied. Changes in anions, but not in cations, affected the activation energy; the order of the effect was Cl⁻ < Br⁻ < I⁻. Cations appear to be highly immobilized by hydrogen bonding to the phosphate moiety of DPPC. The smaller the ionic radius, the more ions are fixed on the surface at the expense of the free-moving species. The apparent activation energy of the transfer of ions at the vesicle surface was estimated from the temperature dependence of the dielectric constant, and was 61.0 kJ mol⁻¹ in the absence of electrolytes. In the presence of electrolytes, the order of the activation energy was F⁻ > Cl⁻ > Br⁻ > I⁻. When the ionic radius is smaller, these anions interact with the hydration layer at the vesicle surface and ionic transfer may become sluggish. In the absence of electrolytes, the apparent activation energy of the dielectric constant decreased with increasing halothane concentration. In the presence of electrolytes, however, the addition of halothane increased the apparent activation energy. We propose that the adsorption of halothane on the vesicle surface produces two effects: (1) destruction of the hydration shell, and (2) an increase in the binding of electrolytes to the vesicle surface.
In the absence of electrolytes, the
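The apparent activation energies quoted above come from Arrhenius analysis of the conductance, G = A·exp(−Ea/RT). A two-point sketch of that extraction; the temperatures and the round-trip check value are illustrative (18.2 kJ/mol is the paper's pure-water figure, the rest is hypothetical):

```python
import math

R_GAS = 8.314  # gas constant, J mol^-1 K^-1

def activation_energy(g1, t1, g2, t2):
    """Apparent activation energy (J/mol) from conductances g1, g2 measured at
    temperatures t1, t2 (K), assuming Arrhenius behaviour G = A*exp(-Ea/(R*T)):
    Ea = R * ln(g1/g2) / (1/t2 - 1/t1)."""
    return R_GAS * math.log(g1 / g2) / (1.0 / t2 - 1.0 / t1)

# Round-trip check against an 18.2 kJ/mol slope over a hypothetical 300-310 K span:
EA = 18200.0
g300 = math.exp(-EA / (R_GAS * 300.0))
g310 = math.exp(-EA / (R_GAS * 310.0))
ea_est = activation_energy(g300, 300.0, g310, 310.0)
```

In practice one fits ln G against 1/T over many temperatures; the two-point form shows where the sign conventions come from.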

  11. Surface micromachined MEMS deformable mirror based on hexagonal parallel-plate electrostatic actuator

    Science.gov (United States)

    Ma, Wenying; Ma, Changwei; Wang, Weimin

    2018-03-01

    Deformable mirrors (DMs) based on microelectromechanical system (MEMS) technology are increasingly being applied in adaptive optics (AO) systems for astronomical telescopes and the human eye. In this paper a MEMS DM with hexagonal actuators is proposed and designed. The relationship between the structural design and the performance parameters, mainly actuator coupling, is analyzed and calculated, and the optimum value of actuator coupling is obtained. A 7-element DM prototype is fabricated using a commercially available standard three-layer polysilicon surface micromachining multi-user MEMS process (PolyMUMPs). Key performance figures, including the surface figure and the voltage-displacement curve, are measured with a 3D white-light profiler, and the measured performance is very consistent with the theoretical values. The proposed DM will benefit the miniaturization of AO systems and lower their cost.
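For orientation, the classic behaviour of a parallel-plate electrostatic actuator is governed by F = ε₀AV²/(2g²), with a snap-down (pull-in) instability once the plate has travelled one third of the initial gap. A minimal sketch with hypothetical dimensions (not the device parameters of this paper):

```python
def electrostatic_force(voltage, gap, area, eps0=8.854e-12):
    """Attraction between parallel plates: F = eps0 * A * V^2 / (2 * g^2)."""
    return eps0 * area * voltage**2 / (2.0 * gap**2)

def pull_in_voltage(k, g0, area, eps0=8.854e-12):
    """Pull-in limit of a spring-suspended parallel plate,
    V_pi = sqrt(8 * k * g0^3 / (27 * eps0 * A)); stable travel ends at g0/3."""
    return (8.0 * k * g0**3 / (27.0 * eps0 * area)) ** 0.5

# Hypothetical actuator: 1 N/m suspension, 2 um initial gap, 100 um x 100 um plate.
v_pi = pull_in_voltage(k=1.0, g0=2e-6, area=1e-8)
```

At pull-in the electrostatic force evaluated at the reduced gap 2g₀/3 exactly balances the spring force k·g₀/3, which is a useful consistency check when sizing the voltage-displacement curve.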

  12. Parallel Study of HEND, RAD, and DAN Instrument Response to Martian Radiation and Surface Conditions

    Science.gov (United States)

    Martiniez Sierra, Luz Maria; Jun, Insoo; Litvak, Maxim; Sanin, Anton; Mitrofanov, Igor; Zeitlin, Cary

    2015-01-01

    Nuclear detection methods are being used to understand the radiation environment at Mars. JPL (Jet Propulsion Laboratory) assets at Mars include the 2001 Mars Odyssey orbiter [High Energy Neutron Detector (HEND)] and the Mars Science Laboratory rover Curiosity [Radiation Assessment Detector (RAD) and Dynamic Albedo of Neutrons (DAN)]. The spacecraft have instruments able to detect ionizing and non-ionizing radiation. The instrument response on orbit and on the surface of Mars to space weather and local conditions [is discussed]. Data are available at the NASA Planetary Data System (PDS).

  13. The normalization of surface anisotropy effects present in SEVIRI reflectances by using the MODIS BRDF method

    DEFF Research Database (Denmark)

    Proud, Simon Richard; Zhang, Qingling; Schaaf, Crystal

    2014-01-01

    A modified version of the MODerate resolution Imaging Spectroradiometer (MODIS) bidirectional reflectance distribution function (BRDF) algorithm is presented for use in the angular normalization of surface reflectance data gathered by the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) aboard the geostationary Meteosat Second Generation (MSG) satellites. We present early and provisional daily nadir BRDF-adjusted reflectance (NBAR) data in the visible and near-infrared MSG channels. These utilize the high temporal resolution of MSG to produce BRDF retrievals with a much shorter acquisition period than the comparable MODIS products while, at the same time, removing many of the angular perturbations present within the original MSG data. The NBAR data are validated against reflectance data from the MODIS instrument and in situ data gathered at a field location in Africa throughout 2008...

  14. Parallel comparative studies on toxicity of quantum dots synthesized and surface engineered with different methods in vitro and in vivo

    Directory of Open Access Journals (Sweden)

    Liu F

    2017-07-01

    Full Text Available Fengjun Liu,1,* Wen Ye,1,* Jun Wang,2 Fengxiang Song,1 Yingsheng Cheng,3 Bingbo Zhang2 (1Department of Radiology, Shanghai Public Health Clinical Center; 2Institute of Photomedicine, Shanghai Skin Disease Hospital, The Institute for Biomedical Engineering & Nano Science, Tongji University School of Medicine; 3Department of Radiology, Shanghai Sixth People’s Hospital, Shanghai Jiao Tong University, Shanghai, China; *These authors contributed equally to this work) Abstract: Quantum dots (QDs) have been considered to be promising probes for biosensing, bioimaging, and diagnosis. However, their toxicity issues caused by heavy metals in QDs remain to be addressed, in particular for their in vivo biomedical applications. In this study, a parallel comparative investigation in vitro and in vivo is presented to disclose the impact of synthetic methods and subsequent surface modifications on the toxicity of QDs. Cellular assays after exposure to QDs were conducted, including cell viability assessment, DNA breakage study at the single-cell level, intracellular reactive oxygen species (ROS) measurement, and transmission electron microscopy, to evaluate their toxicity in vitro. Mouse experiments after QD administration, including analysis of hemobiological indices, pharmacokinetics, histological examination, and body weight, were further carried out to evaluate their systematic toxicity in vivo. Results show that QDs fabricated by the thermal decomposition approach in organic phase and encapsulated by an amphiphilic polymer (denoted QDs-1) present the least toxicity in acute damage, compared with QDs surface engineered by glutathione-mediated ligand exchange (denoted QDs-2) and those prepared by the coprecipitation approach in aqueous phase with mercaptopropionic acid capping (denoted QDs-3). With the extension of the investigation time of mice respectively injected with QDs, we found that the damage caused by QDs to the organs can be

  15. Body surface area in normal-weight, overweight, and obese adults. A comparison study.

    Science.gov (United States)

    Verbraecken, Johan; Van de Heyning, Paul; De Backer, Wilfried; Van Gaal, Luc

    2006-04-01

    Values for body surface area (BSA) are commonly used in medicine, particularly to calculate doses of chemotherapeutic agents and to index cardiac output. Various BSA formulas have been developed over the years. The DuBois and DuBois (Arch Intern Med 1916;17:863-71) BSA equation is the most widely used, although derived from only 9 subjects. More recently, Mosteller (N Engl J Med 1987;317:1098) produced a simple formula, [weight (kg) × height (cm)/3600]^(1/2), which can be easily remembered and evaluated on a pocket calculator, but validation data in adults are rare. The purpose of the present study was to examine the BSA based on Mosteller's formula in normal-weight (body mass index [BMI], 20-24.9 kg/m²), overweight (BMI, 25-29.9 kg/m²), and obese (BMI ≥30 kg/m²) adults (>18 years old) in comparison with other empirically derived formulas (DuBois and DuBois, Boyd [The growth of the surface area of the human body. Minneapolis: University of Minnesota Press; 1935], Gehan and George [Cancer Chemother Rep 1970;54:225-35], US Environmental Protection Agency [Development of statistical distributions or ranges of standard factors used in exposure assessments. Washington, EPA/600/8-85-010. Office of Health and Environmental Assessment; 1985], Haycock et al [J Pediatr 1978;93:62-6], Mattar [Crit Care Med 1989;17:846-7], Livingston and Scott [Am J Physiol Endocrinol Metab 2001;281:E586-91]) and with the new 3-dimensional-derived formula of Yu et al (Appl Ergon 2003;34:273-8). One thousand eight hundred sixty-eight patients were evaluated (397 normal weight [BMI, 23 ± 1 kg/m²; age, 50 ± 14 years; M/F, 289/108], 714 overweight [BMI, 27 ± 1 kg/m²; age, 52 ± 11 years; M/F, 594/120], and 757 obese [BMI, 36 ± 6 kg/m²; age, 53 ± 11 years; M/F, 543/215]). The overall BSA was 2.04 ± 0.24 m²: 1.81 ± 0.19 m² in normal-weight, 1.99 ± 0.16 m² in overweight, and 2.21 ± 0.22 m² in obese subjects. 
These values were significantly higher in overweight
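Mosteller's formula, as given above, and the DuBois and DuBois reference formula are both one-liners; the sketch below implements them for comparison (the DuBois coefficients 0.007184, 0.425, and 0.725 are the standard published values).

```python
def bsa_mosteller(weight_kg, height_cm):
    """Mosteller (1987): BSA = sqrt(weight * height / 3600), in m^2."""
    return (weight_kg * height_cm / 3600.0) ** 0.5

def bsa_dubois(weight_kg, height_cm):
    """DuBois and DuBois (1916): BSA = 0.007184 * W^0.425 * H^0.725, in m^2."""
    return 0.007184 * weight_kg**0.425 * height_cm**0.725

# For a 70 kg, 175 cm adult the two formulas agree to within about 1%.
m = bsa_mosteller(70, 175)
d = bsa_dubois(70, 175)
```

Both land near the study's normal-weight mean of 1.81 ± 0.19 m², which is the regime where the formulas were derived; the study's question is how they diverge in overweight and obese subjects.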

  16. Controlled parallel crystallization of lithium disilicate and diopside using a combination of internal and surface nucleation

    Directory of Open Access Journals (Sweden)

    Markus Rampf

    2016-10-01

    Full Text Available In the mid-20th century, Dr. Donald Stookey identified the importance and usability of nucleating agents and mechanisms for the development of glass-ceramic materials. Today, a number of internal and surface mechanisms, as well as combinations thereof, are established in the production of glass-ceramic materials. In order to create innovative new material properties, the present study focuses on the precipitation of CaMgSi2O6 as a minor phase in Li2Si2O5-based glass-ceramics. In the base glass of the SiO2-Li2O-P2O5-Al2O3-K2O-MgO-CaO system, P2O5 serves as the nucleating agent for the internal precipitation of Li2Si2O5 crystals, while mechanical activation of the glass surface by means of ball milling is necessary to nucleate the minor CaMgSi2O6 crystal phase. For successful precipitation of CaMgSi2O6, a minimum ratio of MgO and CaO in the range between 1.4 mol% and 2.9 mol% in the base glasses was determined. The nucleation and crystallization of both crystal phases take place during sintering of a powder compact. Depending on the quality of the sintering process, the dense Li2Si2O5-CaMgSi2O6 glass-ceramics show a mean biaxial strength of up to 392 ± 98 MPa. The microstructure of the glass-ceramics is formed by large (5-10 µm) bar-like CaMgSi2O6 crystals randomly embedded in a matrix of small (≤0.5 µm) plate-like Li2Si2O5 crystals arranged in an interlocking manner. While there is no significant influence of the minor CaMgSi2O6 phase on the strength of the material, the translucency of the material decreases upon precipitation of the minor phase.

  17. High Quality Superconductor–Normal Metal Junction Made on the Surface of MoS2 Flakes

    NARCIS (Netherlands)

    Chen, Qihong; Liang, Lei; Ali El Yumin, Abdurrahman; Lu, Jianming; Zheliuk, Oleksandr; Ye, Jianting

    2017-01-01

    A superconductor–normal metal (SN) junction is fabricated on the surface of a few-layer MoS2 flake. Superconductivity is induced by ionic liquid gating, and an h-BN flake is used to locally separate ionic liquid from the surface of MoS2. The h-BN covered channel remains semiconducting, therefore an

  18. Comparing the Effects of Particulate Matter on the Ocular Surfaces of Normal Eyes and a Dry Eye Rat Model.

    Science.gov (United States)

    Han, Ji Yun; Kang, Boram; Eom, Youngsub; Kim, Hyo Myung; Song, Jong Suk

    2017-05-01

    To compare the effect of exposure to particulate matter on the ocular surface of normal and experimental dry eye (EDE) rat models. Titanium dioxide (TiO2) nanoparticles were used as the particulate matter. Rats were divided into 4 groups: normal control group, TiO2 challenge group of the normal model, EDE control group, and TiO2 challenge group of the EDE model. After 24 hours, corneal clarity was compared and tear samples were collected for quantification of lactate dehydrogenase, MUC5AC, and tumor necrosis factor-α concentrations. The periorbital tissues were used to evaluate the inflammatory cell infiltration and detect apoptotic cells. The corneal clarity score was greater in the EDE model than in the normal model. The score increased after TiO2 challenge in each group compared with each control group (normal control vs. TiO2 challenge group, 0.0 ± 0.0 vs. 0.8 ± 0.6, P = 0.024; EDE control vs. TiO2 challenge group, 2.2 ± 0.6 vs. 3.8 ± 0.4, P = 0.026). The tear lactate dehydrogenase level and inflammatory cell infiltration on the ocular surface were higher in the EDE model than in the normal model. These measurements increased significantly in both normal and EDE models after TiO2 challenge. The tumor necrosis factor-α levels and terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling-positive cells were also higher in the EDE model than in the normal model. TiO2 nanoparticle exposure on the ocular surface had a more prominent effect in the EDE model than it did in the normal model. The ocular surface of dry eyes seems to be more vulnerable to fine dust of air pollution than that of normal eyes.

  19. Research on the surface resistance of copper under the normal and anomalous skin effects depending on the frequency of the electromagnetic field

    International Nuclear Information System (INIS)

    Kutovyi, V.A.; Komir, A.I.

    2013-01-01

    The frequency dependence of the surface resistance of copper was studied for diffuse and specular reflection of conduction electrons from the conductive surface of a high-frequency resonant system, in both the normal and anomalous skin-effect regimes. The surface resistance of copper at liquid-helium temperature was found to be reduced by more than 10 times compared with the surface resistance at room temperature at frequencies f ≤ 173 MHz for diffuse reflection of conduction electrons from the surface of the conductive layer, and at frequencies f ≤ 346 MHz for specular reflection
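
The normal (classical) skin-effect regime discussed above has a compact closed form: the penetration depth is δ = sqrt(ρ/(π f μ₀)) and the surface resistance is Rs = ρ/δ = sqrt(π f μ₀ ρ), so Rs grows as √f. A minimal sketch of these relations, assuming a textbook room-temperature copper resistivity of about 1.7 × 10⁻⁸ Ω·m (an assumed value, not taken from the abstract):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
RHO_CU_300K = 1.7e-8      # copper resistivity at room temperature, ohm*m (assumed)

def skin_depth(f_hz, rho=RHO_CU_300K):
    """Classical (normal skin-effect) penetration depth: delta = sqrt(rho / (pi * f * mu0))."""
    return math.sqrt(rho / (math.pi * f_hz * MU0))

def surface_resistance(f_hz, rho=RHO_CU_300K):
    """Normal skin-effect surface resistance: Rs = rho / delta = sqrt(pi * f * mu0 * rho)."""
    return math.sqrt(math.pi * f_hz * MU0 * rho)

if __name__ == "__main__":
    # Evaluate at the two frequencies quoted in the abstract
    for f in (173e6, 346e6):
        print(f"f = {f/1e6:.0f} MHz: delta = {skin_depth(f)*1e6:.2f} um, "
              f"Rs = {surface_resistance(f)*1e3:.3f} mOhm")
```

Note that this classical scaling breaks down at cryogenic temperatures, where the electron mean free path exceeds the skin depth and the anomalous regime (the subject of the abstract) takes over.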

  20. Normal motor nerve conduction studies using surface electrode recording from the supraspinatus, infraspinatus, deltoid, and biceps.

    Science.gov (United States)

    Buschbacher, Ralph Michael; Weir, Susan Karolyi; Bentley, John Greg; Cottrell, Erika

    2009-02-01

    Proximal peripheral nerve conduction studies can provide useful information to the clinician. The difficulty of measuring the length of the proximal nerve as well as a frequent inability to stimulate at 2 points along the nerve adds a challenge to the use of electrodiagnosis for this purpose. The purpose of this article is to present normal values for the suprascapular, axillary, and musculocutaneous nerves using surface electrodes while accounting for side-to-side variability. Prospective, observational study. Patients were evaluated in outpatient, private practices affiliated with tertiary care systems in the United States and Malaysia. One hundred volunteers were recruited and completed bilateral testing. Exclusion criteria included age younger than 18 years; previous shoulder surgery/atrophy; symptoms of numbness, tingling, or abnormal sensations in the upper extremity; peripheral neuropathy; or presence of a cardiac pacemaker. Nerve conduction studies to bilateral supraspinatus, infraspinatus, deltoid, and biceps brachii muscles were performed with documented technique. Distal latency, amplitude, and area were recorded. Side-to-side comparisons were made. A mixed linear model was fit to the independent variables of gender, race, body mass index, height, and age with each recorded value. Distal latency, amplitude, area, and side-to-side variability of nerve conduction studies of the suprascapular, axillary, and musculocutaneous nerves with correlation to significant independent variables. Data are presented showing normal distal latency, amplitude, and area values subcategorized by clinically significant variables, as well as acceptable side-to-side variability. Increased height correlated with increased distal latency in all the nerves tested. Amplitudes were larger in the infraspinatus recordings from women, while the amplitudes from the biceps and deltoid were greater in men. 
A larger body mass index was associated with a smaller amplitude in the deltoid in

  1. A fast and efficient adaptive parallel ray tracing based model for thermally coupled surface radiation in casting and heat treatment processes

    Science.gov (United States)

    Fainberg, J.; Schaefer, W.

    2015-06-01

    A new algorithm for heat exchange between thermally coupled diffusely radiating interfaces is presented, which can be applied to closed and half-open transparent radiating cavities. Interfaces between opaque and transparent materials are automatically detected and subdivided into elementary radiation surfaces named tiles. Contrary to the classical view factor method, the fixed unit sphere area subdivision oriented along the tile normal direction is projected onto the surrounding radiation mesh, and not vice versa. The total incident radiating flux of the receiver is then approximated as a direct sum of radiation intensities of representative “senders” with the same weight factor. A hierarchical scheme for the space angle subdivision is selected in order to minimize the total memory and computational demands during thermal calculations. Direct visibility is tested by means of a voxel-based ray tracing method accelerated by the anisotropic Chebyshev distance method, which reuses the computational grid as a Chebyshev one. The ray tracing algorithm is fully parallelized using MPI and takes advantage of a balanced distribution of all available tiles among all CPUs. This approach allows tracing of each particular ray without any communication. The algorithm has been implemented in a commercial casting process simulation software package. The accuracy and computational performance of the new radiation model for heat treatment, investment and ingot casting applications are illustrated using industrial examples.
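
The Chebyshev-distance acceleration of voxel ray tracing can be illustrated on a toy grid: each empty cell stores its Chebyshev (L∞) distance to the nearest opaque cell, and a ray may safely skip that many cells per step because every intervening cell lies strictly inside an empty Chebyshev ball. The 2-D sketch below is illustrative only (the brute-force distance transform and all names are assumptions, not the paper's implementation):

```python
import numpy as np

def chebyshev_distance_map(opaque):
    """Chebyshev (L-infinity) distance from each cell to the nearest opaque cell.

    Brute force for clarity; production codes use a fast two-pass transform.
    """
    ny, nx = opaque.shape
    ys, xs = np.nonzero(opaque)
    dist = np.full((ny, nx), max(nx, ny), dtype=int)
    for j in range(ny):
        for i in range(nx):
            if len(xs):
                dist[j, i] = np.min(np.maximum(np.abs(ys - j), np.abs(xs - i)))
    return dist

def ray_hits(opaque, dist, start, direction, max_steps=10_000):
    """March a ray through the voxel grid, skipping `dist` cells at a time.

    Inside an empty Chebyshev ball of radius d no opaque cell can be crossed,
    so the ray may advance d cells along its dominant axis before re-checking.
    """
    pos = np.asarray(start, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.max(np.abs(d))  # one Chebyshev unit of travel per step
    for _ in range(max_steps):
        j, i = int(pos[0]), int(pos[1])
        if not (0 <= j < opaque.shape[0] and 0 <= i < opaque.shape[1]):
            return None          # left the domain: unobstructed
        if opaque[j, i]:
            return (j, i)        # hit an opaque cell
        pos += d * max(dist[j, i], 1)
    return None
```

With a wall of opaque cells, a ray starting far from it reaches the wall in a single skip instead of cell-by-cell stepping, which is the essence of the speed-up claimed in the abstract.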

  2. Normal incidence sound transmission loss evaluation by upstream surface impedance measurements.

    Science.gov (United States)

    Panneton, Raymond

    2009-03-01

    A method is developed to obtain the normal incidence sound transmission loss of noise control elements used in piping systems from upstream surface impedance measurements only. The noise control element may be a small material specimen in an impedance tube, a sealing part in an automotive hollow body network, an expansion chamber, a resonator, or a muffler. The developments are based on a transfer matrix (four-pole) representation of the noise control element and on the assumption that only plane waves propagate upstream and downstream of the element. No assumptions are made on its boundary conditions, dimensions, shape, or material properties (i.e., the element may be symmetrical or not along its thickness, homogeneous or not, isotropic or not). One-load and two-load procedures are also proposed to identify the transfer matrix coefficients needed to obtain the true transmission loss of the tested element. The method can be used with a classical two-microphone impedance tube setup (i.e., no additional downstream tube and no downstream acoustical measurements). The method is tested on three different noise control elements: two impedance tube multilayered specimens and one expansion chamber. The results found using the developed method are validated using numerical simulations.
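
The four-pole idea can be made concrete with the textbook case of a simple expansion chamber, whose transfer matrix is known in closed form; once the four coefficients are in hand (measured, in the paper's method; analytic, here), the anechoic transmission loss follows directly. A hedged sketch for equal inlet and outlet ducts (all parameter values are illustrative):

```python
import numpy as np

RHO_C = 413.0  # characteristic impedance of air rho*c, Pa*s/m (nominal)

def duct_matrix(f, length, area, c=343.0):
    """Four-pole (transfer) matrix of a uniform duct segment in
    (pressure, volume velocity) variables."""
    k = 2.0 * np.pi * f / c
    z = RHO_C / area  # plane-wave impedance referred to volume velocity
    return np.array([[np.cos(k * length), 1j * z * np.sin(k * length)],
                     [1j * np.sin(k * length) / z, np.cos(k * length)]])

def transmission_loss(T, duct_area):
    """Anechoic normal-incidence TL for equal inlet/outlet ducts of area `duct_area`:
    TL = 20 log10( |T11 + T12/Z + Z*T21 + T22| / 2 ), Z = rho*c/S."""
    z = RHO_C / duct_area
    return 20.0 * np.log10(np.abs(T[0, 0] + T[0, 1] / z + z * T[1, 0] + T[1, 1]) / 2.0)

# Expansion chamber: area ratio m = 10, length 0.3 m, evaluated at 200 Hz
T = duct_matrix(200.0, 0.3, 0.01)
print(transmission_loss(T, 0.001))
```

For this geometry the result reduces to the classical formula TL = 10 log10(1 + 0.25 (m − 1/m)² sin² kL) with m the chamber-to-duct area ratio, which provides a convenient analytic check.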

  3. The Parallel SBAS-DInSAR algorithm: an effective and scalable tool for Earth's surface displacement retrieval

    Science.gov (United States)

    Zinno, Ivana; De Luca, Claudio; Elefante, Stefano; Imperatore, Pasquale; Manunta, Michele; Casu, Francesco

    2014-05-01

    been carried out on real data acquired by ENVISAT and COSMO-SkyMed sensors. Moreover, the P-SBAS performances with respect to the size of the input dataset will also be investigated. This kind of analysis is essential for assessing the goodness of the P-SBAS algorithm and gaining insight into its applicability to different scenarios. Besides, such results will also become crucial to identify and evaluate how to appropriately exploit P-SBAS to process the forthcoming large Sentinel-1 data stream. References [1] Massonnet, D., Briole, P., Arnaud, A., "Deflation of Mount Etna monitored by Spaceborne Radar Interferometry", Nature, vol. 375, pp. 567-570, 1995. [2] Berardino, P., G. Fornaro, R. Lanari, and E. Sansosti, "A new algorithm for surface deformation monitoring based on small baseline differential SAR interferograms", IEEE Trans. Geosci. Remote Sens., vol. 40, no. 11, pp. 2375-2383, Nov. 2002. [3] Elefante, S., Imperatore, P. , Zinno, I., M. Manunta, E. Mathot, F. Brito, J. Farres, W. Lengert, R. Lanari, F. Casu, "SBAS-DINSAR Time series generation on cloud computing platforms", IEEE IGARSS 2013, July 2013, Melbourne (AU). [4] Zinno, P. Imperatore, S. Elefante, F. Casu, M. Manunta, E. Mathot, F. Brito, J. Farres, W. Lengert, R. Lanari, "A Novel Parallel Computational Framework for Processing Large INSAR Data Sets", Living Planet Symposium 2013, Sept. 9-13, 2013.

  4. Amount and surface structure of albumin adsorbed to solid substrata with different wettabilities in a parallel plate flow cell.

    Science.gov (United States)

    Uyen, H M; Schakenraad, J M; Sjollema, J; Noordmans, J; Jongebloed, W L; Stokroos, I; Busscher, H J

    1990-12-01

    In this article we studied the adsorption of serum albumin to substrata with a broad range of wettabilities from solutions with protein concentrations between 0.03 and 3.00 mg.mL-1 in a parallel-plate flow cell. Wall shear rates were varied between 20 and 2000 s-1. The amount of albumin adsorbed in a stationary state was always highest on PTFE, the most hydrophobic material employed, and decreased with increasing wettability of the substrata. Increasing stationary amounts of adsorbed albumin were observed with increasing wall shear rates at the lowest protein concentration; the inverse was observed at the highest protein concentration. Transmission electron micrographs of replicas from the albumin-coated substrata showed that proteins were mostly adsorbed in islandlike structures on the hydrophobic substrata. The tendency to form islandlike structures was shear rate- and concentration-dependent and disappeared gradually on more hydrophilic substrata. On glass, the most hydrophilic material employed, a homogeneous, well distributed, fine knotted, reticulated structure was found. In conclusion, this study demonstrates that both the amount of adsorbed albumin and the surface structure of the adsorbed proteins are regulated by the substratum wettability. This observation may well account for the fact that substratum properties can be transferred by an adsorbed protein film to the interface with adhering cells or microorganisms.
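
The quoted wall shear rates (20–2000 s⁻¹) follow from the standard result for fully developed laminar flow between parallel plates, γ̇ = 6Q/(w·h²), where Q is the volumetric flow rate, w the chamber width, and h the gap height. A small sketch (the chamber dimensions below are hypothetical, not those used in the study):

```python
def wall_shear_rate(flow_rate, width, height):
    """Wall shear rate (1/s) of fully developed laminar flow between parallel
    plates: gamma_dot = 6*Q / (w * h**2), with Q in m^3/s and w, h in m."""
    return 6.0 * flow_rate / (width * height ** 2)

def flow_rate_for_shear(shear_rate, width, height):
    """Invert the relation: volumetric flow rate (m^3/s) for a target shear rate."""
    return shear_rate * width * height ** 2 / 6.0

# Hypothetical chamber: 16 mm wide, 0.6 mm gap
q_low = flow_rate_for_shear(20.0, 0.016, 0.0006)
q_high = flow_rate_for_shear(2000.0, 0.016, 0.0006)
print(q_low, q_high)  # flow rates spanning the reported shear-rate range
```

Because γ̇ is linear in Q, covering a 100-fold shear-rate range simply requires a 100-fold range of pump flow rates.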

  5. Attractor hopping between polarization dynamical states in a vertical-cavity surface-emitting laser subject to parallel optical injection

    Science.gov (United States)

    Denis-le Coarer, Florian; Quirce, Ana; Valle, Angel; Pesquera, Luis; Rodríguez, Miguel A.; Panajotov, Krassimir; Sciamanna, Marc

    2018-03-01

    We present experimental and theoretical results of noise-induced attractor hopping between dynamical states found in a single transverse mode vertical-cavity surface-emitting laser (VCSEL) subject to parallel optical injection. These transitions involve dynamical states with different polarizations of the light emitted by the VCSEL. We report an experimental map identifying, in the injected power-frequency detuning plane, regions where attractor hopping between two, or even three, different states occurs. The transition between these behaviors is characterized by using residence time distributions. We find multistability regions that are characterized by heavy-tailed residence time distributions following a -1.83 ± 0.17 power law. Between these regions we find coherence enhancement of noise-induced attractor hopping, in which transitions between states occur regularly. Simulation results show that frequency detuning variations and spontaneous emission noise play a role in causing switching between attractors. We also find attractor hopping between chaotic states with different polarization properties. In this case, simulation results show that the spontaneous emission noise inherent to the VCSEL is enough to induce this hopping.

  6. Study of MPI based on parallel MOM on PC clusters for EM-beam scattering by 2-D PEC rough surfaces

    International Nuclear Information System (INIS)

    Jun, Ma; Li-Xin, Guo; An-Qi, Wang

    2009-01-01

    This paper first applies finite impulse response (FIR) filter theory combined with the fast Fourier transform (FFT) method to generate a two-dimensional Gaussian rough surface. Using the electric field integral equation (EFIE), it introduces the method of moments (MOM) with RWG vector basis functions and Galerkin's method to investigate electromagnetic beam scattering by a two-dimensional PEC Gaussian rough surface on personal computer (PC) clusters. The details of the parallel conjugate gradient method (CGM) for solving the matrix equation are also presented, and the numerical simulations are obtained through the message passing interface (MPI) platform on the PC clusters. It is found that the parallel MOM supplies a novel technique for solving two-dimensional rough-surface electromagnetic-scattering problems. The influences of the root-mean-square height, the correlation length and the polarization on the beam scattering characteristics of two-dimensional PEC Gaussian rough surfaces are finally discussed. (classical areas of phenomenology)
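
The surface-generation step can be sketched compactly: white Gaussian noise is filtered in the Fourier domain with the amplitude spectrum implied by a Gaussian correlation function, then rescaled to the target root-mean-square height. A minimal illustration of this spectral-filtering approach (parameter values are arbitrary, not taken from the paper):

```python
import numpy as np

def gaussian_rough_surface(n, dx, rms_height, corr_length, seed=0):
    """Generate an n x n Gaussian random rough surface by FFT filtering.

    White Gaussian noise is filtered with the square root of a Gaussian
    power spectrum, then rescaled to the requested RMS height.
    """
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # angular spatial frequencies
    kx, ky = np.meshgrid(k, k)
    # Gaussian correlation C(r) ~ exp(-r^2/l^2) implies a Gaussian power spectrum
    psd = np.exp(-(kx**2 + ky**2) * corr_length**2 / 4.0)
    noise = rng.standard_normal((n, n))
    h = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(psd)))
    return h * (rms_height / np.std(h))          # enforce the exact RMS height

surface = gaussian_rough_surface(n=256, dx=0.05, rms_height=0.1, corr_length=1.0)
```

The resulting height field then supplies the geometry on which the MOM/EFIE discretization is built; varying `rms_height` and `corr_length` reproduces the parameter study described in the abstract.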

  7. Air bubble-induced detachment of positively and negatively charged polystyrene particles from collector surfaces in a parallel-plate flow chamber

    NARCIS (Netherlands)

    Gomez-Suarez, C; Van der Mei, HC; Busscher, HJ

    2000-01-01

    Electrostatic interactions between colloidal particles and collector surfaces were found to be important in particle detachment as induced by the passage of air bubbles in a parallel-plate flow chamber. Electrostatic interactions between adhering particles and passing air bubbles, however, were

  8. Detachment of colloidal particles from collector surfaces with different electrostatic charge and hydrophobicity by attachment to air bubbles in a parallel plate flow chamber

    NARCIS (Netherlands)

    Suarez, CG; van der Mei, HC; Busscher, HJ

    1999-01-01

    The detachment of polystyrene particles adhering to collector surfaces with different electrostatic charge and hydrophobicity by attachment to a passing air bubble has been studied in a parallel plate flow chamber. Particle detachment decreased linearly with increasing air bubble velocity and

  9. The surface diffusion coefficient for an arbitrarily curved fluid–fluid interface. (II). Coefficient for plane-parallel diffusion

    NARCIS (Netherlands)

    Sagis, L.M.C.

    2001-01-01

    In this paper we developed an expression for the coefficient for plane-parallel diffusion for an arbitrarily curved fluid–fluid interface. The expression is valid for ordinary diffusion in binary mixtures, with isotropic bulk phases and an interfacial region that is isotropic in the plane parallel

  10. Cell and fiber attachment to demineralized dentin. A comparison between normal and periodontitis-affected root surfaces.

    Science.gov (United States)

    Polson, A M; Hanes, P J

    1987-07-01

    The purpose of the present study was to compare and contrast cellular, connective tissue, and epithelial responses to dentin specimens derived from the roots of either normal or periodontitis-affected human teeth after surface demineralization. Rectangular dentin specimens, with opposite faces of root and pulpal dentin, were derived from beneath root surfaces covered by periodontal ligament (normal) or calculus-covered areas of periodontitis-affected teeth. In each of the groups, the specimens were treated with citric acid (pH 1 for 3 min), whereupon they were implanted transcutaneously into incisional wounds on the dorsal surface of rats with one end of the implant protruding through the skin. 4 specimens were available in each group at 10 days after implantation. Histologic and histometric analyses of the root surfaces of the implants included counts of adhering cells, evaluation of connective tissue fiber relationships, and assessment of epithelial migration. New connective tissue attachment with inhibition of epithelial migration occurred in both groups. Cementum formation was not present. Comparisons between the groups showed no significant differences regarding length of implant surface adjacent to connective tissue, number of attached cells, or density and diameter of attached fibers. The fiber attachment system which had developed on these demineralized surfaces seemed intrinsic to the connective tissue location, and differed morphologically from corresponding fibers attaching the root surface in a normal periodontium. It was concluded that there were no observable differences between the new connective tissue attachment systems which developed on demineralized dentin from either normal or periodontitis-affected root surfaces.

  11. Arrays of surface-normal electroabsorption modulators for the generation and signal processing of microwave photonics signals

    NARCIS (Netherlands)

    Noharet, Bertrand; Wang, Qin; Platt, Duncan; Junique, Stéphane; Marpaung, D.A.I.; Roeloffzen, C.G.H.

    2011-01-01

    The development of an array of 16 surface-normal electroabsorption modulators operating at 1550 nm is presented. The modulator array is dedicated to the generation and processing of microwave photonics signals, targeting a modulation bandwidth in excess of 5 GHz. The hybrid integration of the

  12. MRI of the shoulder joint with surface coils at 1.5 Tesla. Normal anatomy and possible clinical application

    Energy Technology Data Exchange (ETDEWEB)

    Beyer, D.; Steinbrich, W.; Krestin, G.; Koebke, J.; Kummer, B.; Bunke, J.

    1987-03-01

    High spatial resolution magnetic resonance images of the shoulder were obtained in axial, sagittal and coronal orientations using a 1.5 T imaging system and anatomically shaped, wrap-around surface coils. Variations in scapular position induced by patient positioning change the relationship of the planes to the shoulder anatomy and make reproducibility of sagittal and coronal planes difficult. After axial orientation, we therefore use oblique imaging planes perpendicular and parallel to the glenoid fossa. In this manner, MRI can visualise the anatomic structures of the shoulder, including the rotator cuff, long biceps tendon, articular capsule, articular cartilage, muscles and bones, owing to the high soft tissue contrast of MRI.

  13. Effects of vegetation types on soil moisture estimation from the normalized land surface temperature versus vegetation index space

    Science.gov (United States)

    Zhang, Dianjun; Zhou, Guoqing

    2015-12-01

    Soil moisture (SM) is a key variable that has been widely used in many environmental studies. The land surface temperature versus vegetation index (LST-VI) space has become a common way to estimate SM in optical remote sensing applications. A normalized LST-VI space is established from the normalized LST and VI to obtain comparable SM estimates in Zhang et al. (Validation of a practical normalized soil moisture model with in situ measurements in humid and semiarid regions [J]. International Journal of Remote Sensing, DOI: 10.1080/01431161.2015.1055610). In that study, boundary conditions were set to constrain point A (the driest bare soil) and point B (the wettest bare soil) for surface energy closure; however, no constraint was imposed on point D (full vegetation cover). In this paper, many vegetation types, such as crop, grass and mixed forest, are simulated with the Noah LSM 3.2 land surface model to analyze their effects on soil moisture estimation. The location of point D changes with vegetation type: the normalized LST of point D for forest is much lower than that for crop and grass, whereas the location of point D is essentially unchanged between crop and grass.
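
The role of points A, B and D can be illustrated with a simple trapezoid interpolation in the normalized LST-VI space: for a pixel's VI, a dry edge (A→D) and a wet edge (B→D) bound the normalized LST, and relative soil moisture follows from where the pixel falls between them. A schematic sketch (the value assigned to point D is illustrative; per the abstract it depends on vegetation type):

```python
def soil_moisture_index(lst_n, vi_n, lst_a=1.0, lst_b=0.0, lst_d=0.4):
    """Trapezoid interpolation in the normalized LST-VI space.

    lst_n, vi_n  : normalized LST and vegetation index, both in [0, 1]
    lst_a, lst_b : normalized LST of the driest (A) and wettest (B) bare soil
    lst_d        : normalized LST at full vegetation cover (point D;
                   vegetation-type dependent, lower for forest than crop/grass)
    """
    dry_edge = lst_a + (lst_d - lst_a) * vi_n   # edge A -> D
    wet_edge = lst_b + (lst_d - lst_b) * vi_n   # edge B -> D
    if dry_edge == wet_edge:                    # edges converge at point D
        return 1.0
    w = (dry_edge - lst_n) / (dry_edge - wet_edge)
    return min(max(w, 0.0), 1.0)                # clamp to the physical range
```

Because the two edges meet at point D, a misplaced D (e.g. using a crop/grass value for forest pixels) rotates both edges and biases the interpolated soil moisture, which is the effect the paper quantifies.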

  14. PFLOTRAN User Manual: A Massively Parallel Reactive Flow and Transport Model for Describing Surface and Subsurface Processes

    Energy Technology Data Exchange (ETDEWEB)

    Lichtner, Peter C. [OFM Research, Redmond, WA (United States); Hammond, Glenn E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lu, Chuan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Karra, Satish [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bisht, Gautam [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Andre, Benjamin [National Center for Atmospheric Research, Boulder, CO (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mills, Richard [Intel Corporation, Portland, OR (United States); Univ. of Tennessee, Knoxville, TN (United States); Kumar, Jitendra [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-20

    PFLOTRAN solves a system of generally nonlinear partial differential equations describing multi-phase, multicomponent and multiscale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Parallelization is achieved through domain decomposition using the PETSc (Portable Extensible Toolkit for Scientific Computation) libraries for the parallelization framework (Balay et al., 1997). PFLOTRAN has been developed from the ground up for parallel scalability and has been run on up to 2^18 processor cores with problem sizes up to 2 billion degrees of freedom. Written in object-oriented Fortran 90, the code requires the latest compilers compatible with Fortran 2003. At the time of this writing this requires gcc 4.7.x, Intel 12.1.x and PGI compilers. As a requirement of running problems with a large number of degrees of freedom, PFLOTRAN allows reading input data that is too large to fit into the memory allotted to a single processor core. The current limitation to the problem size PFLOTRAN can handle is the limitation of the HDF5 file format used for parallel I/O to 32-bit integers. Noting that 2^32 = 4,294,967,296, this gives an estimate of the maximum problem size that can currently be run with PFLOTRAN. Hopefully this limitation will be remedied in the near future.

  15. PHOENIX MARS SURFACE STEREO IMAGER 5 NORMAL OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Surface Stereo Imager (SSI) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This SSI Imaging Operations RDR...

  16. DEVELOPMENT AND USE OF A PARALLEL-PLATE FLOW CHAMBER FOR STUDYING CELLULAR ADHESION TO SOLID-SURFACES

    NARCIS (Netherlands)

    VANKOOTEN, TG; SCHAKENRAAD, JM; VANDERMEI, HC; BUSSCHER, HJ

    A parallel-plate flow chamber is developed in order to study cellular adhesion phenomena. An image analysis system is used to observe individual cells exposed to flow in situ and to determine area, perimeter, and shape of these cells as a function of time and shear stress. With this flow system the

  17. Surface profiling of normally responding and nonreleasing basophils by flow cytometry

    DEFF Research Database (Denmark)

    Kistrup, Kasper; Poulsen, Lars Kærgaard; Jensen, Bettina Margrethe

    ...a maximum release, blood mononuclear cells were purified by density centrifugation and, using flow cytometry, basophils, defined as FceRIa+CD3-CD14-CD19-CD56-, were analysed for surface expression of relevant markers. All samples were compensated and analysed in logicle display. All gates... C3aR, C5aR, CCR3, FPR1, ST2, CRTH2 on anti-IgE-responsive and nonreleasing basophils by flow cytometry, thereby generating a surface profile of the two phenotypes. Methods: Fresh buffy coat blood (

  18. IDENTIFYING RECENT SURFACE MINING ACTIVITIES USING A NORMALIZED DIFFERENCE VEGETATION INDEX (NDVI) CHANGE DETECTION METHOD

    Science.gov (United States)

    Coal mining is a major resource extraction activity on the Appalachian Mountains. The increased size and frequency of a specific type of surface mining, known as mountain top removal-valley fill, has in recent years raised various environmental concerns. During mountainto...

  19. Navier-Stokes Computations of a Wing-Flap Model With Blowing Normal to the Flap Surface

    Science.gov (United States)

    Boyd, D. Douglas, Jr.

    2005-01-01

    A computational study of a generic wing with a half span flap shows the mean flow effects of several blown flap configurations. The effort compares and contrasts the thin-layer, Reynolds averaged, Navier-Stokes solutions of a baseline wing-flap configuration with configurations that have blowing normal to the flap surface through small slits near the flap side edge. Vorticity contours reveal a dual vortex structure at the flap side edge for all cases. The dual vortex merges into a single vortex at approximately the mid-flap chord location. Upper surface blowing reduces the strength of the merged vortex and moves the vortex away from the upper edge. Lower surface blowing thickens the lower shear layer and weakens the merged vortex, but not as much as upper surface blowing. Side surface blowing forces the lower surface vortex farther outboard of the flap edge by effectively increasing the aerodynamic span of the flap. It is seen that there is no global aerodynamic penalty or benefit from the particular blowing configurations examined.

  20. Normal Tissue Complication Probability (NTCP) Modelling of Severe Acute Mucositis using a Novel Oral Mucosal Surface Organ at Risk.

    Science.gov (United States)

    Dean, J A; Welsh, L C; Wong, K H; Aleksic, A; Dunne, E; Islam, M R; Patel, A; Patel, P; Petkar, I; Phillips, I; Sham, J; Schick, U; Newbold, K L; Bhide, S A; Harrington, K J; Nutting, C M; Gulliford, S L

    2017-04-01

    A normal tissue complication probability (NTCP) model of severe acute mucositis would be highly useful to guide clinical decision making and inform radiotherapy planning. We aimed to improve upon our previous model by using a novel oral mucosal surface organ at risk (OAR) in place of an oral cavity OAR. Predictive models of severe acute mucositis were generated using radiotherapy dose to the oral cavity OAR or mucosal surface OAR and clinical data. Penalised logistic regression and random forest classification (RFC) models were generated for both OARs and compared. Internal validation was carried out with 100-iteration stratified shuffle split cross-validation, using multiple metrics to assess different aspects of model performance. Associations between treatment covariates and severe mucositis were explored using RFC feature importance. Penalised logistic regression and RFC models using the oral cavity OAR performed at least as well as the models using mucosal surface OAR. Associations between dose metrics and severe mucositis were similar between the mucosal surface and oral cavity models. The volumes of oral cavity or mucosal surface receiving intermediate and high doses were most strongly associated with severe mucositis. The simpler oral cavity OAR should be preferred over the mucosal surface OAR for NTCP modelling of severe mucositis. We recommend minimising the volume of mucosa receiving intermediate and high doses, where possible. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
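
The penalised-logistic-regression half of the modelling pipeline can be sketched in plain NumPy. Everything below is an illustration under stated assumptions, not the authors' code: the two "dose-volume" features and the outcomes are synthetic, the random-forest half and the 100-iteration shuffle-split protocol are omitted, and `fit_penalised_logistic` is a hypothetical helper:

```python
import numpy as np

def fit_penalised_logistic(X, y, lam=1.0, lr=0.1, n_iter=2000):
    """L2-penalised logistic regression fitted by plain gradient descent.

    Minimises mean log-loss + 0.5*lam*||w||^2 (intercept unpenalised).
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad_w = X.T @ (p - y) / n + lam * w
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict_proba(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def auc(y_true, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true.astype(bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic NTCP-style data: two hypothetical dose metrics, e.g. fractions of
# the OAR volume receiving intermediate and high doses
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
p_true = 1.0 / (1.0 + np.exp(-(4.0 * X[:, 0] + 3.0 * X[:, 1] - 4.0)))
y = (rng.random(200) < p_true).astype(float)

w, b = fit_penalised_logistic(X, y, lam=0.01)
print("training AUC:", auc(y, predict_proba(X, w, b)))
```

The penalty term plays the same role as in the paper's internal validation: it shrinks the fitted dose-metric coefficients to guard against overfitting on a modest number of patients.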

  1. Surface and protein analyses of normal human cell attachment on PIII-modified chitosan membranes

    Energy Technology Data Exchange (ETDEWEB)

    Saranwong, N. [Plasma and Beam Physics Research Facility, Department of Physics and Materials Science, Faculty of Science, Chiang Mai University, Chiang Mai 50200 (Thailand); Inthanon, K. [Human and Animal Cell Technology Research Laboratory, Department of Biology, Faculty of Science, Chiang Mai University, Chiang Mai 50200 (Thailand); Wongkham, W., E-mail: weerah@chiangmai.ac.th [Human and Animal Cell Technology Research Laboratory, Department of Biology, Faculty of Science, Chiang Mai University, Chiang Mai 50200 (Thailand); Wanichapichart, P. [Nanotechnology Center of Excellence and Membrane Science and Technology Research Center, Department of Physics, Faculty of Science, Prince of Songkla University, Hat Yai, Songkla 90110 (Thailand); Suwannakachorn, D. [Plasma and Beam Physics Research Facility, Department of Physics and Materials Science, Faculty of Science, Chiang Mai University, Chiang Mai 50200 (Thailand); Yu, L.D., E-mail: yuld@fnrf.science.cmu.ac.th [Plasma and Beam Physics Research Facility, Department of Physics and Materials Science, Faculty of Science, Chiang Mai University, Chiang Mai 50200 (Thailand); Thailand Center of Excellence in Physics, Commission on Higher Education, 328 Si Ayutthaya Road, Bangkok 10400 (Thailand)

    2012-02-01

    The surface of a chitosan membrane was modified with argon (Ar) and nitrogen (N) plasma immersion ion implantation (PIII) for attachment of human skin fibroblast F1544 cells. The modified surfaces were characterized by Fourier transform infrared spectroscopy (FTIR) and atomic force microscopy (AFM). Cell attachment patterns were evaluated by scanning electron microscopy (SEM). The enzyme-linked immunosorbent assay (ELISA) was used to quantify levels of focal adhesion kinase (FAK). The results showed that Ar PIII had an enhancement effect on cell attachment while N PIII had an inhibition effect. Filopodial analysis revealed more microfilament cytoplasmic spreading at the edge of cells attached on the Ar-treated membranes than on the N-treated membranes. A higher level of FAK was found on Ar-treated membranes than on N-treated membranes.

  2. Normal loads program for aerodynamic lifting surface theory. [evaluation of spanwise and chordwise loading distributions

    Science.gov (United States)

    Medan, R. T.; Ray, K. S.

    1974-01-01

    A description of and users manual are presented for a U.S.A. FORTRAN 4 computer program which evaluates spanwise and chordwise loading distributions, lift coefficient, pitching moment coefficient, and other stability derivatives for thin wings in linearized, steady, subsonic flow. The program is based on a kernel function method lifting surface theory and is applicable to a large class of planforms including asymmetrical ones and ones with mixed straight and curved edges.

  3. How Can Polarization States of Reflected Light from Snow Surfaces Inform Us on Surface Normals and Ultimately Snow Grain Size Measurements?

    Science.gov (United States)

    Schneider, A. M.; Flanner, M.; Yang, P.; Yi, B.; Huang, X.; Feldman, D.

    2016-12-01

    The Snow Grain Size and Pollution (SGSP) algorithm is a method applied to Moderate Resolution Imaging Spectroradiometer data to estimate snow grain size from space-borne measurements. Previous studies validate and quantify potential sources of error in this method, but because it assumes flat snow surfaces, large-scale variations in surface normals can cause biases in its estimates due to its dependence on solar and observation zenith angles. To address these variations, we apply the Monte Carlo method for photon transport, using data containing the single-scattering properties of different ice crystals, to calculate polarization states of reflected monochromatic light at 1500 nm from modeled snow surfaces. We evaluate the dependence of these polarization states on solar and observation geometry at 1500 nm because multiple scattering is generally a mechanism for depolarization and the ice crystals are relatively absorptive at this wavelength. Using 1500 nm thus results in a higher number of reflected photons undergoing fewer scattering events, increasing the likelihood of reflected light having higher degrees of polarization. In evaluating the validity of the model, we find agreement with previous studies pertaining to near-infrared spectral directional hemispherical reflectance (i.e. black-sky albedo) and similarities in measured bidirectional reflectance factors, but few studies exist modeling polarization states of reflected light from snow surfaces. Here, we present novel results pertaining to calculated polarization states and compare their dependences on solar and observation geometry for different idealized snow surfaces. If these dependencies are consistent across different ice particle shapes and sizes, then these findings could inform the SGSP algorithm by providing useful relationships between measurable physical quantities and solar and observation geometry to better understand variations in snow surface normals from remote sensing observations.
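
The quantity tracked in such photon-transport simulations is typically the Stokes vector (I, Q, U, V), from which the degree of polarization of the reflected light follows directly. A small definitional sketch (standard formulas, not the study's code):

```python
import math

def degree_of_polarization(I, Q, U, V):
    """Total degree of polarization of a Stokes vector (I, Q, U, V):
    DoP = sqrt(Q^2 + U^2 + V^2) / I, with 0 = unpolarized, 1 = fully polarized."""
    return math.sqrt(Q * Q + U * U + V * V) / I

def degree_of_linear_polarization(I, Q, U):
    """Degree of linear polarization, ignoring the circular component V."""
    return math.hypot(Q, U) / I

# Fully linearly polarized light gives DoP = 1; unpolarized light gives DoP = 0
print(degree_of_polarization(1.0, 1.0, 0.0, 0.0))
print(degree_of_polarization(1.0, 0.0, 0.0, 0.0))
```

Because multiple scattering depolarizes, a higher DoP of the reflected beam is consistent with the few-scattering regime the abstract targets at 1500 nm.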

  4. Surface relaxations as a tool to distinguish the dynamic interfacial properties of films formed by normal and diseased meibomian lipids.

    Science.gov (United States)

    Georgiev, Georgi As; Yokoi, Norihiko; Ivanova, Slavyana; Tonchev, Vesselin; Nencheva, Yana; Krastev, Rumen

    2014-08-14

    The surface properties of human meibomian lipids (MGS), the major constituent of the tear film (TF) lipid layer, are of key importance for TF stability. The dynamic interfacial properties of films formed by MGS from normal eyes (nMGS) and eyes with meibomian gland dysfunction (dMGS) were studied using a Langmuir surface balance. The behavior of the samples during dynamic area changes was evaluated by surface pressure-area isotherms and isocycles. The surface dilatational rheology of the films was examined in the frequency range 10⁻⁵ to 1 Hz by the stress-relaxation method. A significant difference was found, with dMGS showing slow viscosity-dominated relaxation at 10⁻⁴ to 10⁻³ Hz, whereas nMGS remained predominantly elastic over the whole range. A Cole-Cole plot revealed two characteristic processes contributing to the relaxation, a fast and a slow one (the latter on the scale of characteristic time τ of 100 s), with the slow process prevailing in dMGS films. Brewster angle microscopy revealed better spreading of nMGS at the air-water interface, whereas dMGS layers were non-uniform and patchy. The distinctions in the interfacial properties of the films in vitro correlated with the accelerated degradation of the meibum layer pattern at the air-tear interface and with the decreased stability of TF in vivo. These results, together with recent findings on the modest capability of meibum to suppress evaporation of the aqueous subphase, suggest the need for a re-evaluation of the role of MGS. The probable key function of meibomian lipids might be to form viscoelastic films capable of opposing dilation of the air-tear interface. The impact of temperature on the meibum surface properties is discussed in terms of its possible effect on the normal structure of the film.

  5. Normal emission photoelectron diffraction: a new technique for determining surface structure

    International Nuclear Information System (INIS)

    Kevan, S.D.

    1980-05-01

    One technique, photoelectron diffraction (PhD), is characterized. It shows promise in surmounting some of the problems of LEED. In PhD, the differential (angle-resolved) photoemission cross-section of a core level localized on an adsorbate atom is measured as a function of some final-state parameter. The photoemission final state consists of two components: one propagates directly to the detector, and the other scatters off the surface and then propagates to the detector. These are added coherently, and interference between the two manifests itself as cross-section oscillations which are sensitive to the local structure around the absorbing atom. We have shown that PhD deals effectively with two- and probably also three-dimensionally disordered systems. Its non-damaging and localized, atom-specific nature gives PhD a good deal of promise in dealing with molecular overlayer systems. It is concluded that while PhD will never replace LEED, it may provide useful, complementary, and possibly more accurate surface structural information.

  6. Environmental scanning electron microscopy of the surface of normal and vitrified leaves of Gypsophila paniculata (Babies Breath) cultured in vitro.

    Science.gov (United States)

    Gribble, K; Sarafis, V; Nailon, J; Holford, P; Uwins, P

    1996-06-01

    Leaf surfaces of non-tissue-cultured, vitrified and non-vitrified plantlets of Gypsophila paniculata (Babies Breath) were examined using an environmental scanning electron microscope. Non-tissue-cultured plants had a complete epidermal surface, recessed stomata, and wax present on the leaf surface. The surface of tissue-cultured plantlets appeared similar to that of non-tissue-cultured plants, except that the stomata were slightly protruding and less wax appeared to be present. In both non-tissue-cultured and tissue-cultured plants, stomata were found both open and closed and were observed closing. In contrast, vitrified plantlets had abnormal, malformed stomata which appeared non-functional. The ventral surfaces of leaves seemed more normal than the dorsal; this may be because the former receive more light. Additionally, discontinuities were found in the epidermis, and epidermal holes were often found in association with stomatal apertures. It is suggested that the main cause of desiccation of vitrified G. paniculata plantlets ex vitro is loss of water through these epidermal discontinuities rather than non-functional stomata. Liquid water could be seen through the epidermal holes, indicating that at least some of the extra water in vitrified plantlets is contained in the intercellular spaces.

  7. Comparative study of normal and branched alkane monolayer films adsorbed on a solid surface. I. Structure

    DEFF Research Database (Denmark)

    Enevoldsen, Ann Dorrit; Hansen, Flemming Yssing; Diama, A.

    2007-01-01

    The structure of a monolayer film of the branched alkane squalane (C30H62) adsorbed on graphite has been studied by neutron diffraction and molecular dynamics (MD) simulations and compared with a similar study of the n-alkane tetracosane (n-C24H52). Both molecules have 24 carbon atoms along their backbone, and squalane has, in addition, six methyl side groups. Upon adsorption, there are significant differences as well as similarities in the behavior of these molecular films. Both molecules form ordered structures at low temperatures; however, the melting point of the two-dimensional (2D) […] temperature. The neutron diffraction data show that the translational order in the squalane monolayer is significantly less than in the tetracosane monolayer. The authors' MD simulations suggest that this is caused by a distortion of the squalane molecules upon adsorption on the graphite surface. […]

  8. The relationship between the incisor position and lingual surface morphology in normal occlusion.

    Science.gov (United States)

    Hasegawa, Yuh; Ezura, Akira; Nomintsetseg, Batbayar

    2017-01-01

    This study aimed to investigate the relationship between the morphological characteristics of maxillary incisors and the anterior occlusion. The study materials comprised dental casts and lateral cephalograms of 26 modern Mongolian females with Angle Class I normal occlusion (mean age, 21 years 5 months). Computed tomography (CT) images of the dental casts were taken with an X-ray micro-CT system (SMX-100CT, Shimadzu, Kyoto, Japan). The thickness of the marginal ridges and incisal edges, as well as the overjet and overbite, were measured on the three-dimensional images of the dental casts. On the lateral cephalogram, the maxillary incisor to sella-nasion plane angle (U1 to SN angle), maxillary incisor to nasion-point A plane distance (U1 to NA distance), mandibular incisor to nasion-point B plane distance (L1 to NB distance), incisor mandibular plane angle, and interincisal angle were measured by tracing the left incisors of the maxilla and mandible. Spearman's rank correlation coefficients were used to investigate correlations between measurement items for each maxillary incisor. The thickness of the marginal ridges and incisal edges was positively correlated with the overbite. The thickness of the incisal edges was positively correlated with the irregularity index of the maxilla. There were significant negative correlations between overbite and U1 to SN angle, U1 to NA distance, and L1 to NB distance. Significant positive correlations were noted between the overbite and the overjet. In conclusion, there was no strong relationship between the morphological characteristics of maxillary incisors and the anterior occlusion.

  9. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  10. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed. Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techniques…

  11. Two-dimensional echocardiographic right ventricle measurements adjusted to body mass index and surface area in a normal population.

    Science.gov (United States)

    Eslami, Masood; Larti, Farnoush; Larry, Mehrdad; Molaee, Parisa; Badkoobeh, Roya Sattarzadeh; Tavoosi, Anahita; Safari, Saeed; Parsa, Amir Farhang Zand

    2017-05-01

    To determine reference echocardiographic values in a normal population and assess their correlation with body mass index (BMI) and body surface area. An expert cardiologist performed two-dimensional echocardiography with triplicate right ventricle (RV) size measurements in 80 subjects with a normal heart. Results were correlated with anthropometric data. Base-to-apex length in four-chamber view (RVD3) and above-pulmonic-valve diameter in short-axis view in males, as well as mid-RV diameter in standard four-chamber view (RVD), basal RV diameter, and mid-RV diameter in RV-focused four-chamber view in females, were significantly correlated with BMI. All RV variables were significantly correlated with BMI in 20-30-year-old subjects. All RV variables except RVD3 and above-aortic-valve diameter in short-axis view (proximal) were significantly correlated with BMI in 35-55-year-old subjects. All RV parameters were significantly correlated with body surface area, except for RVD in 20-35-year-old subjects. RV echocardiographic values must be adjusted to anthropometric characteristics for proper diagnosis and management of cardiac disorders. © 2016 Wiley Periodicals, Inc. J Clin Ultrasound 45:204-210, 2017.

  12. Inflammatory Cytokine Tumor Necrosis Factor α Confers Precancerous Phenotype in an Organoid Model of Normal Human Ovarian Surface Epithelial Cells

    Directory of Open Access Journals (Sweden)

    Joseph Kwong

    2009-06-01

    In this study, we established an in vitro organoid model of normal human ovarian surface epithelial (HOSE) cells. The spheroids of these normal HOSE cells resembled epithelial inclusion cysts in the human ovarian cortex, which are the cells of origin of ovarian epithelial tumors. Because there are strong correlations between chronic inflammation and the incidence of ovarian cancer, we used the organoid model to test whether the protumor inflammatory cytokine tumor necrosis factor α would induce a malignant phenotype in normal HOSE cells. Prolonged treatment with tumor necrosis factor α induced phenotypic changes in the HOSE spheroids, which exhibited the characteristics of precancerous lesions of ovarian epithelial tumors, including reinitiation of cell proliferation, structural disorganization, epithelial stratification, loss of epithelial polarity, degradation of the basement membrane, cell invasion, and overexpression of ovarian cancer markers. The results of this study provide not only evidence supporting the link between chronic inflammation and ovarian cancer formation but also a relevant and novel in vitro model for studying early events of ovarian cancer.

  13. Identification of a developmental gene expression signature, including HOX genes, for the normal human colonic crypt stem cell niche: overexpression of the signature parallels stem cell overpopulation during colon tumorigenesis.

    Science.gov (United States)

    Bhatlekar, Seema; Addya, Sankar; Salunek, Moreh; Orr, Christopher R; Surrey, Saul; McKenzie, Steven; Fields, Jeremy Z; Boman, Bruce M

    2014-01-15

    Our goal was to identify a unique gene expression signature for human colonic stem cells (SCs). Accordingly, we determined the gene expression pattern for a known SC-enriched region: the crypt bottom. Colonic crypts and isolated crypt subsections (top, middle, and bottom) were purified from fresh, normal, human surgical specimens. We then used an innovative strategy based on two-color microarrays (∼18,500 genes) to compare gene expression in the crypt bottom with expression in the other crypt subsections (middle or top). Array results were validated by PCR and immunostaining. About 25% of the genes analyzed were expressed in crypts: 88 preferentially in the bottom, 68 in the middle, and 131 in the top. Among genes upregulated in the bottom, ∼30% were classified as growth and/or developmental genes, including several in the PI3 kinase pathway, a six-transmembrane protein STAMP1, and two homeobox (HOXA4, HOXD10) genes. qPCR and immunostaining validated that HOXA4 and HOXD10 are selectively expressed in the normal crypt bottom and are overexpressed in colon carcinomas (CRCs). Immunostaining showed that HOXA4 and HOXD10 are co-expressed with the SC markers CD166 and ALDH1 in cells at the normal crypt bottom, and the number of these co-expressing cells is increased in CRCs. Thus, our findings show that these two HOX genes are selectively expressed in colonic SCs and that HOX overexpression in CRCs parallels the SC overpopulation that occurs during CRC development. Our study suggests that developmental genes play key roles in the maintenance of normal SCs and crypt renewal, and contribute to the SC overpopulation that drives colon tumorigenesis.

  14. Articular surface approximation in equivalent spatial parallel mechanism models of the human knee joint: an experiment-based assessment.

    Science.gov (United States)

    Ottoboni, A; Parenti-Castelli, V; Sancisi, N; Belvedere, C; Leardini, A

    2010-01-01

    In-depth comprehension of human joint function requires complex mathematical models, which are particularly necessary in applications of prosthesis design and surgical planning. Kinematic models of the knee joint, based on one-degree-of-freedom equivalent mechanisms, have been proposed to replicate the passive relative motion between the femur and tibia, i.e., the joint motion in virtually unloaded conditions. In the mechanisms analysed in the present work, some fibres within the anterior and posterior cruciate and medial collateral ligaments were taken as isometric during passive motion, and articulating surfaces as rigid. The shapes of these surfaces were described with increasing anatomical accuracy, i.e. from planar to spherical and general geometry, which consequently led to models with increasing complexity. Quantitative comparison of the results obtained from three models, featuring an increasingly accurate approximation of the articulating surfaces, was performed by using experimental measurements of joint motion and anatomical structure geometries of four lower-limb specimens. Corresponding computer simulations of joint motion were obtained from the different models. The results revealed a good replication of the original experimental motion by all models, although the simulations also showed that a limit exists beyond which description of the knee passive motion does not benefit considerably from further approximation of the articular surfaces.

  15. Normal and anomalous transport phenomena in two-dimensional NaCl, MoS2 and honeycomb surfaces

    Science.gov (United States)

    Mbemmo, A. M. Fopossi; Kenmoé, G. Djuidjé; Kofané, T. C.

    2018-04-01

    Understanding the effects of anisotropy and substrate shape on stochastic processes is critically needed to improve the quality of transport information. The effect of a biharmonic force on the transport of a particle in two dimensions is investigated in the framework of three representative substrate lattices: NaCl, MoS2 and honeycomb. We focus on the particle's drift velocity to characterize the transport properties of the system. Normal and anomalous transport are identified for particular sets of system parameters such as the biharmonic parameter, the bias force, the phase lag of the two signals, and the noise amplitude. Depending on the direction ψ in which the bias force is applied, we determine the ranges of the biharmonic parameter ɛ for which anomalous transport is present and show that, for the NaCl surface, anomalous transport is observed for intermediate values of ɛ, while normal transport is generated for small ɛ and for ψ up to 30°.

  16. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  17. Studies on Impingement Effects of Low Density Jets on Surfaces — Determination of Shear Stress and Normal Pressure

    Science.gov (United States)

    Sathian, Sarith. P.; Kurian, Job

    2005-05-01

    This paper presents the results of the Laser Reflection Method (LRM) for determining the shear stress due to impingement of low-density free jets on a flat plate. For a thin oil film moving under the action of an aerodynamic boundary layer, the shear stress at the air-oil interface is equal to the shear stress between the surface and the air. The oil film slope is measured directly and dynamically using a position sensing detector (PSD). The thinning rate of the oil film is measured directly, which is the major advantage of the LRM over the LISF method. From the oil film slope history, the shear stress is calculated directly using a three-point formula. Over the full range of experimental conditions, Knudsen numbers varied up to the continuum limit of the transition regime. The shear stress values for low-density flows in the transition regime obtained using LRM show fair agreement with those obtained by other methods. Results of normal pressure measurements on a flat plate in low-density jets, using thermistors as pressure sensors, are also presented. The normal pressure profiles obtained show the characteristic features of Newtonian impact theory for hypersonic flows.
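The "three-point formula" mentioned above is a standard second-order finite-difference scheme for differentiating an equally spaced measurement history (here, the oil-film slope record from the PSD). A generic sketch of that numerical step, not the paper's specific calibration:

```python
def three_point_derivative(y, h):
    """Second-order three-point finite-difference derivative of
    equally spaced samples y with spacing h."""
    n = len(y)
    d = [0.0] * n
    d[0] = (-3*y[0] + 4*y[1] - y[2]) / (2*h)         # forward one-sided
    for i in range(1, n - 1):
        d[i] = (y[i+1] - y[i-1]) / (2*h)             # central difference
    d[n-1] = (3*y[n-1] - 4*y[n-2] + y[n-3]) / (2*h)  # backward one-sided
    return d

# Differentiating y = t^2 sampled at t = 0..4 recovers dy/dt = 2t exactly,
# since all three stencils are exact for quadratics.
print(three_point_derivative([0.0, 1.0, 4.0, 9.0, 16.0], 1.0))
```

Applied to the film-slope history, the resulting time derivative feeds directly into the shear-stress evaluation.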

  18. PLZT Electrooptic Ceramic Photonic Devices for Surface-Normal Operation in Trenches Cut Across Arrays of Optical Fiber

    Science.gov (United States)

    Hirabayashi, Katsuhiko

    2005-03-01

    Simple (Pb1-xLax)(ZryTiz)1-x/4O3 (PLZT) electrooptic ceramic photonic device arrays for surface-normal operation have been developed for application to polarization-controller arrays and Fabry-Pérot tunable filter arrays. These arrays are inserted in trenches cut across fiber arrays. Each element of the arrayed structure corresponds to one optical beam and takes the form of a cell. Each sidewall of the cell (width: 50-80 μm) is coated to form an electrode. The arrays have 16 elements at a pitch of 250 μm. The phase modulator has about 1 dB of loss and a half-wavelength voltage of 120 V. A cascade of two PLZT phase modulators (thickness: 300 μm), each attached to a polyimide λ/2 plate (thickness: 15 μm), is capable of converting an arbitrary polarization to the transverse-electric (TE) or transverse-magnetic (TM) polarization. The response time is 1 μs. The Fabry-Pérot tunable filters have a thickness of 50 μm. The front and back surfaces of each cell are coated with 99%-reflective mirrors. The free spectral range (FSR) of the filters is about 10 nm, the tunable range is about 10 nm, the loss is 2.2 dB, and the finesse is 150. The tuning speed of these devices is high, taking only 1 μs.
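The ~10 nm FSR quoted for the 50 μm filters is consistent with the standard Fabry-Pérot relation FSR = λ²/(2nL). A quick check, where the operating wavelength (1550 nm, typical for fiber arrays) and the PLZT refractive index (n ≈ 2.5) are assumed values, not taken from the paper:

```python
def free_spectral_range(wavelength_m, n, cavity_length_m):
    """Fabry-Perot free spectral range in wavelength units:
    FSR = lambda^2 / (2 * n * L)."""
    return wavelength_m ** 2 / (2 * n * cavity_length_m)

# 50-um-thick PLZT cell at an assumed 1550 nm and n ~= 2.5.
fsr = free_spectral_range(1.55e-6, 2.5, 50e-6)
print(fsr * 1e9)  # ~9.6 nm, close to the ~10 nm FSR quoted above
```

The quoted finesse of 150 then implies a filter bandwidth of roughly FSR/150, i.e. well under 0.1 nm.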

  19. Loss of surface horizon of an irrigated soil detected by radiometric images of normalized difference vegetation index.

    Science.gov (United States)

    Fabian Sallesses, Leonardo; Aparicio, Virginia Carolina; Costa, Jose Luis

    2017-04-01

    Land use in the Humid Pampa of Argentina has changed since the mid-1990s from mixed agricultural-livestock production (which included pastures with direct grazing) to purely agricultural production. In recent years, the area under center-pivot irrigation has also increased by 150%. The water used for irrigation is of the sodium-carbonate type. The combination of irrigation and rain increases the sodium adsorption ratio of the soil (SAR), consequently raising clay dispersion and reducing infiltration. This implies an increased risk of soil loss. A reduction in the development of a white clover crop (Trifolium repens L.) was observed in an irrigated plot during the 2015 campaign. The clover had been planted to reduce the impact of two irrigated maize (Zea mays L.) campaigns, which had increased soil SAR and deteriorated the soil structure. SPOT-5 radiometric normalized difference vegetation index (NDVI) images were used to delineate two zones of high and low production. In each zone, four random points were selected for geo-referenced field sampling; at each point, two geo-referenced measurements of effective depth and surface soil sampling were carried out. The texture of the soil samples was determined by the pipette method of sedimentation analysis. Exploratory data analysis showed that the low-production zone had a mean effective depth of 80 cm and a silty clay loam texture, while the high-production zone had a mean effective depth greater than 140 cm and a silt loam texture. The texture class of the low-production zone did not correspond to prior soil studies carried out by INTA (National Institute of Agricultural Technology), which reported silt loam at the surface and silty clay loam at the sub-surface. Loss of the A horizon is proposed as a possible explanation, but further research is required. These results also point to the need to update soil cartography by integrating new satellite imaging technologies and geo-referenced measurements with soil sensors.
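The NDVI used above to delineate the production zones is the normalized difference of near-infrared and red reflectance, (NIR - Red)/(NIR + Red). A minimal per-pixel sketch with toy reflectance values (not from the SPOT-5 scenes):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized difference vegetation index per pixel:
    (NIR - Red) / (NIR + Red); eps guards against division by zero."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Toy reflectances: vigorous clover canopy vs. sparse cover on degraded soil.
print(ndvi([0.50, 0.30], [0.08, 0.20]))  # high- vs. low-production pixels
```

Thresholding or clustering such an NDVI raster is a common way to split a field into high- and low-production zones before targeted sampling.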

  20. Differential Proteomic Analysis of Human Placenta-Derived Mesenchymal Stem Cells Cultured on Normal Tissue Culture Surface and Hyaluronan-Coated Surface

    Directory of Open Access Journals (Sweden)

    Tzyy Yue Wong

    2016-01-01

    Our previous results showed that hyaluronan (HA) preserved human placenta-derived mesenchymal stem cells (PDMSC) in a slow cell-cycling mode similar to quiescence, the pristine state of stem cells in vivo, and HA was found to prevent murine adipose-derived mesenchymal stem cells from senescence. Here, stable isotope labeling by amino acids in cell culture (SILAC) proteomic profiling was used to evaluate the effects of HA on aging phenomena in stem cells, comparing (1) old- and young-passage PDMSC cultured on a normal tissue culture surface (TCS); (2) old passage on an HA-coated surface (CHA) compared to TCS; and (3) old and young passage on CHA. The results indicated that the senescence-associated protein transgelin (TAGLN) was upregulated in old TCS. Protein CYR61, reportedly senescence-related, was downregulated in old CHA compared to old TCS. The SIRT1-interacting nicotinamide phosphoribosyltransferase (NAMPT) increased 2.23-fold in old CHA compared to old TCS, and was 0.48-fold lower in old TCS compared to young TCS. Results also indicated that components of the endoplasmic reticulum associated degradation (ERAD) pathway were upregulated in old CHA compared to old TCS cells, potentially to overcome stress, maintain cell function, and suppress senescence. Our data point to pathways that may be targeted by HA to maintain stem cell youth.

  1. Parallel optical interconnect between surface-mounted devices on FR4 printed wiring board using embedded waveguides and passive optical alignments

    Science.gov (United States)

    Karppinen, Mikko; Alajoki, Teemu; Tanskanen, Antti; Kataja, Kari; Mäkinen, Jukka-Tapani; Karioja, Pentti; Immonen, Marika; Kivilahti, Jorma

    2006-04-01

    Technologies to design and fabricate high-bit-rate chip-to-chip optical interconnects on printed wiring boards (PWB) are studied. The aim is to interconnect surface-mounted component packages or modules using board-embedded optical waveguides. In order to demonstrate the developed technologies, a parallel optical interconnect was integrated on a standard FR4-based PWB. It consists of 4-channel BGA-mounted transmitter and receiver modules as well as four polymer multimode waveguides fabricated on top of the PWB by lithographic patterning. The transmitters and receivers, built on low-temperature co-fired ceramic (LTCC) substrates, include a flip-chip-mounted VCSEL or photodiode array and a 4x10 Gb/s driver or receiver IC. Two microlens arrays and a surface-mounted micro-mirror enable optical coupling between the optoelectronic device and the waveguide array. The optical alignment is based on marks and structures fabricated in both the LTCC and optical waveguide processes. The structures were optimized and studied by means of optical tolerance analyses based on ray tracing. The characterized optical alignment tolerances are within the limits of the accuracy of surface-mount technology.

  2. Design and optimization of a new geometric texture shape for the enhancement of hydrodynamic lubrication performance of parallel slider surfaces

    Directory of Open Access Journals (Sweden)

    M.S. Uddin

    2016-06-01

    This paper presents the design and optimization of a new 'star-like' texture shape with the aim of improving the tribological performance. Initial studies showed that the triangle effect is the most dominant in reducing the friction. Motivated by the triangle effect, a 'star-like' texture shape consisting of a series of triangular spikes around the centre of the texture is proposed. It is hypothesized that by increasing the triangular effect of a texture shape, the converging micro-wedge effect increases, raising the film pressure and reducing the friction. Using the well-known Reynolds equation, numerical modelling of surface texturing is implemented via the finite difference method. Simulation results showed that the number of apex points of the new 'star-like' texture has a significant effect on the film pressure and the friction coefficient. A 6-pointed texture at a texture density of 0.4 is shown to be the optimum shape. The new optimum star-like texture reduces the friction coefficient by 80%, 64.39%, 19.32% and 16.14% compared to the ellipse, chevron, triangle and circle, respectively. This indicates the potential benefit of the proposed new shape in further enhancing the hydrodynamic lubrication performance of slider bearing contacts.
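The finite-difference Reynolds-equation approach described above can be illustrated in one dimension. The sketch below is a hedged simplification (1D steady incompressible flow, a single rectangular texture pocket, made-up geometry and viscosity), not the authors' 2D star-texture implementation; it shows how a pocket between otherwise parallel surfaces generates hydrodynamic pressure:

```python
import numpy as np

def reynolds_1d(h, dx, mu=0.01, U=1.0):
    """Solve the steady 1D incompressible Reynolds equation
    d/dx(h^3 dp/dx) = 6*mu*U*dh/dx with p = 0 at both ends,
    using second-order central finite differences."""
    n = len(h)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0  # ambient-pressure boundary conditions
    for i in range(1, n - 1):
        hw = ((h[i - 1] + h[i]) / 2) ** 3  # h^3 at the west cell face
        he = ((h[i] + h[i + 1]) / 2) ** 3  # h^3 at the east cell face
        A[i, i - 1] = hw
        A[i, i] = -(hw + he)
        A[i, i + 1] = he
        b[i] = 6 * mu * U * (h[i + 1] - h[i - 1]) * dx / 2
    return np.linalg.solve(A, b)

# Parallel surfaces with one shallow texture pocket: the converging exit
# of the pocket generates positive film pressure (the micro-wedge effect).
x = np.linspace(0.0, 1e-3, 201)
h = np.full_like(x, 5e-6)
h[(x > 0.4e-3) & (x < 0.6e-3)] += 5e-6  # texture cell, 5 um deep
p = reynolds_1d(h, x[1] - x[0])
print(p.max() > 0)
```

Extending this to 2D and summing the pressure and shear contributions over a textured cell gives the friction coefficient the paper optimizes.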

  3. Modeling guided wave excitation in plates with surface mounted piezoelectric elements: coupled physics and normal mode expansion

    Science.gov (United States)

    Ren, Baiyang; Lissenden, Cliff J.

    2018-04-01

    Guided waves have been extensively studied and widely used for structural health monitoring because of their large volumetric coverage and good sensitivity to defects. Effectively and preferentially exciting a desired wave mode with good sensitivity to a certain defect is of great practical importance. Piezoelectric discs and plates are the most common types of surface-mounted transducers for guided wave excitation and reception. Their geometry strongly influences the proportioning between excited modes as well as the total power of the excited modes. It is highly desirable to predominantly excite the selected mode while maximizing the total transduction power. In this work, a fully coupled multi-physics finite element analysis, which incorporates the driving circuit, the piezoelectric element and the waveguide, is combined with the normal mode expansion method to study both the mode tuning and the total wave power. The excitation of circular-crested waves in an aluminum plate with circular piezoelectric discs is numerically studied for different disc and adhesive thicknesses. Additionally, the excitation of plane waves in an aluminum plate using a stripe piezoelectric element is studied both numerically and experimentally. It is difficult to achieve predominant single-mode excitation and maximum power transmission simultaneously, especially for higher-order modes. However, guidelines for designing the geometry of piezoelectric elements for optimal mode excitation are recommended.

  4. Segmentation of Planar Surfaces from Laser Scanning Data Using the Magnitude of Normal Position Vector for Adaptive Neighborhoods

    Directory of Open Access Journals (Sweden)

    Changjae Kim

    2016-01-01

    Diverse approaches to laser point segmentation have been proposed since the emergence of the laser scanning system. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser points simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the high reliability of the outcomes. The proposed segmentation algorithm achieved 96.89% overall correctness, 95.84% completeness, an overall mean centroid difference of 0.25 m, and an angle difference of less than 1°. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the sensitivity of the thresholds was evaluated. In summary, this paper proposes a robust and efficient segmentation methodology for abstracting an enormous number of laser points into plane information.
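The "magnitude of the normal position vector" attribute used above is, for a locally planar neighborhood, the perpendicular distance from the coordinate origin to the fitted plane: coplanar points share this scalar regardless of where they sit on the plane, which is what makes it useful for a low-dimensional accumulator. A minimal sketch of computing it via a least-squares plane fit (SVD); the neighborhood selection and accumulator logic of the paper are not reproduced here:

```python
import numpy as np

def normal_position_magnitude(points):
    """Fit a plane to a local point neighborhood (least squares via SVD)
    and return the magnitude of the normal position vector, i.e. the
    perpendicular distance from the origin to the fitted plane."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]  # direction of least variance = plane normal
    return abs(centroid @ normal)

# Coplanar points on the plane z = 2: the distance from the origin is 2.
pts = [[0, 0, 2], [1, 0, 2], [0, 1, 2], [1, 1, 2]]
print(normal_position_magnitude(pts))  # 2.0
```

Points belonging to the same physical plane then vote into the same accumulator bin, collapsing a multi-parameter plane description into one attribute.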

  5. Modeling and experimental study of oil/water contact angle on biomimetic micro-parallel-patterned self-cleaning surfaces of selected alloys used in water industry

    Energy Technology Data Exchange (ETDEWEB)

    Nickelsen, Simin; Moghadam, Afsaneh Dorri, E-mail: afsaneh@uwm.edu; Ferguson, J.B.; Rohatgi, Pradeep

    2015-10-30

Graphical abstract: - Highlights: • Wetting behavior of four metallic materials as a function of surface roughness has been studied. • A model to predict the relationship between abrasive particle size and water/oil contact angles is proposed. • The active wetting regime is determined for different materials using the proposed model. - Abstract: In the present study, the wetting behavior of surfaces of various common metallic materials used in the water industry, including C84400 brass, commercially pure aluminum (99.0% pure), nickel–molybdenum alloy (Hastelloy C22), and 316 stainless steel, was characterized: the surfaces were prepared by mechanical abrasion and their contact angles were then measured. A model to estimate the roughness factor, R_f, and the fraction of solid/oil interface, f_so, for surfaces prepared by mechanical abrasion is proposed, based on the assumption that abrasive particles acting on a metallic surface produce scratches parallel to each other, each with a semi-round cross-section. The model geometrically describes the relation between sandpaper particle size and the water/oil contact angle predicted by both the Wenzel and Cassie–Baxter contact types, which can then be compared with experimental data to find which regime is active. Results show that brass and Hastelloy followed Cassie–Baxter behavior, aluminum followed Wenzel behavior, and stainless steel exhibited a transition from Wenzel to Cassie–Baxter. Microstructural studies have also been done to rule out effects beyond the Wenzel and Cassie–Baxter theories, such as the size of structural details.
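The two wetting regimes compared in the study follow the standard relations Wenzel, cos θ* = R_f cos θ_Y, and Cassie–Baxter, cos θ* = f_so (cos θ_Y + 1) − 1. A small sketch of both predictions (parameter values below are illustrative, not the paper's measurements):

```python
import math

def wenzel_angle(theta_young_deg, roughness_factor):
    """Wenzel regime: roughness amplifies the intrinsic wetting tendency.
    cos(theta*) = R_f * cos(theta_Y), with R_f >= 1."""
    c = roughness_factor * math.cos(math.radians(theta_young_deg))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def cassie_baxter_angle(theta_young_deg, f_so):
    """Cassie-Baxter regime: the liquid sits partly on trapped fluid pockets.
    cos(theta*) = f_so * (cos(theta_Y) + 1) - 1, with 0 <= f_so <= 1."""
    c = f_so * (math.cos(math.radians(theta_young_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Illustrative: an intrinsically oleophobic surface with theta_Y = 110 degrees.
theta_y = 110.0
print(wenzel_angle(theta_y, 1.8))        # roughness pushes the angle higher
print(cassie_baxter_angle(theta_y, 0.4)) # small solid fraction -> higher angle
```

Comparing measured angles against both curves as a function of abrasive particle size is what lets the study decide which regime is active for each alloy.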

  6. Poiseuille, thermal transpiration and Couette flows of a rarefied gas between plane parallel walls with nonuniform surface properties in the transverse direction and their reciprocity relations

    Science.gov (United States)

    Doi, Toshiyuki

    2018-04-01

    Slow flows of a rarefied gas between two plane parallel walls with nonuniform surface properties are studied based on kinetic theory. It is assumed that one wall is a diffuse reflection boundary and the other wall is a Maxwell-type boundary whose accommodation coefficient varies periodically in the direction perpendicular to the flow. The time-independent Poiseuille, thermal transpiration and Couette flows are considered. The flow behavior is numerically studied based on the linearized Bhatnagar–Gross–Krook–Welander model of the Boltzmann equation. The flow field, the mass and heat flow rates in the gas, and the tangential force acting on the wall surface are studied over a wide range of the gas rarefaction degree and the parameters characterizing the distribution of the accommodation coefficient. The locally convex velocity distribution is observed in Couette flow of a highly rarefied gas, similarly to Poiseuille flow and thermal transpiration. The reciprocity relations are numerically confirmed over a wide range of the flow parameters.

  7. Root surface areas of maxillary permanent teeth in anterior normal overbite and anterior open bite assessed using cone-beam computed tomography.

    Science.gov (United States)

    Suteerapongpun, Piyadanai; Sirabanchongkran, Supassara; Wattanachai, Tanapan; Sriwilas, Patiyut; Jotikasthira, Dhirawat

    2017-12-01

The aim of this study was to compare the root surface areas of the maxillary permanent teeth in Thai patients exhibiting anterior normal overbite and in those exhibiting anterior open bite, using cone-beam computed tomography (CBCT). CBCT images of maxillary permanent teeth from 15 patients with anterior normal overbite and 18 patients with anterior open bite were selected. Three-dimensional tooth models were constructed using Mimics Research version 17.0. The cementoenamel junction was marked manually. The root surface area was calculated automatically by 3-Matic Research version 9.0. The root surface areas of each tooth type from both types of bite were compared using the independent t-test (P < .05). The intraclass correlation coefficient was used to assess intraobserver reliability. The mean root surface areas of the maxillary central and lateral incisors in individuals with anterior open bite were significantly less than those in individuals with normal overbite. The mean root surface area of the maxillary second premolar in individuals with anterior open bite was significantly greater than in those with normal overbite. Anterior open-bite malocclusion might affect the root surface area, so orthodontic force magnitudes should be carefully determined.

  8. How Parallel Are Excited State Potential Energy Surfaces from Time-Independent and Time-Dependent DFT? A BODIPY Dye Case Study.

    Science.gov (United States)

    Komoto, Keenan T; Kowalczyk, Tim

    2016-10-06

To support the development and characterization of chromophores with targeted photophysical properties, excited-state electronic structure calculations should rapidly and accurately predict how derivatization of a chromophore will affect its excitation and emission energies. This paper examines whether a time-independent excited-state density functional theory (DFT) approach meets this need through a case study of BODIPY chromophore photophysics. A restricted open-shell Kohn-Sham (ROKS) treatment of the S1 excited state of BODIPY dyes is contrasted with linear-response time-dependent density functional theory (TDDFT). Vertical excitation energies predicted by the two approaches are remarkably different due to overestimation by TDDFT and underestimation by ROKS relative to experiment. Overall, ROKS with a standard hybrid functional provides the more accurate description of the S1 excited state of BODIPY dyes, but excitation energies computed by the two methods are strongly correlated. The two approaches also make similar predictions of shifts in the excitation energy upon functionalization of the chromophore. TDDFT and ROKS models of the S1 potential energy surface are then examined in detail for a representative BODIPY dye through molecular dynamics sampling on both model surfaces. We identify the most significant differences in the sampled surfaces and analyze these differences along selected normal modes. Differences between ROKS and TDDFT descriptions of the S1 potential energy surface for this BODIPY derivative highlight the continuing need for validation of widely used approximations in excited state DFT through experimental benchmarking and comparison to ab initio reference data.

  9. Surface-EMG analysis for the quantification of thigh muscle dynamic co-contractions during normal gait.

    Science.gov (United States)

    Strazza, Annachiara; Mengarelli, Alessandro; Fioretti, Sandro; Burattini, Laura; Agostini, Valentina; Knaflitz, Marco; Di Nardo, Francesco

    2017-01-01

The research purpose was to quantify the co-contraction patterns of quadriceps femoris (QF) vs. hamstring muscles during free walking, in terms of onset-offset muscular activation, excitation intensity, and occurrence frequency. Statistical gait analysis was performed on surface-EMG signals from vastus lateralis (VL), rectus femoris (RF), and medial hamstrings (MH), in 16315 strides walked by 30 healthy young adults. Results showed full superimpositions of MH with both VL and RF activity from terminal swing, 80 to 100% of gait cycle (GC), to the successive loading response (≈0-15% of GC), in around 90% of the considered strides. A further superimposition was detected during the push-off phase both between VL and MH activation intervals (38.6±12.8% to 44.1±9.6% of GC) in 21.9±13.6% of strides, and between RF and MH activation intervals (45.9±5.3% to 50.7±9.7% of GC) in 32.7±15.1% of strides. These findings led to the identification of three distinct co-contractions among QF and hamstring muscles during able-bodied walking: in early stance (in ≈90% of strides), in push-off (in 25-30% of strides) and in terminal swing (in ≈90% of strides). The co-contraction in terminal swing is the one with the highest levels of muscle excitation intensity. To our knowledge, this analysis represents the first attempt at quantification of QF/hamstring muscle co-contraction in young healthy subjects during normal gait that is able to account for the physiological variability of the phenomenon. Copyright © 2016 Elsevier B.V. All rights reserved.
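Operationally, each reported co-contraction is the overlap between the onset-offset activation windows of two muscles, expressed in percent of gait cycle (%GC). A tiny sketch of that overlap computation (interval endpoints below are illustrative; a window wrapping past 100% GC, as in terminal swing, would need to be split in two first):

```python
def interval_overlap(a, b):
    """Overlap of two (onset, offset) activation intervals in %GC, or None."""
    start, end = max(a[0], b[0]), min(a[1], b[1])
    return (start, end) if start < end else None

# Illustrative push-off windows in %GC:
vl = (38.6, 44.1)   # vastus lateralis
mh = (36.0, 50.0)   # medial hamstrings
print(interval_overlap(vl, mh))  # -> (38.6, 44.1): VL window fully inside MH
```

Counting, over all strides, how often this overlap is non-empty gives the occurrence frequencies quoted in the abstract.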

  10. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  11. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  12. Coronary anatomy characteristics in patients with isolated right bundle branch block versus subjects with normal surface electrocardiogram.

    Science.gov (United States)

    Pakbaz, Marziyeh; Kazemisaeid, Ali; Yaminisharif, Ahmad; Davoodi, Gholamreza; Tokaldany, Masoumeh Lotfi; Hakki, Elham

    2013-03-01

Isolated right bundle branch block is a common finding in the general population. It may be associated with variations in detailed coronary anatomy characteristics. The aim of this study was to investigate the coronary anatomy in patients with isolated right bundle branch block and to compare it with that of normal individuals. In this case-control study we investigated the coronary anatomy by reviewing angiographic films in two groups of patients with normal coronary arteries: patients with right bundle branch block (RBBB) (n = 92) and those with normal electrocardiograms (n = 184). There was no significant difference between the two groups in terms of diminutive left anterior descending artery, dominancy, number of obtuse marginal, diagonal, and acute marginal arteries, the position of the first septal versus diagonal branch, presence of a ramus artery, and size of the left main artery. The number of septal branches was higher in the case group. A right-dominant circulatory system was more common in both groups, but cases showed a stronger tendency toward this pattern (p-value = 0.021). The frequency of a normal conus branch was higher in cases versus controls (p-value = 0.009). Coronary anatomy characteristics are somewhat different in subjects with RBBB compared to normal individuals.

  13. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

Full Text Available Parallel-structure moving mechanical systems are rigid, fast, and accurate. Among parallel systems, the Stewart platform stands out as the oldest, being fast, rigid, and precise. This work outlines the main elements of Stewart platforms, beginning with the platform geometry and its kinematic elements, and then presenting elements of its dynamics. The primary dynamic task is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile elements are then tracked by a rotation-matrix method. When a structural element consists of two components in relative translation, it is more convenient, for the drive train and especially for the dynamics, to represent it as a single moving component. There are thus seven moving parts: the six actuator legs plus the mobile platform, together with one fixed base.

  14. Plantar pressure differences among adults with mild flexible flatfoot, severe flexible flatfoot and normal foot when walking on level surface, walking upstairs and downstairs.

    Science.gov (United States)

    Zhai, Jun Na; Wang, Jue; Qiu, Yu Sheng

    2017-04-01

[Purpose] This study observed the plantar pressure of flexible flatfoot and normal foot under different walking conditions, to determine whether flexible flatfoot needs treatment and how the plantar pressure changes while walking upstairs and downstairs. [Subjects and Methods] Fifteen adults with mild flexible flatfoot, fifteen adults with severe flexible flatfoot and fifteen adults with normal foot were examined while walking on a level surface and walking up and down 10 cm and 20 cm stairs. The max force and the arch index were acquired using the RSscan system. A repeated-measures ANOVA was performed to analyze the data. [Results] Compared with normal foot, both the max force and the arch index of severe flatfoot were significantly increased under all walking conditions. When walking down 10 cm and 20 cm stairs, the plantar data of both normal foot and flatfoot were significantly increased. [Conclusion] The plantar pressures of severe flexible flatfoot were significantly larger than those of normal foot under all walking conditions. In addition, the arches of both normal foot and flatfoot were obviously deformed when walking downstairs. Treatment of severe flexible flatfoot is therefore necessary to prevent further deformation.

  15. Normalized lift: an energy interpretation of the lift coefficient simplifies comparisons of the lifting ability of rotating and flapping surfaces.

    Directory of Open Access Journals (Sweden)

    Phillip Burgers

Full Text Available For a century, researchers have used the standard lift coefficient C_L to evaluate the lift, L, generated by fixed wings over an area S against dynamic pressure, ½ρv², where v is the effective velocity of the wing. Because the lift coefficient was developed initially for fixed wings in steady flow, its application to other lifting systems requires either simplifying assumptions or complex adjustments, as is the case for flapping wings and rotating cylinders. This paper interprets the standard lift coefficient of a fixed wing slightly differently, as the work exerted by the wing on the surrounding flow field, L/(ρ·S), compared against the total kinetic energy required for generating said lift, ½v². This reinterpreted coefficient, the normalized lift, is derived from the work-energy theorem and compares the lifting capabilities of dissimilar lift systems on a similar energy footing. The normalized lift is the same as the standard lift coefficient for fixed wings, but differs for wings with more complex motions; it also accounts for such complex motions explicitly and without complex modifications or adjustments. We compare the normalized lift with the previously-reported values of lift coefficient for a rotating cylinder in Magnus effect, a bat during hovering and forward flight, and a hovering dipteran. The maximum standard lift coefficient for a fixed wing without flaps in steady flow is around 1.5, yet for a rotating cylinder it may exceed 9.0, a value that implies that a rotating cylinder generates nearly 6 times the maximum lift of a wing. The maximum normalized lift for a rotating cylinder is 1.5. We suggest that the normalized lift can be used to evaluate propellers, rotors, flapping wings of animals and micro air vehicles, and underwater thrust-generating fins in the same way the lift coefficient is currently used to evaluate fixed wings.
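The reinterpretation can be sketched numerically. For a fixed wing the two coefficients coincide; for a spinning cylinder the lift is charged against every motion component. The sketch below assumes the total kinetic energy term is ½Σuᵢ² over the relevant speeds, which is an illustrative reading of the normalization, not the paper's exact formulation:

```python
def lift_coefficient(lift, rho, v, area):
    """Standard C_L: lift measured against freestream dynamic pressure."""
    return lift / (0.5 * rho * v**2 * area)

def normalized_lift(lift, rho, area, speeds):
    """Work exerted on the flow, L/(rho*S), measured against the total
    kinetic energy per unit mass, 0.5 * sum(u_i^2) over all motion
    components (illustrative form of the work-energy argument)."""
    return (lift / (rho * area)) / (0.5 * sum(u**2 for u in speeds))

rho, S, v = 1.225, 1.0, 10.0
L = 0.5 * rho * v**2 * S * 9.0        # a Magnus-effect cylinder at C_L = 9.0
u = v * (9.0 / 1.5 - 1.0) ** 0.5      # spin speed that brings normalized lift to 1.5
print(round(normalized_lift(L, rho, S, [v, u]), 3))  # 1.5
```

With only the freestream speed supplied, `normalized_lift` reproduces C_L exactly; adding the spin speed collapses the cylinder's 9.0 to the 1.5 quoted for the rotating cylinder.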

  16. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  17. Enhancing data parallel applications with task parallelism

    OpenAIRE

    Fernández, Jacqueline; Guerrero, Roberto A.; Piccoli, María Fabiana; Printista, Alicia Marcela; Villalobos, M.

    2001-01-01

Most parallel applications contain data parallelism, and almost all discussion of its solutions has been limited to the simplest and least expressive form: flat data parallelism. Several generalizations of the flat data parallel model have been proposed because a large number of those applications need a combination of task and data parallelism to represent their natural computation structure and to achieve good performance in their results. Their aim is to allow the capability of combining the easi...

  18. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  19. Tenskinmetric Evaluation of Surface Energy Changes in Adult Skin: Evidence from 834 Normal Subjects Monitored in Controlled Conditions

    Directory of Open Access Journals (Sweden)

    Camilla Dal Bosco

    2014-03-01

Full Text Available To evaluate the influence of the skin aging critical level on the adult skin epidermal functional state, an improved analytical method based on skin surface energy measurement (TVS modeling) was developed. Tenskinmetric measurements were carried out non-invasively in controlled conditions by the contact angle method, using only a water drop as the reference standard liquid. Adult skin was monitored by the TVS Observatory according to a specific and controlled thermal protocol (the Camianta protocol) in use at the interconnected “Mamma Margherita Terme spa” of Terme Euganee. From June to November 2013, the surface free energy and the epidermal hydration level of adult skin were evaluated on the arrival of 265 male and 569 female adult volunteers (51–90 years of age) and again when they departed 2 weeks later. Measurements were sensitive to 0.1 mN/m, and high test compliance was obtained (93.2% of all guests). The high sensitivity and discrimination power of tenskinmetry combined with the thermal Camianta protocol demonstrate the possibility of evaluating, at the baseline level, the surface energy changes and the skin reactivity of adult skin.

  20. Silver nanoparticle based surface enhanced Raman scattering spectroscopy of diabetic and normal rat pancreatic tissue under near-infrared laser excitation

    International Nuclear Information System (INIS)

    Huang, H; Shi, H; Chen, W; Yu, Y; Lin, D; Xu, Q; Feng, S; Lin, J; Huang, Z; Li, Y; Chen, R

    2013-01-01

This paper presents the use of high-spatial-resolution silver-nanoparticle-based near-infrared surface enhanced Raman scattering (SERS) from rat pancreatic tissue to obtain biochemical information about the tissue. A high-quality SERS signal from a mixture of pancreatic tissue and silver nanoparticles can be obtained within 10 s using a Renishaw micro-Raman system. Prominent SERS bands of pancreatic tissue were assigned to known molecular vibrations, such as those of DNA bases, RNA bases, proteins and lipids. Different tissue structures of diabetic and normal rat pancreatic tissues have characteristic features in SERS spectra. This exploratory study demonstrated great potential for using SERS imaging to distinguish diabetic and normal pancreatic tissues on frozen sections without dye labeling of functionalized binding sites. (letter)

  1. Holocene Time-slip history of normal fault scarps in western Turkey: 36Cl surface exposure dating

    Science.gov (United States)

    Mozafari Amiri, N.; Sümer, Ö.; Tikhomirov, D.; Özkaymak, Ç.; Uzel, B.; Ivy-Ochs, S.; Vockenhuber, C.; Sözbilir, H.; Akçar, N.

    2016-12-01

Bedrock fault scarps developed in carbonates are the most direct evidence of past earthquakes and allow long-term seismic histories to be reconstructed using cosmogenic 36Cl. Western Anatolia is a seismically active region in which several major graben systems, formed mainly in carbonates, have developed under a roughly N-S extensional regime since the early Miocene. The oldest known earthquake in the Eastern Mediterranean and Middle East dates back to 464 B.C. However, to evaluate the earthquake pattern, a complete seismic record over a large time-scale is required. For modelling of seismic periods, a Matlab® code is used, based on the increase of the 36Cl production rate following exposure of fresh material to cosmic rays. By measuring the amount of cosmogenic 36Cl versus height on the fault surface, the timing of significant ruptures and the vertical displacements are explored. The best scenario is the one with the minimum difference between the modelled and measured 36Cl. An ideal target spot is a minimally eroded surface with a length of at least two meters from the intersection of the fault with the colluvium. After marking a continuous grid 10 cm high and 15 cm wide on the fault, samples 3 cm thick are collected. The geometrical factors of scarp dip, scarp height, top surface dip and colluvium dip are measured. Topographic shielding and the densities of the fault scarp and colluvium are also estimated. Afterwards, the samples are prepared physically and chemically in the laboratory for elemental analysis and AMS measurements. In this study, we collected 584 samples from seven major faults in western Anatolia. Our first results indicate five earthquake sequences on the Priene-Sazlı fault since the early Holocene, with a recurrence interval of approximately 2000 years and slips of 1.3 to 2.9 meters. The two most recent ruptures are correlated with the 1955 and AD 68 earthquakes. A slip rate of roughly 1 mm/yr throughout the activity periods is estimated. 
Regarding the rupture length, the fault has potential
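The modelling idea, 36Cl accumulating once a slip event exposes a fresh strip of the scarp, can be sketched with the standard buildup equation N(t) = (P/λ)(1 − e^(−λt)). The event ages, slips, and production rate below are illustrative placeholders, not the Priene-Sazlı results, and inheritance, erosion and geometric shielding are all ignored:

```python
import math

LAMBDA_CL36 = math.log(2) / 3.01e5   # 36Cl decay constant (half-life ~301 kyr)

def cl36_concentration(exposure_yr, production_rate):
    """Buildup under steady production P and decay: N(t) = (P/lam)*(1 - exp(-lam*t))."""
    lam = LAMBDA_CL36
    return production_rate / lam * (1.0 - math.exp(-lam * exposure_yr))

def scarp_profile(events, production_rate=20.0):
    """events: (age in years before present, slip in m), oldest first.
    Each earthquake exposes a fresh strip at the base of the free face, so
    older strips sit higher on the scarp and carry more 36Cl.
    Returns (bottom_m, top_m, concentration) tuples from scarp base upward."""
    total = sum(slip for _, slip in events)
    strips, top = [], total
    for age, slip in events:             # oldest event owns the highest strip
        strips.append((top - slip, top, cl36_concentration(age, production_rate)))
        top -= slip
    return list(reversed(strips))        # reorder base-to-top

# Three hypothetical Holocene events: ages in years BP, slips in meters.
profile = scarp_profile([(8000, 2.0), (6000, 1.5), (2000, 2.9)])
# Concentration steps up with height: each higher strip has been exposed longer.
```

Inverting the measured concentration-versus-height profile for the step positions and ages is what the described Matlab® modelling does, with the full geometric and shielding corrections included.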

  2. Plane parallel radiance transport for global illumination in vegetation

    Energy Technology Data Exchange (ETDEWEB)

    Max, N.; Mobley, C.; Keating, B.; Wu, E.H.

    1997-01-05

This paper applies plane parallel radiance transport techniques to scattering from vegetation. The leaves, stems, and branches are represented as a volume density of scattering surfaces, depending only on height and the vertical component of the surface normal. Ordinary differential equations are written for the multiply scattered radiance as a function of the height above the ground, with the sky radiance and ground reflectance as boundary conditions. They are solved using a two-pass integration scheme to unify the two-point boundary conditions, and Fourier series for the dependence on the azimuthal angle. The resulting radiance distribution is used to precompute diffuse and specular 'ambient' shading tables, as a function of height and surface normal, to be used in rendering, together with a z-buffer shadow algorithm for direct solar illumination.
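The two-pass scheme can be illustrated in a heavily simplified two-stream form: a downward pass marches the sky radiance through the canopy to the ground, and an upward pass starts from the ground-reflected value and marches back to the top. The sketch below keeps attenuation only (no scattering coupling between the streams) and uses illustrative coefficients, not the paper's Fourier-series formulation:

```python
import math

def two_pass_two_stream(sky, ground_albedo, sigma, dz=0.01, depth=1.0):
    """Toy two-pass integration through a homogeneous vegetation slab of
    scatterer density sigma, solving dI/dz = -sigma * I for each stream.
    Pass 1 propagates downwelling radiance from the sky boundary; pass 2
    propagates upwelling radiance from the ground boundary, mimicking the
    unification of the two-point boundary conditions."""
    n = round(depth / dz)
    down = [sky]
    for _ in range(n):                    # pass 1: top-down
        down.append(down[-1] * (1.0 - sigma * dz))
    up = [ground_albedo * down[-1]]       # ground reflectance closes the lower BC
    for _ in range(n):                    # pass 2: bottom-up
        up.append(up[-1] * (1.0 - sigma * dz))
    return down, list(reversed(up))       # both profiles ordered top-to-bottom

down, up = two_pass_two_stream(sky=1.0, ground_albedo=0.2, sigma=2.0)
# down[-1] approximates exp(-2); up[0] approximates 0.2 * exp(-4)
```

Tabulating such height-dependent profiles, per surface-normal direction and azimuthal Fourier mode, is what produces the paper's precomputed shading tables.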

  3. Locating critical points on multi-dimensional surfaces by genetic algorithm: test cases including normal and perturbed argon clusters

    Science.gov (United States)

    Chaudhury, Pinaki; Bhattacharyya, S. P.

    1999-03-01

It is demonstrated that a Genetic Algorithm in a floating-point realisation can be a viable tool for locating critical points on a multi-dimensional potential energy surface (PES). For small clusters, the standard algorithm works well. For bigger ones, the search for the global minimum becomes more efficient when used in conjunction with coordinate stretching, and with partitioning of the strings into a core part and an outer part which are alternately optimized. The method works with equal facility for locating minima, local as well as global, and saddle points (SPs) of arbitrary orders. The search for minima requires computation of the gradient vector, but not the Hessian, while that for SPs requires the gradient vector and the Hessian, the latter only at some specific points on the path. The method proposed is tested on (i) a model 2-d PES, (ii) argon clusters (Ar4-Ar30), in which argon atoms interact via the Lennard-Jones potential, and (iii) ArmX (m = 12) clusters, where X may be a neutral atom or a cation. We also explore whether the method could be used to construct what may be called a stochastic representation of the reaction path on a given PES, with reference to conformational changes in Arn clusters.
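A floating-point GA of the kind described can be sketched for its simplest task, minimizing the Lennard-Jones energy of a small argon cluster in reduced units. The population size, tournament selection, blend crossover and mutation width below are generic choices, not the authors' settings, and no coordinate stretching or core/outer partitioning is attempted:

```python
import random

def lj_energy(coords):
    """Total Lennard-Jones energy in reduced units (eps = sigma = 1).
    coords is a flat list [x0, y0, z0, x1, ...]."""
    n = len(coords) // 3
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r2 = sum((coords[3*i + k] - coords[3*j + k]) ** 2 for k in range(3))
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 * inv6 - inv6)
    return e

def ga_minimize(n_atoms=4, pop=40, gens=300, seed=1):
    """Floating-point GA: tournament selection, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    dim = 3 * n_atoms
    population = [[rng.uniform(-1.5, 1.5) for _ in range(dim)] for _ in range(pop)]
    best = min(population, key=lj_energy)
    for _ in range(gens):
        new_pop = [best]                               # elitism
        while len(new_pop) < pop:
            p1 = min(rng.sample(population, 3), key=lj_energy)   # tournament
            p2 = min(rng.sample(population, 3), key=lj_energy)
            a = rng.random()                           # blend crossover + mutation
            child = [a * x + (1 - a) * y + rng.gauss(0.0, 0.05)
                     for x, y in zip(p1, p2)]
            new_pop.append(child)
        population = new_pop
        best = min(population, key=lj_energy)
    return best, lj_energy(best)

coords, energy = ga_minimize()
```

The Ar4 global minimum is a regular tetrahedron with energy −6.0 in these units; a short run like this improves steadily toward that value without guaranteeing convergence, which is why the paper augments the basic scheme for larger clusters.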

  4. nth roots of normal contractions

    International Nuclear Information System (INIS)

    Duggal, B.P.

    1992-07-01

Given a complex separable Hilbert space H and a contraction A on H such that A^n, for some integer n ≥ 2, is normal, it is shown that if the defect operator D_A = (1 - A*A)^{1/2} is of the Hilbert-Schmidt class, then A is similar to a normal contraction, either A or A^2 is normal, and if A^2 is normal (but A is not) then there is a normal contraction N and a positive definite contraction P of trace class such that ‖A - N‖_1 = ½‖P + P‖_1 (where ‖·‖_1 denotes the trace norm). If T is a compact contraction whose characteristic function admits a scalar factor, if T = A^n for some integer n ≥ 2 and a contraction A with simple eigenvalues, and if both T and A satisfy a ''reductive property'', then A is a compact normal contraction. (author). 16 refs

  5. Early developmental expression of a normally tumor-associated and drug-inhibited cell surface-located NADH oxidase (ENOX2) in non-cancer cells.

    Science.gov (United States)

    Cho, NaMi; Morré, D James

    2009-04-01

Full-length mRNA to a drug-inhibited cell surface NADH oxidase, tNOX or ENOX2, is present in both non-cancer and cancer cells but is translated only in cancer cells as alternatively spliced variants. ENOX2 is a growth-related protein of the external plasma membrane surface that is shed into the circulation and is inhibited by a series of quinone site inhibitors with anticancer activity. To test the possibility that ENOX2 expression might be important to early stages of non-cancer cell development, the expression of the protein was monitored in chicken embryos during their development. Polyclonal antisera to a 34 kDa human serum form of ENOX2 cross-immunoreactive with the drug-responsive NADH oxidase of chicken hepatoma cells were used. The protein was identified based on drug-responsive enzymatic activities and analyses by western blots. The drug-responsive activity was associated with plasma membranes and sera of early chicken embryos and with chicken hepatoma plasma membranes but was absent from plasma membranes prepared from livers or from sera of normal adult chickens and from late embryo stages. The findings suggest that ENOX2 may fulfill some functions essential to the growth of early embryos which are lost in late embryo stages and absent from normal adult cells but which then reappear in cancer.

  6. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  7. Surface area normalized dissolution to study differences in itraconazole-copovidone solid dispersions prepared by spray-drying and hot melt extrusion.

    Science.gov (United States)

    Bhardwaj, Vivekanand; Trasi, Niraj S; Zemlyanov, Dmitry Y; Taylor, Lynne S

    2018-04-05

Amorphous solid dispersions of itraconazole (ITZ) and copovidone (PVPVA 64) at 1:1 to 1:9 drug-polymer ratios were prepared using spray-drying (SD) and hot melt (HM) extrusion for comparative evaluation. Surface area normalized dissolution studies were carried out using a modified intrinsic dissolution rate (IDR) assembly, and the rates of release of both drug and polymer were quantified using ultraviolet spectroscopy. The melt quenched amorphous form of ITZ provided an 18-fold dissolution advantage over the crystalline form. In general, dispersions prepared by either SD or HM showed similar dissolution profiles in terms of drug release. Both drug-controlled and polymer-controlled ITZ dissolution rates were observed, depending on the drug loading, where a switch from a drug-controlled to a polymer-controlled regime was observed when the drug loading was approximately 20% or lower. The impact of the spray drying solvent composition was studied and found to have a large effect on the drug release rate for dispersions containing a drug loading of 20%. Electron microscopy showed differences in surface morphology (scanning) and internal structure (transmission) in these dispersions as a function of solvent system. X-ray photoelectron spectroscopy (XPS) revealed differences in the surface composition of drug and polymer whereby poorly dissolving systems showed drug enrichment. This study provides insight into the complex interplay between formulation, processing and performance of amorphous solid dispersion systems. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Histochemical evidence for the differential surface labeling, uptake, and intracellular transport of a colloidal gold-labeled insulin complex by normal human blood cells

    International Nuclear Information System (INIS)

    Ackerman, G.A.; Wolken, K.W.

    1981-01-01

    A colloidal gold-labeled insulin-bovine serum albumin (GIA) reagent has been developed for the ultrastructural visualization of insulin binding sites on the cell surface and for tracing the pathway of intracellular insulin translocation. When applied to normal human blood cells, it was demonstrated by both visual inspection and quantitative analysis that the extent of surface labeling, as well as the rate and degree of internalization of the insulin complex, was directly related to cell type. Further, the pathway of insulin (GIA) transport via round vesicles and by tubulo-vesicles and saccules, and its subsequent fate in the hemic cells, was also related to cell variety. Monocytes, followed by neutrophils, bound the greatest amount of labeled insulin. The majority of lymphocytes bound and internalized little GIA; however, 5%-10% of the lymphocytes were found to bind considerable quantities of GIA. Erythrocytes rarely bound the labeled insulin complex, while platelets were noted to sequester large quantities of the GIA within their extracellular canalicular system. GIA uptake by the various types of leukocytic cells appeared to occur primarily by micropinocytosis and by the direct opening of cytoplasmic tubulo-vesicles and saccules onto the cell surface in regions directly underlying surface-bound GIA. Control procedures, viz., competitive inhibition of GIA labeling using an excess of unlabeled insulin in the incubation medium, preincubation of the GIA reagent with an antibody directed toward porcine insulin, and the incorporation of 125I-insulin into the GIA reagent, indicated the specificity and selectivity of the GIA histochemical procedure for the localization of insulin binding sites.

  9. Constructing Fluorine-Free and Cost-Effective Superhydrophobic Surface with Normal-Alcohol-Modified Hydrophobic SiO2 Nanoparticles.

    Science.gov (United States)

    Ye, Hui; Zhu, Liqun; Li, Weiping; Liu, Huicong; Chen, Haining

    2017-01-11

    Superhydrophobic coatings have drawn much attention in recent years for their wide potential applications. However, a simple, cost-effective, and environmentally friendly approach is still lacking. Herein, a promising approach using nonhazardous chemicals was proposed, in which multiple hydrophobic functionalized silica nanoparticles (SiO2 NPs) were first prepared as the core component, through an efficient reaction between amino-group-containing SiO2 NPs and isocyanate-containing hydrophobic surface modifiers synthesized from normal alcohols, followed by simple spraying onto various substrates for superhydrophobic functionalization. Furthermore, to improve the mechanical durability, an organic-inorganic composite superhydrophobic coating was fabricated by incorporating a cross-linking agent (polyisocyanate) into the mixture of hydrophobic-functionalized SiO2 NPs and hydroxyl acrylic resin. The hybrid coating with cross-linked network structures is very stable, with excellent mechanical durability, self-cleaning properties, and corrosion resistance.

  10. Identification of a Developmental Gene Expression Signature, Including HOX Genes, for the Normal Human Colonic Crypt Stem Cell Niche: Overexpression of the Signature Parallels Stem Cell Overpopulation During Colon Tumorigenesis

    OpenAIRE

    Bhatlekar, Seema; Addya, Sankar; Salunek, Moreh; Orr, Christopher R.; Surrey, Saul; McKenzie, Steven; Fields, Jeremy Z.; Boman, Bruce M.

    2013-01-01

    Our goal was to identify a unique gene expression signature for human colonic stem cells (SCs). Accordingly, we determined the gene expression pattern for a known SC-enriched region—the crypt bottom. Colonic crypts and isolated crypt subsections (top, middle, and bottom) were purified from fresh, normal, human, surgical specimens. We then used an innovative strategy that used two-color microarrays (∼18,500 genes) to compare gene expression in the crypt bottom with expression in the other cryp...

  11. Relationship of Echocardiographic Z Scores Adjusted for Body Surface Area to Age, Sex, Race, and Ethnicity: The Pediatric Heart Network Normal Echocardiogram Database.

    Science.gov (United States)

    Lopez, Leo; Colan, Steven; Stylianou, Mario; Granger, Suzanne; Trachtenberg, Felicia; Frommelt, Peter; Pearson, Gail; Camarda, Joseph; Cnota, James; Cohen, Meryl; Dragulescu, Andreea; Frommelt, Michele; Garuba, Olukayode; Johnson, Tiffanie; Lai, Wyman; Mahgerefteh, Joseph; Pignatelli, Ricardo; Prakash, Ashwin; Sachdeva, Ritu; Soriano, Brian; Soslow, Jonathan; Spurney, Christopher; Srivastava, Shubhika; Taylor, Carolyn; Thankavel, Poonam; van der Velde, Mary; Minich, LuAnn

    2017-11-01

    Published nomograms of pediatric echocardiographic measurements are limited by insufficient sample size to assess the effects of age, sex, race, and ethnicity. Variable methodologies have resulted in a wide range of Z scores for a single measurement. This multicenter study sought to determine Z scores for common measurements adjusted for body surface area (BSA) and stratified by age, sex, race, and ethnicity. Data collected from healthy nonobese children ≤18 years of age at 19 centers with a normal echocardiogram included age, sex, race, ethnicity, height, weight, echocardiographic images, and measurements performed at the Core Laboratory. Z score models involved indexed parameters (X/BSA^α) that were normally distributed without residual dependence on BSA. The models were tested for the effects of age, sex, race, and ethnicity. Raw measurements from models with and without these effects were compared; statistically significant effects of age, sex, race, and ethnicity were observed for all outcomes, but all effects were clinically insignificant based on comparisons of models with and without the effects, resulting in Z scores independent of age, sex, race, and ethnicity for each measurement. Echocardiographic Z scores based on BSA were derived from a large, diverse, and healthy North American population. Age, sex, race, and ethnicity have small effects on the Z scores that are statistically significant but not clinically important. © 2017 American Heart Association, Inc.
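The BSA-indexing scheme described above (X/BSA^α, then standardization against the normal population) can be sketched in a few lines. All names and numeric values below are invented for illustration; they are not parameters from the study.

```python
# Hypothetical sketch of a BSA-indexed Z score: a raw measurement X is
# indexed as X / BSA**alpha so the indexed value no longer trends with
# body size, then standardized against the reference population's mean
# and standard deviation. Every number here is made up.

def z_score(x, bsa, alpha, mu, sigma):
    indexed = x / bsa ** alpha       # indexed parameter X / BSA**alpha
    return (indexed - mu) / sigma    # standardize against the reference population

# Illustrative values only (not from the Pediatric Heart Network data):
print(round(z_score(x=1.8, bsa=1.0, alpha=0.5, mu=1.5, sigma=0.15), 2))  # 2.0
```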

  12. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  13. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as those for linear arrays, mesh-connected computers, and cube-connected computers. Another setting where such algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the
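As a concrete illustration of the linear-array model mentioned above, odd-even transposition sort is the textbook parallel sort for a linear array of processors. The sketch below is an assumed example, not code from the book; it runs the phases sequentially in Python, but within each phase every compare-exchange is independent, which is where the parallelism comes from.

```python
# Odd-even transposition sort: n phases alternate between comparing
# even-indexed pairs (0,1),(2,3),... and odd-indexed pairs (1,2),(3,4),...
# All compare-exchanges inside one phase touch disjoint pairs, so on a
# linear array of processors each phase runs in O(1) time.

def odd_even_transposition_sort(a):
    a = list(a)
    n = len(a)
    for phase in range(n):          # n phases always suffice
        start = phase % 2           # even phase -> start 0, odd phase -> start 1
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 1, 4, 2, 8, 0, 3]))  # [0, 1, 2, 3, 4, 5, 8]
```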

  14. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application task (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  15. Comparison of Placido disc and Scheimpflug image-derived topography-guided excimer laser surface normalization combined with higher fluence CXL: the Athens Protocol, in progressive keratoconus

    Directory of Open Access Journals (Sweden)

    Kanellopoulos AJ

    2013-07-01

    Full Text Available Anastasios John Kanellopoulos,1,2 George Asimellis1; 1Laservision.gr Eye Institute, Athens, Greece; 2New York University School of Medicine, Department of Ophthalmology, New York, NY, USA. Background: The purpose of this study was to compare the safety and efficacy of two alternative corneal topography data sources used in topography-guided excimer laser normalization, combined with corneal collagen cross-linking in the management of keratoconus using the Athens protocol, i.e., a Placido disc imaging device and a Scheimpflug imaging device. Methods: A total of 181 consecutive patients with keratoconus who underwent the Athens protocol between 2008 and 2011 were studied preoperatively and at months 1, 3, 6, and 12 postoperatively for visual acuity, keratometry, and anterior surface corneal irregularity indices. Two groups were formed depending on the primary source used for topography-guided photoablation, i.e., group A (Placido disc) and group B (Scheimpflug rotating camera). One-year changes in visual acuity, keratometry, and seven anterior surface corneal irregularity indices were studied in each group. Results: Changes in visual acuity, expressed as the difference between postoperative and preoperative corrected distance visual acuity, were +0.12 ± 0.20 (range +0.60 to -0.45) for group A and +0.19 ± 0.20 (range +0.75 to -0.30) for group B. In group A, K1 (flat keratometry) changed from 45.202 ± 3.782 D to 43.022 ± 3.819 D, indicating a flattening of -2.18 D, and K2 (steep keratometry) changed from 48.670 ± 4.066 D to 45.865 ± 4.794 D, indicating a flattening of -2.805 D. In group B, K1 (flat keratometry) changed from 46.213 ± 4.082 D to 43.190 ± 4.398 D, indicating a flattening of -3.023 D, and K2 (steep keratometry) changed from 50.774 ± 5.210 D to 46.380 ± 5.006 D, indicating a flattening of -4.394 D. For group A, the index of surface variance decreased by 5.07% and the index of height decentration by 26.81%. In group B, the index of surface variance

  16. Drought to flood: a comparative assessment of four parallel surface water treatments during the 2010-2012 inflows to the Murray-Darling Basin, South Australia.

    Science.gov (United States)

    Braun, Kalan; Fabris, Rolando; Morran, Jim; Ho, Lionel; Drikas, Mary

    2014-08-01

    Four treatment processes were operated in parallel using the same source water from the Murray-Darling Basin in South Australia: conventional coagulation; magnetic ion exchange (MIEX)/coagulation, with and without granular activated carbon (GAC); and membrane treatment combining microfiltration (MF) and nanofiltration (NF). During the two-year study, high levels of natural organic matter and turbidity arising from floods affecting the Murray-Darling Basin in 2010-2012 challenged the four processes. The comparative study indicated that all four processes could effectively meet basic water quality guidelines for turbidity and colour despite challenging source water quality, but that the more advanced treatments improved overall organic and bacterial removal. Interestingly, the high organics and turbidity arising from the floods resulted in improved treatment efficiency for all treatments incorporating coagulation, to the extent that, despite flood conditions, treated water quality could remain comparatively constant provided that the process was operated and optimised effectively. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  18. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  19. Parallel simulation today

    Science.gov (United States)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  20. An Algorithm for Parallel Sn Sweeps on Unstructured Meshes

    International Nuclear Information System (INIS)

    Pautz, Shawn D.

    2002-01-01

    A new algorithm for performing parallel Sn sweeps on unstructured meshes is developed. The algorithm uses a low-complexity list ordering heuristic to determine a sweep ordering on any partitioned mesh. For typical problems and with 'normal' mesh partitionings, nearly linear speedups on up to 126 processors are observed. This is an important and desirable result, since although analyses of structured meshes indicate that parallel sweeps will not scale with normal partitioning approaches, no severe asymptotic degradation in the parallel efficiency is observed with modest (≤100) levels of parallelism. This result is a fundamental step in the development of efficient parallel Sn methods.
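The sweep-ordering problem above can be pictured as scheduling the cells of a dependence DAG: for one angular direction, a cell depends on its upstream neighbors across shared faces. The Python sketch below is not Pautz's list-ordering heuristic; it is a generic level-by-level (Kahn-style) topological grouping, included only to illustrate how cells whose upstream dependencies are satisfied can be swept concurrently.

```python
# Group mesh cells into "levels" for one sweep direction: every cell in a
# level has all of its upstream neighbors in earlier levels, so the whole
# level can be swept in parallel. deps maps cell -> set of upstream cells.

from collections import defaultdict, deque

def sweep_levels(deps):
    indeg = {c: len(d) for c, d in deps.items()}
    downstream = defaultdict(list)
    for c, d in deps.items():
        for u in d:
            downstream[u].append(c)
    frontier = deque(c for c, k in indeg.items() if k == 0)  # inflow-boundary cells
    levels = []
    while frontier:
        levels.append(sorted(frontier))
        nxt = deque()
        for c in frontier:
            for w in downstream[c]:
                indeg[w] -= 1
                if indeg[w] == 0:       # all upstream cells swept
                    nxt.append(w)
        frontier = nxt
    return levels

# Tiny 4-cell mesh, one direction: cell 0 feeds cells 1 and 2, which feed 3.
print(sweep_levels({0: set(), 1: {0}, 2: {0}, 3: {1, 2}}))  # [[0], [1, 2], [3]]
```

The number of levels bounds the sequential depth of the sweep; the width of the largest level bounds the usable parallelism.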

  1. Insights into the Hendra virus NTAIL-XD complex: Evidence for a parallel organization of the helical MoRE at the XD surface stabilized by a combination of hydrophobic and polar interactions.

    Science.gov (United States)

    Erales, Jenny; Beltrandi, Matilde; Roche, Jennifer; Maté, Maria; Longhi, Sonia

    2015-08-01

    The Hendra virus is a member of the Henipavirus genus within the Paramyxoviridae family. The nucleoprotein, which consists of a structured core and of a C-terminal intrinsically disordered domain (N(TAIL)), encapsidates the viral genome within a helical nucleocapsid. N(TAIL) partly protrudes from the surface of the nucleocapsid, being thus capable of interacting with the C-terminal X domain (XD) of the viral phosphoprotein. Interaction with XD implies a molecular recognition element (MoRE) that is located within N(TAIL) residues 470-490, and that undergoes α-helical folding. The MoRE has been proposed to be embedded in the hydrophobic groove delimited by helices α2 and α3 of XD, although experimental data could not discriminate between a parallel and an antiparallel orientation of the MoRE. Previous studies also showed that although the binding interface is enriched in hydrophobic residues, charged residues located close to the interface might play a role in complex formation. Here, we targeted for site-directed mutagenesis two acidic and two basic residues within XD and N(TAIL). ITC studies showed that electrostatics plays a crucial role in complex formation and pointed to a parallel orientation of the MoRE as more likely. Further support for a parallel orientation was afforded by SAXS studies that made use of two chimeric constructs in which XD and the MoRE were covalently linked to each other. Altogether, these studies unveiled the multiparametric nature of the interactions established within this complex and contribute to shed light onto the molecular features of protein interfaces involving intrinsically disordered regions. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
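To make the spatial-decomposition idea concrete, here is a minimal Python sketch (illustrative only, with a toy pair potential rather than a real force field): space is binned into cells at least as wide as the interaction cutoff, so every in-range pair lies in the same or an adjacent cell, and each cell's work could be owned by a different processor.

```python
# Spatial decomposition via cell lists for a short-range pair interaction.
# Cells are cutoff-wide, so a particle only interacts with particles in its
# own cell and the 8 neighboring cells (2-D). In a parallel MD code, each
# cell (or block of cells) would be assigned to a different processor.

import math

def cell_index(p, cell_size):
    return (int(p[0] // cell_size), int(p[1] // cell_size))

def pair_energy(positions, cutoff):
    """Toy pair potential u(r) = (cutoff - r)**2 for r < cutoff, via cell lists."""
    cells = {}
    for i, p in enumerate(positions):
        cells.setdefault(cell_index(p, cutoff), []).append(i)
    total = 0.0
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), []):
                    for i in members:
                        if i < j:  # count each pair exactly once
                            r = math.dist(positions[i], positions[j])
                            if r < cutoff:
                                total += (cutoff - r) ** 2
    return total
```

Only the bookkeeping changes with the decomposition; the result matches an all-pairs O(N²) sum, which is how such decompositions are typically validated.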

  3. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of available processor cores compared to its sequential counterpart, thereby taking full advantage of multicore parallelism. The parallel buffer tree is a search tree data structure that supports the batched parallel processing of a sequence of N insertions, deletions, membership queries, and range queries in the optimal O(psort(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and psort(N) is the parallel I/O complexity of sorting N elements using P processors.

  4. Altered B Cell Homeostasis in Patients with Major Depressive Disorder and Normalization of CD5 Surface Expression on Regulatory B Cells in Treatment Responders.

    Science.gov (United States)

    Ahmetspahic, Diana; Schwarte, Kathrin; Ambrée, Oliver; Bürger, Christian; Falcone, Vladislava; Seiler, Katharina; Kooybaran, Mehrdad Rahbar; Grosse, Laura; Roos, Fernand; Scheffer, Julia; Jörgens, Silke; Koelkebeck, Katja; Dannlowski, Udo; Arolt, Volker; Scheu, Stefanie; Alferink, Judith

    2018-03-01

    Pro-inflammatory activity and cell-mediated immune responses have been widely observed in patients with major depressive disorder (MDD). Besides their well-known function as antibody-producers, B cells play a key role in inflammatory responses by secreting pro- and anti-inflammatory factors. However, homeostasis of specific B cell subsets has not been comprehensively investigated in MDD. In this study, we characterized circulating B cells of distinct developmental steps including transitional, naïve-mature, antigen-experienced switched, and non-switched memory cells, plasmablasts and regulatory B cells by multi-parameter flow cytometry. In a 6-weeks follow-up, circulating B cells were monitored in a small group of therapy responders and non-responders. Frequencies of naïve IgD+CD27- B cells, but not IgD+CD27+ memory B cells, were reduced in severely depressed patients as compared to healthy donors (HD) or mildly to moderately depressed patients. Specifically, B cells with immune-regulatory capacities such as CD1d+CD5+ B cells and CD24+CD38hi transitional B cells were reduced in MDD. Also Bm1-Bm5 classification in MDD revealed reduced Bm2' cells comprising germinal center founder cells as well as transitional B cells. We further found that reduced CD5 surface expression on transitional B cells was associated with severe depression and normalized exclusively in clinical responders. This study demonstrates a compromised peripheral B cell compartment in MDD with a reduction in B cells exhibiting a regulatory phenotype. Recovery of CD5 surface expression on transitional B cells in clinical response, a molecule involved in activation and down-regulation of B cell responses, further points towards a B cell-dependent process in the pathogenesis of MDD.

  5. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  6. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  7. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
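As a small worked example of one of the patterns named above, a prefix scan can be computed in O(log n) data-parallel steps (the Hillis-Steele scheme). The Python sketch below, an assumed illustration rather than material from the presentation, simulates the steps sequentially; every addition within a step is independent and could run concurrently.

```python
# Hillis-Steele inclusive scan: after step with stride s, each element
# holds the sum of itself and the s elements before it that are already
# partial sums, so log2(n) doubling steps produce all prefix sums.

def inclusive_scan(a):
    a = list(a)
    step = 1
    while step < len(a):
        # All positions update from the *previous* array, so in a parallel
        # implementation every element can be computed simultaneously.
        a = [a[i] + a[i - step] if i >= step else a[i] for i in range(len(a))]
        step *= 2
    return a

print(inclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # [3, 4, 11, 11, 15, 16, 22, 25]
```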

  8. PCLIPS: Parallel CLIPS

    Science.gov (United States)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear speedups in some cases, are possible.

  9. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
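The undersampling-aliasing relationship described above can be demonstrated in one dimension with a toy discrete Fourier transform (pure Python, hypothetical signal): keeping only every second k-space sample folds together pixels half a field of view apart, which is exactly the aliasing that SENSE- or GRAPPA-type reconstructions must undo using coil sensitivity information.

```python
# 1-D demo of k-space undersampling. With an unnormalized forward DFT and
# a 1/n inverse DFT, reconstructing from every second k-space sample (R = 2)
# superimposes each pixel with the pixel half a FOV away.

import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

x = [0, 1, 2, 3, 4, 5, 6, 7]      # toy "image"
k_full = dft(x)                   # fully sampled k-space
k_half = k_full[0::2]             # acquire only every other k-space line
aliased = [v.real for v in idft(k_half)]
# Each aliased pixel is the sum of two pixels half a FOV apart:
print([round(v, 6) for v in aliased])  # [4.0, 6.0, 8.0, 10.0]
```

Parallel imaging recovers the unaliased pixels from such folded values because each receiver coil weights the folded pixels differently, yielding a solvable linear system.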

  10. Lipoxin A4 stimulates calcium-activated chloride currents and increases airway surface liquid height in normal and cystic fibrosis airway epithelia.

    LENUS (Irish Health Repository)

    2012-01-01

    Cystic Fibrosis (CF) is a genetic disease characterised by a deficit in epithelial Cl(-) secretion which in the lung leads to airway dehydration and a reduced Airway Surface Liquid (ASL) height. The endogenous lipoxin LXA(4) is a member of the newly identified eicosanoids playing a key role in ending the inflammatory process. Levels of LXA(4) are reported to be decreased in the airways of patients with CF. We have previously shown that in normal human bronchial epithelial cells, LXA(4) produced a rapid and transient increase in intracellular Ca(2+). We have investigated the effect of LXA(4) on Cl(-) secretion and the functional consequences on ASL generation in bronchial epithelial cells obtained from CF and non-CF patient biopsies and in bronchial epithelial cell lines. We found that LXA(4) stimulated a rapid intracellular Ca(2+) increase in all of the different CF bronchial epithelial cells tested. In non-CF and CF bronchial epithelia, LXA(4) stimulated whole-cell Cl(-) currents which were inhibited by NPPB (calcium-activated Cl(-) channel inhibitor) and BAPTA-AM (chelator of intracellular Ca(2+)) but not by CFTRinh-172 (CFTR inhibitor). We found, using confocal imaging, that LXA(4) increased the ASL height in non-CF and in CF airway bronchial epithelia. The LXA(4) effect on ASL height was sensitive to bumetanide, an inhibitor of transepithelial Cl(-) secretion. The LXA(4) stimulation of intracellular Ca(2+), whole-cell Cl(-) currents, conductances and ASL height were inhibited by Boc-2, a specific antagonist of the ALX/FPR2 receptor. Our results provide, for the first time, evidence for a novel role of LXA(4) in the stimulation of intracellular Ca(2+) signalling leading to Ca(2+)-activated Cl(-) secretion and enhanced ASL height in non-CF and CF bronchial epithelia.

  11. General Rotational Surfaces in Pseudo-Euclidean 4-Space with Neutral Metric

    OpenAIRE

    Aleksieva, Yana; Milousheva, Velichka; Turgay, Nurettin Cenk

    2016-01-01

    We define general rotational surfaces of elliptic and hyperbolic type in the pseudo-Euclidean 4-space with neutral metric which are analogous to the general rotational surfaces of C. Moore in the Euclidean 4-space. We study Lorentz general rotational surfaces with plane meridian curves and give the complete classification of minimal general rotational surfaces of elliptic and hyperbolic type, general rotational surfaces with parallel normalized mean curvature vector field, flat general rotati...

  12. On the interaction of a submerged turbulent jet with a clean or contaminated free surface

    Science.gov (United States)

    Anthony, Douglas G.; Hirsa, Amir; Willmarth, William W.

    1991-02-01

    The effect of a free surface on the structure of a submerged turbulent jet is investigated experimentally. Three-component LDV measurements beneath a clean free surface show that the mean flow spreads laterally outward in a shallow surface layer much wider than the mean flow well below the surface. As the free surface is approached, velocity fluctuations normal to the surface are diminished while those parallel to the surface are enhanced. Laser-induced fluorescence is used to show that the surface layer contains fluid ejected from the jet. With the addition of surface-active agents, the surface layer is suppressed.

  13. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode, and relative to scalar calculations it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90.

  14. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  15. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, referred to here as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
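Of the four algorithms named, parallel cyclic reduction is the easiest to illustrate compactly. The sketch below is a generic serial rendering of the method (not code from the paper), solving a tridiagonal system of size n = 2^m - 1 by repeatedly eliminating every other unknown; at each level the eliminations, and later the back-substitutions, are mutually independent, which is the source of the total parallelism.

```python
# Cyclic reduction for a tridiagonal system T x = d, with sub-diagonal a,
# diagonal b, super-diagonal c (a[0] = c[n-1] = 0), size n = 2**m - 1.
def cyclic_reduction(a, b, c, d):
    n = len(b)
    if n == 1:
        return [d[0] / b[0]]
    # eliminate the even-indexed unknowns (0-based); all iterations of this
    # loop are independent and could run on separate processors
    a2, b2, c2, d2 = [], [], [], []
    for i in range(1, n, 2):
        al = a[i] / b[i - 1]
        be = c[i] / b[i + 1]
        a2.append(-al * a[i - 1])
        b2.append(b[i] - al * c[i - 1] - be * a[i + 1])
        c2.append(-be * c[i + 1])
        d2.append(d[i] - al * d[i - 1] - be * d[i + 1])
    x_odd = cyclic_reduction(a2, b2, c2, d2)
    x = [0.0] * n
    for j, i in enumerate(range(1, n, 2)):
        x[i] = x_odd[j]
    # back-substitute the even-indexed unknowns (also independent)
    for i in range(0, n, 2):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < n - 1 else 0.0
        x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
    return x

# 1-D Poisson matrix (-1, 2, -1) with unit right-hand side, n = 7
n = 7
a = [0.0] + [-1.0] * (n - 1)
b = [2.0] * n
c = [-1.0] * (n - 1) + [0.0]
d = [1.0] * n
x = cyclic_reduction(a, b, c, d)
res = max(abs(a[i] * (x[i - 1] if i else 0.0) + b[i] * x[i]
              + c[i] * (x[i + 1] if i < n - 1 else 0.0) - d[i])
          for i in range(n))
print(res)  # residual, ~0
```

With log2(n+1) levels and all work within a level independent, the method maps naturally onto the hypercube architectures the abstract refers to.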

  16. Fluid involvement in normal faulting

    Science.gov (United States)

    Sibson, Richard H.

    2000-04-01

    Evidence of fluid interaction with normal faults comes from their varied role as flow barriers or conduits in hydrocarbon basins and as hosting structures for hydrothermal mineralisation, and from fault-rock assemblages in exhumed footwalls of steep active normal faults and metamorphic core complexes. These last suggest involvement of predominantly aqueous fluids over a broad depth range, with implications for fault shear resistance and the mechanics of normal fault reactivation. A general downwards progression in fault rock assemblages (high-level breccia-gouge (often clay-rich) → cataclasites → phyllonites → mylonite → mylonitic gneiss with the onset of greenschist phyllonites occurring near the base of the seismogenic crust) is inferred for normal fault zones developed in quartzo-feldspathic continental crust. Fluid inclusion studies in hydrothermal veining from some footwall assemblages suggest a transition from hydrostatic to suprahydrostatic fluid pressures over the depth range 3-5 km, with some evidence for near-lithostatic to hydrostatic pressure cycling towards the base of the seismogenic zone in the phyllonitic assemblages. Development of fault-fracture meshes through mixed-mode brittle failure in rock-masses with strong competence layering is promoted by low effective stress in the absence of thoroughgoing cohesionless faults that are favourably oriented for reactivation. Meshes may develop around normal faults in the near-surface under hydrostatic fluid pressures to depths determined by rock tensile strength, and at greater depths in overpressured portions of normal fault zones and at stress heterogeneities, especially dilational jogs. Overpressures localised within developing normal fault zones also determine the extent to which they may reutilise existing discontinuities (for example, low-angle thrust faults). Brittle failure mode plots demonstrate that reactivation of existing low-angle faults under vertical σ1 trajectories is only likely if

  17. Surface aggregation of Candida albicans on glass in the absence and presence of adhering Streptococcus gordonii in a parallel-plate flow chamber : A surface thermodynamical analysis based on acid-base interactions

    NARCIS (Netherlands)

    Millsap, KW; Busscher, HJ; van der Mei, HC; Bos, R.R.M.

    1999-01-01

    Adhesive interactions between yeasts and bacteria are important in the maintenance of infectious mixed biofilms on natural and biomaterial surfaces in the human body. In this study, the extended DLVO (Derjaguin-Landau-Verwey-Overbeek) approach has been applied to explain adhesive interactions

  18. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions are considered in these lectures and illustrated by examples. (orig.)

  19. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  20. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts is shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
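The core of SENSE-type reconstruction can be illustrated with a toy example: at an acceleration factor R = 2, each pixel of the aliased image is a superposition of two true pixels weighted by the coil sensitivities, so unfolding reduces to solving a 2x2 linear system per pixel. The sketch below is purely illustrative (the image and sensitivity values are made up, and real data are complex-valued and noisy):

```python
# Toy SENSE unfolding: acceleration R = 2, two coils, 1-D "image" of 8 pixels.
# Undersampling folds pixel k onto pixel k + N/2; two coil measurements give
# a 2x2 linear system per folded pixel.
N = 8
rho = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]   # true image (made up)
s1 = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3]    # coil 1 sensitivity profile
s2 = [0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]    # coil 2 sensitivity profile

half = N // 2
# simulate the aliased (folded) image seen by each coil
y1 = [s1[k] * rho[k] + s1[k + half] * rho[k + half] for k in range(half)]
y2 = [s2[k] * rho[k] + s2[k + half] * rho[k + half] for k in range(half)]

# unfold: solve [[s1[k], s1[k+half]], [s2[k], s2[k+half]]] @ x = [y1[k], y2[k]]
recon = [0.0] * N
for k in range(half):
    a, b_, c, d = s1[k], s1[k + half], s2[k], s2[k + half]
    det = a * d - b_ * c      # nonzero when the coil profiles differ enough
    recon[k] = (d * y1[k] - b_ * y2[k]) / det
    recon[k + half] = (-c * y1[k] + a * y2[k]) / det

err = max(abs(r - t) for r, t in zip(recon, rho))
print(err)  # ~0: exact recovery in this noise-free toy case
```

The g-factor mentioned in the abstract quantifies how noise is amplified when `det` is small, i.e. when the coil sensitivities at the folded positions are nearly parallel.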

  1. Mitotic Events in Cerebellar Granule Progenitor Cells that Expand Cerebellar Surface Area Are Critical for Normal Cerebellar Cortical Lamination in Mice

    OpenAIRE

    Chang, Joshua C.; Leung, Mark; Gokozan, Hamza Numan; Gygli, Patrick Edwin; Catacutan, Fay Patsy; Czeisler, Catherine; Otero, José Javier

    2015-01-01

    Late embryonic and postnatal cerebellar folial surface area expansion promotes cerebellar cortical cytoarchitectural lamination. We developed a streamlined sampling scheme to generate unbiased estimates of murine cerebellar surface area and volume using stereological principles. We demonstrate that during the proliferative phase of the external granule layer (EGL) and folial surface area expansion, EGL thickness does not change and thus is a topological proxy for progenitor self-renewal. The ...

  2. O método de fio quente: técnica em paralelo e técnica de superfície The hot wire method: the hot wire parallel technique and the hot wire surface technique

    Directory of Open Access Journals (Sweden)

    W. N. dos Santos

    2002-06-01

    are made when two different techniques for the transient temperature detection are employed: in one of them, the temperature is detected and recorded at the surface of the hot wire (the hot wire surface technique), while in the other, the measuring point is located at a fixed distance from the hot wire (the hot wire parallel technique). Experimental results show a great advantage in using the hot wire surface technique for materials with thermal conductivity higher than 10 W/mK. The time interval taken into account in the calculations is larger than the one that would be employed in the hot wire parallel technique under the same experimental conditions, providing in this case higher accuracy and reliability in the experimental results obtained.

  3. Pursuing Normality

    DEFF Research Database (Denmark)

    Madsen, Louise Sofia; Handberg, Charlotte

    2018-01-01

    BACKGROUND: The present study explored the reflections on cancer survivorship care of lymphoma survivors in active treatment. Lymphoma survivors have survivorship care needs, yet their participation in cancer survivorship care programs is still reported as low. OBJECTIVE: The aim of this study...... implying an influence on whether to participate in cancer survivorship care programs. Because of "pursuing normality," 8 of 9 participants opted out of cancer survivorship care programming due to prospects of "being cured" and perceptions of cancer survivorship care as "a continuation of the disease...

  4. Parallel time integration software

    Energy Technology Data Exchange (ETDEWEB)

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.
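The idea of trading serial time marching for iteration can be made concrete with the closely related parareal scheme; the sketch below is a simplified stand-in for MGRIT, not code from the package. A cheap serial coarse sweep provides an initial guess, and accurate fine propagations over each time slice, which dominate the cost and are independent across slices, correct it iteratively.

```python
# Parareal iteration for y' = -y, y(0) = 1, on [0, 1]: a cheap coarse
# propagator (one backward-Euler step) corrected by an accurate fine
# propagator (many RK4 steps) applied independently on each time slice.
import math

def fine(y, dt, substeps=20):
    """Accurate propagator: classical RK4 over `substeps` sub-steps."""
    h = dt / substeps
    f = lambda y: -y
    for _ in range(substeps):
        k1 = f(y); k2 = f(y + h * k1 / 2); k3 = f(y + h * k2 / 2); k4 = f(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

def coarse(y, dt):
    """Cheap propagator: one backward-Euler step."""
    return y / (1 + dt)

def parareal(y0, T, n_slices, n_iters):
    dt = T / n_slices
    y = [y0]                              # initial guess: serial coarse sweep
    for n in range(n_slices):
        y.append(coarse(y[n], dt))
    for _ in range(n_iters):
        # the expensive fine solves are independent -> parallel in time
        fine_vals = [fine(y[n], dt) for n in range(n_slices)]
        y_new = [y0]
        for n in range(n_slices):
            y_new.append(fine_vals[n] + coarse(y_new[n], dt) - coarse(y[n], dt))
        y = y_new
    return y

ys = parareal(1.0, 1.0, n_slices=8, n_iters=4)
print(abs(ys[-1] - math.exp(-1.0)))  # small error after a few iterations
```

MGRIT generalizes this two-level correction to a full multilevel hierarchy in time, which is what gives the optimal scaling claimed in the abstract.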

  5. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.

  6. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.
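As a flavor of the material such a tutorial covers, the standard-library multiprocessing module distributes a CPU-bound function over worker processes (a generic sketch, not an excerpt from the book):

```python
# Distribute a CPU-bound function over worker processes with multiprocessing.
from multiprocessing import Pool

def slow_square(x):
    # stands in for any expensive, independent per-item computation
    return x * x

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # map blocks until all results are back, preserving input order
        results = pool.map(slow_square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Because each worker is a separate process, this sidesteps the CPython global interpreter lock, which is why process-based parallelism is the usual choice for CPU-bound Python work.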

  7. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin

  8. Interpreting sea surface slicks on the basis of the normalized radar cross-section model using RADARSAT-2 copolarization dual-channel SAR images

    Science.gov (United States)

    Ivonin, D. V.; Skrunes, S.; Brekke, C.; Ivanov, A. Yu.

    2016-03-01

    A simple automatic multipolarization technique for discrimination of main types of thin oil films (of thickness less than the radio wave skin depth) from natural ones is proposed. It is based on a new multipolarization parameter related to the ratio between the damping in the slick of specially normalized resonant and nonresonant signals calculated using the normalized radar cross-section model proposed by Kudryavtsev et al. (2003a). The technique is tested on RADARSAT-2 copolarization (VV/HH) synthetic aperture radar images of slicks of a priori known provenance (mineral oils, e.g., emulsion and crude oil, and plant oil served to model a natural slick) released during annual oil-on-water exercises in the North Sea in 2011 and 2012. It has been shown that the suggested multipolarization parameter gives new capabilities in interpreting slicks visible on synthetic aperture radar images while allowing discrimination between mineral oil and plant oil slicks.

  9. The effect of bridge exercise accompanied by the abdominal drawing-in maneuver on an unstable support surface on the lumbar stability of normal adults.

    Science.gov (United States)

    Gong, Wontae

    2015-01-01

    [Purpose] The present study sought to investigate the influence on static and dynamic lumbar stability of bridge exercise accompanied by an abdominal drawing-in maneuver (ADIM) performed on an uneven support surface. [Subjects] A total of 30 participants were divided into an experimental group (15 participants) and a control group (15 participants). [Methods] The experimental group performed bridge exercise on an unstable surface, whereas the control group performed bridge exercise on a stable surface. The respective bridge exercises were performed for 30 minutes, 3 times per week, for 6 weeks. The static lumbar stability (SLS) and dynamic lumbar stability (DLS) of both the experimental group and the control group were measured using a pressure biofeedback unit. [Results] In the comparison of the initial and final results of the experimental and control groups, only the SLS and DLS of the experimental group were found to be statistically significant. [Conclusion] The results of the present study show that when using bridge exercise to improve SLS and DLS, performing the bridge exercise accompanied by ADIM on an uneven surface is more effective than performing the exercise on a stable surface.

  10. Mitotic events in cerebellar granule progenitor cells that expand cerebellar surface area are critical for normal cerebellar cortical lamination in mice.

    Science.gov (United States)

    Chang, Joshua C; Leung, Mark; Gokozan, Hamza Numan; Gygli, Patrick Edwin; Catacutan, Fay Patsy; Czeisler, Catherine; Otero, José Javier

    2015-03-01

    Late embryonic and postnatal cerebellar folial surface area expansion promotes cerebellar cortical cytoarchitectural lamination. We developed a streamlined sampling scheme to generate unbiased estimates of murine cerebellar surface area and volume using stereologic principles. We demonstrate that, during the proliferative phase of the external granular layer (EGL) and folial surface area expansion, EGL thickness does not change and thus is a topological proxy for progenitor self-renewal. The topological constraints indicate that, during proliferative phases, migration out of the EGL is balanced by self-renewal. Progenitor self-renewal must, therefore, include mitotic events yielding 2 cells in the same layer to increase surface area (β events) and mitotic events yielding 2 cells, with 1 cell in a superficial layer and 1 cell in a deeper layer (α events). As the cerebellum grows, therefore, β events lie upstream of α events. Using a mathematical model constrained by the measurements of volume and surface area, we could quantify intermitotic times for β events on a per-cell basis in postnatal mouse cerebellum. Furthermore, we found that loss of CCNA2, which decreases EGL proliferation and secondarily induces cerebellar cortical dyslamination, shows preserved α-type events. Thus, CCNA2-null cerebellar granule progenitor cells are capable of self-renewal of the EGL stem cell niche; this is concordant with prior findings of extensive apoptosis in CCNA2-null mice. Similar methodologies may provide another layer of depth to the interpretation of results from stereologic studies.

  11. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  12. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were

  13. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
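The seeding step being parallelized can be sketched in a few lines of Python (an illustrative serial version, not the project's C++ code; the distance computation marked below is the part the GPU, OpenMP, and XMT versions compute concurrently):

```python
# k-means++ seeding: after a uniform first pick, each subsequent seed is
# drawn with probability proportional to its squared distance to the
# nearest seed chosen so far.
import random

def kmeanspp_seeds(points, k, rng=None):
    rng = rng or random.Random(0)
    seeds = [rng.choice(points)]            # first seed: uniform at random
    while len(seeds) < k:
        # squared distance from every point to its nearest current seed;
        # this array is what the parallel versions compute concurrently
        d2 = [min(sum((p - s) ** 2 for p, s in zip(pt, sd)) for sd in seeds)
              for pt in points]
        r = rng.uniform(0.0, sum(d2))
        acc = 0.0
        for pt, w in zip(points, d2):       # weighted draw by cumulative sum
            acc += w
            if acc >= r:
                seeds.append(pt)
                break
    return seeds

pts = [(0.0, 0.0), (0.1, 0.0), (10.0, 10.0), (10.1, 10.0)]
print(kmeanspp_seeds(pts, 2))
```

Since already-chosen points have zero weight, the draw strongly favors seeds in clusters far from those selected so far, which is the improvement over uniform k-means initialization.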

  14. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  15. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  16. Massively parallel signature sequencing.

    Science.gov (United States)

    Zhou, Daixing; Rao, Mahendra S; Walker, Roger; Khrebtukova, Irina; Haudenschild, Christian D; Miura, Takumi; Decola, Shannon; Vermaas, Eric; Moon, Keith; Vasicek, Thomas J

    2006-01-01

    Massively parallel signature sequencing is an ultra-high throughput sequencing technology. It can simultaneously sequence millions of sequence tags, and, therefore, is ideal for whole genome analysis. When applied to expression profiling, it reveals almost every transcript in the sample and provides its accurate expression level. This chapter describes the technology and its application in establishing stem cell transcriptome databases.

  17. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  18. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  19. Parallel Splash Belief Propagation

    Science.gov (United States)

    2010-08-01


  20. Time-dependent transport of a localized surface plasmon through a linear array of metal nanoparticles: Precursor and normal mode contributions

    Science.gov (United States)

    Compaijen, P. J.; Malyshev, V. A.; Knoester, J.

    2018-02-01

    We theoretically investigate the time-dependent transport of a localized surface plasmon excitation through a linear array of identical and equidistantly spaced metal nanoparticles. Two different signals propagating through the array are found: one traveling with the group velocity of the surface plasmon polaritons of the system and damped exponentially, and the other running with the speed of light and decaying in a power-law fashion, as x^-1 and x^-2 for the transversal and longitudinal polarizations, respectively. The latter resembles the Sommerfeld-Brillouin forerunner and has not been identified in previous studies. The contribution of this signal dominates the plasmon transport at large distances. In addition, even though this signal is spread in the propagation direction and has a lateral dimension larger than the wavelength, the field profile close to the chain axis does not change with distance, indicating that this part of the signal is confined to the array.

  1. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
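The two-phase scheme described above can be sketched with a 1-D grid and interval objects (a hypothetical toy setup; the patent covers general grids and objects), using Python threads to stand in for the n processors:

```python
# Two-phase parallel grid population, following the method described above:
# phase 1: each worker finds which grid portions bound its share of objects;
# phase 2: each worker populates one grid portion from those findings.
from concurrent.futures import ThreadPoolExecutor

N_WORKERS = 4
GRID_CELLS = 8              # 1-D grid of 8 unit cells over [0, 8)

def portion_of(cell):
    # cells 0-1 -> portion 0, cells 2-3 -> portion 1, ... (2 cells each)
    return cell // (GRID_CELLS // N_WORKERS)

def phase1(objects):
    """Map each object (an interval (lo, hi)) to the portions it touches."""
    hits = []
    for lo, hi in objects:
        cells = range(int(lo), min(int(hi) + 1, GRID_CELLS))
        hits.append(((lo, hi), {portion_of(c) for c in cells}))
    return hits

objects = [(0.5, 1.5), (1.9, 4.2), (6.0, 7.5), (3.3, 3.4)]
# divide the objects into N_WORKERS distinct sets (round-robin here)
shares = [objects[i::N_WORKERS] for i in range(N_WORKERS)]

with ThreadPoolExecutor(N_WORKERS) as ex:
    all_hits = [h for hits in ex.map(phase1, shares) for h in hits]

def phase2(portion):
    """Collect the objects at least partially bounded by this portion."""
    return sorted(obj for obj, portions in all_hits if portion in portions)

with ThreadPoolExecutor(N_WORKERS) as ex:
    grid = list(ex.map(phase2, range(N_WORKERS)))

print(grid)
```

Splitting the work this way means no two workers ever write to the same grid portion in phase 2, so no locking is needed, which is the point of the two-pass design.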

  2. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  3. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  4. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  5. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  6. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti-parallel TFO strand was modified with Y with one or two insertions at the end of the TFO strand, the thermal stability was increased 1.2 °C and 3 °C at pH 7.2, respectively, whereas one insertion in the middle of the TFO strand decreased the thermal stability 1.4 °C compared to the wild-type oligonucleotide… chain, especially at the end of the TFO strand. On the other hand, the thermal stability of the anti-parallel triplex was dramatically decreased when the TFO strand was modified with the LNA monomer analog Z in the middle of the TFO strand (ΔTm = -9.1 °C). Also the thermal stability decreased…

  7. Transmission line theory for long plasma production by radio frequency discharges between parallel-plate electrodes

    International Nuclear Information System (INIS)

    Nonaka, S.

    1991-01-01

To seek a radio frequency (RF) eigenmode of waves that produces a plasma between a pair of long dielectric-covered parallel-plate RF electrodes, this paper analyzed all normal modes propagating along the electrodes by solving Maxwell's equations. The result showed that only an odd surface-wave mode will produce the plasma under usual experimental conditions, which provides a basic transmission-line theory for the use of such long electrodes in on-line mass production of amorphous silicon solar cells

  8. Defectively N-glycosylated and non-O-glycosylated aminopeptidase N (CD13) is normally expressed at the cell surface and has full enzymatic activity

    DEFF Research Database (Denmark)

    Norén, K; Hansen, Gert Helge; Clausen, H

    1997-01-01

In order to study the effects of the absence of O-glycosylation and modifications of N-glycosylation on a class II membrane protein, pig and human aminopeptidase N (CD13) were stably expressed in the ldl(D) cell line. This cell line carries a UDP-Gal/UDP-GalNAc-epimerase deficiency which blocks… of the glycoprotein aminopeptidase N can be synthesized and the effects of altered glycosylation can be studied. It is demonstrated that aminopeptidase N carries "mucin-type" O-glycans and that this is predominantly located in the stalk, which connects the catalytic headgroup to the membrane anchor. Normally glycosylated aminopeptidase N is present in the plasma membrane of the ldl(D) cells. This is also the case for the non-O-glycosylated and defectively N-glycosylated forms. This is in line with the finding that the intracellular transport of APN is unaffected by the absence of O-glycosylation or by changes in N…

  9. Assessing of organic content in surface sediments of Suez Gulf, Egypt depending on normal alkanes, terpanes and steranes biological markers indicators

    Directory of Open Access Journals (Sweden)

    Abedel Aziz Elfadly

    2017-12-01

Full Text Available The semi-enclosed Suez Gulf records various signals of high anthropic pressure from the surrounding regions and the industrialized Suez area. Sedimentary hydrocarbons were studied at 6 coastal stations located in the Gulf of Suez. Non-aromatic hydrocarbons were analyzed by GC/FID and GC/MS to assess the organic content in surface sediments of the Suez Gulf, Egypt, using normal alkanes, terpanes, and steranes as biological marker indicators. The results showed that the hydrocarbons originate from multiple inputs: terrestrial, biogenic, and pyrolytic. Several hydrocarbon ratios indicated the predominance of petrogenic in combination with biogenic hydrocarbons. Al-Attaqa harbor, the Suez oil processing company, Al-Nasr Oil Company, AL-Kabanon and EL-Sukhna of Loloha Beach are the main sources of petroleum contamination.

  10. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
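The whole-number tables described above can be regenerated with a short script (a hypothetical reconstruction; the article's actual tables are not reproduced here). The governing relation for two resistors in parallel is 1/Rt = 1/R1 + 1/R2:

```python
# Enumerate pairs of integer resistor values (in ohms) whose parallel
# combination Rt = R1*R2/(R1 + R2) is itself a whole number of ohms.
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

pairs = []
for r1 in range(1, 101):
    for r2 in range(r1, 101):
        rt = parallel(r1, r2)
        if rt == int(rt):  # whole-number total resistance
            pairs.append((r1, r2, int(rt)))

# Example entries: two 20-ohm resistors give 10 ohms; 3 ohms with
# 6 ohms gives 2 ohms (1/3 + 1/6 = 1/2).
print(parallel(20, 20))  # -> 10.0
```

Entries such as (3, 6, 2) make convenient classroom exercises because students can verify the reciprocal sum by hand.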

  11. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of research is the tools to support the development of parallel programs in C/C ++. The methods and software which automates the process of designing parallel applications are proposed.

  12. Very Large Parallel Data Flow

    Science.gov (United States)

    1988-03-01

billion characters. Teradata Corporation's DBC/1012, a parallel relational database machine, is another high-performance engine for large database…the volume of data being processed. Such parallelism is currently exploited in multiprocessor relational database machines such as the Teradata DBC…scale parallelism can be achieved in all-solutions relations using or-parallelism in a multiprocessor architecture. For instance, the Teradata database

  13. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest…… an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  14. Parallel Architectures and Bioinspired Algorithms

    CERN Document Server

    Pérez, José; Lanchares, Juan

    2012-01-01

This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map of the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value both to specialists in Bioinspired Algorithms and Parallel and Distributed Computing and to computer science students trying to understand the present and the future of Parallel Architectures and Bioinspired Algorithms.

  15. Integrating Task and Data Parallelism

    OpenAIRE

    Massingill, Berna

    1993-01-01

    Many models of concurrency and concurrent programming have been proposed; most can be categorized as either task-parallel (based on functional decomposition) or data-parallel (based on data decomposition). Task-parallel models are most effective for expressing irregular computations; data-parallel models are most effective for expressing regular computations. Some computations, however, exhibit both regular and irregular aspects. For such computations, a better programming model is one that i...

  16. Parallel Pascal - An extended Pascal for parallel computers

    Science.gov (United States)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  17. Massively Parallel Genetics.

    Science.gov (United States)

    Shendure, Jay; Fields, Stanley

    2016-06-01

Human genetics has historically depended on the identification of individuals whose natural genetic variation underlies an observable trait or disease risk. Here we argue that new technologies now augment this historical approach by allowing the use of massively parallel assays in model systems to measure the functional effects of genetic variation in many human genes. These studies will help establish the disease risk of both observed and potential genetic variants and overcome the problem of "variants of uncertain significance." Copyright © 2016 by the Genetics Society of America.

  18. Parallel sphere rendering

    Energy Technology Data Exchange (ETDEWEB)

    Krogh, M.; Hansen, C.; Painter, J. [Los Alamos National Lab., NM (United States); de Verdiere, G.C. [CEA Centre d`Etudes de Limeil, 94 - Villeneuve-Saint-Georges (France)

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  19. Parallel Repetition From Fortification

    OpenAIRE

    Moshkovitz Aaronson, Dana Hadar

    2014-01-01

The Parallel Repetition Theorem upper-bounds the value of a repeated (tensored) two-prover game in terms of the value of the base game and the number of repetitions. In this work we give a simple transformation on games – "fortification" – and show that for fortified games, the value of the repeated game decreases perfectly exponentially with the number of repetitions, up to an arbitrarily small additive error. Our proof is combinatorial and short. As corollaries, we obtain: (1) Starting from...

  20. Parallelized direct execution simulation of message-passing parallel programs

    Science.gov (United States)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution in which one directly executes the application code but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  1. Relationship between mean body surface temperature measured by use of infrared thermography and ambient temperature in clinically normal pigs and pigs inoculated with Actinobacillus pleuropneumoniae.

    Science.gov (United States)

    Loughmiller, J A; Spire, M F; Dritz, S S; Fenwick, B W; Hosni, M H; Hogge, S B

    2001-05-01

To determine the relationship between ambient temperature and mean body surface temperature (MBST) measured by use of infrared thermography (IRT) and to evaluate the ability of IRT to detect febrile responses in pigs following inoculation with Actinobacillus pleuropneumoniae. 28 crossbred barrows. Pigs (n = 4) were subjected to ambient temperatures ranging from 10 to 32 C in an environmental chamber. Infrared thermographs were obtained, and regression analysis was used to determine the relationship between ambient temperature and MBST. The remaining pigs were assigned to groups in an unbalanced randomized complete block design (6 A pleuropneumoniae-inoculated febrile pigs [increase in rectal temperature > or = 1.67 C], 6 A pleuropneumoniae-inoculated nonfebrile pigs [increase in rectal temperature < 1.67 C], and noninoculated control pigs). Temperatures were obtained for the period from 2 hours before to 18 hours after inoculation, and results were analyzed by use of repeated-measures ANOVA. A significant linear relationship was observed between ambient temperature and MBST (slope, 0.40 C). For inoculated febrile pigs, a treatment × method interaction was evident for rectal temperature and MBST, whereas inoculated nonfebrile pigs only had increased rectal temperatures, compared with noninoculated pigs. A method × time interaction resulted from the longer interval after inoculation until detection of an increase in MBST by use of IRT. Infrared thermography can be adjusted to account for ambient temperature and used to detect changes in MBST and radiant heat production attributable to a febrile response in pigs.
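The reported linear relationship (slope of 0.40 C of MBST per degree of ambient temperature) suggests a simple ambient-temperature correction, sketched below. Only the slope comes from the abstract; the reference ambient of 20 C and the sample readings are hypothetical:

```python
# Sketch: adjust mean body surface temperature (MBST) readings to a
# common reference ambient temperature using the reported slope of
# 0.40 (degrees C of MBST per degree C of ambient temperature).
SLOPE = 0.40        # from the regression reported in the abstract
REF_AMBIENT = 20.0  # hypothetical reference ambient, degrees C

def adjust_mbst(mbst, ambient):
    """Remove the ambient-temperature component from an IRT reading."""
    return mbst - SLOPE * (ambient - REF_AMBIENT)

# Two readings taken at different ambient temperatures become comparable:
print(adjust_mbst(34.0, 25.0))  # -> 32.0
print(adjust_mbst(32.0, 20.0))  # -> 32.0
```

With such an adjustment, a rise in corrected MBST can be attributed to the animal (e.g., a febrile response) rather than to the room.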

  2. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  3. Fast parallel event reconstruction

    CERN Document Server

    CERN. Geneva

    2010-01-01

On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...

  4. Theory of Parallel Mechanisms

    CERN Document Server

    Huang, Zhen; Ding, Huafeng

    2013-01-01

This book covers mechanism analysis and synthesis. In mechanism analysis, a mobility methodology is first systematically presented. This methodology, based on the author's screw theory proposed in 1997, whose generality and validity were only proved recently, addresses mobility, a very complex issue researched by various scientists over the last 150 years. The principle of kinematic influence coefficients and its latest developments are described. This principle is suitable for kinematic analysis of various 6-DOF and lower-mobility parallel manipulators. The singularities are classified from a new point of view, and progress in position-singularity and orientation-singularity analysis is stated. In addition, the concept of over-determinate input is proposed and a new method of force analysis based on screw theory is presented. In mechanism synthesis, the synthesis of spatial parallel mechanisms is discussed, together with the synthesis method for difficult 4-DOF and 5-DOF symmetric mechanisms, which was first put forward by the a...

  5. Massively Parallel QCD

    Energy Technology Data Exchange (ETDEWEB)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  6. Parallel Monte Carlo Simulation of Aerosol Dynamics

    Directory of Open Access Journals (Sweden)

    Kun Zhou

    2014-02-01

Full Text Available A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Near 60% parallel efficiency is achieved for the maximum testing case with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified through simulating various testing cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, low order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high order moments of the PSD needs to dramatically increase the number of MC particles.

  7. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.

    2014-01-01

    A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Near 60% parallel efficiency is achieved for the maximum testing case with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified through simulating various testing cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, low order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high order moments of the PSD needs to dramatically increase the number of MC particles. 2014 Kun Zhou et al.
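The stochastic part of the scheme described in these two records can be illustrated with a minimal coagulation sweep in the Marcus-Lushnikov style, where candidate particle pairs merge with a kernel-weighted probability. This is a toy sketch under stated assumptions: the deterministic nucleation/growth steps, the operator splitting, and the MPI parallelization are omitted, and the constant kernel value, particle counts, and time step are hypothetical:

```python
import random

def coagulation_step(particles, kernel, dt, rng=random.Random(0)):
    """One stochastic coagulation sweep: for each index i, a random
    partner j is drawn and the pair merges with probability
    kernel(v_i, v_j) * dt (volumes add, particle count drops by one)."""
    particles = list(particles)
    i = 0
    while i < len(particles) - 1:
        j = rng.randrange(i + 1, len(particles))
        if rng.random() < kernel(particles[i], particles[j]) * dt:
            particles[i] += particles.pop(j)  # merge j into i
        i += 1
    return particles

# Constant kernel (hypothetical value); coagulation conserves volume.
parts = coagulation_step([1.0] * 100, lambda a, b: 0.5, dt=0.1)
print(sum(parts))  # -> 100.0 (total volume conserved)
```

Because each merge only adds volumes, the zeroth moment (particle count) decreases while the first moment (total volume) is conserved, which is the basic consistency check for any coagulation scheme.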

  8. Massively parallel fabrication of repetitive nanostructures: nanolithography for nanoarrays

    International Nuclear Information System (INIS)

    Luttge, Regina

    2009-01-01

This topical review provides an overview of nanolithographic techniques for nanoarrays. Using patterning techniques such as lithography, we normally aim for a higher-order architecture similar to functional systems in nature. Inspired by the wealth of complexity in nature, these architectures are translated into technical devices, for example, found in integrated circuitry or other systems in which structural elements work as discrete building blocks in microdevices. Ordered artificial nanostructures (arrays of pillars, holes and wires) have shown particular properties and bring about the opportunity to modify and tune the device operation. Moreover, these nanostructures deliver new applications, for example, the nanoscale control of spin direction within a nanomagnet. Subsequently, we can look for applications where this unique property of the smallest manufactured element is repetitively used, for example with respect to spin in nanopatterned magnetic media for data storage. These nanostructures are generally called nanoarrays. Most of these applications require massively parallel produced nanopatterns, which can be directly realized by laser interference (areas up to 4 cm² are easily achieved with a Lloyd's mirror set-up). In this topical review we further highlight the application of laser interference as a tool for nanofabrication, its limitations and ultimate advantages for a variety of devices, including nanostructuring for photonic crystal devices, high-resolution patterned media and surface modifications of medical implants. The unique properties of nanostructured surfaces have also found applications in biomedical nanoarrays used either for diagnostic or functional assays, including catalytic reactions on chip. Bio-inspired templated nanoarrays will be presented in perspective to other massively parallel nanolithography techniques currently discussed in the scientific literature. (topical review)

  9. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
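The serial-versus-parallel distinction in the abstract (a product of matrices versus a sum of matrices) can be sketched in Jones calculus. This is a toy illustration only, not the paper's device (which combines spatially separated beams modulated by a digital micromirror device); the basis states and weights here are hypothetical:

```python
# Toy Jones-vector sketch: a parallel architecture builds a state of
# polarization (SOP) as a weighted SUM of separately modulated
# components, E = a_h * H + a_v * V, rather than a serial product of
# transformation matrices. Weights a_h, a_v are complex amplitudes.

def add(v, w):
    return [v[0] + w[0], v[1] + w[1]]

def scale(a, v):
    return [a * v[0], a * v[1]]

H = [1 + 0j, 0j]  # horizontal basis component
V = [0j, 1 + 0j]  # vertical basis component

def parallel_sop(a_h, a_v):
    """Combine independently modulated polarization components."""
    return add(scale(a_h, H), scale(a_v, V))

# Equal in-phase weights give diagonal (+45 degree) polarization;
# a 90-degree relative phase (a_v = i*a_h) gives circular polarization.
print(parallel_sop(1 / 2**0.5, 1 / 2**0.5))
```

The point of the sum-of-matrices view is that each summand can be produced by a fast intensity modulator, so the achievable modulation speed is set by intensity-modulation technology rather than by a chain of polarization elements.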

  10. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. SUSTAINED HYPERLIPEMIA INDUCED IN RABBITS BY MEANS OF INTRAVENOUSLY INJECTED SURFACE-ACTIVE AGENTS

    Science.gov (United States)

    Kellner, Aaron; Correll, James W.; Ladd, Anthony T.

    1951-01-01

    The intravenous injection of the surface-active agents Tween 80 and Triton A20 into rabbits fed a normal diet resulted in marked and sustained elevations of the cholesterol, phospholipid, and total lipid content of their blood. The increase in phospholipid in general paralleled that of the blood cholesterol. The implications of the findings are briefly discussed. PMID:14824409

  12. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and to compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  13. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

Full Text Available In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not at the level of those efforts. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  14. Normal Pressure Hydrocephalus (NPH)

    Science.gov (United States)

Normal pressure hydrocephalus is a brain disorder ... Normal pressure hydrocephalus occurs when excess cerebrospinal fluid ...

  15. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

Full Text Available This paper describes an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we describe is to have a reusable framework for running several sequential algorithms in a parallel environment. The algorithms the framework can be used with have several things in common: they must run in cycles, and it should be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
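The master-slave pattern for algorithms that "run in cycles" can be sketched as below. This is a hypothetical stand-in using a thread pool rather than the paper's message-passing framework; the work function and chunking policy are illustrative only:

```python
from concurrent.futures import ThreadPoolExecutor

def run_cycles(data, work, units=4, cycles=3):
    """Master-slave sketch: each cycle, the master splits the data among
    'processing units', the slaves apply `work` to their chunks, and the
    master gathers the partial results before the next cycle."""
    with ThreadPoolExecutor(max_workers=units) as pool:
        for _ in range(cycles):  # the algorithms run in cycles
            chunk = (len(data) + units - 1) // units
            parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
            results = pool.map(work, parts)                # slaves work
            data = [x for part in results for x in part]   # master gathers
    return data

# Example: three cycles of a trivial per-element update (x -> x + 1).
print(run_cycles(list(range(8)), lambda part: [x + 1 for x in part]))
# -> [3, 4, 5, 6, 7, 8, 9, 10]
```

The ACO and SNF applications mentioned in the abstract fit this shape because each iteration (ant tour construction, filter pass) is independent per chunk, with synchronization only at the gather step.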

  16. Virtual earthquake engineering laboratory with physics-based degrading materials on parallel computers

    Science.gov (United States)

    Cho, In Ho

    For the last few decades, we have obtained tremendous insight into underlying microscopic mechanisms of degrading quasi-brittle materials from persistent and near-saintly efforts in laboratories, and at the same time we have seen unprecedented evolution in computational technology such as massively parallel computers. Thus, time is ripe to embark on a novel approach to settle unanswered questions, especially for the earthquake engineering community, by harmoniously combining the microphysics mechanisms with advanced parallel computing technology. To begin with, it should be stressed that we placed a great deal of emphasis on preserving clear meaning and physical counterparts of all the microscopic material models proposed herein, since it is directly tied to the belief that by doing so, the more physical mechanisms we incorporate, the better prediction we can obtain. We departed from reviewing representative microscopic analysis methodologies, selecting out "fixed-type" multidirectional smeared crack model as the base framework for nonlinear quasi-brittle materials, since it is widely believed to best retain the physical nature of actual cracks. Microscopic stress functions are proposed by integrating well-received existing models to update normal stresses on the crack surfaces (three orthogonal surfaces are allowed to initiate herein) under cyclic loading. Unlike the normal stress update, special attention had to be paid to the shear stress update on the crack surfaces, due primarily to the well-known pathological nature of the fixed-type smeared crack model---spurious large stress transfer over the open crack under nonproportional loading. In hopes of exploiting physical mechanism to resolve this deleterious nature of the fixed crack model, a tribology-inspired three-dimensional (3d) interlocking mechanism has been proposed. Following the main trend of tribology (i.e., the science and engineering of interacting surfaces), we introduced the base fabric of solid

  17. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  18. A Parallel Particle Swarm Optimizer

    National Research Council Canada - National Science Library

    Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D

    2003-01-01

    .... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...

  19. MRI of normal achilles tendon

    Energy Technology Data Exchange (ETDEWEB)

    Rollandi, G.A. [Institute of Radiology, Univ. of Genoa (Italy); Bertolotto, M. [Institute of Radiology, Univ. of Genoa (Italy); Perrone, R. [Institute of Radiology, Univ. of Genoa (Italy); Garlaschi, G. [Institute of Radiology, Univ. of Genoa (Italy); Derchi, L.E. [Institute of Radiology, Univ. of Genoa (Italy)

    1995-12-01

    To investigate the normal internal structure of tendons, 11 volunteers without clinical evidence of tendinopathy were examined using conventional spin-echo T1-, T2- and proton-density-weighted sequences. The Achilles tendon was chosen because of its high frequency of injury in athletic activity, its large size and superficial position, and because it is oriented nearly parallel to the static magnetic field, thereby minimizing the "magic angle phenomenon". The tendons exhibited areas of slightly increased signal in four T1-weighted and in all but one proton-density-weighted scans. No intratendinous signal was detected in T2-weighted images. The possible origin of these findings is discussed. We conclude that knowledge of these normal signals may be useful to avoid incorrectly diagnosing them as pathological. (orig.). With 2 figs.

  20. Parallelizing Monte Carlo with PMC

    International Nuclear Information System (INIS)

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described

  1. Parallel context-free languages

    DEFF Research Database (Denmark)

    Skyum, Sven

    1974-01-01

    The relation between the family of context-free languages and the family of parallel context-free languages is examined in this paper. It is proved that the families are incomparable. Finally we prove that the family of languages of finite index is contained in the family of parallel context...

  2. Parallel pseudospectral domain decomposition techniques

    Science.gov (United States)

    Gottlieb, David; Hirsch, Richard S.

    1989-01-01

    The influence of interface boundary conditions on the ability to parallelize pseudospectral multidomain algorithms is investigated. Using the properties of spectral expansions, a novel parallel two domain procedure is generalized to an arbitrary number of domains each of which can be solved on a separate processor. This interface boundary condition considerably simplifies influence matrix techniques.

  3. Power stability methods for parallel systems

    International Nuclear Information System (INIS)

    Wallach, Y.

    1988-01-01

    Parallel-processing systems are already commercially available. This paper shows that if one of them - the Alternating Sequential Parallel, or ASP, system - is applied to network stability calculations it will lead to a higher speed of solution. The ASP system is first described and is then shown to be cheaper, more reliable, and more readily available than other parallel systems. Also, no deadlock need be feared and the speedup is normally very high. A number of ASP systems have already been assembled (the SMS systems, Topps, DIRMU, etc.). At present, an IBM Local Area Network is being modified so that it too can work in the ASP mode. Existing ASP systems were programmed in Fortran or assembly language. Since newer systems (e.g., DIRMU) are programmed in Modula-2, this language can be used. Stability analysis is based on solving nonlinear differential and algebraic equations. The algorithm for solving the nonlinear differential equations on ASP is described and programmed in Modula-2. The speedup is computed and is shown to be almost optimal

  4. Normal shoulder: MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kieft, G.J.; Bloem, J.L.; Obermann, W.R.; Verbout, A.J.; Rozing, P.M.; Doornbos, J.

    1986-06-01

    Relatively poor spatial resolution has been obtained in magnetic resonance (MR) imaging of the shoulder because the shoulder can only be placed in the periphery of the magnetic field. The authors have devised an anatomically shaped surface coil that enables MR to demonstrate normal shoulder anatomy in different planes with high spatial resolution. In the axial plane anatomy analogous to that seen on computed tomographic (CT) scans can be demonstrated. Variations in scapular position (produced by patient positioning) may make reproducibility of sagittal and coronal plane images difficult by changing the relationship of the plane to the shoulder anatomy. Oblique planes, for which the angle is chosen from the axial image, have the advantage of easy reproducibility. Obliquely oriented structures and relationships are best seen in oblique plane images and can be evaluated in detail.

  5. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
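
    The cycle-end rendezvous described above can be sketched in Python (a toy model, not the paper's code: MPI ranks are simulated with threads, and each history yields a random neutron count rather than real physics):

    ```python
    import random
    from concurrent.futures import ThreadPoolExecutor

    def run_cycle(rank, n_histories, seed):
        # One rank's share of a cycle: each history yields a random number of
        # fission neutrons (a hypothetical toy source, not real physics).
        rng = random.Random(seed)
        return sum(rng.choice([0, 1, 2, 3]) for _ in range(n_histories))

    def k_eff_estimates(n_ranks=4, n_histories=1000, n_cycles=5):
        estimates = []
        with ThreadPoolExecutor(max_workers=n_ranks) as pool:
            for cycle in range(n_cycles):
                futures = [pool.submit(run_cycle, r, n_histories, 1000 * cycle + r)
                           for r in range(n_ranks)]
                # Rendezvous point: every rank must finish before the fission
                # source is collected and k_eff estimated for this cycle --
                # the serializing step the abstract identifies as a bottleneck.
                total = sum(f.result() for f in futures)
                estimates.append(total / (n_ranks * n_histories))
        return estimates
    ```

    The slowest rank determines when each cycle's reduction completes, which is why adding processors eventually stops helping.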

  6. Template based parallel checkpointing in a massively parallel computer system

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
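
    The block-comparison idea can be sketched as follows (an illustrative model of the rsync-like scheme, with hypothetical block size and hash choice, not the patented implementation):

    ```python
    import hashlib

    BLOCK = 64  # bytes per block (illustrative choice)

    def block_checksums(data: bytes):
        # Checksums of the template checkpoint, one per fixed-size block.
        return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
                for i in range(0, len(data), BLOCK)]

    def delta_checkpoint(node_data: bytes, template_sums):
        # Keep only the blocks whose checksum differs from the template,
        # as (block_index, bytes) pairs -- the data that must be transmitted.
        delta = []
        for i in range(0, len(node_data), BLOCK):
            idx = i // BLOCK
            block = node_data[i:i + BLOCK]
            digest = hashlib.sha256(block).hexdigest()
            if idx >= len(template_sums) or digest != template_sums[idx]:
                delta.append((idx, block))
        return delta

    def restore(template: bytes, delta, size):
        # Rebuild a node's state from the template plus its delta blocks.
        buf = bytearray(template[:size].ljust(size, b"\0"))
        for idx, block in delta:
            buf[idx * BLOCK: idx * BLOCK + len(block)] = block
        return bytes(buf)
    ```

    Nodes whose state differs little from the broadcast template transmit only a few blocks, which is the source of the claimed savings.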

  7. Testing for normality

    CERN Document Server

    Thode, Henry C

    2002-01-01

    Describes the selection, design, theory, and application of tests for normality. Covers robust estimation, test power, and univariate and multivariate normality. Contains tests for multivariate normality and both coordinate-dependent and invariant approaches.
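
    One of the simplest moment-based approaches to testing normality can be sketched as a Jarque-Bera-type statistic (a minimal illustration, not code from the book):

    ```python
    def skew_kurtosis(xs):
        # Sample skewness and kurtosis from central moments.
        n = len(xs)
        mean = sum(xs) / n
        m2 = sum((x - mean) ** 2 for x in xs) / n
        m3 = sum((x - mean) ** 3 for x in xs) / n
        m4 = sum((x - mean) ** 4 for x in xs) / n
        return m3 / m2 ** 1.5, m4 / m2 ** 2

    def jarque_bera(xs):
        # Small for near-normal samples (skewness ~ 0, kurtosis ~ 3),
        # large when either moment departs from the normal values.
        n = len(xs)
        s, k = skew_kurtosis(xs)
        return n / 6.0 * (s ** 2 + (k - 3.0) ** 2 / 4.0)
    ```

    A uniform sample, whose kurtosis is 1.8 rather than 3, scores far higher than a Gaussian sample of the same size.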

  8. Recent progress in 3D EM/EM-PIC simulation with ARGUS and parallel ARGUS

    International Nuclear Information System (INIS)

    Mankofsky, A.; Petillo, J.; Krueger, W.; Mondelli, A.; McNamara, B.; Philp, R.

    1994-01-01

    ARGUS is an integrated, 3-D, volumetric simulation model for systems involving electric and magnetic fields and charged particles, including materials embedded in the simulation region. The code offers the capability to carry out time-domain and frequency-domain electromagnetic simulations of complex physical systems. ARGUS offers a boolean solid-model structure input capability that can include essentially arbitrary structures on the computational domain, and a modular architecture that allows multiple physics packages to access the same data structure and to share common code utilities. Physics modules are in place to compute electrostatic and electromagnetic fields, the normal modes of RF structures, and self-consistent particle-in-cell (PIC) simulation in either a time-dependent mode or a steady-state mode. The PIC modules include multiple particle species, the Lorentz equations of motion, and algorithms for the creation of particles by emission from material surfaces, injection onto the grid, and ionization. In this paper, we present an updated overview of ARGUS, with particular emphasis given to recent algorithmic and computational advances. These include a completely rewritten frequency-domain solver which efficiently treats lossy materials and periodic structures, a parallel version of ARGUS with support for both shared-memory parallel vector (i.e., CRAY) machines and distributed-memory massively parallel MIMD systems, and numerous new applications of the code

  9. Porting, parallelization and performance evaluation experiences with massively parallel supercomputing system based on transputer

    Energy Technology Data Exchange (ETDEWEB)

    Fruscione, M.; Stofella, P.; Cleri, F.; Mazzeo, M.; Ornelli, P.; Schiano, P.

    1991-02-01

    This paper describes the most important aspects and results obtained from the porting and parallelization of two programs, VPMC and EULERO, on a Meiko multiprocessor "Computing Surface" system. The VPMC program was developed by ENEA (the Italian Agency for Energy, New Technologies and the Environment) to simulate travelling electrons. EULERO is a fluid dynamics simulation program owned by CIRA (Centro Italiano di Ricerche Aereospaziali), which uses it for its aerospace component projects. This report gives short descriptions of the two programs and their parallelization methodologies, and provides a performance evaluation of the Meiko "Computing Surface" system. Moreover, these performance data are compared with corresponding data obtained with IBM 3090, CRAY and other computers by ENEA and CIRA in their research and development activities.

  10. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost-effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short-circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests

  11. Arc parallel extension in Higher and Lesser Himalayas, evidence ...

    Indian Academy of Sciences (India)

    They are represented by arc-perpendicular normal faults and arc-parallel sinistral strike-slip faults. We discuss ... The partitioning of stress due to oblique convergence is argued based on evidence of left-lateral slip in NE-Himalaya, right-lateral slip in NW-Himalaya and absence of translation in the central part. The amount of ...

  12. A parallel implementation of the ghost-cell immersed boundary ...

    Indian Academy of Sciences (India)

    A modified version of the previously reported ghost-cell immersed boundary method is implemented in parallel environment based on distributed memory allocation. Reconstruction of the flow variables is carried out by the inverse distance weighting technique. Implementation of the normal pressure gradient on the ...

  13. The Normal Distribution From Binomial to Normal

    Indian Academy of Sciences (India)

    The Normal Distribution: From Binomial to Normal. S Ramasubramanian. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 6, June 1997, pp. 15-24. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/06/0015-0024
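
    The article's theme, the de Moivre-Laplace approximation of Binomial(n, p) by a normal density, can be checked numerically (an illustrative sketch, not from the article):

    ```python
    import math

    def binom_pmf(n, p, k):
        # Exact binomial probability mass at k.
        return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

    def normal_pdf(x, mu, sigma):
        # Normal density with the matching mean and standard deviation.
        return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

    def max_abs_error(n, p):
        # Worst pointwise gap between Binomial(n, p) and its normal
        # approximation with mu = n*p, sigma = sqrt(n*p*(1-p)).
        mu, sigma = n * p, math.sqrt(n * p * (1 - p))
        return max(abs(binom_pmf(n, p, k) - normal_pdf(k, mu, sigma))
                   for k in range(n + 1))
    ```

    The gap shrinks as n grows, which is the content of the binomial-to-normal limit.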

  14. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
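
    The combination rules the straw models stand for can be written directly (a minimal sketch):

    ```python
    def series(*resistances):
        # Series: the same current flows through each element, so
        # resistances add -- like straws joined end to end.
        return sum(resistances)

    def parallel(*resistances):
        # Parallel: conductances add, so 1/R_total = sum of 1/R_i --
        # like straws side by side offering extra paths.
        return 1.0 / sum(1.0 / r for r in resistances)
    ```

    Note that a parallel combination is always smaller than its smallest branch, just as adding a straw always eases the total flow.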

  15. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.
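
    The level-synchronous paradigm mentioned above can be illustrated with a serial sketch (the real library distributes the frontier across locations; this shows only the control structure):

    ```python
    def bfs_levels(adj, source):
        # Level-synchronous BFS: process the whole frontier, then
        # synchronize before starting the next level.
        level = {source: 0}
        frontier = [source]
        depth = 0
        while frontier:
            depth += 1
            next_frontier = []
            for u in frontier:            # distributed across locations in stapl
                for v in adj.get(u, ()):  # neighbor visits could run in parallel
                    if v not in level:
                        level[v] = depth
                        next_frontier.append(v)
            frontier = next_frontier      # implicit barrier between levels
        return level
    ```

    The asynchronous and coarse-grained paradigms relax exactly this per-level barrier.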

  16. Metal structures with parallel pores

    Science.gov (United States)

    Sherfey, J. M.

    1976-01-01

    Four methods of fabricating metal plates having uniformly sized parallel pores are studied: elongate bundle, wind and sinter, extrude and sinter, and corrugate stack. Such plates are suitable for electrodes for electrochemical and fuel cells.

  17. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  18. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    The underlying neural mechanisms of a perceptual bias for in-phase bimanual coordination movements are not well understood. In the present study, we measured brain activity with functional magnetic resonance imaging in healthy subjects during a task, where subjects performed bimanual index finger...... adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas...... a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...

  19. Parallel Processing and Scientific Applications

    Science.gov (United States)

    1992-11-30

    the algorithm have computational complexity O(N) and be scalable. Multigrid Methods. Among the known scalable algorithms for elliptic PDE solution, the...close to linear in N and scales linearly in P. However multigrid methods have a particular difficulty on parallel machines in that they do not have...This inherent disadvantage of multigrid methods has led to the search for truly parallel MG methods - multigrid methods that can utilize all processors

  20. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  1. A Survey of Parallel A*

    OpenAIRE

    Fukunaga, Alex; Botea, Adi; Jinnai, Yuu; Kishimoto, Akihiro

    2017-01-01

    A* is a best-first search algorithm for finding optimal-cost paths in graphs. A* benefits significantly from parallelism because in many applications A* is limited by memory usage, so distributed-memory implementations of A* that use all of the aggregate memory on the cluster enable problems to be solved that cannot be solved by serial, single-machine implementations. We survey approaches to parallel A*, focusing on decentralized approaches to A* which partition the state space among proces...
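
    A common decentralized scheme (hash-distributed A*, HDA*) assigns each state to an owning process by hashing; a serial sketch of the bookkeeping (illustrative, not code from the survey):

    ```python
    import heapq

    def owner(state, n_ranks):
        # Decentralized partitioning: a hash maps each state to one rank.
        return hash(state) % n_ranks

    def hda_star(neighbors, h, start, goal, n_ranks=4):
        # Serial simulation: each rank keeps its own open list, and
        # generated successors are "sent" to their owning rank's list.
        open_lists = [[] for _ in range(n_ranks)]
        g = {start: 0}
        heapq.heappush(open_lists[owner(start, n_ranks)], (h(start), start))
        while any(open_lists):
            # Expand the globally best node here for simplicity; a real
            # HDA* lets every rank expand from its own list concurrently.
            rank = min((r for r in range(n_ranks) if open_lists[r]),
                       key=lambda r: open_lists[r][0][0])
            f, u = heapq.heappop(open_lists[rank])
            if u == goal:
                return g[u]
            for v, w in neighbors(u):
                if v not in g or g[u] + w < g[v]:
                    g[v] = g[u] + w
                    heapq.heappush(open_lists[owner(v, n_ranks)],
                                   (g[v] + h(v), v))
        return None
    ```

    The hash spreads both the open list and the memory for visited states across ranks, which is what lets the aggregate cluster memory be used.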

  2. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  3. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  4. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors
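
    The sensitivity of speedup to the communication-to-computation ratio noted above can be made concrete with a toy timing model (an assumption for illustration, not the authors' measurements):

    ```python
    def speedup(work, comm_per_node, nodes):
        # Toy model: parallel time = divided work plus a communication
        # term that grows with node count (e.g., global exchanges).
        serial_time = work
        parallel_time = work / nodes + comm_per_node * nodes
        return serial_time / parallel_time
    ```

    In this model speedup peaks near nodes = sqrt(work / comm_per_node); beyond that, adding processors slows the run, which is why the authors stress avoiding global communication on the iPSC.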

  5. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  6. A high performance parallel approach to medical imaging

    International Nuclear Information System (INIS)

    Frieder, G.; Frieder, O.; Stytz, M.R.

    1988-01-01

    Research into medical imaging using general-purpose parallel processing architectures is described and a review of the performance of previous medical imaging machines is provided. Results demonstrating that general-purpose parallel architectures can achieve performance comparable to other, specialized medical imaging machine architectures are presented. A new back-to-front hidden-surface removal algorithm is described. Results demonstrating the computational savings obtained by using the modified back-to-front hidden-surface removal algorithm are presented. Performance figures for forming a full-scale medical image on a mesh-interconnected multiprocessor are presented
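
    The back-to-front idea can be sketched as a painter's-style loop (illustrative only; the paper's modified algorithm has its own ordering details):

    ```python
    def paint_back_to_front(primitives, width, height):
        # Back-to-front hidden-surface removal: draw primitives from
        # farthest to nearest so nearer ones simply overwrite farther
        # ones, with no per-pixel depth test needed.
        frame = [[None] * width for _ in range(height)]
        for depth, label, cells in sorted(primitives, reverse=True):
            for x, y in cells:
                frame[y][x] = label  # nearer primitives, painted later, win
        return frame
    ```

    Each primitive here is a (depth, label, cells) tuple with larger depth meaning farther from the viewer; overwriting replaces explicit visibility comparisons.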

  7. Occurrence of perfluorooctane sulfonate (PFOS) and perfluorooctanoate (PFOA) in N.E. Spanish surface waters and their removal in a drinking water treatment plant that combines conventional and advanced treatments in parallel lines.

    Science.gov (United States)

    Flores, Cintia; Ventura, Francesc; Martin-Alonso, Jordi; Caixach, Josep

    2013-09-01

    Perfluorooctane sulfonate (PFOS) and perfluorooctanoate (PFOA) are two emerging contaminants that have been detected in all environmental compartments. However, while most of the studies in the literature deal with their presence or removal in wastewater treatment, few of them are devoted to their detection in treated drinking water and their fate during drinking water treatment. In this study, analyses of PFOS and PFOA have been carried out in river water samples and in the different stages of a drinking water treatment plant (DWTP) which has recently improved its conventional treatment process by adding ultrafiltration and reverse osmosis in a parallel treatment line. Conventional and advanced treatments have been studied in several pilot plants and in the DWTP, which offers the opportunity to compare both treatments operating simultaneously. From the results obtained, neither preoxidation, sand filtration, nor ozonation removed both perfluorinated compounds. Among the advanced treatments, reverse osmosis proved more effective than reverse electrodialysis at removing PFOA and PFOS in the different pilot-plant configurations assayed. Granular activated carbon, with average elimination efficiencies of 64±11% and 45±19% for PFOS and PFOA, respectively, and especially reverse osmosis, which was able to remove ≥99% of both compounds, were the sole effective treatment steps. Trace levels of PFOS (3.0-21 ng/L) and PFOA in treated water were significantly lowered in comparison to those measured in preceding years. These concentrations represent overall removal efficiencies of 89±22% for PFOA and 86±7% for PFOS. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding time and the effectiveness of parallelization.

  9. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.

  10. Normalized modes at selected points without normalization

    Science.gov (United States)

    Kausel, Eduardo

    2018-04-01

    As every textbook on linear algebra demonstrates, the eigenvectors for the general eigenvalue problem | K - λM | = 0 involving two real, symmetric, positive definite matrices K, M satisfy some well-defined orthogonality conditions. Equally well known is the fact that those eigenvectors can be normalized so that their modal mass μ = ϕᵀMϕ is unity: it suffices to divide each unscaled mode by the square root of the modal mass. Thus, the normalization is the result of an explicit calculation applied to the modes after they were obtained by some means. However, we show herein that the normalized modes are not merely convenient forms of scaling, but are actually intrinsic properties of the pair of matrices K, M; that is, the matrices already "know" about normalization even before the modes have been obtained. This means that we can obtain individual components of the normalized modes directly from the eigenvalue problem, without needing to obtain either all of the modes or, for that matter, any one complete mode. These results are achieved by means of the residue theorem of operational calculus, a finding that is rather remarkable inasmuch as the residues themselves do not make use of any orthogonality conditions or normalization in the first place. It appears that this obscure property connecting the general eigenvalue problem of modal analysis with the residue theorem of operational calculus may have been overlooked up until now, but it has in turn interesting theoretical implications.
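
    For a concrete 2x2 case, mass-normalized modes can be computed directly from K and M (a plain illustration of the conventional route to μ = ϕᵀMϕ = 1; the paper's residue-theorem shortcut is not reproduced here):

    ```python
    import math

    def normalized_modes_2x2(K, M):
        # Solve det(K - lam*M) = 0 for a symmetric 2x2 pair and return
        # (lam, phi) pairs with phi scaled so that phi^T M phi = 1.
        (k11, k12), (_, k22) = K
        (m11, m12), (_, m22) = M
        # det(K - lam*M) = a*lam^2 + b*lam + c
        a = m11 * m22 - m12 * m12
        b = -(k11 * m22 + k22 * m11 - 2 * k12 * m12)
        c = k11 * k22 - k12 * k12
        disc = math.sqrt(b * b - 4 * a * c)
        out = []
        for lam in sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)]):
            # (K - lam*M) phi = 0: take phi from the first row's null space.
            phi = [-(k12 - lam * m12), k11 - lam * m11]
            if abs(phi[0]) + abs(phi[1]) < 1e-12:
                phi = [1.0, 0.0]
            mu = (m11 * phi[0] ** 2 + 2 * m12 * phi[0] * phi[1]
                  + m22 * phi[1] ** 2)       # modal mass phi^T M phi
            s = 1.0 / math.sqrt(mu)
            out.append((lam, [s * phi[0], s * phi[1]]))
        return out
    ```

    With K = [[2, -1], [-1, 2]] and M the identity, this yields λ = 1, 3 with modes (1, 1)/√2 and (1, -1)/√2.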

  11. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.
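
    The recognize-and-replace idea can be caricatured as follows (the IR node, matcher, and chunked reduction are all invented for illustration and are far cruder than the system's concept recognition): a sequential accumulation loop is identified as a dot-product reduction and swapped for a parallel equivalent.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# A toy "dusty deck" loop, represented as a tiny IR node (names invented):
loop = {"kind": "loop", "body": "s = s + a[i] * b[i]"}

def recognize(node):
    """Very crude pattern matcher: spot an accumulating dot-product loop."""
    if node["kind"] == "loop" and "s = s +" in node["body"] and "a[i] * b[i]" in node["body"]:
        return "dot_product"
    return None

def parallel_dot(a, b, workers=4):
    """The replacement algorithm: a chunked parallel reduction instead of the serial loop."""
    chunks = np.array_split(np.arange(len(a)), workers)
    with ThreadPoolExecutor(workers) as ex:
        partials = ex.map(lambda idx: float(a[idx] @ b[idx]), chunks)
    return sum(partials)

a = np.arange(8.0)
b = np.ones(8)
result = parallel_dot(a, b) if recognize(loop) == "dot_product" else None
print(result)   # 28.0
```

    The point of local algorithm replacement is exactly this: once the concept is known, the generated code need not mirror the sequential loop structure at all.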

  12. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with that of the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
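
    For reference, a minimal sketch of the Davidon-Fletcher-Powell update on a small quadratic (the problem data are invented; in a transputer implementation the gradient evaluations and the matrix-vector products inside the update are the natural parallel components):

```python
import numpy as np

def dfp_update(H, s, y):
    """DFP inverse-Hessian update: H+ = H + ss^T/(s^T y) - (Hy)(Hy)^T/(y^T Hy)."""
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

# Invented quadratic test problem f(x) = 0.5 x^T A x - b^T x:
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

x = np.zeros(2)
H = np.eye(2)
for _ in range(5):
    g = grad(x)
    if np.linalg.norm(g) < 1e-12:
        break
    d = -H @ g
    alpha = -(g @ d) / (d @ A @ d)   # exact line search (cheap for a quadratic)
    s = alpha * d
    y = grad(x + s) - g
    H = dfp_update(H, s, y)
    x = x + s

print(x)   # converges to A^{-1} b = [0.2, 0.4]
```

    With exact line searches, DFP terminates on an n-dimensional quadratic in at most n steps, which makes a small problem like this a convenient correctness check.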

  13. Parallelizing Timed Petri Net simulations

    Science.gov (United States)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold: it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.
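
    A toy sequential TPN simulator of the kind such parallelization schemes would accelerate (deterministic firing delays, single-server transitions, and an invented API; real tools are far richer):

```python
import heapq

def simulate(pre, post, delay, marking, horizon):
    """Minimal timed Petri net: pre/post map transition -> {place: weight};
    an enabled transition reserves its input tokens and fires after delay[t].
    Single-server semantics: each transition has at most one pending firing
    scheduled per call to `schedule`. `marking` is updated in place."""
    events = []

    def enabled(t):
        return all(marking.get(p, 0) >= w for p, w in pre[t].items())

    def schedule(now):
        for t in pre:
            if enabled(t):
                for p, w in pre[t].items():   # reserve input tokens
                    marking[p] -= w
                heapq.heappush(events, (now + delay[t], t))

    schedule(0.0)
    fired = []
    while events and events[0][0] <= horizon:
        time, t = heapq.heappop(events)
        for p, w in post[t].items():          # deposit output tokens
            marking[p] = marking.get(p, 0) + w
        fired.append((time, t))
        schedule(time)
    return fired, marking

# Two-place cycle: p1 -> t1 -> p2 -> t2 -> p1, delays 1 and 2, one token.
pre = {"t1": {"p1": 1}, "t2": {"p2": 1}}
post = {"t1": {"p2": 1}, "t2": {"p1": 1}}
delay = {"t1": 1.0, "t2": 2.0}
fired, final = simulate(pre, post, delay, {"p1": 1, "p2": 0}, horizon=10.0)
print(len(fired))   # 7 firings over 10 time units
```

    The event list above is the shared state that makes parallelization nontrivial: distributed TPN simulation must either synchronize such events conservatively or roll back optimistically.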

  14. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  15. Kinematic Analysis of a 3-dof Parallel Machine Tool with Large Workspace

    Directory of Open Access Journals (Sweden)

    Shi Yan

    2016-01-01

    Full Text Available The kinematics of a 3-dof (degree of freedom) parallel machine tool with a large workspace was analyzed. The workspace volume and surface and the boundary posture angles of the 3-dof parallel machine tool are relatively large. Firstly, a three-dimensional simulation manipulator of the 3-dof parallel machine tool was constructed, and its joint distribution was described. Secondly, kinematic models of the 3-dof parallel machine tool were established, including displacement analysis, velocity analysis, and acceleration analysis. Finally, the kinematic models of the machine tool were verified by a numerical example. The results are of significance for the application of the parallel machine tool.

  16. The Acoustic and Perceptual Effects of Series and Parallel Processing

    Directory of Open Access Journals (Sweden)

    Melinda C. Anderson

    2009-01-01

    Full Text Available Temporal envelope (TE) cues provide a great deal of speech information. This paper explores how spectral subtraction and dynamic-range compression gain modifications affect TE fluctuations for parallel and series configurations. In parallel processing, the algorithms compute gains based on the same input signal, and the gains in dB are summed. In series processing, the output from the first algorithm forms the input to the second algorithm. Acoustic measurements show that the parallel arrangement produces more gain fluctuations, introducing more changes to the TE than the series configurations. Intelligibility tests for normal-hearing (NH) and hearing-impaired (HI) listeners show (1) that parallel processing gives significantly poorer speech understanding than an unprocessed (UNP) signal and the series arrangement and (2) that series processing and UNP yield similar results. Speech quality tests show that UNP is preferred to both parallel and series arrangements, although spectral subtraction is the most preferred. No significant differences exist in sound quality between the series and parallel arrangements, or between the NH group and the HI group. These results indicate that gain modifications affect intelligibility and sound quality differently. Listeners appear to have a higher tolerance for gain modifications with regard to intelligibility, while judgments of sound quality appear to be more affected by smaller amounts of gain modification.
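
    The two configurations can be illustrated with a toy compressor (the gain rule and numbers are invented, not the paper's algorithms). Because the second stage in series sees an already-attenuated signal, the parallel arrangement applies more total gain change, consistent with the larger TE fluctuations reported:

```python
import numpy as np

def comp_gain_db(x):
    """Invented toy compressor: 6 dB of attenuation above a 0.5 threshold."""
    return np.where(np.abs(x) > 0.5, -6.0, 0.0)

def apply_db(x, g_db):
    return x * 10.0 ** (g_db / 20.0)

x = np.array([0.3, 0.8])   # one low-level and one high-level sample

# Parallel: both stages compute gains from the SAME input; gains in dB are summed.
parallel = apply_db(x, comp_gain_db(x) + comp_gain_db(x))

# Series: the second stage sees the first stage's output, which here has already
# dropped below the threshold, so less total gain change is applied.
stage1 = apply_db(x, comp_gain_db(x))
series = apply_db(stage1, comp_gain_db(stage1))

print(parallel[1], series[1])   # the parallel path attenuates the peak more
```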

  17. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
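
    For context, here is a minimal O(nm^2) dynamic program for the underlying problem of mapping a chain of m module weights onto n processors as contiguous blocks, minimizing the bottleneck load (the improved algorithms in this line of work solve such problems asymptotically faster; this baseline is only a sketch):

```python
def bottleneck_partition(w, n):
    """Partition the chain of weights w into n contiguous blocks, minimizing
    the maximum block load. Simple O(n m^2) DP; illustrative baseline only."""
    m = len(w)
    pre = [0]
    for x in w:
        pre.append(pre[-1] + x)
    seg = lambda i, j: pre[j] - pre[i]          # load of modules i..j-1
    INF = float("inf")
    # dp[k][j]: best bottleneck for the first j modules on k processors
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0
    for k in range(1, n + 1):
        for j in range(1, m + 1):
            dp[k][j] = min(max(dp[k - 1][i], seg(i, j)) for i in range(j))
    return dp[n][m]

print(bottleneck_partition([1, 2, 3, 4, 5], 2))   # 9: split as [1,2,3] | [4,5]
```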

  18. Coupling of surface relaxation and polarization in PbTiO{sub 3} from atomistic simulation

    Energy Technology Data Exchange (ETDEWEB)

    Behera, R K; Sinnott, S B; Phillpot, S R [Department of Materials Science and Engineering, University of Florida, Gainesville, FL 32611 (United States); Hinojosa, B B; Asthagiri, A [Department of Chemical Engineering, University of Florida, Gainesville, FL 32611 (United States)], E-mail: sphil@mse.ufl.edu

    2008-10-01

    Molecular dynamics simulations are used to characterize ferroelectricity on the (001) surfaces of PbTiO{sub 3} (PT), one of the most widely studied ferroelectric materials. Two different empirical interatomic shell model potentials are used. Both PbO and TiO{sub 2} surface terminations in PT under open circuit electrical boundary conditions are characterized. The results are found to be in good agreement with the results of density functional theory calculations. The atomic relaxations, interlayer spacings and surface rumplings of each of the four possible surface terminations are analyzed. The deviation of the polarization from the bulk value is observed to be larger when the polarization points out of the surface than when it points into the surface. Analysis of the surface energies for free-standing films shows that polarization parallel to the surface is energetically more favorable than the polarization normal to the surfaces.

  19. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  20. Parallel FFT using Eden Skeletons

    DEFF Research Database (Denmark)

    Berthold, Jost; Dieterle, Mischa; Lobachev, Oleg

    2009-01-01

    The paper investigates and compares skeleton-based Eden implementations of different FFT-algorithms on workstation clusters with distributed memory. Our experiments show that the basic divide-and-conquer versions suffer from an inherent input distribution and result collection problem. Advanced approaches like calculating FFT using a parallel map-and-transpose skeleton provide more flexibility to overcome these problems. Assuming a distributed access to input data and re-organising computation to return results in a distributed way improves the parallel runtime behaviour.

  1. Development and application of efficient strategies for parallel magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Breuer, F.

    2006-07-01

    Virtually all existing MRI applications require both a high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required, which are technically challenging to provide. In addition to significant hardware costs, peripheral neuro-stimulation and the surpassing of admissible acoustic noise levels may occur. Today's whole body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990's. Within the last years, parallel imaging methods have become commercially available, and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding which normally is performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved given a specific imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image artifacts.

  2. Development and application of efficient strategies for parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Breuer, F.

    2006-01-01

    Virtually all existing MRI applications require both a high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required, which are technically challenging to provide. In addition to significant hardware costs, peripheral neuro-stimulation and the surpassing of admissible acoustic noise levels may occur. Today's whole body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990's. Within the last years, parallel imaging methods have become commercially available, and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding which normally is performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved given a specific imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image artifacts
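
    The unfolding step at the heart of PPA can be sketched for a 1-D SENSE-like example (the coil sensitivities, image, and sizes below are invented; real reconstructions work on 2-D/3-D data with measured sensitivities): undersampling by a factor R folds the image, and each aliased pixel is recovered by a small per-pixel least-squares solve across the coils.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, R = 8, 4, 2                     # image length, number of coils, acceleration
x = rng.random(N)                     # "true" 1-D image
S = rng.random((C, N)) + 0.1          # invented coil sensitivity profiles

# Undersampling by R in k-space folds the image: each aliased pixel i
# superimposes x[i] and x[i + N//R], weighted by the coil sensitivities.
alias = np.array([[S[c, i] * x[i] + S[c, i + N // R] * x[i + N // R]
                   for i in range(N // R)] for c in range(C)])

# SENSE-style unfolding: per aliased pixel, solve the small C x R system.
x_hat = np.zeros(N)
for i in range(N // R):
    A = S[:, [i, i + N // R]]         # C x R sensitivity matrix for this pixel
    sol, *_ = np.linalg.lstsq(A, alias[:, i], rcond=None)
    x_hat[[i, i + N // R]] = sol

print(np.allclose(x_hat, x, atol=1e-6))   # True: the fold is undone exactly
```

    This is why the coil array can "partially replace" gradient encoding: the spatial variation of the sensitivities supplies the missing encoding information.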

  3. Surface spectra of Weyl semimetals through self-adjoint extensions

    Science.gov (United States)

    Seradjeh, Babak; Vennettilli, Michael

    2018-02-01

    We apply the method of self-adjoint extensions of Hermitian operators to the low-energy, continuum Hamiltonians of Weyl semimetals in bounded geometries and derive the spectrum of the surface states on the boundary. This allows for the full characterization of boundary conditions and the surface spectra on surfaces both normal to the Weyl node separation and parallel to it. We show that the boundary conditions for quadratic bulk dispersions are, in general, specified by a U(2) matrix relating the wave function and its derivatives normal to the surface. We give a general procedure to obtain the surface spectra from these boundary conditions and derive them in specific cases of bulk dispersion. We consider the role of global symmetries in the boundary conditions and their effect on the surface spectrum. We point out several interesting features of the surface spectra for different choices of boundary conditions, such as a Mexican-hat shaped dispersion on the surface normal to the Weyl node separation. We find that the existence of bound states, Fermi arcs, and the shape of their dispersion depend on the choice of boundary conditions. This illustrates the importance of the physics at and near the boundaries in the general statement of bulk-boundary correspondence.

  4. Surface energy and surface stress on vicinals by revisiting the Shuttleworth relation

    Science.gov (United States)

    Hecquet, Pascal

    2018-04-01

    In 1998 [Surf. Sci. 412/413, 639 (1998)], we showed that the step stress on vicinals varies as 1/L, L being the distance between steps, while the inter-step interaction energy primarily follows a 1/L^2 law, from the well-known Marchenko-Parshin model. In this paper, we give a better understanding of the interaction term of the step stress. The step stress is calculated with respect to the nominal surface stress. Consequently, we calculate the diagonal surface stresses both in the vicinal system (x, y, z), where z is normal to the vicinal, and in the projected system (x, b, c), where b is normal to the nominal terrace. Moreover, we calculate the surface stresses by using two methods: the first, called the 'Zero' method, from the surface pressure forces, and the second, called the 'One' method, by homogeneously deforming the vicinal in the parallel direction, x or y, and calculating the surface energy excess proportional to the deformation. By using the 'One' method on the vicinal Cu(0 1 M), we find that the step deformations, due to the applied deformation, vary as 1/L by the same factor for the tensor directions bb and cb, and by twice that factor for the parallel direction yy. Due to the vanishing of the surface stress normal to the vicinal, the variation of the step stress in the direction yy is better described by using only the step deformation in the same direction. We revisit the Shuttleworth formula: while the variation of the step stress in the direction xx is the same between the two methods, the variation in the direction yy is higher by 76% for the 'Zero' method with respect to the 'One' method. In addition to the step energy, we confirm that the variation of the step stress must be taken into account for the understanding of the equilibrium of vicinals when they are not deformed.

  5. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  6. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  7. [Falsified medicines in parallel trade].

    Science.gov (United States)

    Muckenfuß, Heide

    2017-11-01

    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e. g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  8. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed, and the report shows how these classes can be used.
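
    A minimal CSR sparse matrix-vector product with rows partitioned across workers (a shared-memory Python stand-in for the report's C++ MPI / MPI-OpenMP versions; the data layout and partitioning are a sketch, not the report's code):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def spmv_csr(vals, cols, rowptr, x, workers=2):
    """y = A @ x for A in compressed sparse row (CSR) format.
    Rows are split into contiguous blocks, one per worker, so each worker
    writes disjoint entries of y (no synchronization needed on the output)."""
    y = np.zeros(len(rowptr) - 1)

    def rows(lo, hi):
        for r in range(lo, hi):
            s = 0.0
            for k in range(rowptr[r], rowptr[r + 1]):
                s += vals[k] * x[cols[k]]
            y[r] = s

    bounds = np.linspace(0, len(y), workers + 1).astype(int)
    with ThreadPoolExecutor(workers) as ex:
        list(ex.map(lambda i: rows(bounds[i], bounds[i + 1]), range(workers)))
    return y

# 3x3 example: [[2,0,1],[0,3,0],[4,0,5]] @ [1,1,1] = [3, 3, 9]
vals = [2.0, 1.0, 3.0, 4.0, 5.0]
cols = [0, 2, 1, 0, 2]
rowptr = [0, 2, 3, 5]
print(spmv_csr(vals, cols, rowptr, np.ones(3)))   # -> [3. 3. 9.]
```

    In the MPI setting the same row-block decomposition applies, but each rank also needs the remote entries of x touched by its columns, which is where the communication cost arises.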

  9. Elongation Cutoff Technique: Parallel Performance

    Directory of Open Access Journals (Sweden)

    Jacek Korchowiec

    2008-01-01

    Full Text Available It is demonstrated that the elongation cutoff technique (ECT) substantially speeds up the quantum-chemical calculation at the Hartree-Fock (HF) level of theory and is especially well suited for parallel performance. A comparison of ECT timings for water chains with the reference HF calculations is given. The analysis includes the overall CPU (central processing unit) time and its most time-consuming steps.

  10. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray

  11. Lightweight Specifications for Parallel Correctness

    Science.gov (United States)

    2012-12-05

    that typically no programmer specification is needed. However, manually determining which reported races are benign and which are bugs can be time...marked by the programmer, but no statement is enclosed with if(true∗) — i.e., the set S∗ is empty. We are given a set T of parallel execution traces, and

  12. The parallel adult education system

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne

    2015-01-01

    for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...

  13. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  14. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x^2 - y^2 shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.

  15. Matpar: Parallel Extensions for MATLAB

    Science.gov (United States)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  16. The impact of airwave on tangential and normal components of electric field in seabed logging data

    Science.gov (United States)

    Rostami, Amir; Soleimani, Hassan; Yahya, Noorhana; Nyamasvisva, Tadiwa Elisha; Rauf, Muhammad

    2016-11-01

    Seabed Logging (SBL) is a recent application of the Controlled Source Electromagnetic (CSEM) method, based on the resistivity of the layers beneath the seafloor, used to delineate marine hydrocarbon reservoirs. In this method, an ultra-low-frequency electromagnetic (EM) wave is emitted by a straight electric dipole which moves parallel to the seabed. Following Maxwell's equations, the waves reflected and refracted from the different layers are recorded by a receiver line lying on the sea floor, to define the contrast in amplitude and phase between the responses of an oil-bearing reservoir and the surrounding host rocks. The main concern of the current work is the behavior of the airwave, the wave that propagates in the seawater, is guided by the sea surface, and is refracted back to the receiver line, and its impact on the tangential and normal components of the received electric field amplitude. It will be reported that the most significant part of the tangential component is the airwave, while the airwave does not remarkably affect the normal component of the received electric field.

  17. Parallelization method for three dimensional MOC calculation

    International Nuclear Information System (INIS)

    Zhang Zhizhu; Li Qing; Wang Kan

    2013-01-01

    A parallelization method based on angular decomposition was designed for the three-dimensional MOC. To improve parallel efficiency, the directions were pre-grouped and the groups assembled to minimize communication. The improved parallelization method was applied to the three-dimensional MOC code TCM. The numerical results show that the results of the parallel calculation agree with the serial results, and that the parallel efficiency increases markedly after communication optimization and load balancing. (authors)
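
    The angular decomposition can be mimicked in a few lines (the sweep below is an invented stand-in for a real MOC transport sweep, and the quadrature is made up): each worker processes its pre-grouped set of directions, and the partial scalar fluxes are then reduced, which reproduces the serial result, mirroring the agreement reported above.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def sweep(angles, weights, ncells=4):
    """Invented stand-in for a transport sweep over a group of discrete
    directions; returns that group's partial contribution to the scalar flux."""
    phi = np.zeros(ncells)
    for a, w in zip(angles, weights):
        phi += w * np.cos(a) ** 2          # fake angular-flux contribution
    return phi

angles = np.linspace(0.1, 1.5, 8)
weights = np.full(8, 1.0 / 8.0)
groups = np.array_split(np.arange(8), 4)   # pre-grouped directions (load balance)

with ThreadPoolExecutor(4) as ex:
    partials = list(ex.map(lambda g: sweep(angles[g], weights[g]), groups))
phi_parallel = sum(partials)               # the reduction step (MPI_Allreduce-like)

print(np.allclose(phi_parallel, sweep(angles, weights)))   # True
```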

  18. Effect of parallel electric fields on the whistler mode wave propagation in the magnetosphere

    International Nuclear Information System (INIS)

    Gupta, G.P.; Singh, R.N.

    1975-01-01

    The effect of parallel electric fields on whistler mode wave propagation has been studied. To account for the parallel electric fields, the dispersion equation has been analyzed, and refractive index surfaces for magnetospheric plasma have been constructed. The presence of parallel electric fields deforms the refractive index surfaces, which diffuses the energy flow and produces defocusing of the whistler mode waves. The parallel electric field induces an instability in the whistler mode waves propagating through the magnetosphere. The growth or decay of the whistler mode instability depends on the direction of the parallel electric fields. It is concluded that analyses of whistler wave records received on the ground should account for the role of parallel electric fields.

  19. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphical processing units, have broadly empowered parallelism. Compilers are being updated to meet the resulting synchronization and threading challenges. Appropriate program and algorithm classification will greatly benefit software engineers in identifying opportunities for effective parallelization. In the present work we investigated current approaches to the classification of algorithms into species; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms was chosen that matches the structure of different issues and performs a given task. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants, and mathematical functions. With this, we can retain significant information that is not captured by the original algorithm species. We implemented these new capabilities in the tool, enabling automatic characterization of program code.

  20. Parallel Materialization of Large ABoxes.

    Science.gov (United States)

    Narayanan, Sivaramakrishnan; Catalyurek, Umit; Kurc, Tahsin; Saltz, Joel

    2009-01-01

    This paper is concerned with the efficient computation of materialization in a knowledge base with a large ABox. We present a framework for performing this task on a shared-nothing parallel machine. The framework partitions TBox and ABox axioms using a min-min strategy. It utilizes an existing system, like SwiftOWLIM, to perform local inference computations and coordinates exchange of relevant information between processors. Our approach is able to exploit parallelism in the axioms of the TBox to achieve speedup in a cluster. However, this approach is limited by the complexity of the TBox. We present an experimental evaluation of the framework using datasets from the Lehigh University Benchmark (LUBM).

  1. Structural synthesis of parallel robots

    CERN Document Server

    Gogu, Grigore

    This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators.  This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1.  Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

  2. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy......, and the limited memory in these architectures, severely constrains the data sets that can be processed. Moreover, the language-integrated cost semantics for nested data parallelism pioneered by NESL depends on a parallelism-flattening execution strategy that only exacerbates the problem. This is because...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...

  3. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  4. High-speed parallel counter

    International Nuclear Information System (INIS)

    Gus'kov, B.N.; Kalinnikov, V.A.; Krastev, V.R.; Maksimov, A.N.; Nikityuk, N.M.

    1985-01-01

    This paper describes a high-speed parallel counter that contains 31 inputs and 15 outputs and is implemented by integrated circuits of series 500. The counter is designed for fast sampling of events according to the number of particles that pass simultaneously through the hodoscopic plane of the detector. The minimum delay of the output signals relative to the input is 43 nsec. The duration of the output signals can be varied from 75 to 120 nsec

  5. An anthropologist in parallel structure

    Directory of Open Access Journals (Sweden)

    Noelle Molé Liston

    2016-08-01

    Full Text Available The essay examines the parallels between Molé Liston’s studies on labor and precarity in Italy and the United States’ anthropology job market. Probing the way economic shifts reshaped the field of the anthropology of Europe in the late 2000s, the piece explores how the neoliberalization of the American academy increased the value of studying the hardships and daily lives of non-western populations in Europe.

  6. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their variants, we treat the problem as a non-linear inverse problem. In order to avoid cost-intensive derivatives we use Landweber-Kaczmarz iteration, together with additional sparsity constraints to improve the overall results.
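The plain Landweber step that the Kaczmarz variant builds on is simple enough to sketch. The following is a minimal illustration for a toy linear problem y = Ax, not the authors' regularized MRI reconstruction (no sparsity constraints, no coil-wise Kaczmarz sweeps):

```python
import numpy as np

def landweber(A, y, omega=None, iters=2000):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (y - A x_k)."""
    if omega is None:
        # convergence requires 0 < omega < 2 / ||A||_2^2
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + omega * A.T @ (y - A @ x)
    return x

# toy consistent, over-determined system: recover x_true from y = A x_true
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
y = A @ x_true
x_rec = landweber(A, y)
```

The step size choice 1/||A||² is a conservative default; in practice one would add the regularization the abstract mentions rather than iterate to convergence.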

  7. Wakefield calculations on parallel computers

    International Nuclear Information System (INIS)

    Schoessow, P.

    1990-01-01

    The use of parallelism in the solution of wakefield problems is illustrated for two different computer architectures (SIMD and MIMD). Results are given for finite difference codes which have been implemented on a Connection Machine and an Alliant FX/8 and which are used to compute wakefields in dielectric loaded structures. Benchmarks on code performance are presented for both cases. 4 refs., 3 figs., 2 tabs

  8. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman

    2010-01-01

    Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,

  9. Optimising a parallel conjugate gradient solver

    Energy Technology Data Exchange (ETDEWEB)

    Field, M.R. [O'Reilly Institute, Dublin (Ireland)

    1996-12-31

    This work arises from the introduction of a parallel iterative solver to a large structural analysis finite element code. The code is called FEX and it was developed at Hitachi's Mechanical Engineering Laboratory. The FEX package can deal with a large range of structural analysis problems using a large number of finite element techniques. FEX can solve either stress or thermal analysis problems of a range of different types from plane stress to a full three-dimensional model. These problems can consist of a number of different materials which can be modelled by a range of material models. The structure being modelled can have the load applied at either a point or a surface, or by a pressure, a centrifugal force or just gravity. Alternatively a thermal load can be applied with a given initial temperature. The displacement of the structure can be constrained by having a fixed boundary or by prescribing the displacement at a boundary.

  10. Badlands: A parallel basin and landscape dynamics model

    Directory of Open Access Journals (Sweden)

    T. Salles

    2016-01-01

    Full Text Available Over more than three decades, a number of numerical landscape evolution models (LEMs have been developed to study the combined effects of climate, sea-level, tectonics and sediments on Earth surface dynamics. Most of them are written in efficient programming languages, but often cannot be used on parallel architectures. Here, I present a LEM which ports a common core of accepted physical principles governing landscape evolution into a distributed memory parallel environment. Badlands (acronym for BAsin anD LANdscape DynamicS is an open-source, flexible, TIN-based landscape evolution model, built to simulate topography development at various space and time scales.

  11. Parallel processing of genomics data

    Science.gov (United States)

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

    The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, and the analysis of this flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face these issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze the data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data that copes with high-dimensional data and achieves good response times. The proposed system is able to find statistically significant biological markers that discriminate classes of patients who respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.

  12. Corners of normal matrices

    Indian Academy of Sciences (India)

    The structure of general normal matrices is far more complicated than that of two special kinds — hermitian and unitary. There are many interesting theorems for hermitian and unitary matrices whose extensions to arbitrary normal matrices have proved to be extremely recalcitrant (see e.g., [1]). The problem whose study we ...

  13. Normalized medical information visualization.

    Science.gov (United States)

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Somolinos, Roberto; Castro, Antonio; Velázquez, Iker; Moreno, Oscar; García-Pacheco, José L; Pascual, Mario; Salvador, Carlos H

    2015-01-01

    A new mark-up programming language is introduced in order to facilitate and improve the visualization of ISO/EN 13606 dual model-based normalized medical information. This is the first time that visualization of normalized medical information is addressed and the programming language is intended to be used by medical non-IT professionals.

  14. Baby Poop: What's Normal?

    Science.gov (United States)

    ... I'm breast-feeding my newborn and her bowel movements are yellow and mushy. Is this normal for baby poop? Answers from Jay L. Hoecker, M.D. Yellow, mushy bowel movements are perfectly normal for breast-fed babies. Still, ...

  15. Biharmonic Submanifolds with Parallel Mean Curvature Vector in Pseudo-Euclidean Spaces

    International Nuclear Information System (INIS)

    Fu, Yu

    2013-01-01

    In this paper, we investigate biharmonic submanifolds in pseudo-Euclidean spaces with arbitrary index and dimension. We give a complete classification of biharmonic spacelike submanifolds with parallel mean curvature vector in pseudo-Euclidean spaces. We also determine all biharmonic Lorentzian surfaces with parallel mean curvature vector field in pseudo-Euclidean spaces

  16. Making nuclear 'normal'

    International Nuclear Information System (INIS)

    Haehlen, Peter; Elmiger, Bruno

    2000-01-01

    The mechanics of the Swiss NPPs' 'come and see' programme 1995-1999 were illustrated in our contributions to all PIME workshops since 1996. Now, after four annual 'waves', all the country has been covered by the NPPs' invitation to dialogue. This makes PIME 2000 the right time to shed some light on one particular objective of this initiative: making nuclear 'normal'. The principal aim of the 'come and see' programme, namely to give the Swiss NPPs 'a voice of their own' by the end of the nuclear moratorium 1990-2000, has clearly been attained and was commented on during earlier PIMEs. It is, however, equally important that Swiss nuclear energy not only made progress in terms of public 'presence', but also in terms of being perceived as a normal part of industry, as a normal branch of the economy. The message that Swiss nuclear energy is nothing but a normal business involving normal people, was stressed by several components of the multi-prong campaign: - The speakers in the TV ads were real - 'normal' - visitors' guides and not actors; - The testimonials in the print ads were all real NPP visitors - 'normal' people - and not models; - The mailings inviting a very large number of associations to 'come and see' activated a typical channel of 'normal' Swiss social life; - Spending money on ads (a new activity for Swiss NPPs) appears to have resulted in being perceived by the media as a normal branch of the economy. Today we feel that the 'normality' message has well been received by the media. In the controversy dealing with antinuclear arguments brought forward by environmental organisations journalists nowadays as a rule give nuclear energy a voice - a normal right to be heard. As in a 'normal' controversy, the media again actively ask themselves questions about specific antinuclear claims, much more than before 1990 when the moratorium started. The result is that in many cases such arguments are discarded by journalists, because they are, e.g., found to be

  17. Suppressing correlations in massively parallel simulations of lattice models

    Science.gov (United States)

    Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle

    2017-11-01

    For lattice Monte Carlo simulations parallelization is crucial to make studies of large systems and long simulation time feasible, while sequential simulations remain the gold-standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30 × over a parallel CPU implementation on a single socket and at least 180 × with respect to the sequential reference.
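One family of decomposition schemes of the kind compared in the paper can be illustrated in miniature: update fixed-size blocks (whose interiors could run concurrently) but randomize the block origin every sweep, so that block seams do not sit at fixed lattice positions. This 1-D sketch is illustrative only and is not the paper's GPU scheme:

```python
import random

def sweep_shifted_blocks(state, block, update):
    """One sweep over a 1-D periodic lattice using block decomposition.

    The random origin shift each sweep is a simple trick to keep the block
    seams from pinning correlations at fixed sites. Each site is visited
    exactly once per sweep, and the per-block loops touch disjoint sites,
    so the blocks could be dispatched to parallel workers.
    """
    n = len(state)
    shift = random.randrange(block)
    for start in range(0, n, block):          # blocks: independent work units
        for j in range(start, min(start + block, n)):
            i = (j + shift) % n               # shifted, wrapped site index
            update(state, i)
    return state
```

A real lattice Monte Carlo update would flip or deposit at site `i` according to the model's rates; here `update` is left abstract.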

  18. Testing the limits of the Maxwell distribution of velocities for atoms flying nearly parallel to the walls of a thin cell

    Science.gov (United States)

    Todorov, Petko; Bloch, Daniel

    2017-11-01

    For a gas at thermal equilibrium, it is usually assumed that the velocity distribution follows an isotropic 3-dimensional Maxwell-Boltzmann (M-B) law. This assumption classically implies the assumption of a "cos θ" law for the flux of atoms leaving the surface. Actually, such a law has no grounds in surface physics, and experimental tests of this assumption have remained very few. In a variety of recently developed sub-Doppler laser spectroscopy techniques for gases one-dimensionally confined in a thin cell, the specific contribution of atoms moving nearly parallel to the boundary of the vapor container becomes essential. We report here on the implementation of an experiment to probe effectively the distribution of atomic velocities parallel to the windows for a thin (60 μm) Cs vapor cell. The principle of the setup relies on a spatially separated pump-probe experiment, where the variations of the signal amplitude with the pump-probe separation provide the information on the velocity distribution. The experiment is performed in a sapphire cell on the Cs resonance line, which benefits from a long-lived hyperfine optical pumping. Presently, we can analyze specifically the density of atoms with slow normal velocities ˜5-20 m/s, already corresponding to unusual grazing flight—at ˜85°-88.5° from the normal to the surface—and no deviation from the M-B law is found within the limits of our elementary setup. Finally we suggest tracks to explore more parallel velocities, when surface details—roughness or structure—and the atom-surface interaction should play a key role to restrict the applicability of an M-B-type distribution.
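As a rough consistency check on the quoted numbers, a 1-D Maxwell-Boltzmann distribution for Cs at an assumed temperature near 300 K (the cell temperature is not stated here) puts under ten percent of the atoms in the 5-20 m/s normal-velocity window:

```python
import math

def mb_fraction_1d(v_lo, v_hi, mass_kg, temp_k):
    """Fraction of atoms with |v_z| between v_lo and v_hi for a 1-D
    Maxwell-Boltzmann distribution with sigma = sqrt(kB*T/m)."""
    kb = 1.380649e-23                          # Boltzmann constant, J/K
    sigma = math.sqrt(kb * temp_k / mass_kg)
    z = lambda v: v / (sigma * math.sqrt(2.0))
    return math.erf(z(v_hi)) - math.erf(z(v_lo))

M_CS = 132.905 * 1.66054e-27                   # Cs atomic mass in kg
frac = mb_fraction_1d(5.0, 20.0, M_CS, 300.0)  # assumed ~room temperature
```

With these assumptions the fraction comes out near 9%, consistent with such grazing atoms being a minority yet spectroscopically accessible population.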

  19. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics - Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) and provides a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, a singularity model taking into account motion and force transmissibility, and others.   This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  20. Practical Parallel Divide-and-Conquer Algorithms

    National Research Council Canada - National Science Library

    Hardwick, Jonathan

    1997-01-01

    .... This thesis shows that by restricting the problem set to that of data-parallel divide and conquer algorithms I can maintain the expressibility of full nested data-parallel languages while achieving...

  1. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

  2. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....
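The polynomial parallel volume of a planar convex body that the abstract alludes to is given explicitly by Steiner's formula, V(r) = A + P·r + πr². A quick numerical check for the unit square:

```python
import math

def parallel_volume_convex(area, perimeter, r):
    """Steiner formula: area of the r-parallel body of a planar CONVEX body.

    V(r) = A + P*r + pi*r^2 -- a polynomial in r, which is exactly the
    property that (per the abstract) characterizes convex planar bodies.
    """
    return area + perimeter * r + math.pi * r ** 2

# unit square: A = 1, P = 4, distance r = 2
v = parallel_volume_convex(1.0, 4.0, 2.0)   # 1 + 4*2 + pi*2**2
```

For a non-convex body this formula applies only to its convex hull; the abstract's result says the discrepancy between body and hull vanishes as r grows.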

  4. Parallel BLAST on split databases.

    Science.gov (United States)

    Mathog, David R

    2003-09-22

    BLAST programs often run on large SMP machines where multiple threads can work simultaneously and there is enough memory to cache the databases between program runs. A group of programs is described which allows comparable performance to be achieved with a Beowulf configuration in which no node has enough memory to cache a database but the cluster as an aggregate does. To achieve this result, databases are split into equal sized pieces and stored locally on each node. Each query is run on all nodes in parallel and the resultant BLAST output files from all nodes merged to yield the final output. Source code is available from ftp://saf.bio.caltech.edu/
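The split-query-merge pattern described above can be sketched independently of BLAST itself. The scoring function below is a toy stand-in (real BLAST scores are alignment-based), and threads stand in for the Beowulf nodes holding their local database pieces:

```python
from concurrent.futures import ThreadPoolExecutor

def prefix_score(query, seq):
    """Toy hit score: longest common prefix length (stand-in for a BLAST score)."""
    n = 0
    for a, b in zip(query, seq):
        if a != b:
            break
        n += 1
    return n

def search_shard(query, shard):
    """Run the query against one local database piece."""
    return [(seq, prefix_score(query, seq)) for seq in shard]

def parallel_search(query, database, nshards=4):
    # split the database into equal-sized pieces, one per worker/node
    shards = [database[i::nshards] for i in range(nshards)]
    with ThreadPoolExecutor(max_workers=nshards) as ex:
        parts = list(ex.map(lambda s: search_shard(query, s), shards))
    # merge the per-shard results into one ranked hit list (the "merge" step)
    return sorted((hit for part in parts for hit in part), key=lambda h: -h[1])
```

The key property mirrored here is that each shard is searched independently and only the merged output is global, which is what lets a cluster whose individual nodes cannot cache the full database still act as if it did.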

  5. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  6. Parallel Compilation of CMS Software

    CERN Document Server

    Ashby, Shaun; Schmid, Stefan; Tuura, Lassi A

    2005-01-01

    LHC experiments have large amounts of software to build. CMS has studied ways to shorten project build times using parallel and distributed builds as well as improved ways to decide what to rebuild. We have experimented with making idle desktop and server machines easily available as a virtual build cluster using distcc and zeroconf. We have also tested variations of ccache and more traditional make dependency analysis. We report on our test results, with analysis of the factors that most improve or limit build performance.

  7. The PARTY parallel runtime system

    Science.gov (United States)

    Saltz, J. H.; Mirchandaney, Ravi; Smith, R. M.; Crowley, Kay; Nicol, D. M.

    1989-01-01

    In the present automated system, the data and computational operations entailed by parallel problems are organized in ways that optimize multiprocessor performance: general heuristics for partitioning program data and control are implemented by capturing and manipulating representations of a computation at run time. These heuristics are directed toward the dynamic identification and allocation of concurrent work in computations with irregular computational patterns. An optimized static workload partitioning is computed for repetitive-computation-pattern problems such as the iterative ones employed in scientific computation.

  8. Parallel computation of rotating flows

    DEFF Research Database (Denmark)

    Lundin, Lars Kristian; Barker, Vincent A.; Sørensen, Jens Nørkær

    1999-01-01

    This paper deals with the simulation of 3‐D rotating flows based on the velocity‐vorticity formulation of the Navier‐Stokes equations in cylindrical coordinates. The governing equations are discretized by a finite difference method. The solution is advanced to a new time level by a two‐step process...... is that of solving a singular, large, sparse, over‐determined linear system of equations, and the iterative method CGLS is applied for this purpose. We discuss some of the mathematical and numerical aspects of this procedure and report on the performance of our software on a wide range of parallel computers....
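CGLS, the method named above for the over-determined least-squares system, is compact enough to sketch. This is textbook CGLS (conjugate gradients on the normal equations) applied to a small dense toy system, not the authors' sparse parallel implementation:

```python
import numpy as np

def cgls(A, b, iters=50):
    """Conjugate Gradient on the normal equations: min_x ||A x - b||_2."""
    x = np.zeros(A.shape[1])
    r = b - A @ x          # residual in data space
    s = A.T @ r            # gradient direction A^T r
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < 1e-28:        # converged
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))     # over-determined toy system
x_true = rng.standard_normal(8)
b = A @ x_true                        # consistent right-hand side
x_ls = cgls(A, b)
```

CGLS never forms A^T A explicitly, only matrix-vector products with A and A^T, which is what makes it attractive for the large sparse systems the abstract describes.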

  9. Parallel Processing at the High School Level.

    Science.gov (United States)

    Sheary, Kathryn Anne

    This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

  10. A parallel gravitational N-body kernel

    NARCIS (Netherlands)

    Portegies Zwart, S.; McMillan, S.; Groen, D.; Gualandris, A.; Sipior, M.; Vermin, W.

    2008-01-01

    We describe source code level parallelization for the kira direct gravitational N-body integrator, the workhorse of the starlab production environment for simulating dense stellar systems. The parallelization strategy, called "j-parallelization", involves the partition of the computational domain by

  11. Comparison of Parallel Viscosity with Neoclassical Theory

    OpenAIRE

    Ida, K.; Nakajima, N.

    1996-01-01

    Toroidal rotation profiles are measured with charge exchange spectroscopy for plasmas heated with tangential NBI in the CHS heliotron/torsatron device in order to estimate the parallel viscosity. The parallel viscosity derived from the toroidal rotation velocity shows good agreement with the neoclassical parallel viscosity plus the perpendicular viscosity (μ⊥ = 2 m²/s).

  12. Identifying, Quantifying, Extracting and Enhancing Implicit Parallelism

    Science.gov (United States)

    Agarwal, Mayank

    2009-01-01

    The shift of the microprocessor industry towards multicore architectures has placed a huge burden on the programmers by requiring explicit parallelization for performance. Implicit Parallelization is an alternative that could ease the burden on programmers by parallelizing applications "under the covers" while maintaining sequential semantics…

  13. A Parallel Approach to Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Lubomir Dedera

    2004-01-01

    Full Text Available The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms in terms of both the coding and decoding times achieved and the effectiveness of parallelization.

  14. Data-parallel DNS of turbulent flow

    NARCIS (Netherlands)

    Verstappen, R.W.C.P.; Veldman, A.E.P.; Emerson, DR; Ecer, A; Periaux, J; Satofuka, N

    1998-01-01

    This contribution deals with direct numerical simulation (DNS) of incompressible turbulent flows on parallel computers. We make use of the data-parallel model on shared memory systems as well as on a distributed memory machine. The combination of fast parallel computers and efficient numerical

  15. Minimal surfaces in symmetric spaces with parallel second ...

    Indian Academy of Sciences (India)

    Author Affiliations. XIAOXIANG JIAO1 MINGYAN LI2. School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 101408, China; School of Mathematics and Statistics, Zhengzhou University, Zhengzhou 450001, China ...

  16. Minimal surfaces in symmetric spaces with parallel second ...

    Indian Academy of Sciences (India)

    Xiaoxiang Jiao

    2017-07-31

    Jul 31, 2017 ... $\cdots\,\bar{K}_{AECD}\,\omega^{E}{}_{B}$. (2.16) This covariant derivative $(f^{C}_{i} f^{D}_{j} \bar{K}_{ABCD})_{,k}$ must be distinguished from the covariant derivative of $\bar{K}_{ABCD}$ as a curvature tensor of $N$, which will be denoted by $\bar{K}_{ABCD;E}$ (see [1]). In this section, we shall assume that $N$ is symmetric, i.e., $\bar{K}_{ABCD;E} = 0$. It follows from (2.16) that.

  17. Normality in Analytical Psychology

    Science.gov (United States)

    Myers, Steve

    2013-01-01

    Although C.G. Jung’s interest in normality wavered throughout his career, it was one of the areas he identified in later life as worthy of further research. He began his career using a definition of normality which would have been the target of Foucault’s criticism, had Foucault chosen to review Jung’s work. However, Jung then evolved his thinking to a standpoint that was more aligned to Foucault’s own. Thereafter, the post Jungian concept of normality has remained relatively undeveloped by comparison with psychoanalysis and mainstream psychology. Jung’s disjecta membra on the subject suggest that, in contemporary analytical psychology, too much focus is placed on the process of individuation to the neglect of applications that consider collective processes. Also, there is potential for useful research and development into the nature of conflict between individuals and societies, and how normal people typically develop in relation to the spectrum between individuation and collectivity. PMID:25379262

  18. Normal Female Reproductive Anatomy

    Science.gov (United States)

    ... an inner lining called the endometrium. Normal female reproductive system anatomy. Topics/Categories: Anatomy -- Gynecologic Type: Color, Medical Illustration Source: National Cancer Institute Creator: Terese Winslow (Illustrator) AV Number: CDR609921 Date Created: November 17, 2014 Date Added: ...

  19. Normal growth and development

    Science.gov (United States)

    A child's growth and development can be divided into four periods: Infancy Preschool years Middle childhood years Adolescence Soon after birth, an infant normally loses about 5% to 10% of their birth weight. By about age ...

  20. Normal pressure hydrocephalus

    Science.gov (United States)

    Hydrocephalus - occult; Hydrocephalus - idiopathic; Hydrocephalus - adult; Hydrocephalus - communicating; Dementia - hydrocephalus; NPH ... Ferri FF. Normal pressure hydrocephalus. In: Ferri FF, ed. ... Elsevier; 2016:chap 648. Rosenberg GA. Brain edema and disorders ...

  1. Normal Functioning Family

    Science.gov (United States)

    ... Spread the Word Shop AAP Find a Pediatrician Family Life Medical Home Family Dynamics Adoption & Foster Care ... Español Text Size Email Print Share Normal Functioning Family Page Content Article Body Is there any way ...

  2. Normal Pressure Hydrocephalus

    Science.gov (United States)

    ... improves the chance of a good recovery. Without treatment, symptoms may worsen and cause death. What research is being done? The NINDS conducts and supports research on neurological disorders, including normal pressure hydrocephalus. Research on disorders such ...

  3. Parallel beam scanning system for flatness measurements of thin plates

    Science.gov (United States)

    Fan, Kuang-Chao; Wu, John H.

    1993-09-01

    This paper describes the work to develop a Parallel Beam Scanning System (PBSS) for the non-contact measurement of the surface flatness of thin plates. The PBSS consists of a He-Ne laser source with good pointing stability, a scanner to create divergent scanning beams, a large aplanatic meniscus lens to convert the divergent beams to parallel beams, a linear stage to drive the testpiece to each sampling position, a screen for the projection of the beams reflected from the tested surface, and an image processing unit to analyze the projected image. Due to the out-of-flatness of the surface, the straight line formed by the incident parallel beams is distorted and magnified on the screen as it is reflected from the tested surface. The stage then positions the testpiece step by step to carry out measurements in a line-by-line sequence. A CCD camera captures the image of the distorted line on the screen each time. With the proposed mathematical model, the flatness data of the testpiece can be computed from the input image data. Experimental results obtained with this system show good agreement with results obtained from a coordinate measuring machine. The system can be applied to flatness measurements of thin plates such as sheet metals, sheet moulding compound (SMC) plates and glass plates, which are difficult to measure by traditional methods.
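The relation between spot displacement on the screen and local surface slope can be sketched under a simplified geometry: a local tilt α deflects the reflected ray by 2α, so a spot displacement d at screen distance L gives α = atan(d/L)/2. This is an illustrative model only; the paper's full mathematical model, including the meniscus-lens optics, is not reproduced here:

```python
import math

def slopes_from_screen(displacements, screen_distance):
    """Recover local surface slopes (radians) from reflected-spot displacements.

    Simplified reflection geometry: a tilt alpha deflects the reflected
    beam by 2*alpha, so alpha = atan(d / L) / 2 for displacement d at
    distance L. (Hypothetical simplification of the paper's model.)
    """
    return [0.5 * math.atan(d / screen_distance) for d in displacements]

def heights_from_slopes(slopes, step):
    """Trapezoidal integration of slopes into a height profile along the scan line."""
    h = [0.0]
    for i in range(1, len(slopes)):
        h.append(h[-1] + 0.5 * (slopes[i - 1] + slopes[i]) * step)
    return h
```

A perfectly flat plate gives zero displacements everywhere and hence a zero height profile; a uniform displacement corresponds to a uniform tilt rather than out-of-flatness, which is why flatness is judged from the profile's deviation from a best-fit line in practice.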

  4. Xyce parallel electronic simulator design.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, so the original focus of Xyce development has primarily been circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, it is to be expected on long-term projects for there to be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

  5. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...
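    The quicksort comparison above is the canonical example: choosing the pivot uniformly at random turns the O(n log n) average case into an expected bound that holds for every input, not only for randomly distributed ones. A minimal illustrative sketch (ours, not taken from the book):

```python
import random

def randomized_quicksort(a):
    """Sort a list with expected O(n log n) comparisons on ANY input,
    because the pivot is chosen uniformly at random rather than at a
    fixed position (which is what admits the O(n^2) worst case)."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```

Even an adversarially ordered input (e.g. already reverse-sorted, the classic worst case for first-element pivoting) now gives the same expected running time as a random permutation.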

  6. Bianchi surfaces: integrability in an arbitrary parametrization

    International Nuclear Information System (INIS)

    Nieszporski, Maciej; Sym, Antoni

    2009-01-01

    We discuss integrability of normal field equations of arbitrarily parametrized Bianchi surfaces. A geometric definition of the Bianchi surfaces is presented as well as the Baecklund transformation for the normal field equations in an arbitrarily chosen surface parametrization.

  7. FAST EDGE DETECTION AND SEGMENTATION OF TERRESTRIAL LASER SCANS THROUGH NORMAL VARIATION ANALYSIS

    Directory of Open Access Journals (Sweden)

    E. Che

    2017-09-01

    Full Text Available Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common post-processing procedure to group the point cloud into a number of clusters to simplify the data for the subsequent modelling and analysis needed for most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing the normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data from most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which prevents errors in normal estimation from propagating into the segmentation. Both an indoor and an outdoor scene are used in experiments to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
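    A much-simplified sketch of the region-growing half of such a pipeline, grouping grid points whose surface normals are nearly parallel. The 4-connected neighbourhood and the 10-degree angle threshold are our illustrative assumptions, not parameters from the paper:

```python
from collections import deque

import numpy as np

def grow_regions(normals, angle_thresh_deg=10.0):
    """Label connected grid points lying on the same smooth surface.
    normals: (H, W, 3) array of unit normal vectors on the scan grid.
    Returns an (H, W) integer label image; growth stops wherever the
    angle between neighbouring normals exceeds the threshold (an edge)."""
    h, w, _ = normals.shape
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    labels = -np.ones((h, w), dtype=int)
    current = 0
    for i in range(h):
        for j in range(w):
            if labels[i, j] != -1:
                continue
            labels[i, j] = current
            queue = deque([(i, j)])
            while queue:                      # breadth-first flood fill
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1:
                        # grow only across smooth surface (similar normals)
                        if np.dot(normals[y, x], normals[ny, nx]) > cos_thresh:
                            labels[ny, nx] = current
                            queue.append((ny, nx))
            current += 1
    return labels
```

On a grid containing two flat patches with different orientations, the flood fill produces exactly two labels, one per smooth surface.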

  8. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements high-performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
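    The parallelism sources named here (Fortran 90 array syntax, the FORALL statement, the WHERE construct) are easiest to picture through their counterparts in any array language. Purely as an illustration, not PDDP itself, the same masked elementwise pattern in NumPy:

```python
import numpy as np

# Fortran 90:  WHERE (a > 0)  b = SQRT(a)  ELSEWHERE  b = 0  END WHERE
a = np.array([4.0, -1.0, 9.0, -2.0])
# np.where evaluates both branches, so clamp with maximum() to avoid
# taking the square root of a negative number on the masked-out lanes.
b = np.where(a > 0, np.sqrt(np.maximum(a, 0.0)), 0.0)

# FORALL (i = 1:n)  c(i) = a(i) + 2*b(i)   ~   plain array syntax:
c = a + 2 * b
```

Each statement is data parallel in the same sense as the Fortran constructs: the compiler (or library) is free to apply the operation to all elements concurrently, which is what lets PDDP distribute such arrays across processors.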

  9. Implementation and performance of parallelized elegant

    International Nuclear Information System (INIS)

    Wang, Y.; Borland, M.

    2008-01-01

    The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

  10. Surface Ripples Generated in a Couette Flow with a Free Surface

    Science.gov (United States)

    Masnadi, N.; Washuta, N.; Duncan, J. H.

    2014-11-01

    Free surface ripples created by subsurface turbulence in the gap between a vertical surface-piercing moving wall and a parallel fixed wall are studied experimentally. The moving wall is created with the aid of a meter-wide stainless steel belt that travels horizontally in a loop around two rollers with vertically oriented axes, which are separated by 7.5 meters. One of the two 7.5-m-long belt sections between the rollers is in contact with the water in a large open-surface water tank and forms the moving wall. The fixed wall is an acrylic plate located 4 cm from the belt surface. The water surface ripples are measured in a plane normal to the belt using a cinematic LIF technique. Measurements are done at a location about 100 gap widths downstream of the leading edge of the fixed plate in order to have a fully developed flow condition. It is found that the overall RMS surface fluctuations increase linearly with belt speed. The frequency-domain spectra of the surface height fluctuation and its temporal derivative are computed at locations across the gap width and are used to explore the physics of the free surface motions. The support of the Office of Naval Research is gratefully acknowledged.
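    The two reported quantities, the overall RMS of the surface height fluctuation and its frequency-domain spectrum, are standard to compute from a sampled record. A sketch on synthetic data; the 1 kHz sampling rate and the 40 Hz ripple are made-up numbers, not values from the experiment:

```python
import numpy as np

fs = 1000.0                          # sampling rate, Hz (illustrative)
t = np.arange(0.0, 2.0, 1.0 / fs)

# Synthetic surface-height fluctuation: a 40 Hz ripple plus sensor noise.
rng = np.random.default_rng(0)
eta = 0.5 * np.sin(2 * np.pi * 40.0 * t) + 0.05 * rng.standard_normal(t.size)

# Overall RMS surface fluctuation about the mean level.
rms = np.sqrt(np.mean((eta - eta.mean()) ** 2))

# One-sided power spectrum of the height signal; the dominant ripple
# frequency is the location of the largest non-DC bin.
spectrum = np.abs(np.fft.rfft(eta)) ** 2
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak_freq = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
```

The spectral peak recovers the 40 Hz ripple, and the RMS comes out near the 0.35 expected for a 0.5-amplitude sine.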

  11. Electron transfer in gas surface collisions

    International Nuclear Information System (INIS)

    Wunnik, J.N.M. van.

    1983-01-01

    In this thesis, electron transfer between atoms and metal surfaces is discussed in general, and in particular the negative ionization of hydrogen by scattering protons from a cesiated crystalline tungsten (110) surface. Experimental results and a novel theoretical analysis are presented. In Chapter I a theoretical overview of resonant electron transitions between atoms and metals is given. In the first part of Chapter II, atom-metal electron transitions at a fixed atom-metal distance are described on the basis of a model developed by Gadzuk. In the second part, the influence of the motion of the atom on the atomic charge state is incorporated. Measurements presented in Chapter III show a strong dependence of the fraction of negatively charged H atoms scattered at cesiated tungsten on the normal as well as the parallel velocity component. In Chapter IV the proposed mechanism for the parallel velocity effect is incorporated in the amplitude method. The scattering process of protons incident under grazing angles on a cesium-covered surface is studied in Chapter V. (Auth.)

  12. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  13. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    Full Text Available To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that are probably neither limited to this particular application nor likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  14. Device for balancing parallel strings

    Science.gov (United States)

    Mashikian, Matthew S.

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

  15. Embodied and Distributed Parallel DJing.

    Science.gov (United States)

    Cappelen, Birgitta; Andersson, Anders-Petter

    2016-01-01

    Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids etc., or groups of persons with special needs performing in traditional ways. The latter might be people with disabilities who are musicians playing traditional instruments, or actors performing theatre. In this paper we focus on the innovative potential of including people with special needs, when creating new cultural activities. In our project RHYME our goal was to create health promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contribution, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments for Empowering Multi-Sensorial Things.

  16. Massively parallel diffuse optical tomography

    Energy Technology Data Exchange (ETDEWEB)

    Sandusky, John V.; Pitts, Todd A.

    2017-09-05

    Diffuse optical tomography systems and methods are described herein. In a general embodiment, the diffuse optical tomography system comprises a plurality of sensor heads, the plurality of sensor heads comprising respective optical emitter systems and respective sensor systems. A sensor head in the plurality of sensor heads is caused to act as an illuminator, such that its optical emitter system transmits a transillumination beam towards a portion of a sample. Other sensor heads in the plurality of sensor heads act as observers, detecting portions of the transillumination beam that radiate from the sample in the fields of view of the respective sensor systems of the other sensor heads. Thus, sensor heads in the plurality of sensor heads generate sensor data in parallel.

  17. Monitoring the normal body

    DEFF Research Database (Denmark)

    Nissen, Nina Konstantin; Holm, Lotte; Baarts, Charlotte

    2015-01-01

    provides us with knowledge about how to prevent future overweight or obesity. This paper investigates body size ideals and monitoring practices among normal-weight and moderately overweight people. Methods: The study is based on in-depth interviews combined with observations. 24 participants were recruited by strategic sampling based on self-reported BMI 18.5-29.9 kg/m2 and socio-demographic factors. Inductive analysis was conducted. Results: Normal-weight and moderately overweight people have clear ideals for their body size. Despite being normal weight or close to this, they construct a variety of practices for monitoring their bodies based on different kinds of calculations of weight and body size, observations of body shape, and measurements of bodily firmness. Biometric measurements are familiar to them as are health authorities' recommendations. Despite not belonging to an extreme BMI category...

  18. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  19. Efecto Zeeman Normal

    OpenAIRE

    Calderón Chamochumbi, Carlos

    2015-01-01

    The Normal Zeeman Effect is described, and a general derivation is presented of the torque experienced by a magnetic dipole due to its interaction with an external magnetic field. The calculations for the differential element of magnetic potential energy and for the conventional magnetic potential energy are standard.

  20. The normal holonomy group

    International Nuclear Information System (INIS)

    Olmos, C.

    1990-05-01

    The restricted holonomy group of a Riemannian manifold is a compact Lie group and its representation on the tangent space is a product of irreducible representations and a trivial one. Each one of the non-trivial factors is either an orthogonal representation of a connected compact Lie group which acts transitively on the unit sphere or it is the isotropy representation of a single Riemannian symmetric space of rank ≥ 2. We prove that all these properties are also true for the representation on the normal space of the restricted normal holonomy group of any submanifold of a space of constant curvature. 4 refs

  1. Normal modified stable processes

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    This paper discusses two classes of distributions, and stochastic processes derived from them: modified stable (MS) laws and normal modified stable (NMS) laws. This extends corresponding results for the generalised inverse Gaussian (GIG) and generalised hyperbolic (GH) or normal generalised inverse Gaussian (NGIG) laws. The wider framework thus established provides, in particular, for added flexibility in the modelling of the dynamics of financial time series, of importance especially as regards OU based stochastic volatility models for equities. In the special case of the tempered stable OU process...

  2. Medically-enhanced normality

    DEFF Research Database (Denmark)

    Møldrup, Claus; Traulsen, Janine Morgall; Almarsdóttir, Anna Birna

    2003-01-01

    Objective: To consider public perspectives on the use of medicines for non-medical purposes, a usage called medically-enhanced normality (MEN). Method: Examples from the literature were combined with empirical data derived from two Danish research projects: a Delphi internet study and a Telebus......, to optimise economic, working and family conditions. The term "doping" does not cover or explain the use of medicines as enhancement among healthy non-athletes. Conclusion: We recommend wider use of the term medically-enhanced normality as a conceptual framework for understanding and analysing perceptions of what is considered rational medicine use in contemporary society.

  3. Integrated Task and Data Parallel Programming

    Science.gov (United States)

    Grimshaw, A. S.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments: In February I presented a paper at Frontiers 1995 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda: Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program.
Additional 1995 Activities During the fall I collaborated

  4. State of the art of parallel scientific visualization applications on PC clusters; Etat de l'art des applications de visualisation scientifique paralleles sur grappes de PC

    Energy Technology Data Exchange (ETDEWEB)

    Juliachs, M

    2004-07-01

    In this state of the art on parallel scientific visualization applications on PC clusters, we deal with both surface and volume rendering approaches. We first analyze available PC cluster configurations and existing parallel rendering software components for parallel graphics rendering. CEA/DIF has been studying cluster visualization since 2001. This report is part of a study to set up a new visualization research platform. This platform consisting of an eight-node PC cluster under Linux and a tiled display was installed in collaboration with Versailles-Saint-Quentin University in August 2003. (author)

  5. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  6. Fast Evaluation of Segmentation Quality with Parallel Computing

    Directory of Open Access Journals (Sweden)

    Henry Cruz

    2017-01-01

    Full Text Available In digital image processing and computer vision, a fairly frequent task is the performance comparison of different algorithms on enormous image databases. This task is usually time-consuming and tedious, such that any kind of tool to simplify this work is welcome. To achieve an efficient and more practical handling of a normally tedious evaluation, we implemented an automatic detection system with the help of MATLAB®'s Parallel Computing Toolbox™. The key parts of the system have been parallelized to achieve simultaneous execution and analysis of segmentation algorithms on the one hand, and the evaluation of detection accuracy for the nonforested regions, as a study case, on the other hand. As a positive side effect, CPU usage was reduced and processing time was significantly decreased, by 68.54% compared to sequential processing (i.e., executing the system with each algorithm one by one).
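    Outside MATLAB, the same pattern, farming the per-image evaluation out to a worker pool and checking the result against sequential execution, might look like the sketch below. The accuracy metric and the data are placeholders of ours, not the paper's system:

```python
from concurrent.futures import ThreadPoolExecutor

def pixel_accuracy(pair):
    """Toy segmentation-quality metric: fraction of matching labels
    between a predicted segmentation and its ground truth."""
    predicted, truth = pair
    matches = sum(p == t for p, t in zip(predicted, truth))
    return matches / len(truth)

def evaluate_parallel(pairs, workers=4):
    # Each (predicted, truth) pair is scored independently, so the
    # evaluation is embarrassingly parallel.  For a CPU-bound metric a
    # ProcessPoolExecutor (with a __main__ guard) gives true parallelism;
    # a thread pool keeps this sketch portable.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(pixel_accuracy, pairs))
```

Because `map` preserves input order, the parallel result is directly comparable, element by element, with a sequential run over the same image pairs.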

  7. Dynamic model of a 3-DOF redundantly actuated parallel manipulator

    Directory of Open Access Journals (Sweden)

    Tiemin Li

    2016-09-01

    Full Text Available We investigate the dynamic model of a 3-degree-of-freedom (DOF) redundantly actuated parallel manipulator by taking the flexible deformation of the limbs into account. The dynamic model is derived using Newton–Euler formulation. Since the number of equations derived from the force and moment equilibrium of the parallel manipulator components is less than the number of unknown variables, the flexible deformation of the limbs is treated as an inequality constraint to find the solution of the dynamic model. The errors of the moving platform caused by the flexible deformation of the limbs are discussed, and a control strategy is given. To validate the model, the dynamic model is integrated with the control system and compared with the traditional method to minimize the normal driving forces.

  8. Detection Mechanism of Parallel Defect using Scanning Inductive Thermography

    Science.gov (United States)

    Zuo, Xianzhang; Song, Benchu; Hu, Yongjiang; He, Yunze

    2017-06-01

    To meet the workpiece-integrity requirements of a parts processing line, on-line detection of moving workpieces on the assembly line using induction heating thermography is studied. In this paper, the detection mechanism of pulsed eddy current thermography for defects in moving workpieces is analysed. A two-dimensional model of a magnetic material (45 steel) with a crack parallel to the coil is established in the finite element software COMSOL 5.2. By analysing the changes of the temperature curves, the normalized curves, and the temperature difference curves, the optimal detection area for parallel cracks is proposed. The consistency of the conclusions is verified on an experimental platform. The paper provides theoretical guidance for quantitative detection using eddy current thermography.

  9. Sharing of nonlinear load in parallel-connected three-phase converters

    DEFF Research Database (Denmark)

    Borup, Uffe; Blaabjerg, Frede; Enjeti, Prasad N.

    2001-01-01

    In this paper, a new control method is presented which enables equal sharing of linear and nonlinear loads in three-phase power converters connected in parallel, without communication between the converters. The paper focuses on solving the problem that arises when two converters with harmonic compensation are connected in parallel. Without the new solution, they are normally not able to distinguish the harmonic currents that flow to the load from the harmonic currents that circulate between the converters. Analysis and experimental results on two 90-kVA 400-Hz converters in parallel are presented. The results show that both linear and nonlinear loads can be shared equally by the proposed concept.

  10. Corners of normal matrices

    Indian Academy of Sciences (India)

    We study various conditions on matrices B and C under which they can be the off-diagonal blocks of a partitioned normal matrix.

  11. Normality in Analytical Psychology

    Directory of Open Access Journals (Sweden)

    Steve Myers

    2013-11-01

    Full Text Available Although C.G. Jung’s interest in normality wavered throughout his career, it was one of the areas he identified in later life as worthy of further research. He began his career using a definition of normality which would have been the target of Foucault’s criticism, had Foucault chosen to review Jung’s work. However, Jung then evolved his thinking to a standpoint that was more aligned to Foucault’s own. Thereafter, the post Jungian concept of normality has remained relatively undeveloped by comparison with psychoanalysis and mainstream psychology. Jung’s disjecta membra on the subject suggest that, in contemporary analytical psychology, too much focus is placed on the process of individuation to the neglect of applications that consider collective processes. Also, there is potential for useful research and development into the nature of conflict between individuals and societies, and how normal people typically develop in relation to the spectrum between individuation and collectivity.

  12. Normalized information distance

    NARCIS (Netherlands)

    Vitányi, P.M.B.; Balbach, F.J.; Cilibrasi, R.L.; Li, M.; Emmert-Streib, F.; Dehmer, M.

    2009-01-01

    The normalized information distance is a universal distance measure for objects of all kinds. It is based on Kolmogorov complexity and thus uncomputable, but there are ways to utilize it. First, compression algorithms can be used to approximate the Kolmogorov complexity if the objects have a string
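    The compression route mentioned at the end of the abstract is the normalized compression distance: substitute a real compressor's output length C(x) for the uncomputable Kolmogorov complexity K(x), giving NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)). A sketch using zlib; the choice of compressor is our assumption, any off-the-shelf compressor can play this role:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, a practical approximation of the
    normalized information distance.  Values near 0 mean the compressor
    finds x and y highly redundant (similar); values near 1 mean it
    finds no shared structure."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

An object compared with itself gives a small distance (the compressor encodes the second copy almost for free), while two unrelated byte strings give a distance close to 1.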

  13. Possible origin and significance of extension-parallel drainages in Arizona's metamorphic core complexes

    Science.gov (United States)

    Spencer, J.E.

    2000-01-01

    The corrugated form of the Harcuvar, South Mountains, and Catalina metamorphic core complexes in Arizona reflects the shape of the middle Tertiary extensional detachment fault that projects over each complex. Corrugation axes are approximately parallel to the fault-displacement direction and to the footwall mylonitic lineation. The core complexes are locally incised by enigmatic, linear drainages that parallel corrugation axes and the inferred extension direction and are especially conspicuous on the crests of antiformal corrugations. These drainages have been attributed to erosional incision on a freshly denuded, planar, inclined fault ramp followed by folding that elevated and preserved some drainages on the crests of rising antiforms. According to this hypothesis, corrugations were produced by folding after subaerial exposure of detachment-fault footwalls. An alternative hypothesis, proposed here, is as follows. In a setting where preexisting drainages cross an active normal fault, each fault-slip event will cut each drainage into two segments separated by a freshly denuded fault ramp. The upper and lower drainage segments will remain hydraulically linked after each fault-slip event if the drainage in the hanging-wall block is incised, even if the stream is on the flank of an antiformal corrugation and there is a large component of strike-slip fault movement. Maintenance of hydraulic linkage during sequential fault-slip events will guide the lengthening stream down the fault ramp as the ramp is uncovered, and stream incision will form a progressively lengthening, extension-parallel, linear drainage segment. This mechanism for linear drainage genesis is compatible with corrugations as original irregularities of the detachment fault, and does not require folding after early to middle Miocene footwall exhumation. This is desirable because many drainages are incised into nonmylonitic crystalline footwall rocks that were probably not folded under low

  14. The Same-Source Parallel MM5

    Directory of Open Access Journals (Sweden)

    John Michalakes

    2000-01-01

Full Text Available Beginning with the March 1998 release of the Penn State University/NCAR Mesoscale Model (MM5), and continuing through eight subsequent releases up to the present, the official version has run on distributed-memory (DM) parallel computers. Source translation and runtime library support minimize the impact of parallelization on the original model source code, with the result that the majority of code is line-for-line identical with the original version. Parallel performance and scaling are equivalent to earlier, hand-parallelized versions; the modifications have no effect when the code is compiled and run without the DM option. Supported computers include the IBM SP, Cray T3E, Fujitsu VPP, Compaq Alpha clusters, and clusters of PCs (so-called Beowulf clusters). The approach also is compatible with shared-memory parallel directives, allowing distributed-memory/shared-memory hybrid parallelization on distributed-memory clusters of symmetric multiprocessors.

  15. Parallel processing for fluid dynamics applications

    International Nuclear Information System (INIS)

    Johnson, G.M.

    1989-01-01

The impact of parallel processing on computational science and, in particular, on computational fluid dynamics is growing rapidly. In this paper, particular emphasis is given to developments which have occurred within the past two years. Parallel processing is defined and the reasons for its importance in high-performance computing are reviewed. Parallel computer architectures are classified according to the number and power of their processing units, their memory, and the nature of their connection scheme. Architectures which show promise for fluid dynamics applications are emphasized. Fluid dynamics problems are examined for parallelism inherent at the physical level. CFD algorithms and their mappings onto parallel architectures are discussed. Several examples are presented to document the performance of fluid dynamics applications on present-generation parallel processing devices.

  16. Design considerations for parallel graphics libraries

    Science.gov (United States)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  17. Runtime volume visualization for parallel CFD

    Science.gov (United States)

    Ma, Kwan-Liu

    1995-01-01

    This paper discusses some aspects of design of a data distributed, massively parallel volume rendering library for runtime visualization of parallel computational fluid dynamics simulations in a message-passing environment. Unlike the traditional scheme in which visualization is a postprocessing step, the rendering is done in place on each node processor. Computational scientists who run large-scale simulations on a massively parallel computer can thus perform interactive monitoring of their simulations. The current library provides an interface to handle volume data on rectilinear grids. The same design principles can be generalized to handle other types of grids. For demonstration, we run a parallel Navier-Stokes solver making use of this rendering library on the Intel Paragon XP/S. The interactive visual response achieved is found to be very useful. Performance studies show that the parallel rendering process is scalable with the size of the simulation as well as with the parallel computer.

  18. Parallel Programming Environment for OpenMP

    Directory of Open Access Journals (Sweden)

    Insung Park

    2001-01-01

Full Text Available We present our effort to provide a comprehensive parallel programming environment for the OpenMP parallel directive language. This environment includes a parallel programming methodology for the OpenMP programming model and a set of tools (Ursa Minor and InterPol) that support this methodology. Our toolset provides automated and interactive assistance to parallel programmers in time-consuming tasks of the proposed methodology. The features provided by our tools include performance and program structure visualization, interactive optimization, support for performance modeling, and performance advising for finding and correcting performance problems. The presented evaluation demonstrates that our environment offers significant support in general parallel tuning efforts and that the toolset facilitates many common tasks in OpenMP parallel programming in an efficient manner.

  19. Parallelism, deep homology, and evo-devo.

    Science.gov (United States)

    Hall, Brian K

    2012-01-01

    Parallelism has been the subject of a number of recent studies that have resulted in reassessment of the term and the process. Parallelism has been aligned with homology leaving convergence as the only case of homoplasy, regarded as a transition between homologous and convergent characters, and defined as the independent evolution of genetic traits. Another study advocates abolishing the term parallelism and treating all cases of the independent evolution of characters as convergence. With the sophistication of modern genomics and genetic analysis, parallelism of characters of the phenotype is being discovered to reflect parallel genetic evolution. Approaching parallelism from developmental and genetic perspectives enables us to tease out the degree to which the reuse of pathways represent deep homology and is a major task for evolutionary developmental biology in the coming decades. © 2012 Wiley Periodicals, Inc.

  20. Parallel processing from applications to systems

    CERN Document Server

    Moldovan, Dan I

    1993-01-01

This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well balanced across diverse topics in this concise presentation. For exceptional cla

  1. Kinematics Analysis of Two Parallel Locomotion Mechanisms

    Science.gov (United States)

    2010-08-27

1998, "Design Considerations of New Six Degrees-of-Freedom Parallel Robots," Leuven, Belgium, 2, pp. 1327-1333. [19] Angeles, J., Guilin, Y., and...Parallel Manipulators: Identification and Elimination," Minneapolis, MN, USA, 72, pp. 459-466. [25] Dash, A. K., Chen, I. M., Song Huat, Y., and Guilin ...[28] Guilin, Y., Chen, I. M., Wei, L., and Angeles, J., 2001, "Singularity Analysis of Three-Legged Parallel Robots Based on Passive-Joint Velocities

  2. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.
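To make the abstract's subject concrete, the following is a minimal, serial sketch of the multigrid V-cycle it evaluates, applied to the 1-D model problem -u'' = f with Dirichlet boundaries (an illustration of the standard algorithm, not code from the survey; function names are ours).

```python
import numpy as np

def smooth(u, f, h, sweeps=3):
    # Gauss-Seidel sweeps for the 1-D model problem -u'' = f (Dirichlet BCs).
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    """One V-cycle: pre-smooth, restrict the residual to a coarser grid,
    recurse for a coarse-grid correction, prolong it back, post-smooth."""
    u = smooth(u, f, h)
    if len(u) <= 3:
        return u
    r = residual(u, f, h)
    rc = r[::2].copy()                       # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
    e = np.zeros_like(u)                     # linear-interpolation prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)
```

The parallelization questions the survey studies (mapping, load imbalance) arise because the coarse grids hold ever fewer points per processor.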

  3. Parallel processing for artificial intelligence 1

    CERN Document Server

    Kanal, LN; Kumar, V; Suttner, CB

    1994-01-01

    Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, data filtering and data mining. The publication is divided into six sections. The first addresses parallel computing for processing and understanding images. The second discus

  4. On-the-fly pipeline parallelism

    OpenAIRE

    Lee, I-Ting Angelina; Leiserson, Charles E.; Sukha, Jim; Zhang, Zhunping; Schardl, Tao Benjamin

    2013-01-01

    Pipeline parallelism organizes a parallel program as a linear sequence of s stages. Each stage processes elements of a data stream, passing each processed data element to the next stage, and then taking on a new element before the subsequent stages have necessarily completed their processing. Pipeline parallelism is used especially in streaming applications that perform video, audio, and digital signal processing. Three out of 13 benchmarks in PARSEC, a popular software benchmark suite design...
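The pipeline organization described above can be sketched with one thread per stage connected by FIFO queues, so a stage starts on the next element before later stages finish the current one. This is an illustrative toy (names and the sentinel-shutdown scheme are ours), not the on-the-fly mechanism of the paper.

```python
import queue
import threading

def run_pipeline(source, stages):
    """Run `stages` (a list of one-argument functions) as a linear pipeline.

    Each stage runs in its own thread, pulling elements from the previous
    stage's queue and pushing results to the next, so stage i can begin
    element k+1 before later stages have finished element k.
    """
    queues = [queue.Queue() for _ in range(len(stages) + 1)]
    DONE = object()  # sentinel marking the end of the stream

    def worker(fn, q_in, q_out):
        while True:
            item = q_in.get()
            if item is DONE:
                q_out.put(DONE)   # propagate shutdown downstream
                return
            q_out.put(fn(item))

    threads = [
        threading.Thread(target=worker, args=(fn, queues[i], queues[i + 1]))
        for i, fn in enumerate(stages)
    ]
    for t in threads:
        t.start()
    for item in source:           # feed the first stage
        queues[0].put(item)
    queues[0].put(DONE)
    results = []
    while (out := queues[-1].get()) is not DONE:
        results.append(out)
    for t in threads:
        t.join()
    return results
```

Because each stage is a single thread draining a FIFO queue, element order is preserved end to end.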

  5. Parallel Genetic Algorithm for Alpha Spectra Fitting

    Science.gov (United States)

    García-Orellana, Carlos J.; Rubio-Montero, Pilar; González-Velasco, Horacio

    2005-01-01

We present a performance study of alpha-particle spectra fitting using a parallel Genetic Algorithm (GA). The method uses a two-step approach. In the first step we run the parallel GA to find an initial solution for the second step, in which we use the Levenberg-Marquardt (LM) method for a precise final fit. The GA is a resource-demanding method, so we use a Beowulf cluster for the parallel simulation. The relationship between simulation time (and parallel efficiency) and the number of processors is studied using several alpha spectra, with the aim of obtaining a method to estimate the optimal number of processors to use in a simulation.
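The two-step idea — a GA whose fitness evaluations are farmed out in parallel, producing a starting point for local refinement — can be sketched on a toy single-peak fit. Everything here is an illustrative assumption (a thread pool stands in for the Beowulf cluster, the model is one Gaussian peak, and the operators are a simple blend crossover with relative mutation); it is not the authors' code.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)

def fit_peak_ga(x, y, pop_size=60, generations=60):
    """Fit y ≈ a * exp(-(x - m)^2 / (2 s^2)) with a simple elitist GA.

    The per-individual cost evaluations (the expensive part in a real
    spectrum fit) are mapped onto a worker pool; the best individual
    would then seed a local refinement such as Levenberg-Marquardt.
    """
    # population rows are candidate parameter triples (a, m, s)
    pop = rng.uniform([0.1, x.min(), 0.05],
                      [2.0 * y.max(), x.max(), np.ptp(x)],
                      (pop_size, 3))

    def cost(p):
        a, m, s = p
        return np.sum((y - a * np.exp(-(x - m) ** 2 / (2 * s ** 2))) ** 2)

    with ThreadPoolExecutor(max_workers=4) as pool:
        for _ in range(generations):
            costs = np.fromiter(pool.map(cost, pop), float, pop_size)
            elite = pop[np.argsort(costs)[: pop_size // 4]]   # selection
            parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
            pop = parents.mean(axis=1)                        # blend crossover
            pop = pop + rng.normal(0.0, 0.05, pop.shape) * np.abs(pop)  # mutation
            pop[0] = elite[0]                                 # elitism
    return pop[0]
```

Elitism makes the best-so-far cost monotone, which is what lets the GA output serve as a reliable initial guess for the precise LM fit.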

  6. Parallel Algorithms for the Exascale Era

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory

    2016-10-19

    New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
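One classic ingredient of reproducible global sums — the problem the abstract highlights — is compensated summation, which makes the result far less sensitive to the order in which partial sums arrive from different processors. The sketch below shows the standard Kahan algorithm as an illustration; it is not the specific LANL student work described above.

```python
def kahan_sum(values):
    """Compensated (Kahan) summation.

    Carries a running error term `c` that captures the low-order bits
    lost in each addition, so the accumulated rounding error stays
    O(machine epsilon) instead of growing with the number of terms.
    """
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - c            # re-inject the previously lost bits
        t = total + y
        c = (t - total) - y  # algebraically zero; numerically, the new loss
        total = t
    return total
```

In a distributed reduction, each rank can keep its own compensation term and the (sum, compensation) pairs can be combined, shrinking the order-dependence that makes naive parallel sums non-reproducible.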

  7. Parallel thermal radiation transport in two dimensions

    International Nuclear Information System (INIS)

    Smedley-Stevenson, R.P.; Ball, S.R.

    2003-01-01

This paper describes the distributed-memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state-of-the-art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 Tera-Flop MPP (massively parallel processor) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  8. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

    Full Text Available We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  9. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  10. Conformal pure radiation with parallel rays

    International Nuclear Information System (INIS)

Leistner, Thomas; Nurowski, Paweł

    2012-01-01

    We define pure radiation metrics with parallel rays to be n-dimensional pseudo-Riemannian metrics that admit a parallel null line bundle K and whose Ricci tensor vanishes on vectors that are orthogonal to K. We give necessary conditions in terms of the Weyl, Cotton and Bach tensors for a pseudo-Riemannian metric to be conformal to a pure radiation metric with parallel rays. Then, we derive conditions in terms of the tractor calculus that are equivalent to the existence of a pure radiation metric with parallel rays in a conformal class. We also give analogous results for n-dimensional pseudo-Riemannian pp-waves. (paper)

  11. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  12. Parallel QR Decomposition for Electromagnetic Scattering Problems

    National Research Council Canada - National Science Library

    Boleng, Jeff

    1997-01-01

    This report introduces a new parallel QR decomposition algorithm. Test results are presented for several problem sizes, numbers of processors, and data from the electromagnetic scattering problem domain...

  13. Parallel Simulation of Chip-Multiprocessor Architectures

    National Research Council Canada - National Science Library

    Chidester, Matthew C; George, Alan D

    2002-01-01

    Chip-multiprocessor (CMP) architectures present a challenge for efficient simulation, combining the requirements of a detailed microprocessor simulator with that of a tightly-coupled parallel system...

  14. Parallel Prediction of Stock Volatility

    Directory of Open Access Journals (Sweden)

    Priscilla Jenq

    2017-10-01

Full Text Available Volatility is a measurement of the risk of financial products. A stock will hit new highs and lows over time, and if these highs and lows fluctuate wildly, then it is considered a highly volatile stock. Such a stock is considered riskier than a stock whose volatility is low. Although highly volatile stocks are riskier, the returns that they generate for investors can be quite high. Of course, with a riskier stock also comes the chance of losing money and yielding negative returns. In this project, we will use historic stock data to help us forecast volatility. Since the financial industry usually uses the S&P 500 as the indicator of the market, we will use the S&P 500 as a benchmark to compute the risk. We will also use artificial neural networks as a tool to predict volatilities for a specific time frame that will be set when we configure this neural network. There have been reports that neural networks with different numbers of layers and different numbers of hidden nodes may generate varying results. In fact, we may be able to find the best configuration of a neural network to compute volatilities. We will implement this system using the parallel approach. The system can be used as a tool for investors to allocate and hedge assets.
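The "volatility" the abstract forecasts is conventionally measured as the annualized standard deviation of daily log returns. The sketch below shows that standard historical measure (assuming daily closing prices and 252 trading days per year); it is an illustrative baseline, not the paper's neural-network predictor.

```python
import numpy as np

def annualized_volatility(prices, trading_days=252):
    """Historical volatility: sample standard deviation of daily log
    returns, scaled by sqrt(trading_days) to annualize."""
    prices = np.asarray(prices, dtype=float)
    log_returns = np.diff(np.log(prices))
    return np.sqrt(trading_days) * log_returns.std(ddof=1)
```

A series that never moves has zero volatility; the wilder the swings between highs and lows, the larger this number, matching the intuition in the abstract.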

  15. A Soft Parallel Kinematic Mechanism.

    Science.gov (United States)

    White, Edward L; Case, Jennifer C; Kramer-Bottiglio, Rebecca

    2018-02-01

In this article, we describe a novel holonomic soft robotic structure based on a parallel kinematic mechanism. The design is based on the Stewart platform, which uses six sensors and actuators to achieve full six-degree-of-freedom motion. Our design is much less complex than a traditional platform, since it replaces the 12 spherical and universal joints found in a traditional Stewart platform with a single highly deformable elastomer body and flexible actuators. This reduces the total number of parts in the system and simplifies the assembly process. Actuation is achieved through coiled shape-memory alloy actuators. State observation and feedback are accomplished through the use of capacitive elastomer strain gauges. The main structural element is an elastomer joint that provides antagonistic force. We report the response of the actuators and sensors individually, then report the response of the complete assembly. We show that the completed robotic system is able to achieve full position control, and we discuss the limitations associated with using responsive material actuators. We believe that control demonstrated on a single body in this work could be extended to chains of such bodies to create complex soft robots.

  16. Parallel-Plate Electrostatic Dual Mass Oscillator

    Energy Technology Data Exchange (ETDEWEB)

    Allen, James J.; Dyck, Christopher W.; Huber, Robert J.

    1999-07-22

A surface-micromachined two-degree-of-freedom system that was driven by parallel-plate actuation at antiresonance was demonstrated. The system consisted of an absorbing mass connected by folded springs to a drive mass. The system demonstrated substantial motion amplification at antiresonance. The absorber mass amplitudes were 0.8-0.85 μm at atmospheric pressure while the drive mass amplitudes were below 0.1 μm. Larger absorber mass amplitudes were not possible because of spring softening in the drive mass springs. Simple theory of the dual-mass oscillator has indicated that the absorber mass may be insensitive to limited variations in strain and damping. This needs experimental verification. Resonant and antiresonant frequencies were measured and compared to the designed values. Resonant frequency measurements were difficult to compare to the design calculations because of time-varying spring softening terms that were caused by the drive configuration. Antiresonant frequency measurements were close to the design value of 5.1 kHz. The antiresonant frequency was not dependent on spring softening. The measured absorber mass displacement at antiresonance was compared to computer simulated results. The measured value was significantly greater, possibly due to neglecting fringe fields in the force expression used in the simulation.

  17. Normal Weight Dyslipidemia

    DEFF Research Database (Denmark)

    Ipsen, David Hojland; Tveden-Nyborg, Pernille; Lykkesfeldt, Jens

    2016-01-01

    Objective: The liver coordinates lipid metabolism and may play a vital role in the development of dyslipidemia, even in the absence of obesity. Normal weight dyslipidemia (NWD) and patients with nonalcoholic fatty liver disease (NAFLD) who do not have obesity constitute a unique subset...... of individuals characterized by dyslipidemia and metabolic deterioration. This review examined the available literature on the role of the liver in dyslipidemia and the metabolic characteristics of patients with NAFLD who do not have obesity. Methods: PubMed was searched using the following keywords: nonobese......, dyslipidemia, NAFLD, NWD, liver, and metabolically obese/unhealthy normal weight. Additionally, article bibliographies were screened, and relevant citations were retrieved. Studies were excluded if they had not measured relevant biomarkers of dyslipidemia. Results: NWD and NAFLD without obesity share a similar...

  18. Idiopathic Normal Pressure Hydrocephalus

    Directory of Open Access Journals (Sweden)

    Basant R. Nassar BS

    2016-04-01

Full Text Available Idiopathic normal pressure hydrocephalus (iNPH) is a potentially reversible neurodegenerative disease commonly characterized by a triad of dementia, gait disturbance, and urinary disturbance. Advancements in diagnosis and treatment have aided in properly identifying and improving symptoms in patients. However, a large proportion of iNPH patients remain either undiagnosed or misdiagnosed. Using the PubMed search engine with the keywords “normal pressure hydrocephalus,” “diagnosis,” “shunt treatment,” “biomarkers,” “gait disturbances,” “cognitive function,” “neuropsychology,” “imaging,” and “pathogenesis,” articles were obtained for this review. The majority of the articles were retrieved from the past 10 years. The purpose of this review article is to aid general practitioners in further understanding current findings on the pathogenesis, diagnosis, and treatment of iNPH.

  19. Neuroethics beyond Normal.

    Science.gov (United States)

    Shook, John R; Giordano, James

    2016-01-01

    An integrated and principled neuroethics offers ethical guidelines able to transcend conventional and medical reliance on normality standards. Elsewhere we have proposed four principles for wise guidance on human transformations. Principles like these are already urgently needed, as bio- and cyberenhancements are rapidly emerging. Context matters. Neither "treatments" nor "enhancements" are objectively identifiable apart from performance expectations, social contexts, and civic orders. Lessons learned from disability studies about enablement and inclusion suggest a fresh way to categorize modifications to the body and its performance. The term "enhancement" should be broken apart to permit recognition of enablements and augmentations, and kinds of radical augmentation for specialized performance. Augmentations affecting the self, self-worth, and self-identity of persons require heightened ethical scrutiny. Reversibility becomes the core problem, not the easy answer, as augmented persons may not cooperate with either decommissioning or displacement into unaccommodating societies. We conclude by indicating how our four principles of self-creativity, nonobsolescence, empowerment, and citizenship establish a neuroethics beyond normal that is better prepared for a future in which humans and their societies are going so far beyond normal.

  20. Ethics and "normal birth".

    Science.gov (United States)

    Lyerly, Anne Drapkin

    2012-12-01

    The concept of "normal birth" has been promoted as ideal by several international organizations, although debate about its meaning is ongoing. In this article, I examine the concept of normalcy to explore its ethical implications and raise a trio of concerns. First, in its emphasis on nonuse of technology as a goal, the concept of normalcy may marginalize women for whom medical intervention is necessary or beneficial. Second, in its emphasis on birth as a socially meaningful event, the mantra of normalcy may unintentionally avert attention to meaning in medically complicated births. Third, the emphasis on birth as a normal and healthy event may be a contributor to the long-standing tolerance for the dearth of evidence guiding the treatment of illness during pregnancy and the failure to responsibly and productively engage pregnant women in health research. Given these concerns, it is worth debating not just what "normal birth" means, but whether the term as an ideal earns its keep. © 2012, Copyright the Authors Journal compilation © 2012, Wiley Periodicals, Inc.

  1. Parallelization of pressure equation solver for incompressible N-S equations

    International Nuclear Information System (INIS)

    Ichihara, Kiyoshi; Yokokawa, Mitsuo; Kaburaki, Hideo.

    1996-03-01

A pressure equation solver in a code for 3-dimensional incompressible flow analysis has been parallelized by using the red-black SOR method and the PCG method on the Fujitsu VPP500, a vector parallel computer with distributed memory. For comparison of scalability, the solver using the red-black SOR method has also been parallelized on the Intel Paragon, a scalar parallel computer with distributed memory. The scalability of the red-black SOR method on both the VPP500 and the Paragon was lost as the number of processor elements was increased. The reason for the loss of scalability on both systems is the increasing communication time between processor elements. In addition, parallelization by DO-loop division lowers the vectorization efficiency on the VPP500. For an effective implementation on the VPP500, a large-scale problem which holds very long vectorized DO-loops in the parallel program should be solved. The PCG method with the red-black SOR method applied to incomplete LU factorization (red-black PCG) requires more iteration steps than the normal PCG method with forward and backward substitution, in spite of the same number of floating-point operations in a DO-loop of the incomplete LU factorization. The parallelized red-black PCG method offers fewer advantages than the parallelized red-black SOR method when the computational region has fewer grid points, because low vectorization efficiency is obtained in the red-black PCG method. (author)
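The reason red-black ordering vectorizes and parallelizes well is that every "red" point (i+j even) depends only on "black" neighbours and vice versa, so each half-sweep is one data-parallel update. The sketch below illustrates this on a 2-D Poisson problem with NumPy masks standing in for the vector hardware; it is an illustration of the method, not the VPP500 code from the paper.

```python
import numpy as np

def red_black_sor(u, f, h, omega=1.8, iters=200):
    """Red-black SOR for -∇²u = f on a uniform grid (Dirichlet BCs).

    Each of the two colour half-sweeps touches an independent set of
    points, so within a half-sweep all updates can proceed in parallel.
    """
    ny, nx = u.shape
    ii, jj = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    interior = (ii > 0) & (ii < ny - 1) & (jj > 0) & (jj < nx - 1)
    for _ in range(iters):
        for parity in (0, 1):                      # red sweep, then black
            mask = interior & ((ii + jj) % 2 == parity)
            # Gauss-Seidel target value at every point (cheap to form densely)
            gs = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                         + np.roll(u, 1, 1) + np.roll(u, -1, 1)
                         + h * h * f)
            u[mask] += omega * (gs[mask] - u[mask])  # over-relaxed update
    return u
```

On a distributed-memory machine each half-sweep is followed by a halo exchange of boundary rows/columns, which is exactly the communication cost the abstract identifies as the scalability bottleneck.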

  2. Machine and Collection Abstractions for User-Implemented Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Magne Haveraaen

    2000-01-01

Full Text Available Data parallelism has appeared as a fruitful approach to the parallelisation of compute-intensive programs. Data parallelism has the advantage of mimicking the sequential (and deterministic) structure of programs, as opposed to task parallelism, where the explicit interaction of processes has to be programmed. In data parallelism, data structures, typically collection classes in the form of large arrays, are distributed on the processors of the target parallel machine. Trying to extract distribution aspects from conventional code often runs into problems with a lack of uniformity in the use of the data structures and in the expression of data dependency patterns within the code. Here we propose a framework with two conceptual classes, Machine and Collection. The Machine class abstracts hardware communication and distribution properties. This gives a programmer high-level access to the important parts of the low-level architecture. The Machine class may readily be used in the implementation of a Collection class, giving the programmer full control of the parallel distribution of data, as well as allowing normal sequential implementation of this class. Any program using such a collection class will be parallelisable, without requiring any modification, by choosing between sequential and parallel versions at link time. Experiments with a commercial application, built using the Sophus library which uses this approach to parallelisation, show good parallel speed-ups, without any adaptation of the application program being needed.

  3. Comparative ultrasound measurement of normal thyroid gland ...

    African Journals Online (AJOL)

    2011-08-31

The normal thyroid gland has a homogeneous, increased, medium-level echo texture. The childhood thyroid gland dimension correlates linearly with age and body surface, unlike adults. [14] Iodothyronine (T3) and thyroxine (T4) are thyroid hormones which function to control the basal metabolic rate (BMR).

  4. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
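The parallel-E idea — per-respondent posterior expectations are independent, so the E step is embarrassingly parallel, followed by an M-step reduction — can be illustrated on a much simpler model than the report's: a two-component 1-D Gaussian mixture, with the E step computed chunk-by-chunk in a thread pool. All names here are ours and the model is deliberately toy-sized.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def em_gmm_parallel(x, iters=100, workers=4):
    """Illustrative parallel-E/parallel-reduce EM for a 2-component
    1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()])          # crude initial means
    sigma = np.array([x.std(), x.std()]) + 1e-9
    pi = np.array([0.5, 0.5])

    def e_step(chunk):
        # Posterior responsibilities for one chunk (len(chunk) x 2);
        # independent per observation, hence parallelizable.
        d = (chunk[:, None] - mu) / sigma
        dens = pi * np.exp(-0.5 * d * d) / (sigma * np.sqrt(2 * np.pi))
        return dens / dens.sum(axis=1, keepdims=True)

    chunks = np.array_split(x, workers)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(iters):
            r = np.vstack(list(pool.map(e_step, chunks)))  # parallel E step
            n_k = r.sum(axis=0)                            # M step: reduce
            pi = n_k / len(x)
            mu = (r * x[:, None]).sum(axis=0) / n_k
            sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k) + 1e-9
    return mu, sigma, pi
```

The report's parallel-M step follows the same pattern one level up: the M-step optimization itself is split across parameter blocks rather than across respondents.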

  5. Parallel time domain solvers for electrically large transient scattering problems

    KAUST Repository

    Liu, Yang

    2014-09-26

Marching on in time (MOT)-based integral equation solvers represent an increasingly appealing avenue for analyzing transient electromagnetic interactions with large and complex structures. MOT integral equation solvers for analyzing electromagnetic scattering from perfect electrically conducting objects are obtained by enforcing electric field boundary conditions and implicitly time-advance electric surface current densities by iteratively solving sparse systems of equations at all time steps. Contrary to finite-difference and finite-element competitors, these solvers apply to nonlinear and multi-scale structures comprising geometrically intricate and deep sub-wavelength features residing atop electrically large platforms. Moreover, they are high-order accurate, stable in the low- and high-frequency limits, and applicable to conducting and penetrable structures represented by highly irregular meshes. This presentation reviews some recent advances in the parallel implementations of time domain integral equation solvers, specifically those that leverage the multilevel plane-wave time-domain (PWTD) algorithm on modern manycore computer architectures including graphics processing units (GPUs) and distributed memory supercomputers. The GPU-based implementation achieves at least one order of magnitude speedup compared to serial implementations while the distributed parallel implementation is highly scalable to thousands of compute-nodes. A distributed parallel PWTD kernel has been adopted to solve time domain surface/volume integral equations (TDSIE/TDVIE) for analyzing transient scattering from large and complex-shaped perfectly electrically conducting (PEC)/dielectric objects involving ten million/tens of millions of spatial unknowns.

  6. Marginal Assessment of Crowns by the Aid of Parallel Radiography

    Directory of Open Access Journals (Sweden)

    Farnaz Fattahi

    2015-03-01

Full Text Available Introduction: Marginal adaptation is the most critical item in the long-term prognosis of single crowns. This study aimed to assess the marginal quality as well as the discrepancies in marginal integrity of some PFM single crowns of posterior teeth by employing parallel radiography in Shiraz Dental School, Shiraz, Iran. Methods: In this descriptive study, parallel radiographs were taken of 200 fabricated PFM single crowns of posterior teeth after cementation and before discharging the patient. To calculate the magnification of the images, a metallic sphere with a thickness of 4 mm was placed in the direction of the crown margin on the occlusal surface. Thereafter, the horizontal and vertical space between the crown margins and the margins of the preparations, and also the vertical space between the crown margin and the bone crest, were measured by using digital radiological software. Results: Analysis of data by descriptive statistics revealed that 75.5% and 60% of the cases had more than the acceptable space (50 µm) in the vertical (130±20 µm) and horizontal (90±15 µm) dimensions, respectively. Moreover, 85% of patients were found to have either a horizontal or a vertical gap. In 77% of cases, the margins of crowns invaded the biologic width in the mesial surfaces and in 70% in the distal surfaces. Conclusion: Parallel radiography can be expedient at the stage of framework try-in to yield important information that cannot be obtained by routine clinical evaluations and may improve the treatment prognosis.

  7. Parallel Computing for Brain Simulation.

    Science.gov (United States)

    Pastur-Romay, L A; Porto-Pazos, A B; Cedron, F; Pazos, A

    2017-01-01

    The human brain is the most complex system in the known universe and therefore one of its greatest mysteries. It provides human beings with extraordinary abilities, yet how and why most of these abilities are produced is still not understood. For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have allowed the creation of the first simulation with a number of neurons similar to that of a human brain. This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital, analog, and hybrid models. This review includes the current applications of these works, as well as future trends. It focuses on works that seek advanced progress in Neuroscience and on others that seek new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized, and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing. Copyright © Bentham Science Publishers.

  8. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  9. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    Science.gov (United States)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-01-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.
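The interprocess communication the authors describe amounts to exchanging "halo" (ghost) cells between neighboring subdomains before each local cellular-automaton update. Below is a minimal NumPy stand-in for that exchange under an assumed one-dimensional column decomposition; the function name and array contents are hypothetical, and a real distributed code would replace the `hstack` calls with sends/receives between neighboring ranks.

```python
import numpy as np

# Sketch of the halo-exchange step in a decomposed cellular automaton:
# each subdomain is padded with one ghost column copied from its neighbor,
# so the subsequent local update can read neighbor states at the seam.
def exchange_halos(left, right):
    padded_left = np.hstack([left, right[:, :1]])    # left gets right's first column
    padded_right = np.hstack([left[:, -1:], right])  # right gets left's last column
    return padded_left, padded_right

left = np.zeros((4, 3), dtype=int)   # subdomain owned by "rank 0"
right = np.ones((4, 3), dtype=int)   # subdomain owned by "rank 1"
pl, pr = exchange_halos(left, right)
```

After the exchange, each subdomain is one column wider and can apply the nucleation/growth rules to its interior cells without further communication that step.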

  10. Parallel S_n iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (S_n) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional S_n transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial S_n algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial.
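For readers unfamiliar with the serial baseline being parallelized, a toy one-group, one-dimensional S_n source iteration looks like the following. All cross sections, the quadrature order, and the grid are illustrative assumptions; a parallel scheme of the kind studied would distribute the angle loop or the spatial sweeps across processors.

```python
import numpy as np

# Toy one-group, 1-D slab S_4 source iteration with a step (upwind) sweep.
nx, dx = 50, 0.1
sigma_t, sigma_s, q = 1.0, 0.5, 1.0            # total, scattering, fixed source
mus, wts = np.polynomial.legendre.leggauss(4)  # S_4 angular quadrature on [-1, 1]

phi = np.zeros(nx)                             # scalar flux
for it in range(200):
    phi_new = np.zeros(nx)
    src = 0.5 * (sigma_s * phi + q)            # isotropic scattering + fixed source
    for mu, w in zip(mus, wts):
        psi_in = 0.0                           # vacuum boundary condition
        cells = range(nx) if mu > 0 else range(nx - 1, -1, -1)
        for i in cells:                        # upwind sweep along direction mu
            a = abs(mu) / dx
            psi_c = (src[i] + a * psi_in) / (sigma_t + a)
            phi_new[i] += w * psi_c
            psi_in = psi_c
    if np.max(np.abs(phi_new - phi)) < 1e-8:   # source iteration converged
        break
    phi = phi_new
```

With a scattering ratio of 0.5 the iteration converges quickly; the infinite-medium flux q/(sigma_t - sigma_s) = 2 bounds the solution from above.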

  11. Customizable Memory Schemes for Data Parallel Architectures

    NARCIS (Netherlands)

    Gou, C.

    2011-01-01

    Memory system efficiency is crucial for any processor to achieve high performance, especially in the case of data parallel machines. Processing capabilities of parallel lanes will be wasted, when data requests are not accomplished in a sustainable and timely manner. Irregular vector memory accesses

  12. MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION

    Science.gov (United States)

    In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...

  13. Parallel Narrative Structure in Paul Harding's "Tinkers"

    Science.gov (United States)

    Çirakli, Mustafa Zeki

    2014-01-01

    The present paper explores the implications of parallel narrative structure in Paul Harding's "Tinkers" (2009). Besides primarily recounting the two sets of parallel narratives, "Tinkers" also comprises seemingly unrelated fragments such as excerpts from clock repair manuals and diaries. The main stories, however, told…

  14. Evaluation Report: Projekt Parallel Pædagogik

    DEFF Research Database (Denmark)

    Andreasen, Karen Egedal; Hviid, Marianne Kemeny

    2011-01-01

    Evaluation of a development project on parallel pedagogy at VUC Sønderjylland and VUC FYN & FYNs HF-kursus.

  15. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  16. Parallelization of TMVA Machine Learning Algorithms

    CERN Document Server

    Hajili, Mammad

    2017-01-01

    This report reflects my work on the parallelization of TMVA machine learning algorithms integrated into the ROOT Data Analysis Framework during a summer internship at CERN. The report consists of four important parts: the data sets used in training and validation, the algorithms to which multiprocessing was applied, the parallelization techniques, and the resulting changes in execution time with the number of workers.

  17. Parallel fuzzy connected image segmentation on GPU

    OpenAIRE

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implemented…

  18. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.

  19. An Introduction to Parallel Computation R

    Indian Academy of Sciences (India)

    Basic Programming Model. A parallel computer can be programmed by providing a program for each processor in it. In most common parallel computer organizations, a processor can only access its local memory. The program provided to each processor may perform ...

  20. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte and exabyte scales. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine in reasonable time and fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the classroom. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.

  1. Normal radiological findings

    International Nuclear Information System (INIS)

    Moeller, T.B.

    1987-01-01

    This book is intended for learners in radiology, presenting a wealth of normal radiological findings together with a systematic guide for appraisal and interpretation, and for formulation of reports. The text examples and criteria given will help beginners learn to 'read' a radiograph, and to verify their conclusions by means of checklists and standard reports. The case material covers numerous illustrations from the following sectors: skeletal radiography, mammography, tomography, contrast radiography, organ examination by intravenous techniques, arthrography and angiography, and specialized radiography. (ECB) With 184 figs. [de]

  2. Parallel tempering for the traveling salesman problem

    Energy Technology Data Exchange (ETDEWEB)

    Percus, Allon [Los Alamos National Laboratory; Wang, Richard [UCLA MATH DEPT; Hyman, Jeffrey [UCLA MATH DEPT; Caflisch, Russel [UCLA MATH DEPT

    2008-01-01

    We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield close approximation to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulations are performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.
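As a rough illustration of the method being compared, a minimal parallel tempering run on a small random TSP instance is sketched below. The temperature ladder, the 2-opt move set, the sweep count, and the instance size are all assumptions for illustration, not the authors' parameters.

```python
import math
import random

# Parallel tempering for a toy TSP: one Metropolis replica per temperature,
# with occasional replica swaps between neighboring temperatures.
random.seed(1)
cities = [(random.random(), random.random()) for _ in range(20)]

def length(tour):
    # Total length of the closed tour (tour[-1] connects back to tour[0]).
    return sum(math.dist(cities[tour[i]], cities[tour[i - 1]])
               for i in range(len(tour)))

temps = [0.01, 0.05, 0.2, 1.0]                     # cold-to-hot temperature ladder
replicas = [list(range(len(cities))) for _ in temps]

for sweep in range(2000):
    for r, (tour, T) in enumerate(zip(replicas, temps)):
        i, j = sorted(random.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
        d = length(cand) - length(tour)
        if d < 0 or random.random() < math.exp(-d / T):        # Metropolis accept
            replicas[r] = cand
    k = random.randrange(len(temps) - 1)           # attempt a neighbor swap
    db = 1 / temps[k] - 1 / temps[k + 1]           # positive: temps[k] is colder
    d = length(replicas[k]) - length(replicas[k + 1])
    if d >= 0 or random.random() < math.exp(d * db):
        replicas[k], replicas[k + 1] = replicas[k + 1], replicas[k]

best = min(replicas, key=length)
```

The swap rule accepts with probability min(1, exp[(β_k − β_{k+1})(E_k − E_{k+1})]), which lets tours trapped at a cold temperature escape via the hot replicas.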

  3. Broadcasting a message in a parallel computer

    Science.gov (United States)

    Berg, Jeremy E [Rochester, MN; Faraj, Ahmad A [Rochester, MN

    2011-08-02

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.

  4. Differences Between Distributed and Parallel Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brightwell, R.; Maccabe, A.B.; Rissen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  5. Parallel programming with Easy Java Simulations

    Science.gov (United States)

    Esquembre, F.; Christian, W.; Belloni, M.

    2018-01-01

    Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.

  6. Parallel transposition of sparse data structures

    DEFF Research Database (Denmark)

    Wang, Hao; Liu, Weifeng; Hou, Kaixi

    2016-01-01

    Many applications in computational sciences and social sciences exploit sparsity and connectivity of acquired data. Even though many parallel sparse primitives such as sparse matrix-vector (SpMV) multiplication have been extensively studied, some other important building blocks, e.g., parallel transposition for sparse matrices and graphs, have not received the attention they deserve. In this paper, we first identify that the transposition operation can be a bottleneck of some fundamental sparse matrix and graph algorithms. Then, we revisit the performance and scalability of parallel transposition approaches on x86-based multi-core and many-core processors. Based on the insights obtained, we propose two new parallel transposition algorithms: ScanTrans and MergeTrans. The experimental results show that our ScanTrans method achieves an average of 2.8-fold (up to 6.2-fold) speedup over the parallel…
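The scan-based idea behind a transposition kernel like ScanTrans can be shown serially: histogram the column indices, exclusive-scan the histogram into the transpose's row pointers, then scatter the entries. The sketch below is only a serial illustration of that three-phase idea in CSR format, not the vectorized multi-core kernel from the paper.

```python
import numpy as np

# Serial sketch of scan-based CSR transposition:
#   1) histogram nnz per column of A  (= nnz per row of A^T),
#   2) exclusive scan -> row pointers of A^T,
#   3) scatter each entry into its slot.
def csr_transpose(m, n, indptr, indices, data):
    counts = np.bincount(indices, minlength=n)            # phase 1: histogram
    t_indptr = np.concatenate(([0], np.cumsum(counts)))   # phase 2: exclusive scan
    t_indices = np.empty_like(indices)
    t_data = np.empty_like(data)
    cursor = t_indptr[:-1].copy()                         # next free slot per A^T row
    for row in range(m):                                  # phase 3: scatter
        for k in range(indptr[row], indptr[row + 1]):
            dst = cursor[indices[k]]
            t_indices[dst] = row
            t_data[dst] = data[k]
            cursor[indices[k]] += 1
    return t_indptr, t_indices, t_data

# A = [[1, 0, 2],
#      [0, 3, 0]]   in CSR form:
indptr = np.array([0, 2, 3])
indices = np.array([0, 2, 1])
data = np.array([1, 2, 3])
tp, ti, td = csr_transpose(2, 3, indptr, indices, data)
print(tp.tolist(), ti.tolist(), td.tolist())  # → [0, 1, 2, 3] [0, 1, 0] [1, 3, 2]
```

The histogram and scan phases parallelize naturally; the scatter phase is where per-thread auxiliary arrays or merging come in.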

  7. Parallel-In-Time For Moving Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Southworth, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  8. Parallelization of the molecular dynamics code GROMOS87 for distributed memory parallel architectures

    NARCIS (Netherlands)

    Green, DG; Meacham, KE; vanHoesel, F; Hertzberger, B; Serazzi, G

    1995-01-01

    This paper describes the techniques and methodologies employed during parallelization of the Molecular Dynamics (MD) code GROMOS87, with the specific requirement that the program run efficiently on a range of distributed-memory parallel platforms. We discuss the preliminary results of our parallel

  9. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  10. Single-cell mechanics: the parallel plates technique.

    Science.gov (United States)

    Bufi, Nathalie; Durand-Smet, Pauline; Asnacios, Atef

    2015-01-01

    We describe here the parallel plates technique which enables quantifying single-cell mechanics, either passive (cell deformability) or active (whole-cell traction forces). Based on the bending of glass microplates of calibrated stiffness, it is easy to implement on any microscope, and benefits from protocols and equipment already used in biology labs (coating of glass slides, pipette pullers, micromanipulators, etc.). We first present the principle of the technique, the design and calibration of the microplates, and various surface coatings corresponding to different cell-substrate interactions. Then we detail the specific cell preparation for the assays, and the different mechanical assays that can be carried out. Finally, we discuss the possible technical simplifications and the specificities of each mechanical protocol, as well as the possibility of extending the use of the parallel plates to investigate the mechanics of cell aggregates or tissues. Copyright © 2015 Elsevier Inc. All rights reserved.
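The force readout in the parallel plates technique reduces to Hooke's law for the calibrated flexible plate: traction force equals plate stiffness times measured tip deflection. A toy calculation with assumed (but physically plausible) numbers:

```python
# Toy whole-cell traction-force readout for the parallel plates technique.
# Both numbers below are assumptions for illustration, in the nN/µm range
# typical of calibrated glass microplates and single-cell deflections.
k = 2.0e-3            # calibrated plate stiffness, N/m (assumed)
deflection = 1.5e-6   # measured plate tip deflection, m (assumed)

force = k * deflection            # Hooke's law: F = k * delta
print(f"{force * 1e9:.1f} nN")    # → 3.0 nN
```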

  11. 2D-RBUC for efficient parallel compression of residuals

    Science.gov (United States)

    Đurđević, Đorđe M.; Tartalja, Igor I.

    2018-02-01

    In this paper, we present a method for lossless compression of residuals with efficient SIMD-parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent models of terrains. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed the data decompression algorithm for modern GPU architectures already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small efficiency degradation for a non-negligible compression ratio benefit (measured at up to 91%).

  12. Normalization for Implied Volatility

    OpenAIRE

    Fukasawa, Masaaki

    2010-01-01

    We study specific nonlinear transformations of the Black-Scholes implied volatility to show remarkable properties of the volatility surface. Model-free bounds on the implied volatility skew are given. Pricing formulas for the European options which are written in terms of the implied volatility are given. In particular, we prove elegant formulas for the fair strikes of the variance swap and the gamma swap.

  13. State of the art of parallel scientific visualization applications on PC clusters

    International Nuclear Information System (INIS)

    Juliachs, M.

    2004-01-01

    In this state of the art on parallel scientific visualization applications on PC clusters, we deal with both surface and volume rendering approaches. We first analyze available PC cluster configurations and existing parallel rendering software components for parallel graphics rendering. CEA/DIF has been studying cluster visualization since 2001. This report is part of a study to set up a new visualization research platform. This platform consisting of an eight-node PC cluster under Linux and a tiled display was installed in collaboration with Versailles-Saint-Quentin University in August 2003. (author)

  14. Momentum-energy transport from turbulence driven by parallel flow shear

    International Nuclear Information System (INIS)

    Dong, J.Q.; Horton, W.; Bengtson, R.D.; Li, G.X.

    1994-04-01

    The low frequency E x B turbulence driven by the shear in the mass flow velocity parallel to the magnetic field is studied using the fluid theory in a slab configuration with magnetic shear. Ion temperature gradient effects are taken into account. The eigenfunctions of the linear instability are asymmetric about the mode rational surfaces. Quasilinear Reynolds stress induced by such asymmetric fluctuations produces momentum and energy transport across the magnetic field. Analytic formulas for the parallel and perpendicular Reynolds stress, viscosity and energy transport coefficients are given. Experimental observations of the parallel and poloidal plasma flows on TEXT-U are presented and compared with the theoretical models

  15. Friction of hydrogels with controlled surface roughness on solid flat substrates.

    Science.gov (United States)

    Yashima, Shintaro; Takase, Natsuko; Kurokawa, Takayuki; Gong, Jian Ping

    2014-05-14

    This study investigated the effect of hydrogel surface roughness on its sliding friction against a solid substrate having modestly adhesive interaction with hydrogels under small normal pressure in water. The friction test was performed between bulk polyacrylamide hydrogels of varied surface roughness and a smooth glass substrate by using a strain-controlled rheometer with parallel-plates geometry. At small pressure (normal strain 1.4-3.6%), the flat surface gel showed a poor reproducibility in friction. In contrast, the gels with a surface roughness of 1-10 μm order showed well reproducible friction behaviors and their frictional stress was larger than that of the flat surface hydrogel. Furthermore, the flat gel showed an elasto-hydrodynamic transition while the rough gels showed a monotonous decrease of friction with velocity. The difference between the flat surface and the rough surface diminished with the increase of the normal pressure. These phenomena are associated with the different contact behaviors of these soft hydrogels in liquid, as revealed by the observation of the interface using a confocal laser microscope.

  16. Position Analysis of a Hybrid Serial-Parallel Manipulator in Immersion Lithography

    Directory of Open Access Journals (Sweden)

    Jie-jie Shao

    2015-01-01

    Full Text Available This paper proposes a novel hybrid serial-parallel mechanism with 6 degrees of freedom. The new mechanism combines two different parallel modules in serial form. The 3-P̲(PH parallel module is a 3-degree-of-freedom architecture based on higher joints that specializes in describing the relative pose of two planes. The 3-P̲SP parallel module is a typical architecture that has been widely investigated in recent research. In this paper, the direct and inverse position problems of the 3-P̲SP parallel module in the coupled mixed-type mode are analyzed in detail, and the solutions are obtained in analytical form. Furthermore, the solutions for the direct and inverse position problems of the novel hybrid serial-parallel mechanism are also derived in analytical form. The proposed hybrid serial-parallel mechanism is applied to regulate the immersion hood's pose in an immersion lithography system. By measuring and regulating the pose of the immersion hood with respect to the wafer surface simultaneously, the immersion hood can track the wafer surface's pose in real time and the gap status is stabilized. This is another exploration of the hybrid serial-parallel mechanism's applications.

  17. Normal Untreated Jurkat Cells

    Science.gov (United States)

    2004-01-01

    Biomedical research offers hope for a variety of medical problems, from diabetes to the replacement of damaged bone and tissues. Bioreactors, which are used to grow cells and tissue cultures, play a major role in such research and production efforts. The objective of the research was to define a way to differentiate between effects due to microgravity and those due to possible stress from non-optimal spaceflight conditions. These Jurkat cells, a human acute T-cell leukemia line, were obtained to evaluate three types of potential experimental stressors: a) temperature elevation; b) serum starvation; and c) centrifugal force. The data from previous spaceflight experiments showed that actin filaments and cell shape are significantly different from the control. These normal cells serve as the baseline for future spaceflight experiments.

  18. Portable parallel programming in a Fortran environment

    International Nuclear Information System (INIS)

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network based MIMD processors. The parallelism is implemented using standard UNIX (tm) tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at from 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a ''nearly realistic'' lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs

  19. Potts-model grain growth simulations: Parallel algorithms and applications

    Energy Technology Data Exchange (ETDEWEB)

    Wright, S.A.; Plimpton, S.J.; Swiler, T.P. [and others

    1997-08-01

    Microstructural morphology and grain boundary properties often control the service properties of engineered materials. This report uses the Potts-model to simulate the development of microstructures in realistic materials. Three areas of microstructural morphology simulations were studied: the development of massively parallel algorithms for Potts-model grain growth simulations, modeling of mass transport via diffusion in these simulated microstructures, and the development of a gradient-dependent Hamiltonian to simulate columnar grain growth. Potts grain growth models for massively parallel supercomputers were developed for the conventional Potts-model in both two and three dimensions. Simulations using these parallel codes showed self-similar grain growth and no finite-size effects for previously unapproachable large-scale problems. In addition, new enhancements to the conventional Metropolis algorithm used in the Potts-model were developed to accelerate the calculations. These techniques enable both the sequential and parallel algorithms to run faster and to use essentially an infinite number of grain orientation values to avoid non-physical grain coalescence events. Mass transport phenomena in polycrystalline materials were studied in two dimensions using numerical diffusion techniques on microstructures generated using the Potts-model. The results of the mass transport modeling showed excellent quantitative agreement with one-dimensional diffusion problems; however, the results also suggest that transient multi-dimensional diffusion effects cannot be parameterized as the product of the grain boundary diffusion coefficient and the grain boundary width. Instead, both properties are required. Gradient-dependent grain growth mechanisms were included in the Potts-model by adding an extra term to the Hamiltonian. Under normal grain growth, the primary driving term is the curvature of the grain boundary, which is included in the standard Potts-model Hamiltonian.
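
    The conventional Potts-model sweep described above can be illustrated with a small serial sketch (the report's massively parallel codes and accelerated Metropolis variants are not shown, and all names below are illustrative). Each lattice site carries a grain-orientation number; a zero-temperature Metropolis sweep reassigns sites to a neighbouring orientation whenever that does not raise the boundary energy, so grains coarsen.

```python
# Minimal serial sketch of conventional Potts-model grain growth (the
# report's massively parallel codes are not shown; names are illustrative).
import random

def site_energy(lattice, i, j, spin):
    """One unit of boundary energy per unlike nearest neighbour."""
    n = len(lattice)
    nbrs = [lattice[(i - 1) % n][j], lattice[(i + 1) % n][j],
            lattice[i][(j - 1) % n], lattice[i][(j + 1) % n]]
    return sum(1 for s in nbrs if s != spin)

def metropolis_sweep(lattice, rng):
    """Zero-temperature sweep: adopt a neighbour's orientation when that
    does not raise the local boundary energy."""
    n = len(lattice)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        proposal = lattice[(i + rng.choice([-1, 1])) % n][j]
        dE = (site_energy(lattice, i, j, proposal)
              - site_energy(lattice, i, j, lattice[i][j]))
        if dE <= 0:
            lattice[i][j] = proposal

def total_energy(lattice):
    n = len(lattice)
    return sum(site_energy(lattice, i, j, lattice[i][j])
               for i in range(n) for j in range(n))

rng = random.Random(0)
n = 32
# Large random orientation numbers mimic the report's "essentially infinite"
# set of orientation values, avoiding spurious grain coalescence.
lattice = [[rng.randrange(1 << 30) for _ in range(n)] for _ in range(n)]
e0 = total_energy(lattice)
for _ in range(20):
    metropolis_sweep(lattice, rng)
print(total_energy(lattice) < e0)  # prints True: boundary energy drops as grains coarsen
```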

  20. Anisotropic behaviour of transmission through thin superconducting NbN film in parallel magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Šindler, M., E-mail: sindler@fzu.cz [Institute of Physics ASCR, v. v. i., Cukrovarnická 10, CZ-162 53 Praha 6 (Czech Republic); Tesař, R. [Institute of Physics ASCR, v. v. i., Cukrovarnická 10, CZ-162 53 Praha 6 (Czech Republic); Faculty of Mathematics and Physics, Charles University, Ke Karlovu 3, CZ-121 16 Praha (Czech Republic); Koláček, J. [Institute of Physics ASCR, v. v. i., Cukrovarnická 10, CZ-162 53 Praha 6 (Czech Republic); Skrbek, L. [Faculty of Mathematics and Physics, Charles University, Ke Karlovu 3, CZ-121 16 Praha (Czech Republic)

    2017-02-15

    Highlights: • Transmission through thin NbN film in parallel magnetic field exhibits strong anisotropic behaviour in the terahertz range. • Response for a polarisation parallel with the applied field is given as weighted sum of superconducting and normal state contributions. • Effective medium approach fails to describe response for linear polarisation perpendicular to the applied magnetic field. - Abstract: Transmission of terahertz waves through a thin layer of the superconductor NbN deposited on an anisotropic R-cut sapphire substrate is studied as a function of temperature in a magnetic field oriented parallel with the sample. A significant difference is found between transmitted intensities of beams linearly polarised parallel with and perpendicular to the direction of applied magnetic field.

  1. F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming

    Science.gov (United States)

    DiNucci, David C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Parallel programming is still based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called F-Nets, which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).

  2. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here a new time-averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  3. Distributed parallel messaging for multiprocessor systems

    Science.gov (United States)

    Chen, Dong; Heidelberger, Philip; Salapura, Valentina; Senger, Robert M; Steinmacher-Burrow, Burhard; Sugawara, Yutaka

    2013-06-04

    A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into and multiple reception from a network, in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network.

  4. Synchronization Of Parallel Discrete Event Simulations

    Science.gov (United States)

    Steinman, Jeffrey S.

    1992-01-01

    Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.

  5. Massively parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Krasheninnikov, S.I.; Craddock, G.G.; Djordjevic, V.

    1996-01-01

    The Fokker-Planck code ALLA, recently developed for workstations, simulates the temporal evolution of 1V, 2V and 1D2V collisional edge plasmas. In this work we present the results of code parallelization on the CRI T3D massively parallel platform (the ALLAp version). Simultaneously we benchmark the 1D2V parallel version against an analytic self-similar solution of the collisional kinetic equation. This test is not trivial as it demands a very strong spatial temperature and density variation within the simulation domain. (orig.)

  6. Implementation of QR up- and downdating on a massively parallel computer

    DEFF Research Database (Denmark)

    Bendtsen, Claus; Hansen, Per Christian; Madsen, Kaj

    1995-01-01

    We describe an implementation of QR up- and downdating on a massively parallel computer (the Connection Machine CM-200) and show that the algorithm maps well onto the computer. In particular, we show how the use of corrected semi-normal equations for downdating can be efficiently implemented. We also illustrate the use of our algorithms in a new LP algorithm.

  7. Massively Parallel Computing: A Sandia Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

    The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant break-throughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  8. Stranger than fiction: parallel universes beguile science

    CERN Multimedia

    2007-01-01

    We may not be able - at least not yet - to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of a fevered imagination. (1/2 page)

  9. Extracting Parallel Paragraphs from Common Crawl

    Directory of Open Access Journals (Sweden)

    Kúdela Jakub

    2017-04-01

    Full Text Available Most of the current methods for mining parallel texts from the web assume that web pages of web sites share the same structure across languages. We believe that there still exists a non-negligible amount of parallel data spread across sources not satisfying this assumption. We propose an approach based on a combination of bivec (a bilingual extension of word2vec) and locality-sensitive hashing which allows us to efficiently identify pairs of parallel segments located anywhere on pages of a given web domain, regardless of their structure. We validate our method on realigning segments from a large parallel corpus. Another experiment with real-world data provided by Common Crawl Foundation confirms that our solution scales to a set of web-crawled data hundreds of terabytes in size.
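
    The record does not include the authors' implementation; the sketch below illustrates only the locality-sensitive-hashing step with random hyperplanes, bucketing segment embedding vectors so that candidate parallel pairs are found without comparing every cross-language pair. The bivec embeddings are replaced by toy vectors, and all names are illustrative.

```python
# Hedged sketch of the locality-sensitive-hashing step only: random
# hyperplanes turn each segment vector into a short bit signature, and
# segments are compared only within matching buckets. Real bivec
# embeddings are replaced by toy vectors; all names are illustrative.
import random
from collections import defaultdict

def lsh_signature(vec, hyperplanes):
    """One bit per hyperplane: which side of it the vector lies on."""
    bits = 0
    for h in hyperplanes:
        dot = sum(v * w for v, w in zip(vec, h))
        bits = (bits << 1) | int(dot >= 0)
    return bits

def candidate_pairs(src_vecs, tgt_vecs, dim=8, n_planes=6, seed=0):
    """Bucket both sides by signature; emit only same-bucket pairs."""
    rng = random.Random(seed)
    hyperplanes = [[rng.gauss(0.0, 1.0) for _ in range(dim)]
                   for _ in range(n_planes)]
    buckets = defaultdict(lambda: ([], []))
    for i, v in enumerate(src_vecs):
        buckets[lsh_signature(v, hyperplanes)][0].append(i)
    for j, v in enumerate(tgt_vecs):
        buckets[lsh_signature(v, hyperplanes)][1].append(j)
    return {(i, j) for src, tgt in buckets.values() for i in src for j in tgt}

# Identical vectors always share a signature, so a true pair survives bucketing.
src = [[1.0] * 8, [-1.0] * 8]
tgt = [[1.0] * 8]
print((0, 0) in candidate_pairs(src, tgt))  # prints True
```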

  10. Structured grid generator on parallel computers

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Murakami, Hiroyuki; Higashida, Akihiro; Yanagisawa, Ichiro.

    1997-03-01

    A general-purpose structured grid generator on parallel computers, which generates a large-scale structured grid efficiently, has been developed. The generator is applicable to Cartesian, cylindrical and BFC (Boundary-Fitted Curvilinear) coordinates. In the case of BFC grids, there are three adaptable topologies: L-type, O-type and multi-block type, the last of which enables any combination of L- and O-grids. Internal BFC grid points can be automatically generated and smoothed by either an algebraic supplemental method or a partial differential equation method. The partial differential equation solver is implemented on parallel computers, because it consumes a large portion of overall execution time. Therefore, high-speed processing of large-scale grid generation can be realized by use of a parallel computer. Generated grid data can be adjusted to the domain decomposition for parallel analysis. (author)

  11. 6th International Parallel Tools Workshop

    CERN Document Server

    Brinkmann, Steffen; Gracia, José; Resch, Michael; Nagel, Wolfgang

    2013-01-01

    The latest advances in High Performance Computing hardware have significantly raised the level of available compute performance. At the same time, the growing hardware capabilities of modern supercomputing architectures have caused an increasing complexity of parallel application development. Despite numerous efforts to improve and simplify parallel programming, there is still a lot of manual debugging and tuning work required. This process is supported by special software tools, facilitating debugging, performance analysis, and optimization, and thus making a major contribution to the development of robust and efficient parallel software. This book introduces a selection of the tools which were presented and discussed at the 6th International Parallel Tools Workshop, held in Stuttgart, Germany, 25-26 September 2012.

  12. Adapting algorithms to massively parallel hardware

    CERN Document Server

    Sioulas, Panagiotis

    2016-01-01

    In the recent years, the trend in computing has shifted from delivering processors with faster clock speeds to increasing the number of cores per processor. This marks a paradigm shift towards parallel programming in which applications are programmed to exploit the power provided by multi-cores. Usually there is gain in terms of the time-to-solution and the memory footprint. Specifically, this trend has sparked an interest towards massively parallel systems that can provide a large number of processors, and possibly computing nodes, as in the GPUs and MPPAs (Massively Parallel Processor Arrays). In this project, the focus was on two distinct computing problems: k-d tree searches and track seeding cellular automata. The goal was to adapt the algorithms to parallel systems and evaluate their performance in different cases.

  13. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  14. Density functional theory and parallel processing

    International Nuclear Information System (INIS)

    Ward, R.C.; Geist, G.A.; Butler, W.H.

    1987-01-01

    The authors demonstrate a method for obtaining the ground state energies and charge densities of a system of atoms described within density functional theory using simulated annealing on a parallel computer.
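
    The record gives no code; as a hedged illustration of the simulated-annealing method it names, the sketch below anneals a toy one-dimensional energy E(x) = (x² - 1)² rather than a density-functional energy, using a Metropolis acceptance rule and a linear cooling schedule. The schedule, step size, and names are illustrative choices, not the authors'.

```python
# Hedged illustration of simulated annealing on a toy 1-D energy
# E(x) = (x^2 - 1)^2 instead of a density-functional energy; the
# schedule and step size are illustrative choices, not the authors'.
import math
import random

def anneal(energy, x0, steps=5000, t0=1.0, seed=0):
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-6      # linear cooling schedule
        x_new = x + rng.uniform(-0.1, 0.1)     # random trial move
        e_new = energy(x_new)
        # Metropolis rule: always accept downhill, sometimes accept uphill
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

x, e = anneal(lambda x: (x * x - 1.0) ** 2, x0=3.0)
print(round(e, 4))  # ends near a minimum at x = +/-1, where E = 0
```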

  15. Radiation-hard/high-speed parallel optical links

    International Nuclear Information System (INIS)

    Gan, K.K.; Buchholz, P.; Heidbrink, S.; Kagan, H.P.; Kass, R.D.; Moore, J.; Smith, D.S.; Vogt, M.; Ziolkowski, M.

    2016-01-01

    We have designed and fabricated a compact parallel optical engine for transmitting data at 5 Gb/s. The device consists of a 4-channel ASIC driving a VCSEL (Vertical Cavity Surface Emitting Laser) array in an optical package. The ASIC is designed using only core transistors in a 65 nm CMOS process to enhance the radiation-hardness. The ASIC contains an 8-bit DAC to control the bias and modulation currents of the individual channels in the VCSEL array. The performance of the optical engine at 5 Gb/s is satisfactory.

  16. PARALLEL SOLUTION METHODS OF PARTIAL DIFFERENTIAL EQUATIONS

    Directory of Open Access Journals (Sweden)

    Korhan KARABULUT

    1998-03-01

    Full Text Available Partial differential equations arise in almost all fields of science and engineering. Computer time spent solving partial differential equations is much greater than that spent on any other problem class. For this reason, partial differential equations are well suited to solution on parallel computers, which offer great computation power. In this study, parallel solution of partial differential equations with the Jacobi, Gauss-Seidel, SOR (Successive Over-Relaxation) and SSOR (Symmetric SOR) algorithms is studied.
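
    As an illustration of the simplest of the methods named above, the following sketch applies the Jacobi iteration to the 1-D Laplace equation with fixed boundary values; each update uses only values from the previous sweep, which is the property that makes Jacobi straightforward to distribute across processors.

```python
# Serial sketch of the Jacobi iteration named above, applied to the 1-D
# Laplace equation u'' = 0 with u(0) = 0 and u(1) = 1; every update uses
# only the previous sweep's values, which is what makes the method easy
# to split across processors.
def jacobi(n=16, sweeps=2000):
    u = [0.0] * (n + 1)
    u[n] = 1.0                                  # boundary values
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n):                   # interior points only
            new[i] = 0.5 * (u[i - 1] + u[i + 1])
        u = new
    return u

u = jacobi()
# The exact solution is the straight line u(x) = x.
print(max(abs(u[i] - i / 16) for i in range(17)) < 1e-3)  # prints True
```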

  17. Semantic Language Extensions for Implicit Parallel Programming

    Science.gov (United States)

    2013-09-01

    Cyclone [87], X10 [52], DPJ [37], Intel TBB [162], TPL [131], C++0x [29], and Erlang [21] are cited among parallel programming approaches requiring manual effort. Several solutions have been proposed that support transactional memory at the language level. Atomos [47] is a transactional programming language, presented in the Proceedings of the 2006 ACM SIGPLAN conference on Programming Language Design and Implementation.

  18. pMatlab Parallel Matlab Library

    OpenAIRE

    Bliss, Nadya; Kepner, Jeremy

    2006-01-01

    MATLAB has emerged as one of the languages most commonly used by scientists and engineers for technical computing, with ~1,000,000 users worldwide. The compute intensive nature of technical computing means that many MATLAB users have codes that can significantly benefit from the increased performance offered by parallel computing. pMatlab (www.ll.mit.edu/pMatlab) provides this capability by implementing Parallel Global Array Semantics (PGAS) using standard operator overloading techniques. The...

  19. .NET 4.5 parallel extensions

    CERN Document Server

    Freeman, Bryan

    2013-01-01

    This book contains practical recipes on everything you will need to create task-based parallel programs using C#, .NET 4.5, and Visual Studio. The book is packed with illustrated code examples to create scalable programs. This book is intended to help experienced C# developers write applications that leverage the power of modern multicore processors. It provides the necessary knowledge for an experienced C# developer to work with .NET parallelism APIs. Previous experience of writing multithreaded applications is not necessary.

  20. Massively parallel evolutionary computation on GPGPUs

    CERN Document Server

    Tsutsui, Shigeyoshi

    2013-01-01

    Evolutionary algorithms (EAs) are metaheuristics that learn from natural collective behavior and are applied to solve optimization problems in domains such as scheduling, engineering, bioinformatics, and finance. Such applications demand acceptable solutions with high-speed execution using finite computational resources. Therefore, there have been many attempts to develop platforms for running parallel EAs using multicore machines, massively parallel cluster machines, or grid computing environments. Recent advances in general-purpose computing on graphics processing units (GPGPU) have opened u

  1. Simulation Exploration through Immersive Parallel Planes

    Energy Technology Data Exchange (ETDEWEB)

    Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Smith, Steve [Los Alamos Visualization Associates

    2017-05-25

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  2. Stability Analysis Method of Parallel Inverter

    OpenAIRE

    Li, Jun; Chen, Jie; Xue, Yaru; Qiu, Ruichang; Liu, Zhigang

    2017-01-01

    In order to further provide theoretical support for the stability of an auxiliary inverter parallel system, a new model which covers most of control parameters needs to be established. However, the ability of the small-signal model established by the traditional method is extremely limited, so this paper proposes a new small-signal modeling method for the parallel system. The new small-signal model not only can analyze the influence of the droop parameters on the system performance, but also ...

  3. Simulation Exploration through Immersive Parallel Planes: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny; Smith, Steve

    2016-03-01

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  4. Biomechanical comparison of sagittal-parallel versus non-parallel pedicle screw placement.

    Science.gov (United States)

    Farshad, Mazda; Farshad-Amacker, Nadja A; Bachmann, Elias; Snedeker, Jess G; Schmid, Samuel L

    2014-11-01

    While convergent placement of pedicle screws in the axial plane is known to be more advantageous biomechanically, surgeons intuitively aim toward a parallel placement of screws in the sagittal plane. It is however not clear whether parallel placement of screws in the sagittal plane is biomechanically superior to a non-parallel construct. The hypothesis of this study is that sagittal non-parallel pedicle screws do not have an inferior initial pull-out strength compared to parallel placed screws. The established lumbar calf spine model was used for determination of pull-out strength in parallel and non-parallel intersegmental pedicle screw constructs. Each of six lumbar calf spines (L1-L6) was divided into three levels: L1/L2, L3/L4 and L5/L6. Each segment was randomly instrumented with pedicle screws (6/45 mm) with either the standard technique of sagittal parallel or non-parallel screw placement, respectively, under fluoroscopic control. CT was used to verify the intrapedicular positioning of all screws. The maximum pull-out forces and type of failure were registered and compared between the groups. The pull-out forces were 5,394 N (range 4,221 N to 8,342 N) for the sagittal non-parallel screws and 5,263 N (range 3,589 N to 7,554 N) for the sagittal-parallel screws (p = 0.838). Interlevel comparisons also showed no statistically significant differences between the groups with no relevant difference in failure mode. Non-parallel pedicle screws in the sagittal plane have at least equal initial fixation strength compared to parallel pedicle screws in the setting of the here performed cadaveric calf spine experiments.

  5. Institutionalizing Normal: Rethinking Composition's Precedence in Normal Schools

    Science.gov (United States)

    Skinnell, Ryan

    2013-01-01

    Composition historians have recently worked to recover histories of composition in normal schools. This essay argues, however, that historians have inadvertently misconstrued the role of normal schools in American education by inaccurately comparing rhetorical education in normal schools to rhetorical education in colleges and universities.…

  6. Normal radiographic heart volume in the neonate. Pt. 2

    International Nuclear Information System (INIS)

    Dahlstroem, A.; Ringertz, H.G.

    1984-01-01

    An approach to optimal assessment of cardiac volume in the neonate is described. 117 normal newborn children between 0 and 15 days of age have been used to establish normal standards. Different normal ranges must be used for the 1st and 2nd day of life. The elective determination of heart volume should, for optimal differentiation between normal and pathological values, preferably be done after the 2nd day of life and compared with the corresponding normal standards. The volume has been related both to body weight and body surface area (BSA). The relative volume in cm³ per m² BSA should be avoided in this age group. (orig.)

  7. Representing and computing regular languages on massively parallel networks.

    Science.gov (United States)

    Miller, M I; Roysam, B; Smith, K R; O'Sullivan, J A

    1991-01-01

    A general method is proposed for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach establishes the formal connection of rules to Chomsky grammars and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum entropy probabilistic view leads to Gibbs representations with potentials which have their number of minima growing at precisely the exponential rate that the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. This coupling yields the result that fully parallel stochastic cellular automata can be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor consisting of 1024 mesh-connected bit-serial processing elements for performing automated segmentation of electron-micrograph images.

  8. Representing and computing regular languages on massively parallel networks

    Energy Technology Data Exchange (ETDEWEB)

    Miller, M.I.; O' Sullivan, J.A. (Electronic Systems and Research Lab., of Electrical Engineering, Washington Univ., St. Louis, MO (US)); Boysam, B. (Dept. of Electrical, Computer and Systems Engineering, Rensselaer Polytechnic Inst., Troy, NY (US)); Smith, K.R. (Dept. of Electrical Engineering, Southern Illinois Univ., Edwardsville, IL (US))

    1991-01-01

    This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum entropy probabilistic view leads to Gibbs representations with potentials which have their number of minima growing at precisely the exponential rate that the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor consisting of 1024 mesh-connected bit-serial processing elements for performing automated segmentation of electron-micrograph images.

  9. Parallel processing of structural integrity analysis codes

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.

    1996-01-01

    Structural integrity analysis plays an important role in assessing and demonstrating the safety of nuclear reactor components. This analysis is performed using analytical tools such as the Finite Element Method (FEM) with the help of digital computers. The complexity of the problems involved in nuclear engineering demands high-speed computation facilities to obtain solutions in a reasonable amount of time. Parallel processing systems such as ANUPAM provide an efficient platform for realising high-speed computation. The development and implementation of software on parallel processing systems is an interesting and challenging task. The data and algorithm structure of the codes play an important role in exploiting the parallel processing system capabilities. Structural analysis codes based on FEM can be divided into two categories with respect to their implementation on parallel processing systems. Codes in the first category, such as those used for harmonic analysis and mechanistic fuel performance codes, do not require the parallelisation of individual modules. Codes in the second category, such as conventional FEM codes, require parallelisation of individual modules. In this category, parallelisation of the equation solution module poses major difficulties. Different solution schemes such as the domain decomposition method (DDM), parallel active column solver and substructuring method are currently used on parallel processing systems. Two codes, FAIR and TABS, belonging to each of these categories, have been implemented on ANUPAM. The implementation details of these codes and the performance of different equation solvers are highlighted. (author). 5 refs., 12 figs., 1 tab

  10. Global image processing operations on parallel architectures

    Science.gov (United States)

    Webb, Jon A.

    1990-09-01

    Image processing operations fall into two classes: local and global. Local operations affect only a small corresponding area in the output image, and include edge detection, smoothing, and point operations. In global operations, any input pixel can affect any or all of the output data. Global operations include histogram computation, image warping, the Hough transform, and connected components. Parallel architectures offer a promising method for speeding up these image processing operations. Local operations are easy to parallelize: the input data can be divided among processors, processed separately in parallel, and the outputs combined by concatenation. Global operations are harder to parallelize. In fact, some global operations cannot be executed in parallel; it is possible for a global operation to require serial execution for correct computation of the result. However, an important class of global operations, namely those that are reversible (that can be computed in forward or reverse order on a data structure), can be computed in parallel using a restricted form of divide and conquer called split and merge. These reversible operations include the global operations mentioned above and many more besides, even such non-image-processing operations as parsing, string search, and sorting. The split and merge method is illustrated by applying it to these algorithms, and its performance is analyzed on different architectures: one-dimensional, two-dimensional, and binary-tree processor arrays.
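    The split-and-merge idea can be sketched for the simplest reversible global operation, the histogram; the "workers" here are simulated sequentially and the block layout is our own assumption, not the paper's implementation:

```python
# Split-and-merge sketch for a reversible global operation (histogram):
# the image is split among workers, each computes a partial result
# independently, and the partials are merged.
import numpy as np

def local_histogram(block, nbins=4):
    # Each worker histograms its own block independently (the "split" step).
    hist = np.zeros(nbins, dtype=int)
    for v in block.ravel():
        hist[v] += 1
    return hist

def merge(h1, h2):
    # Histograms merge by elementwise addition; the result is
    # order-independent, which is what makes the operation reversible.
    return h1 + h2

def parallel_histogram(image, nworkers=4, nbins=4):
    blocks = np.array_split(image.ravel(), nworkers)
    partials = [local_histogram(b, nbins) for b in blocks]  # parallel step
    result = partials[0]
    for p in partials[1:]:                                   # merge step
        result = merge(result, p)
    return result

img = np.array([[0, 1, 2], [3, 2, 1]])
print(parallel_histogram(img))  # same counts as a serial histogram
```

On a binary-tree processor array the merge step would itself run in logarithmic depth, since `merge` is associative.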

  11. A parallel PCG solver for MODFLOW.

    Science.gov (United States)

    Dong, Yanhui; Li, Guomin

    2009-01-01

    In order to simulate large-scale ground water flow problems more efficiently with MODFLOW, the OpenMP programming paradigm was used in this study to parallelize the preconditioned conjugate-gradient (PCG) solver. Incremental parallelization, a significant advantage of OpenMP on shared-memory computers, allowed the solver to be converted to a parallel program smoothly, one block of code at a time. The parallel PCG solver, suitable for both MODFLOW-2000 and MODFLOW-2005, was verified on an 8-processor computer. Both the impact of compilers and different model domain sizes were considered in the numerical experiments. Based on the timing results, execution times using the parallel PCG solver are typically about 1.40 to 5.31 times faster than those using the serial one. In addition, the simulation results are exactly the same as those of the original PCG solver, because the majority of the serial code was not changed. It is worth noting that this parallelizing approach reduces software maintenance costs, because only a single-source PCG solver code needs to be maintained in the MODFLOW source tree. Copyright © 2009 The Author(s). Journal Compilation © 2009 National Ground Water Association.
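    A minimal serial PCG iteration, in the spirit of (but not taken from) the MODFLOW solver, shows which kernels OpenMP directives would target: the matrix-vector product and the vector reductions inside the loop are the natural `parallel for` candidates:

```python
# Minimal preconditioned conjugate-gradient sketch. The A @ p product
# and the dot-product reductions are the loops a shared-memory
# parallelization would annotate; the algorithm itself is unchanged.
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=100):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r                  # apply preconditioner (here: Jacobi)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p                 # dominant cost; parallel loop in OpenMP
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system with a Jacobi (diagonal) preconditioner.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M_inv = np.diag(1.0 / np.diag(A))
x = pcg(A, b, M_inv)
print(np.allclose(A @ x, b))  # True
```

Because only these inner kernels change, the serial and parallel versions produce identical iterates, which is consistent with the abstract's observation that the parallel results match the original solver exactly.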

  12. On the Folded Normal Distribution

    Directory of Open Access Journals (Sweden)

    Michail Tsagris

    2014-02-01

    The characteristic function of the folded normal distribution and its moment function are derived. The entropy of the folded normal distribution and its Kullback–Leibler divergence from the normal and half-normal distributions are approximated using Taylor series. The accuracy of the results is assessed using different criteria. The maximum likelihood estimates and confidence intervals for the parameters are obtained using asymptotic theory and the bootstrap method. The coverage of the confidence intervals is also examined.
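    For reference, the folded normal density and mean have standard closed forms; the sketch below uses the textbook formulas (not the paper's code) and checks the mean against a Monte Carlo estimate:

```python
# Density and mean of the folded normal |X|, where X ~ N(mu, sigma^2).
import math
import random

def folded_pdf(x, mu, sigma):
    if x < 0:
        return 0.0
    c = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return c * (math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
                + math.exp(-(x + mu) ** 2 / (2 * sigma ** 2)))

def folded_mean(mu, sigma):
    # E|X| = sigma*sqrt(2/pi)*exp(-mu^2/(2 sigma^2)) + mu*(1 - 2*Phi(-mu/sigma))
    phi = 0.5 * (1 + math.erf(-mu / (sigma * math.sqrt(2))))  # Phi(-mu/sigma)
    return (sigma * math.sqrt(2 / math.pi)
            * math.exp(-mu ** 2 / (2 * sigma ** 2))
            + mu * (1 - 2 * phi))

random.seed(0)
mu, sigma = 1.0, 2.0
mc = sum(abs(random.gauss(mu, sigma)) for _ in range(200000)) / 200000
print(abs(mc - folded_mean(mu, sigma)) < 0.02)  # True
```

The two-term density reflects the folding: mass from both N(mu, sigma^2) and N(-mu, sigma^2) lands on the positive half-line.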

  13. Radiation effects in normal tissues

    International Nuclear Information System (INIS)

    Trott, K.R.; Herrmann, T.; Doerr, W.

    2002-01-01

    Knowledge of radiation effects in normal tissues is fundamental for optimal planning of radiotherapy. Therefore, this book presents a review on the following aspects: General pathogenesis of acute radiation effects in normal tissues; general pathogenesis of chronic radiation effects in normal tissues; quantification of acute and chronic radiation effects in normal tissues; pathogenesis, pathology and radiation biology of various organs and organ systems. (MG) [de

  14. Flux pinning and critical current in layered type-II superconductors in parallel magnetic fields

    International Nuclear Information System (INIS)

    Prokic, V.; Davidovic, D.; Dobrosavljevic-Grujic, L.

    1995-01-01

    We have shown, within the Ginzburg-Landau theory, that the interaction between vortices and normal-metal layers in high-Tc superconductor/normal-metal superlattices can cause high critical-current densities jc. The interaction is primarily magnetic, except at very low temperatures T, where the core interaction is dominant. For a lattice of vortices commensurate with an array of normal-metal layers in a parallel magnetic field H, strong magnetic pinning is obtained, with a nonmonotonic critical-current dependence on H, and with jc of the order of 10^7-10^8 A/cm^2

  15. Concurrent computation of attribute filters on shared memory parallel machines

    NARCIS (Netherlands)

    Wilkinson, Michael H.F.; Gao, Hui; Hesselink, Wim H.; Jonker, Jan-Eppo; Meijster, Arnold

    2008-01-01

    Morphological attribute filters have not previously been parallelized mainly because they are both global and nonseparable. We propose a parallel algorithm that achieves efficient parallelism for a large class of attribute filters, including attribute openings, closings, thinnings, and thickenings,

  16. Algorithms for parallel flow solvers on message passing architectures

    Science.gov (United States)

    Vanderwijngaart, Rob F.

    1995-01-01

    The purpose of this project has been to identify and test suitable technologies for implementation of fluid flow solvers, possibly coupled with structures and heat equation solvers, on MIMD parallel computers. In the course of this investigation, much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer. A coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning, every processor receives responsibility for exactly one block of grid points instead of several. This necessitates fine-grain pipelined program execution in order to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique, even though it has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for concentrating on improving the performance of pipeline methods is their applicability in other types of flow solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines. From these, one can determine the optimal first-processor retardation that leads to the shortest total completion time for the pipeline process. Theoretical predictions of pipeline performance with and without optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of careless grid partitioning in flow solvers that employ pipeline algorithms. If grid blocks at boundaries are not at least as large in the wall-normal direction as those

  17. Rosacea Subtypes Visually and Optically Distinct When Viewed with Parallel-Polarized Imaging Technique.

    Science.gov (United States)

    Kwon, In Hyuk; Choi, Jae Eun; Seo, Soo Hong; Kye, Young Chul; Ahn, Hyo Hyun

    2017-04-01

    Parallel-polarized light (PPL) photography evaluates skin characteristics by analyzing light reflections from the skin surface. The aim of this study was to determine the significance of quantitative analysis of PPL images in rosacea patients, and to provide a new objective evaluation method for use in clinical research and practice. A total of 49 rosacea patients were enrolled. PPL images using green and white light-emitting diodes (LEDs) were taken of the lesion and an adjacent normal area. The values from the PPL images were converted to CIELAB coordinates: L* corresponding to brightness, a* to the red and green intensities, and b* to the yellow and blue intensities. A standard grading system showed negative correlations with L* (r=-0.67862, p=0.0108) and b* (r=-0.67862, p=0.0108), and a positive correlation with a* (r=0.64194, p=0.0180) with the green LEDs for papulopustular rosacea (PPR) types. The xerosis severity scale showed a positive correlation with L* (r=0.36709, p=0.0276) and a negative correlation with b* (r=-0.33068, p=0.0489) with the white LEDs for erythematotelangiectatic rosacea (ETR) types. The ETR types showed brighter lesional and normal skin with white LEDs and a higher score on the xerosis severity scale than the PPR types. This technique using PPL images is applicable to the quantitative and objective assessment of rosacea in clinical settings. In addition, the two main subtypes of ETR and PPR are distinct entities visually and optically.

  18. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, the Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction is suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). The design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  19. Evolution of Topographic Stress Perturbations Near the Surface of the Earth and Application to Sheeting Joints

    Science.gov (United States)

    Martel, S. J.

    2014-12-01

    Topography perturbs the near-surface stress fields caused by gravity and by regional horizontal stresses. Two-dimensional analytical solutions for elastic stresses in uniform, isotropic rock allow the effects of gravity and a uniform regional horizontal stress P to be distinguished beneath isolated bell-shaped ridges and valleys. The topographic stress perturbations vary depending on the shape of the topography. Gravity, by itself, causes surface-perpendicular and surface-parallel compressive stresses beneath the crest of a bell-shaped ridge. Regional compression contributes a surface-parallel compression atop broad gentle bell-shaped ridges with steepest slopes less than 45°, but a surface-parallel tension atop narrower ridges with steeper slopes. If P is an order of magnitude less compressive than ρg|b|, where ρ is rock density, g is gravitational acceleration, and b is the topographic relief, then effects of gravity dominate effects of the regional compression near the topographic surface. Conversely, if P is an order of magnitude more compressive than ρg|b|, then effects of regional compression dominate the effects of gravity, and tensile stresses can develop normal to the surface beneath gentle convex bell-shaped ridges and the convex portions of bell-shaped valleys. The latter conditions promote the widespread development of sheeting joints. The locations of topographic inflection points help define where sheeting joints can develop at a particular time. As erosion progresses and the shape of the topographic surface changes, sheeting joints can form in new areas and be left as relict structures in others. The distribution of sheeting joints thus reflects the dynamic response of geologic systems that evolve through time.

  20. On rationally supported surfaces

    DEFF Research Database (Denmark)

    Gravesen, Jens; Juttler, B.; Sir, Z.

    2008-01-01

    We analyze the class of surfaces which are equipped with rational support functions. Any rational support function can be decomposed into a symmetric (even) and an antisymmetric (odd) part. We analyze certain geometric properties of surfaces with odd and even rational support functions. In particular, it is shown that odd rational support functions correspond to those rational surfaces which can be equipped with a linear field of normal vectors, which were discussed by Sampoli et al. (Sampoli, M.L., Peternell, M., Juttler, B., 2006. Rational surfaces with linear normals and their convolutions with rational surfaces. Comput. Aided Geom. Design 23, 179-192). As shown recently, this class of surfaces includes non-developable quadratic triangular Bezier surface patches (Lavicka, M., Bastl, B., 2007. Rational hypersurfaces with rational convolutions. Comput. Aided Geom. Design 24, 410-426; Peternell, M...

  1. Enhancement of the transverse Kerr magneto-optic effect by surface magnetoplasma waves

    International Nuclear Information System (INIS)

    Ferguson, P.E.; Stafsudd, O.M.; Wallis, R.F.

    1977-01-01

    The results of a theoretical and experimental investigation of the enhancement of the transverse Kerr magneto-optic effect (TKMOE) in a magnetic thin film due to the onset of surface magnetoplasma waves (SMPW) are presented. The magnetic thin film was vacuum deposited onto the base of a half-cylinder glass prism. Surface plasma waves (SPW) and SMPW induced at the film-air surface can resonantly couple to the optical wave propagating parallel to the glass-film surface. In the presence of resonant coupling, the ordinary metallic reflectivity decreases and the normalized reflectivity difference (a measure of the TKMOE) increases. Calculations have been made of the reflectivity and the normalized reflectivity difference as a function of angle of incidence for two iron thin films. In addition, calculations have been made of the reflectivity and the normalized reflectivity difference as a function of photon energy and angle of incidence for two nickel films of 160 Å and 200 Å thickness. The normalized reflectivity difference and reflectivity have been measured for a thick nickel film and a thin nickel film (160 Å). An enhancement of the normalized reflectivity difference by a factor of 3 has been found. (Auth.)

  2. Surface-wave potential for triggering tectonic (nonvolcanic) tremor

    Science.gov (United States)

    Hill, D.P.

    2010-01-01

    Source processes commonly posed to explain instances of remote dynamic triggering of tectonic (nonvolcanic) tremor by surface waves include frictional failure and various modes of fluid activation. The relative potential for Love- and Rayleigh-wave dynamic stresses to trigger tectonic tremor through failure on critically stressed thrust and vertical strike-slip faults under the Coulomb-Griffith failure criteria as a function of incidence angle is anticorrelated over the 15- to 30-km-depth range that hosts tectonic tremor. Love-wave potential is high for strike-parallel incidence on low-angle reverse faults and null for strike-normal incidence; the opposite holds for Rayleigh waves. Love-wave potential is high for both strike-parallel and strike-normal incidence on vertical, strike-slip faults and minimal for ~45° incidence angles. The opposite holds for Rayleigh waves. This pattern is consistent with documented instances of tremor triggered by Love waves incident on the Cascadia mega-thrust and the San Andreas fault (SAF) in central California resulting from shear failure on weak faults (apparent friction μ ≈ 0.2). However, documented instances of tremor triggered by surface waves with strike-parallel incidence along the Nankai megathrust beneath Shikoku, Japan, are associated primarily with Rayleigh waves. This is consistent with the tremor bursts resulting from mixed-mode failure (crack opening and shear failure) facilitated by near-lithostatic ambient pore pressure and low differential stress, with a moderate friction coefficient (μ ~ 0.6) on the Nankai subduction interface. Rayleigh-wave dilatational stress is relatively weak at tectonic tremor source depths and seems unlikely to contribute significantly to the triggering process, except perhaps for an indirect role on the SAF in sustaining tremor into the Rayleigh-wave coda that was initially triggered by Love waves.

  3. Parallel Branch-and-Bound Methods for the Job Shop Scheduling

    DEFF Research Database (Denmark)

    Clausen, Jens; Perregaard, Michael

    1998-01-01

    Job-shop scheduling (JSS) problems are among the more difficult to solve in the class of NP-complete problems. The only successful approach has been branch-and-bound based algorithms, but such algorithms depend heavily on good bound functions. Much work has been done to identify such functions for the JSS problem, but with limited success. Even with recent methods, it is still not possible to solve problems substantially larger than 10 machines and 10 jobs. In the current study, we focus on parallel methods for solving JSS problems. We implement two different parallel branch-and-bound algorithms for JSS on a 16-processor MEIKO computing surface with Intel i860 processors and perform extensive computational testing using classical publicly available benchmark problems. The parallel part of one of the implementations is based on a similar parallel code for quadratic assignment problems. Results...

  4. A Framework for Parallel Unstructured Grid Generation for Complex Aerodynamic Simulations

    Science.gov (United States)

    Zagaris, George; Pirzadeh, Shahyar Z.; Chrisochoides, Nikos

    2009-01-01

    A framework for parallel unstructured grid generation targeting both shared memory multi-processors and distributed memory architectures is presented. The two fundamental building blocks of the framework are: (1) the Advancing-Partition (AP) method, used for domain decomposition, and (2) the Advancing Front (AF) method, used for mesh generation. Starting from the surface mesh of the computational domain, the AP method is applied recursively to generate a set of sub-domains. Next, the sub-domains are meshed in parallel using the AF method. The recursive nature of the domain decomposition naturally maps to a divide-and-conquer algorithm, which exhibits inherent parallelism. For the parallel implementation, the Master/Worker pattern is employed to dynamically balance the varying workloads of each task on the set of available CPUs. Performance results obtained with this approach are presented and discussed in detail, together with future work and improvements.
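    The Master/Worker load-balancing pattern described above can be sketched with a thread pool handing out sub-domains of varying cost; the task function and the cost numbers are stand-ins of our own, not the framework's meshing code:

```python
# Master/Worker sketch: sub-domains of varying cost are handed out
# dynamically from a shared queue, so faster (or less loaded) workers
# automatically pick up more tasks.
from concurrent.futures import ThreadPoolExecutor

def mesh_subdomain(subdomain_id, size):
    # Stand-in for advancing-front meshing of one sub-domain;
    # returns a fake element count proportional to the sub-domain size.
    return subdomain_id, size * 10

# Uneven workloads, as produced by a recursive domain decomposition.
subdomains = list(enumerate([40, 5, 25, 10, 30, 15]))

with ThreadPoolExecutor(max_workers=3) as pool:   # 3 "worker" CPUs
    results = dict(pool.map(lambda t: mesh_subdomain(*t), subdomains))

print(sum(results.values()))  # 1250: total elements over all sub-domains
```

The dynamic queue is what absorbs the workload imbalance: no static assignment of sub-domains to workers is needed.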

  5. Iteration schemes for parallelizing models of superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)

    1996-12-31

    The time-dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-Tc superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual-layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.

  6. Linear Bregman algorithm implemented in parallel GPU

    Science.gov (United States)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms have poor convergence speed and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU implementation of the Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results of this paper show that the parallel Bregman algorithm needs less time, and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demands of information technology.
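    The iteration itself is short enough to sketch. This NumPy version of the linearized ("linear") Bregman iteration uses illustrative parameters (mu, delta, problem size) chosen by us, not the paper's GPU configuration; the point is that each step is just a matrix-vector product plus soft-thresholding, which is why the method maps so well to a GPU:

```python
# Linearized Bregman iteration for basis pursuit: min ||u||_1 s.t. Au = f.
# Each step is one gradient-like matvec update plus soft-thresholding.
import numpy as np

def shrink(v, mu):
    # Soft-thresholding: the only nonlinear operation in the loop.
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, f, mu=5.0, delta=0.1, iters=20000):
    v = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        v += A.T @ (f - A @ u)   # residual feedback (matvec, GPU-friendly)
        u = delta * shrink(v, mu)
    return u

# Sparse recovery test: 3 nonzeros in dimension 40 from 20 measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40)) / np.sqrt(20)
x = np.zeros(40)
x[[3, 17, 31]] = [1.0, -2.0, 1.5]
f = A @ x
u = linearized_bregman(A, f)
print(np.linalg.norm(A @ u - f) < 0.05)
```

The step size delta must be small relative to 1/||A A^T|| for the iteration to converge; the values above satisfy that for this random matrix.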

  7. Parallelization of Subchannel Analysis Code MATRA

    International Nuclear Information System (INIS)

    Kim, Seongjin; Hwang, Daehyun; Kwon, Hyouk

    2014-01-01

    A stand-alone calculation with the MATRA code takes considerable computing time for thermal margin calculations, and a relatively long time is needed to solve whole-core pin-by-pin problems. In addition, improving the computation speed of the MATRA code is strongly required to satisfy the overall performance of multi-physics coupling calculations. Therefore, a parallel approach to improve and optimize the computability of the MATRA code is proposed and verified in this study. The parallel algorithm is embodied in the MATRA code using the MPI communication method, and modification of the previous code structure was minimized. The improvement is confirmed by comparing the results between the single- and multiple-processor algorithms. The speedup and efficiency are also evaluated when increasing the number of processors. The parallel algorithm was implemented in the subchannel code MATRA using MPI. The performance of the parallel algorithm was verified by comparing the results with those from MATRA with a single processor. It is also noted that the performance of the MATRA code was greatly improved by implementing the parallel algorithm for the 1/8-core and whole-core problems

  8. Improvement of Parallel Algorithm for MATRA Code

    International Nuclear Information System (INIS)

    Kim, Seong-Jin; Seo, Kyong-Won; Kwon, Hyouk; Hwang, Dae-Hyun

    2014-01-01

    A feasibility study to parallelize the MATRA code was conducted at KAERI early this year. As a result, a parallel algorithm for the MATRA code has been developed to decrease the considerable computing time required to solve a big-size problem, such as a whole-core pin-by-pin problem of a general PWR reactor, and to improve the overall performance of multi-physics coupling calculations. It was shown that the performance of the MATRA code was greatly improved by implementing the parallel algorithm using MPI communication. For the 1/8-core and whole-core problems of the SMART reactor, the speedup was evaluated as about 10 when 25 processors were used. However, it was also shown that the performance deteriorated as the axial node number increased. In this paper, the procedure for communication between processors is optimized to improve the previous parallel algorithm. To address the performance deterioration of the parallelized MATRA code, a new communication algorithm between processors is presented. It is shown that the speedup is improved and remains stable regardless of the axial node number
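    The quoted figures (speedup of about 10 on 25 processors) imply a parallel efficiency of 0.4. The Amdahl serial-fraction back-calculation below is our own illustration, not an analysis from the paper:

```python
# Speedup, efficiency, and an Amdahl's-law back-calculation for the
# figures quoted above: speedup S ~ 10 on p = 25 processors.
def efficiency(speedup, nprocs):
    return speedup / nprocs

def amdahl_serial_fraction(speedup, nprocs):
    # Solve S = 1 / (s + (1 - s)/p) for the serial fraction s.
    return (nprocs / speedup - 1) / (nprocs - 1)

S, p = 10.0, 25
print(round(efficiency(S, p), 2))              # 0.4
print(round(amdahl_serial_fraction(S, p), 3))  # 0.062
```

A serial fraction near 6% is enough to cap the speedup at 10 on 25 processors, which is consistent with communication overhead growing with the axial node number.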

  9. Equalizer: a scalable parallel rendering framework.

    Science.gov (United States)

    Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato

    2009-01-01

    Continuing improvements in CPU and GPU performance, as well as increasing multi-core processor and cluster-based parallelism, create a demand for flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop, and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic enough to support various types of data and visualization applications and, at the same time, work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL, which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems, ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations, usage scenarios, and scalability results.

  10. Apparatus for tomography in which signal profiles gathered from divergent radiation can be reconstructed in signal profiles, each corresponding with a beam of parallel rays

    International Nuclear Information System (INIS)

    1976-01-01

    A tomograph is discussed which gathers signal profiles from divergent radiation and reconstructs them into signal profiles or images, each corresponding to a beam of parallel rays; this may eliminate the interfering point dispersion function which normally occurs
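    One common way such a reconstruction is performed is fan-to-parallel rebinning. The geometry below is the standard textbook relation and an assumption on our part, since the record itself gives no algorithmic detail:

```python
# Hypothetical fan-to-parallel rebinning relation: a ray measured at
# source angle beta and fan angle gamma, with source radius R, coincides
# with the parallel-beam ray at projection angle theta = beta + gamma
# and signed offset s = R * sin(gamma) from the rotation center.
import math

def fan_to_parallel(beta, gamma, R):
    theta = beta + gamma        # parallel projection angle
    s = R * math.sin(gamma)     # distance of the ray from the center
    return theta, s

# The central ray (gamma = 0) maps to offset 0 at the same angle.
theta, s = fan_to_parallel(beta=math.pi / 4, gamma=0.0, R=500.0)
print(theta == math.pi / 4 and s == 0.0)  # True
```

Collecting rays with equal theta (interpolating in s) yields the parallel-beam profiles that a standard filtered back-projection expects.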

  11. Surface alloys as interfacial layers between quasicrystalline and periodic materials

    Science.gov (United States)

    Duguet, T.; Ledieu, J.; Dubois, J. M.; Fournée, V.

    2008-08-01

    Low adhesion with normal metals is an intrinsic property of many quasicrystalline surfaces. Although this property could be useful to develop low friction or non-stick coatings, it is also responsible for the poor adhesion of quasicrystalline coatings on metal substrates. Here we investigate the possibility of using complex metallic surface alloys as interface layers to enhance the adhesion between quasicrystals and simple metal substrates. We first review some examples where such complex phases are formed as an overlayer. Then we study the formation of such surface alloys in a controlled way by annealing a thin film deposited on a quasicrystalline substrate. We demonstrate that a coherent buffer layer consisting of the γ-Al4Cu9 approximant can be grown between pure Al and the i-Al-Cu-Fe quasicrystal. The interfacial relationships between the different layers are defined by [111]Al ∥ [110]Al4Cu9 ∥ [5f]i-Al-Cu-Fe.

  12. Surface alloys as interfacial layers between quasicrystalline and periodic materials

    Energy Technology Data Exchange (ETDEWEB)

    Duguet, T; Ledieu, J; Dubois, J M; Fournee, V [Laboratoire de Science et Genie des Materiaux et de Metallurgie, UMR 7584 CNRS-Nancy Universite, Ecole des Mines de Nancy, Parc de Saurupt, F-54042 Nancy (France)], E-mail: fournee@lsg2m.org

    2008-08-06

    Low adhesion with normal metals is an intrinsic property of many quasicrystalline surfaces. Although this property could be useful to develop low friction or non-stick coatings, it is also responsible for the poor adhesion of quasicrystalline coatings on metal substrates. Here we investigate the possibility of using complex metallic surface alloys as interface layers to enhance the adhesion between quasicrystals and simple metal substrates. We first review some examples where such complex phases are formed as an overlayer. Then we study the formation of such surface alloys in a controlled way by annealing a thin film deposited on a quasicrystalline substrate. We demonstrate that a coherent buffer layer consisting of the γ-Al4Cu9 approximant can be grown between pure Al and the i-Al-Cu-Fe quasicrystal. The interfacial relationships between the different layers are defined by [111]Al ∥ [110]Al4Cu9 ∥ [5f]i-Al-Cu-Fe.

  13. Extension parallel to the rift zone during segmented fault growth: application to the evolution of the NE Atlantic

    Directory of Open Access Journals (Sweden)

    A. Bubeck

    2017-11-01

    The mechanical interaction of propagating normal faults is known to influence the linkage geometry of first-order faults, and the development of second-order faults and fractures, which transfer displacement within relay zones. Here we use natural examples of growth faults from two active volcanic rift zones (Koa`e, island of Hawai`i, and Krafla, northern Iceland) to illustrate the importance of horizontal-plane extension (heave) gradients, and associated vertical-axis rotations, in evolving continental rift systems. Second-order extension and extensional-shear faults within the relay zones variably resolve components of regional extension, and components of extension and/or shortening parallel to the rift zone, to accommodate the inherently three-dimensional (3-D) strains associated with relay zone development and rotation. Such a configuration involves volume increase, which is accommodated at the surface by open fractures; in the subsurface this may be accommodated by veins or dikes oriented obliquely and normal to the rift axis. To consider the scalability of the effects of relay zone rotations, we compare the geometry and kinematics of fault and fracture sets in the Koa`e and Krafla rift zones with data from exhumed contemporaneous fault and dike systems developed within a > 5×10^4 km^2 relay system that developed during formation of the NE Atlantic margins. Based on the findings presented here, we propose a new conceptual model for the evolution of segmented continental rift basins on the NE Atlantic margins.

  14. Transfer matrix method applied to the parallel assembly of sound absorbing materials.

    Science.gov (United States)

    Verdière, Kévin; Panneton, Raymond; Elkoun, Saïd; Dupont, Thomas; Leclaire, Philippe

    2013-12-01

    The transfer matrix method (TMM) is used conventionally to predict the acoustic properties of laterally infinite homogeneous layers assembled in series to form a multilayer. In this work, a parallel assembly process of transfer matrices is used to model heterogeneous materials such as patchworks, acoustic mosaics, or a collection of acoustic elements in parallel. In this method, it is assumed that each parallel element can be modeled by a 2 × 2 transfer matrix, and no diffusion exists between elements. The resulting transfer matrix of the parallel assembly is also a 2 × 2 matrix that can be assembled in series with the classical TMM. The method is validated by comparison with finite element (FE) simulations and acoustical tube measurements on different parallel/series configurations at normal and oblique incidence. The comparisons are in terms of sound absorption coefficient and transmission loss on experimental and simulated data and published data, notably published data on a parallel array of resonators. From these comparisons, the limitations of the method are discussed. Finally, applications to three-dimensional geometries are studied, where the geometries are discretized as in a FE concept. Compared to FE simulations, the extended TMM yields similar results with a trivial computation time.
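    The parallel assembly step can be sketched as follows, assuming the standard pressure/velocity convention for the 2×2 transfer matrices: each element's matrix is converted to an admittance form, the admittances are summed with surface-ratio weights (equal pressures on both faces, area-weighted velocities, no lateral diffusion between elements), and the sum is converted back. The matrix conventions and function names are our schematic reading of the method, not the authors' code:

```python
# Parallel assembly of 2x2 acoustic transfer matrices via admittances.
# Convention: [p1, v1] = T [p2, v2] for each laterally uniform element.
import numpy as np

def t_to_y(T):
    # Admittance form [v1, v2] = Y [p1, p2], derived from the T relation.
    T11, T12, T21, T22 = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    det = T11 * T22 - T12 * T21
    return np.array([[T22, -det], [1.0, -T11]]) / T12

def y_to_t(Y):
    # Inverse conversion back to the transfer-matrix form.
    Y11, Y12, Y21, Y22 = Y[0, 0], Y[0, 1], Y[1, 0], Y[1, 1]
    return np.array([[-Y22 / Y21, 1.0 / Y21],
                     [Y12 - Y11 * Y22 / Y21, Y11 / Y21]])

def parallel_assembly(Ts, ratios):
    # Same pressures across elements; volume velocities add by area ratio.
    Y = sum(r * t_to_y(T) for T, r in zip(Ts, ratios))
    return y_to_t(Y)

# Consistency check: two identical elements in parallel reduce to one.
T = np.array([[0.9, 0.5j], [0.3j, 1.2]])   # arbitrary lossless-like layer
Tp = parallel_assembly([T, T], [0.5, 0.5])
print(np.allclose(Tp, T))  # True
```

The resulting 2×2 matrix can then be chained in series with ordinary TMM layers, which is exactly the patchwork/mosaic modelling capability the abstract describes.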

  15. Atomic force microscopy measurements of topography and friction on dotriacontane films adsorbed on a SiO2 surface

    DEFF Research Database (Denmark)

    Trogisch, S.; Simpson, M.J.; Taub, H.

    2005-01-01

    We report comprehensive atomic force microscopy (AFM) measurements at room temperature of the nanoscale topography and lateral friction on the surface of thin solid films of an intermediate-length normal alkane, dotriacontane (n-C32H66), adsorbed onto a SiO2 surface. Our topographic and frictional...... their location. Above a minimum size, the bulk particles are separated from islands of perpendicularly oriented molecules by regions of exposed parallel layers that most likely extend underneath the particles. We find that the lateral friction is sensitive to the molecular orientation in the underlying...... crystalline film and can be used effectively with topographic measurements to resolve uncertainties in the film structure. We measure the same lateral friction on top of the bulk particles as on the perpendicular layers, a value that is about 2.5 times smaller than on a parallel layer. Scans on top...

  16. A parallel computational model for GATE simulations.

    Science.gov (United States)

    Rannou, F R; Vega-Acevedo, N; El Bitar, Z

    2013-12-01

    GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
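    The statistical equivalence check described above can be sketched with a stdlib-only Mann-Whitney U test. The two samples below are synthetic stand-ins for per-run tally counts from the sequential and parallel executables, not real GATE output, and the normal approximation without tie correction is an assumed simplification.

    ```python
    import random
    from statistics import NormalDist

    def mann_whitney_u(x, y):
        """Two-sided Mann-Whitney U test via the normal approximation.
        Assumes continuous data, so no tie correction is applied."""
        n1, n2 = len(x), len(y)
        pooled = sorted(x + y)
        rank_of = {v: i + 1 for i, v in enumerate(pooled)}  # ranks start at 1
        r1 = sum(rank_of[v] for v in x)                     # rank sum of sample x
        u1 = r1 - n1 * (n1 + 1) / 2
        mu = n1 * n2 / 2
        sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
        z = (u1 - mu) / sigma
        p = 2 * (1 - NormalDist().cdf(abs(z)))
        return u1, p

    random.seed(7)
    # Synthetic tallies: both "runs" drawn from the same distribution
    sequential = [random.gauss(1000, 25) for _ in range(40)]
    parallel   = [random.gauss(1000, 25) for _ in range(40)]
    u, p = mann_whitney_u(sequential, parallel)
    print(f"U = {u:.0f}, p = {p:.3f}")  # a large p is consistent with equivalent outputs
    ```
    
    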

  17. Java parallel secure stream for grid computing

    International Nuclear Information System (INIS)

    Chen, J.; Akers, W.; Chen, Y.; Watson, W.

    2001-01-01

    The emergence of high-speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because doing so requires tuning the TCP window size to improve bandwidth and reduce latency on a high-speed wide area network. The authors present a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the need to tune the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition, an X.509 certificate-based single sign-on mechanism and SSL-based connection establishment are integrated into this package. Finally, a few applications using this package are discussed
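    The core idea, striping one logical transfer across several parallel streams and reassembling the partitions on the receiving side, can be sketched as follows. Python is used here for brevity (JPARSS itself is Java), the framing header is invented for the sketch, and TLS, X.509 sign-on, and real WAN behavior are all omitted.

    ```python
    import socket
    import threading

    def recv_exact(sock, n):
        """Read exactly n bytes from a stream socket."""
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("stream closed early")
            buf += chunk
        return buf

    def send_partition(sock, index, payload):
        """Send one partition, framed with a hypothetical (index, length) header."""
        header = index.to_bytes(4, "big") + len(payload).to_bytes(4, "big")
        sock.sendall(header + payload)

    def recv_partition(sock):
        """Receive one framed partition; return (index, payload)."""
        header = recv_exact(sock, 8)
        index = int.from_bytes(header[:4], "big")
        length = int.from_bytes(header[4:], "big")
        return index, recv_exact(sock, length)

    data = bytes(range(256)) * 512            # 128 KiB logical message
    nstreams = 4
    chunk = -(-len(data) // nstreams)         # ceiling division
    parts = [data[i * chunk:(i + 1) * chunk] for i in range(nstreams)]
    pairs = [socket.socketpair() for _ in range(nstreams)]  # stand-ins for WAN sockets

    # Stripe the partitions over the parallel streams, one sender thread each
    senders = [threading.Thread(target=send_partition, args=(tx, i, parts[i]))
               for i, (tx, _) in enumerate(pairs)]
    for t in senders:
        t.start()

    # Reassemble in index order on the receiving side
    received = [recv_partition(rx) for _, rx in pairs]
    for t in senders:
        t.join()
    reassembled = b"".join(p for _, p in sorted(received))
    print(reassembled == data)
    ```

    Each stream gets its own TCP connection and therefore its own congestion window, which is why aggregate throughput can approach the tuned-window case without touching kernel settings.
    
    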

  18. Applications of Parallel Processing in Mobile Banking

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available The future of mobile banking will be represented by applications that support mobile banking, Internet banking, and EFT (Electronic Funds Transfer) transactions in a single user interface. In this way, mobile banking will be able to cover all the types of applications demanded at the market level. The parallel processing of credit card bank transactions could be performed with the help of a grid network. Despite some limitations, grid processing offers huge opportunities to exploit parallelism. For this reason, many applications of waiting queues in grid processing have been developed in recent years. Grid networks represent a distinctive and very modern field of parallel and distributed processing.

  19. Distributed Parallel Architecture for "Big Data"

    Directory of Open Access Journals (Sweden)

    Catalin BOJA

    2012-01-01

    Full Text Available This paper is an extension to the "Distributed Parallel Architecture for Storing and Processing Large Datasets" paper presented at the WSEAS SEPADS’12 conference in Cambridge. In its original version the paper went over the benefits of using a distributed parallel architecture to store and process large datasets. This paper analyzes the problem of storing, processing and retrieving meaningful insight from petabytes of data. It provides a survey of current distributed and parallel data processing technologies and, based on them, proposes an architecture that can be used to solve the analyzed problem. This version places more emphasis on distributed file systems and the ETL processes involved in a distributed environment.

  20. Parallel Implementation of the Katsevich's FBP Algorithm

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available For spiral cone-beam CT, parallel computing is an effective approach to reducing the heavy computation burden. It is well known that the major computation time is spent in the backprojection step for either filtered-backprojection (FBP or backprojected-filtration (BPF algorithms. By the cone-beam cover method [1], the backprojection procedure is driven by cone-beam projections, and every cone-beam projection can be backprojected independently. Based on this fact, we develop a parallel implementation of Katsevich's FBP algorithm. We do all the numerical experiments on a Linux cluster. In one typical experiment, the sequential reconstruction time is 781.3 seconds, while the parallel reconstruction time is 25.7 seconds with 32 processors, a speedup of roughly 30.
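    The projection-driven decomposition described above can be mimicked in miniature: each worker backprojects its own subset of projections into a private partial volume, and the partials are summed. This is a toy 2-D parallel-beam analogue with an invented projection profile, not Katsevich's cone-beam weighting, and a thread pool stands in for the Linux cluster (a process pool or MPI would be used for real speedup).

    ```python
    import math
    from concurrent.futures import ThreadPoolExecutor

    N = 32                                        # reconstruction grid is N x N
    ANGLES = [i * math.pi / 16 for i in range(16)]

    def profile(theta):
        """Toy 1-D projection profile (stand-in for filtered projection data)."""
        return [math.exp(-((s - N / 2) / 6.0) ** 2) for s in range(N)]

    def backproject(angles):
        """Backproject a subset of projections into a private partial volume."""
        vol = [[0.0] * N for _ in range(N)]
        for theta in angles:
            p, c, s = profile(theta), math.cos(theta), math.sin(theta)
            for y in range(N):
                for x in range(N):
                    # Detector coordinate of pixel (x, y) for this view
                    t = (x - N / 2) * c + (y - N / 2) * s + N / 2
                    i = min(max(int(t), 0), N - 1)
                    vol[y][x] += p[i]
        return vol

    # Each worker handles an independent chunk of projections
    chunks = [ANGLES[i::4] for i in range(4)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(backproject, chunks))

    # Reduce step: sum the partial volumes into the final reconstruction
    image = [[sum(part[y][x] for part in partials) for x in range(N)]
             for y in range(N)]
    print(round(image[N // 2][N // 2], 3))  # center pixel accumulates one unit per view
    ```

    Because each projection contributes additively to the volume, the reduction is a plain sum and the workers never need to communicate during backprojection, which is what makes the scheme scale well.
    
    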