WorldWideScience

Sample records for regular cellular distribution

  1. Regular cellular distribution of plasmids by oscillating and filament-forming ParA ATPase of plasmid pB171

    DEFF Research Database (Denmark)

    Ebersbach, Gitte; Ringgaard, Simon; Møller-Jensen, Jakob

    2006-01-01

    with each other in a bacterial two-hybrid assay but do not interact with FtsZ, eight other essential cell division proteins or MreB actin. Based on these observations, we propose a simple model for how oscillating ParA filaments can mediate regular cellular distribution of plasmids. The model functions...

  2. Compression behavior of cellular metals with inhomogeneous mass distribution

    International Nuclear Information System (INIS)

    Foroughi, B.

    2001-05-01

    Mechanical behavior of two types of closed-cell metals (ALULIGHT and ALPORAS) is investigated experimentally and numerically. Compressive tests performed on prismatic specimens indicate that inhomogeneities in the mass density distribution are a key factor in the deformation behavior of cellular metals. The three-dimensional cellular structure of the investigated specimens is recorded using x-ray medical computed tomography (CT). A special procedure called the density mapping method has been used to transfer the recorded CT data into a continuum by averaging over a certain domain (averaging domain). This continuum model is implemented using finite elements to study the effect of variations in local mass densities. The finite element model uses a simple regular discretization of a specimen's volume with elements of constant edge length. Mechanical properties derived from compression tests of ALPORAS samples are assigned to the corresponding mesoscopic density value of each element. The effect of the averaging domain size is studied to obtain a suitable dimension which fulfils the homogenization requirements and allows the evaluation of inhomogeneities in the specimens. The formation and propagation of deformation band(s) and the stress-strain responses of the tested cellular metals are modeled with respect to their mass distribution. It is shown that the inhomogeneous density distribution leads to plastic strain localization and causes a monotonic increase of the stress in the plateau regime, although no hardening response was assumed for the homogeneous material in this regime. The simulated plastic strain localization and the calculated stress-strain responses are compared with the experimental results. The stiffness values of experiment and simulation agree very well for both cellular materials, as do the plateau strengths, although the latter differ in some ALULIGHT samples, where the hardening response can be predicted at least qualitatively.

  3. Cellular Automata Simulation for Wealth Distribution

    Science.gov (United States)

    Lo, Shih-Ching

    2009-08-01

    Wealth distribution of a country is a complicated system. A model based on Epstein & Axtell's "Sugarscape" model is presented in NetLogo. The model considers income, age, working opportunity and salary as control variables. Other variables should still be considered when an artificial society is established. In this study, a more complicated cellular automaton model for wealth distribution is proposed. The effects of social welfare, tax, economical investment and inheritance are considered and simulated. According to the cellular automata simulation for wealth distribution, we can gain a deeper insight into the financial policy of the government.
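
The tax-and-welfare mechanisms this abstract describes can be illustrated with a minimal agent-based sketch (our own toy model, not the paper's NetLogo implementation; all function names and parameters are assumptions):

```python
import random

def simulate_wealth(n_agents=200, steps=500, tax_rate=0.1, welfare=1.0, seed=0):
    """Toy agent-based wealth model: each step every agent earns a random
    income, pays a flat tax, and the tax pool is redistributed equally as
    welfare. Returns the final wealth list."""
    rng = random.Random(seed)
    wealth = [10.0] * n_agents
    for _ in range(steps):
        pool = 0.0
        for i in range(n_agents):
            income = rng.uniform(0.0, 2.0)   # working opportunity * salary
            tax = tax_rate * income
            pool += tax
            wealth[i] += income - tax
        share = welfare * pool / n_agents    # equal redistribution
        for i in range(n_agents):
            wealth[i] += share
    return wealth

def gini(w):
    """Gini coefficient, a standard measure of wealth inequality."""
    w = sorted(w)
    n = len(w)
    cum = sum((i + 1) * x for i, x in enumerate(w))
    return (2 * cum) / (n * sum(w)) - (n + 1) / n
```

Comparing `gini(simulate_wealth(tax_rate=0.0))` with a higher tax rate shows the inequality-reducing effect of the tax/welfare channel that such simulations probe.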

  4. Chord length distributions between hard disks and spheres in regular, semi-regular, and quasi-random structures

    International Nuclear Information System (INIS)

    Olson, Gordon L.

    2008-01-01

    In binary stochastic media in two and three dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution.
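
The background chord lengths analyzed in this abstract can be sampled with a short Monte Carlo sketch (our own illustration, not the author's code; the helper names, box size, and disk parameters are assumptions):

```python
import math, random

def line_gaps(p, u, centers, r):
    """Chord intervals of a line (point p, unit direction u) through
    non-overlapping disks of radius r; returns the background gaps
    between consecutive disk intersections."""
    hits = []
    for (cx, cy) in centers:
        # parameter of closest approach of the line to the disk center
        t0 = (cx - p[0]) * u[0] + (cy - p[1]) * u[1]
        d2 = (cx - p[0] - t0 * u[0]) ** 2 + (cy - p[1] - t0 * u[1]) ** 2
        if d2 < r * r:
            h = math.sqrt(r * r - d2)
            hits.append((t0 - h, t0 + h))
    hits.sort()
    return [s1 - e0 for (_, e0), (s1, _) in zip(hits, hits[1:]) if s1 > e0]

def background_chords(centers, r, n_lines=200, box=10.0, seed=1):
    """Shoot random lines through a box of disks and collect all
    background chord lengths (material between disk intersections)."""
    rng = random.Random(seed)
    chords = []
    for _ in range(n_lines):
        p = (rng.uniform(0, box), rng.uniform(0, box))
        a = rng.uniform(0, math.pi)
        chords.extend(line_gaps(p, (math.cos(a), math.sin(a)), centers, r))
    return chords
```

Running this on a regular lattice of centers versus randomly placed centers and histogramming the tails is the kind of comparison (power law versus exponential) the abstract describes.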

  5. Chord length distributions between hard disks and spheres in regular, semi-regular, and quasi-random structures

    Energy Technology Data Exchange (ETDEWEB)

    Olson, Gordon L. [Computer and Computational Sciences Division (CCS-2), Los Alamos National Laboratory, 5 Foxglove Circle, Madison, WI 53717 (United States)], E-mail: olson99@tds.net

    2008-11-15

    In binary stochastic media in two and three dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution.

  6. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    Science.gov (United States)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
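
The Tikhonov step with a second-order differential regularization matrix, as chosen in this abstract, can be sketched as follows (a minimal solver of our own, not the authors' implementation; the kernel matrix optimization step is omitted):

```python
import numpy as np

def tikhonov(A, b, lam, order=2):
    """Solve min ||A x - b||^2 + lam * ||L x||^2, where L is the identity
    (order=0) or a second-order difference matrix (order=2), via the
    normal equations (A^T A + lam L^T L) x = A^T b."""
    m, n = A.shape
    if order == 0:
        L = np.eye(n)
    else:
        # second-order differential (smoothing) regularization matrix
        L = np.zeros((n - 2, n))
        for i in range(n - 2):
            L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
```

The second-order matrix penalizes curvature rather than amplitude, which is why it suits smooth, broad size distributions (e.g. Junge-type) better than the unit matrix.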

  7. Regularized κ-distributions with non-diverging moments

    Science.gov (United States)

    Scherer, K.; Fichtner, H.; Lazar, M.

    2017-12-01

    For various plasma applications the so-called (non-relativistic) κ-distribution is widely used to reproduce and interpret the suprathermal particle populations exhibiting a power-law distribution in velocity or energy. Despite its reputation, the standard κ-distribution as a concept is still disputable, mainly due to the velocity moments M_l, which make a macroscopic characterization possible but whose existence is restricted to low orders l; the definition of the κ-distribution itself is conditioned by the existence of the moment of order l = 2 (i.e., kinetic temperature), satisfied only for κ > 3/2. In order to resolve these critical limitations we introduce the regularized κ-distribution with non-diverging moments. For the evaluation of all velocity moments a general analytical expression is provided, enabling a significant step towards a macroscopic (fluid-like) description of space plasmas and, in general, any system of κ-distributed particles.
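
In the notation of the abstract, the contrast between the two distributions can be written as follows; the exponential-cutoff form shown here (with cutoff parameter α and thermal speed Θ) is our paraphrase of the construction, and the exact normalizations are given in the paper:

```latex
% Standard kappa-distribution: power-law tail, high-order moments diverge
f_\kappa(v) \;\propto\; \left(1 + \frac{v^2}{\kappa\,\Theta^2}\right)^{-\kappa-1}

% Regularized kappa-distribution: exponential cutoff makes all moments finite
f_{\mathrm{RK}}(v) \;\propto\; \left(1 + \frac{v^2}{\kappa\,\Theta^2}\right)^{-\kappa-1}
\exp\!\left(-\alpha^2\,\frac{v^2}{\Theta^2}\right)
```

The cutoff leaves the suprathermal power-law region essentially unchanged for v ≪ Θ/α while guaranteeing convergence of all velocity moments.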

  8. Reduction of Nambu-Poisson Manifolds by Regular Distributions

    Science.gov (United States)

    Das, Apurba

    2018-03-01

    The version of Marsden-Ratiu reduction theorem for Nambu-Poisson manifolds by a regular distribution has been studied by Ibáñez et al. In this paper we show that the reduction is always ensured unless the distribution is zero. Next we extend the more general Falceto-Zambon Poisson reduction theorem for Nambu-Poisson manifolds. Finally, we define gauge transformations of Nambu-Poisson structures and show that these transformations commute with the reduction procedure.

  9. Quasi-regular impurity distribution driven by charge-density wave

    International Nuclear Information System (INIS)

    Baldea, I.; Badescu, M.

    1991-09-01

    The displacive motion of an impurity distribution immersed in a one-dimensional system has recently been studied in detail as one kind of quasi-regularity driven by a CDW. As a further investigation of this problem, we develop here a microscopic model for a different kind of quasi-regular impurity distribution driven by a CDW, consisting of a modulation of the probability of occupied sites. The dependence of the relevant CDW quantities on impurity concentration and temperature is obtained. Data reported for the quasi-1D materials NbSe3 and Ta2NiSe7 (particularly, thermal hysteresis effects at the CDW transition) are interpreted in the framework of the present model. Possible similarities to other physical systems are also suggested. (author). 38 refs, 7 figs

  10. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu; Pourahmadi, Mohsen; Maadooliat, Mehdi

    2014-01-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both

  11. Regularization and asymptotic expansion of certain distributions defined by divergent series

    Directory of Open Access Journals (Sweden)

    Ricardo Estrada

    1995-01-01

    Full Text Available The regularization of the distribution ∑_{n=−∞}^{∞} δ(x − p_n), which gives a regularized value to the divergent series ∑_{n=−∞}^{∞} φ(p_n), is obtained in several spaces of test functions. The asymptotic expansion as ε → 0+ of series of the type ∑_{n=0}^{∞} φ(ε p_n) is also obtained.

  12. Distributed Velocity-Dependent Protocol for Multihop Cellular Sensor Networks

    Directory of Open Access Journals (Sweden)

    Deepthi Chander

    2009-01-01

    Full Text Available Cell phones embedded with sensors form a Cellular Sensor Network, which can be used to localize a moving event. The inherent mobility of the application and of the cell phone users warrants distributed structure-free data aggregation and on-the-fly routing. We propose a Distributed Velocity-Dependent (DVD) protocol to localize a moving event using a Multihop Cellular Sensor Network (MCSN). DVD is based on a novel form of connectivity determined by the waiting time of nodes for a Random Waypoint (RWP) distribution of cell phone users. This paper analyzes the time-stationary and spatial distribution of the proposed waiting time to explain the superior event localization and delay performances of DVD over the existing Randomized Waiting (RW) protocol. A sensitivity analysis is also performed to compare the performance of DVD with RW and the existing Centralized approach.

  13. Distributed Velocity-Dependent Protocol for Multihop Cellular Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jagyasi Bhushan

    2009-01-01

    Full Text Available Abstract Cell phones embedded with sensors form a Cellular Sensor Network, which can be used to localize a moving event. The inherent mobility of the application and of the cell phone users warrants distributed structure-free data aggregation and on-the-fly routing. We propose a Distributed Velocity-Dependent (DVD) protocol to localize a moving event using a Multihop Cellular Sensor Network (MCSN). DVD is based on a novel form of connectivity determined by the waiting time of nodes for a Random Waypoint (RWP) distribution of cell phone users. This paper analyzes the time-stationary and spatial distribution of the proposed waiting time to explain the superior event localization and delay performances of DVD over the existing Randomized Waiting (RW) protocol. A sensitivity analysis is also performed to compare the performance of DVD with RW and the existing Centralized approach.

  14. The limit distribution of the maximum increment of a random walk with regularly varying jump size distribution

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Rackauskas, Alfredas

    2010-01-01

    In this paper, we deal with the asymptotic distribution of the maximum increment of a random walk with a regularly varying jump size distribution. This problem is motivated by a long-standing problem on change point detection for epidemic alternatives. It turns out that the limit distribution of the maximum increment of the random walk is one of the classical extreme value distributions, the Fréchet distribution. We prove the results in the general framework of point processes and for jump sizes taking values in a separable Banach space.
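
The maximum increment M_n = max over 0 ≤ i < j ≤ n of (S_j − S_i) studied in this record can be computed in linear time with a running minimum; the following sketch with centered Pareto (regularly varying) jumps is purely illustrative and not from the paper:

```python
import random

def max_increment(jumps):
    """Maximum increment max_{i<j} (S_j - S_i) of the walk
    S_k = jumps[0] + ... + jumps[k-1] (with S_0 = 0), computed with a
    single pass and a running minimum of the partial sums."""
    s, run_min, best = 0.0, 0.0, float("-inf")
    for x in jumps:
        s += x
        best = max(best, s - run_min)
        run_min = min(run_min, s)
    return best

def pareto_jumps(n, alpha=1.5, seed=0):
    """Centered Pareto(alpha) jumps: a regularly varying jump size
    distribution with tail index alpha (finite mean alpha/(alpha-1))."""
    rng = random.Random(seed)
    mean = alpha / (alpha - 1.0)
    return [rng.paretovariate(alpha) - mean for _ in range(n)]
```

Simulating `max_increment(pareto_jumps(n))` for growing n and normalizing by n^(1/alpha) is the kind of experiment whose limit the paper identifies as Fréchet.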

  15. Experimental investigations on frictional resistance and velocity distribution of rough wall with regularly distributed triangular ribs

    International Nuclear Information System (INIS)

    Motozawa, Masaaki; Ito, Takahiro; Iwamoto, Kaoru; Kawashima, Hideki; Ando, Hirotomo; Senda, Tetsuya; Tsuji, Yoshiyuki; Kawaguchi, Yasuo

    2013-01-01

    Highlights:
    • Flow over regularly distributed triangular ribs was investigated.
    • Simultaneous measurement of flow resistance and velocity profile was performed.
    • Flow resistance was measured directly and the velocity profile was measured by LDV.
    • Flow resistance was estimated from the velocity field information.
    • The estimated flow resistance agrees well with the measured flow resistance.

    Abstract: The relationship between the flow resistance of a turbulent flow over triangular ribs regularly distributed on a wall surface and the velocity distribution around the ribs was investigated experimentally. A concentric cylinder device composed of an inner test cylinder and an outer cylinder was employed to measure simultaneously the flow resistance, using the torque on the shaft of the inner cylinder, and the velocity distribution of the flow around a rib, by laser Doppler velocimetry (LDV). We prepared four inner test cylinders having 4, 8, 12 and 16 triangular ribs on the surface, with the same interval between them. Each rib had an isosceles right-triangle V-shape and a height of 2 mm. To investigate the relationship between flow resistance and velocity distribution, we estimated the frictional drag and pressure drag acting on the surface of the ribs separately using the velocity distribution, and could therefore also estimate the total flow resistance from the velocity distribution. The experiment showed that the flow resistance and the reattachment point downstream of a rib depend on the distance between ribs. Moreover, the flow resistance estimated from the velocity distribution is in good agreement with the flow resistance measured using the torque of the inner cylinder.

  16. Stress Distribution in Graded Cellular Materials Under Dynamic Compression

    Directory of Open Access Journals (Sweden)

    Peng Wang

    Full Text Available Abstract Dynamic compression behaviors of density-homogeneous and density-graded irregular honeycombs are investigated using cell-based finite element models under a constant-velocity impact scenario. A method based on the cross-sectional engineering stress is developed to obtain the one-dimensional stress distribution along the loading direction in a cellular specimen. The cross-sectional engineering stress is contributed by two parts: the node-transitive stress and the contact-induced stress, which are caused by the nodal force and the contact of cell walls, respectively. It is found that the contact-induced stress is dominant for the significantly enhanced stress behind the shock front. The stress enhancement and the compaction wave propagation can be observed through the stress distributions in honeycombs under high-velocity compression. The single and double compaction wave modes are observed directly from the stress distributions. Theoretical analysis of the compaction wave propagation in the density-graded honeycombs based on the R-PH (rigid-plastic hardening) idealization is carried out and verified by the numerical simulations. It is found that the stress distribution in cellular materials and the compaction wave propagation characteristics under dynamic compression can be approximately predicted by the R-PH shock model.

  17. Regularization and error assignment to unfolded distributions

    CERN Document Server

    Zech, Gunter

    2011-01-01

    The commonly used approach to present unfolded data only in graphical form, with the diagonal error depending on the regularization strength, is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data, and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization, and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.

  18. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu

    2014-06-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.
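
The L1 penalty on the coefficients can be illustrated, in a much-simplified univariate-response setting, with a proximal-gradient (ISTA) sketch; this is a stand-in of our own for the penalization idea, not the paper's ECM algorithm for skew-t errors:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 norm: shrink toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, step=None, iters=500):
    """L1-penalized least squares via ISTA (proximal gradient):
    minimize 0.5 * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant
    b = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ b - y)
        b = soft_threshold(b - step * grad, step * lam)
    return b
```

The soft-thresholding step is what produces exact zeros in the estimate, the sparsity mechanism that the paper applies to both the coefficient and inverse scale matrices.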

  19. Sub-cellular distribution and translocation of TRP channels.

    Science.gov (United States)

    Toro, Carlos A; Arias, Luis A; Brauchi, Sebastian

    2011-01-01

    Cellular electrical activity is the result of highly complex processes that involve the activation of ion channel proteins. Ion channels make pores on cell membranes that rapidly transit between conductive and non-conductive states, allowing different ions to flow down their electrochemical gradients across cell membranes. In the case of neuronal cells, ion channel activity orchestrates action potentials traveling through axons, enabling electrical communication between cells in distant parts of the body. Somatic sensation - our ability to feel touch, temperature and noxious stimuli - requires ion channels able to sense and respond to our peripheral environment. Sensory integration involves the summing of various environmental cues and their conversion into electrical signals. Members of the Transient Receptor Potential (TRP) family of ion channels have emerged as important mediators of both cellular sensing and sensory integration. The regulation of the spatial and temporal distribution of membrane receptors is recognized as an important mechanism for controlling the magnitude of the cellular response and the time scale on which cellular signaling occurs. Several studies have shown that this mechanism is also used by TRP channels to modulate cellular response and ultimately fulfill their physiological function as sensors. However, the inner workings of this mode of control for TRP channels remain poorly understood. The question of whether TRPs intrinsically regulate their own vesicular trafficking, or whether the dynamic regulation of TRP channel residence on the cell surface is caused by extrinsic changes in the rates of vesicle insertion or retrieval, remains open. This review will examine the evidence that sub-cellular redistribution of TRP channels plays an important role in regulating their activity and explore the mechanisms that control the trafficking of vesicles containing TRP channels.

  20. Guangxi crustal structural evolution and the formation and distribution regularities of U-rich strata

    International Nuclear Information System (INIS)

    Kang Zili.

    1989-01-01

    Based on a summary of Guangxi geotectonic features and evolutionary regularities, this paper discusses the occurrence features, formation conditions and time-space distribution regularities of various U-rich strata during the development of the geosyncline, platform and diwa stages. In particular, during the diwa stage all those U-rich strata might be reworked to a certain degree, resulting in the mobilization of uranium and its enrichment to form polygenetic composite uranium ore deposits with stratabound features. This study will be helpful for prospecting in the region.

  1. Physical Property Control on the Cellular Uptake Pathway and Spatial Distribution of Nanoparticles in Cells.

    Science.gov (United States)

    Ahn, Sungsook; Seo, Eunseok; Kim, Ki Hean; Lee, Sang Joon

    2015-06-01

    Nanoparticles have been developed in broad biomedical research in terms of effective cellular interactions to treat and visualize diseased cells. Considering the charge and polar functional groups of proteins that are embedded in cellular membranes, charged nanoparticles have been strategically developed to enhance electrostatic cellular interactions. In this study, we show that cellular uptake efficiency, pathway, and spatial distribution of gold nanoparticles in a cell are significantly modulated based on the surface condition of gold nanoparticles and human cancer cells that were tuned by controlling the pH of the medium and by introducing an electron beam. Cellular uptake efficiency is increased when electrostatic attraction is induced between the cells and the gold nanoparticles. Cell surface modification changes the cellular uptake pathways of the gold nanoparticles and concentrates the gold nanoparticles at the membrane region. Surface modification of the gold nanoparticles also contributes to deep penetration and homogeneous spatial distributions in a cell.

  2. On the Meta Distribution of Coverage Probability in Uplink Cellular Networks

    KAUST Repository

    Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP

  3. Cellular- and micro-dosimetry of heterogeneously distributed tritium.

    Science.gov (United States)

    Chao, Tsi-Chian; Wang, Chun-Ching; Li, Junli; Li, Chunyan; Tung, Chuan-Jong

    2012-01-01

    The assessment of radiotoxicity for heterogeneously distributed tritium should be based on the subcellular dose and relative biological effectiveness (RBE) for the cell nucleus. In the present work, geometry-dependent absorbed dose and RBE were calculated using Monte Carlo codes for tritium in the cell, cell surface, cytoplasm, or cell nucleus. The Penelope (PENetration and Energy LOss of Positrons and Electrons) code was used to calculate the geometry-dependent absorbed dose, lineal energy, and electron fluence spectrum. RBE for intestinal crypt regeneration was calculated using a lineal energy-dependent biological weighting function. RBE for the induction of DNA double strand breaks was estimated using a nucleotide-level map for clustered DNA lesions of the Monte Carlo damage simulation (MCDS) code. For a typical cell of 10 μm radius and 5 μm nuclear radius, tritium in the cell nucleus resulted in a much higher RBE-weighted absorbed dose than tritium distributed uniformly. Conversely, tritium distributed on the cell surface led to a trivial RBE-weighted absorbed dose due to the irradiation geometry and the strong attenuation of beta particles in the cytoplasm. For tritium uniformly distributed in the cell, the RBE-weighted absorbed dose was larger compared to tritium uniformly distributed in the tissue. Cellular- and micro-dosimetry models were developed for the assessment of heterogeneously distributed tritium.

  4. The cellular distribution of histone H5 in embryonic and adult tissues of Xenopus laevis and chicken

    NARCIS (Netherlands)

    Moorman, A. F.; de Boer, P. A.; Lamers, W. H.; Charles, R.

    1986-01-01

    The cellular distribution of histone H5 in embryonic and adult tissues of Xenopus laevis and chicken has been established with monoclonal antibodies to histone H5. Both in Xenopus and in chicken, the protein presumably has a more widespread cellular distribution than hitherto expected, but is absent

  5. The Influence of nonuniform activity distribution on cellular dosimetry

    International Nuclear Information System (INIS)

    Naling, Song; Yuan, Tian; Liangan, Zhang; Guangfu, Dai

    2008-01-01

    The S value is an important parameter in the determination of absorbed dose in nuclear medicine and radiobiology. The distribution of radioactivity has a significant influence on the S value, especially in microdosimetry. In the present work, a semi-Monte Carlo model is developed to calculate the microdosimetric cellular S value for different micro-distributions of radioactivity, i.e. uniform, linear increase, linear decrease, exponential increase, exponential decrease and centroid distributions. Emission of alpha particles is simulated by the Monte Carlo model, and the energy imparted to the target volume is calculated by the analytical Continuous Slowing Down Approximation (CSDA) model and spline interpolation of the range-energy relationship. We calculate tables of S values for 213Po and 210Po with various dimensions and, most importantly, with various possible micro-distributions of radioactivity, such as linear increase, linear decrease, exponential increase and exponential decrease. We then compare the cell-to-cell S values for the uniform distribution with Hamacher's results to test the feasibility of our model. S values of some nonuniform micro-distributions are compared to the corresponding data of the uniform distribution, and the possible sources of these differences are theoretically analyzed. (author)
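
The geometry effect that such cellular dosimetry studies quantify (a source localized in the nucleus versus distributed through the whole cell) can be illustrated with a purely geometric toy Monte Carlo; the radii, the fixed straight-track length, and the endpoint-scoring rule below are illustrative assumptions of ours, not the paper's transport model:

```python
import math, random

def nucleus_hit_fraction(source, cell_r=10.0, nuc_r=5.0, track=0.56,
                         n=20000, seed=0):
    """Toy geometric Monte Carlo: sample decay sites per the chosen source
    distribution ('nucleus' or 'cell'), emit one straight track of fixed
    length (a crude stand-in for a short-range beta, here ~0.56 um), and
    score the fraction of tracks ending inside the nucleus."""
    rng = random.Random(seed)
    r_max = nuc_r if source == "nucleus" else cell_r
    hits = 0
    for _ in range(n):
        # uniform point in a sphere of radius r_max (rejection sampling)
        while True:
            x, y, z = (rng.uniform(-r_max, r_max) for _ in range(3))
            if x * x + y * y + z * z <= r_max * r_max:
                break
        # isotropic emission direction
        u = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - u * u)
        ex = x + track * s * math.cos(phi)
        ey = y + track * s * math.sin(phi)
        ez = z + track * u
        if ex * ex + ey * ey + ez * ez <= nuc_r * nuc_r:
            hits += 1
    return hits / n
```

For a short-range emitter, a nucleus-localized source deposits almost all endpoints in the nucleus, while a cell-uniform source scores roughly the nucleus volume fraction, which mirrors the strong dependence of the S value on the source micro-distribution.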

  6. Cellular Neural Network-Based Methods for Distributed Network Intrusion Detection

    Directory of Open Access Journals (Sweden)

    Kang Xie

    2015-01-01

    Full Text Available According to the problems of current distributed-architecture intrusion detection systems (DIDS), a new online distributed intrusion detection model based on cellular neural networks (CNN) was proposed, in which a discrete-time CNN (DTCNN) was used as the weak classifier in each local node and a state-controlled CNN (SCCNN) was used as the global detection method, respectively. We further propose a new method for designing the template parameters of the SCCNN via solving a Linear Matrix Inequality. Experimental results based on the KDD CUP 99 dataset show its feasibility and effectiveness. Emerging evidence has indicated that this new approach is amenable to parallelism and analog very large scale integration (VLSI) implementation, which allows the distributed intrusion detection to be performed better.

  7. Some regularity of the grain size distribution in nuclear fuel with controllable structure

    International Nuclear Information System (INIS)

    Loktev, Igor

    2008-01-01

    It is known that the fission gas release from ceramic nuclear fuel depends on the average size of grains. To increase grain size, additives which activate sintering of the pellets are used. However, the grain size distribution also influences fission gas release: fuel with different structures but the same average grain size shows different fission gas release. Other structure elements which influence the operational behavior of fuel are pores and inclusions. Earlier, in Kyoto, questions of the grain size distribution for fuel with 'natural' structure were discussed. Some regularities of the grain size distribution of fuel with controllable structure and high average grain size are considered in this report. The influence of inclusions and pores on the error of automated determination of structure parameters is shown. A criterion which describes the behavior of fuel with a specific grain size distribution is offered.

  8. The limit distribution of the maximum increment of a random walk with dependent regularly varying jump sizes

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Moser, Martin

    2013-01-01

    We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable.

  9. General inverse problems for regular variation

    DEFF Research Database (Denmark)

    Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan

    2014-01-01

    Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of its components...

  10. Stability analysis of impulsive fuzzy cellular neural networks with distributed delays and reaction-diffusion terms

    International Nuclear Information System (INIS)

    Li Zuoan; Li Kelin

    2009-01-01

    In this paper, we investigate a class of impulsive fuzzy cellular neural networks with distributed delays and reaction-diffusion terms. By employing the delay differential inequality with impulsive initial conditions and M-matrix theory, we find some sufficient conditions ensuring the existence, uniqueness and global exponential stability of equilibrium point for impulsive fuzzy cellular neural networks with distributed delays and reaction-diffusion terms. In particular, the estimate of the exponential converging index is also provided, which depends on the system parameters. An example is given to show the effectiveness of the results obtained here.

  11. Adaptive regularization of noisy linear inverse problems

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue

    2006-01-01

    In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and the prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.
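
For a Gaussian (ridge) prior with precision alpha and regularizer ||theta||^2, the stated relation reads d/alpha = E_posterior[||theta||^2] = ||m||^2 + tr(Sigma), which suggests a fixed-point iteration. The sketch below is our reading of that condition under the assumption of a known noise precision beta, not the authors' code:

```python
import numpy as np

def adaptive_ridge(X, y, beta=1.0, iters=100, alpha0=1.0):
    """Fixed-point sketch of the 'expected regularizer matches under prior
    and posterior' condition for a Gaussian prior with precision alpha:
    iterate alpha = d / (||m||^2 + tr(Sigma)), where m and Sigma are the
    posterior mean and covariance of a linear-Gaussian model with known
    noise precision beta. Returns (alpha, posterior mean)."""
    n, d = X.shape
    alpha = alpha0
    for _ in range(iters):
        Sigma = np.linalg.inv(alpha * np.eye(d) + beta * X.T @ X)
        m = beta * Sigma @ X.T @ y
        alpha = d / (m @ m + np.trace(Sigma))
    return alpha, m
```

The update balances the prior second moment d/alpha against the posterior second moment, so the regularization strength adapts to how large the posterior actually says the parameters are.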

  12. On the regularized fermionic projector of the vacuum

    Science.gov (United States)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  13. On the regularized fermionic projector of the vacuum

    International Nuclear Information System (INIS)

    Finster, Felix

    2008-01-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed

  14. On the distribution and mean of received power in stochastic cellular network

    OpenAIRE

    Cao, Fengming; Ganesh, Ayalvadi; Armour, Simon; Sooriyabandara, Mahesh

    2016-01-01

This paper exploits the distribution and mean of received power for cellular networks with stochastic network modeling to study the difference between two cell-association criteria, i.e. strongest-received-power-based cell association and closest-distance-based cell association. Consequently, we derive analytical expressions for the distribution and the mean of the nth strongest received power and of the received power from the nth nearest base station, and the derivations have been c...

  15. Global exponential stability of mixed discrete and distributively delayed cellular neural network

    International Nuclear Information System (INIS)

    Yao Hong-Xing; Zhou Jia-Yan

    2011-01-01

This paper concerns the analysis of the global exponential stability of a class of recurrent neural networks with mixed discrete and distributed delays. It first proves the existence and uniqueness of the equilibrium point; then, by employing the Lyapunov—Krasovskii functional and the Young inequality, it gives a sufficient condition for the global exponential stability of cellular neural networks with mixed discrete and distributed delays. In addition, an example is provided to illustrate the applicability of the result. (general)

  16. A celiac cellular phenotype, with altered LPP sub-cellular distribution, is inducible in controls by the toxic gliadin peptide P31-43.

    Directory of Open Access Journals (Sweden)

    Merlin Nanayakkara

Full Text Available Celiac disease (CD) is a frequent inflammatory intestinal disease, with a genetic background, caused by gliadin-containing food. Undigested gliadin peptides P31-43 and P57-68 induce innate and adaptive T cell-mediated immune responses, respectively. Alterations in the cell shape and actin cytoskeleton are present in celiac enterocytes, and gliadin peptides induce actin rearrangements in both the CD mucosa and cell lines. Cell shape is maintained by the actin cytoskeleton and focal adhesions, sites of membrane attachment to the extracellular matrix. The locus of the human Lipoma Preferred Partner (LPP) gene was identified as strongly associated with CD using genome-wide association studies (GWAS). The LPP protein plays an important role in focal adhesion architecture and acts as a transcription factor in the nucleus. In this study, we examined the hypothesis that a constitutive alteration of the cell shape and the cytoskeleton, involving LPP, occurs in a cell compartment far from the main inflammation site in CD fibroblasts from skin explants. We analyzed the cell shape, actin organization, focal adhesion number, focal adhesion proteins, LPP sub-cellular distribution and adhesion to fibronectin of fibroblasts obtained from CD patients on a Gluten-Free Diet (GFD) and controls, without and with treatment with A-gliadin peptide P31-43. We observed a "CD cellular phenotype" in these fibroblasts, characterized by an altered cell shape and actin organization, an increased number of focal adhesions, and an altered intracellular LPP protein distribution. The treatment of control fibroblasts with gliadin peptide P31-43 mimics the CD cellular phenotype with respect to cell shape, adhesion capacity, focal adhesion number and LPP sub-cellular distribution, suggesting a close association between these alterations and CD pathogenesis.

  17. Accretion onto some well-known regular black holes

    International Nuclear Information System (INIS)

    Jawad, Abdul; Shahzad, M.U.

    2016-01-01

In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  18. Accretion onto some well-known regular black holes

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul; Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan)

    2016-03-15

In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  19. Accretion onto some well-known regular black holes

    Science.gov (United States)

    Jawad, Abdul; Shahzad, M. Umair

    2016-03-01

In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.

  20. Outer-totalistic cellular automata on graphs

    International Nuclear Information System (INIS)

    Marr, Carsten; Huett, Marc-Thorsten

    2009-01-01

    We present an intuitive formalism for implementing cellular automata on arbitrary topologies. By that means, we identify a symmetry operation in the class of elementary cellular automata. Moreover, we determine the subset of topologically sensitive elementary cellular automata and find that the overall number of complex patterns decreases under increasing neighborhood size in regular graphs. As exemplary applications, we apply the formalism to complex networks and compare the potential of scale-free graphs and metabolic networks to generate complex dynamics
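The formalism the abstract describes can be sketched in a few lines. The following illustrative example (not the authors' implementation) runs an outer-totalistic rule, keyed on a cell's own state and the sum of its neighbours' states, over an arbitrary adjacency structure; the ring graph and the specific rule are made-up assumptions.

```python
def step(states, neighbors, rule):
    """One synchronous update; rule maps (own_state, neighbour_sum) -> new state."""
    return [rule[(states[i], sum(states[j] for j in neighbors[i]))]
            for i in range(len(states))]

# A 6-node ring graph (an elementary CA with periodic boundary).
n = 6
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

# Example outer-totalistic rule: a cell becomes 1 iff exactly one
# neighbour is 1 (the cell's own state happens to be ignored here).
rule = {(s, k): 1 if k == 1 else 0 for s in (0, 1) for k in (0, 1, 2)}

states = [0, 0, 1, 0, 0, 0]
for _ in range(3):
    states = step(states, neighbors, rule)
print(states)   # the single seed settles into a period-2 pattern
```

Because `neighbors` is just an adjacency map, the same `step` function works unchanged on scale-free or metabolic-network topologies.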

  1. Using Regularization to Infer Cell Line Specificity in Logical Network Models of Signaling Pathways

    Directory of Open Access Journals (Sweden)

    Sébastien De Landtsheer

    2018-05-01

    Full Text Available Understanding the functional properties of cells of different origins is a long-standing challenge of personalized medicine. Especially in cancer, the high heterogeneity observed in patients slows down the development of effective cures. The molecular differences between cell types or between healthy and diseased cellular states are usually determined by the wiring of regulatory networks. Understanding these molecular and cellular differences at the systems level would improve patient stratification and facilitate the design of rational intervention strategies. Models of cellular regulatory networks frequently make weak assumptions about the distribution of model parameters across cell types or patients. These assumptions are usually expressed in the form of regularization of the objective function of the optimization problem. We propose a new method of regularization for network models of signaling pathways based on the local density of the inferred parameter values within the parameter space. Our method reduces the complexity of models by creating groups of cell line-specific parameters which can then be optimized together. We demonstrate the use of our method by recovering the correct topology and inferring accurate values of the parameters of a small synthetic model. To show the value of our method in a realistic setting, we re-analyze a recently published phosphoproteomic dataset from a panel of 14 colon cancer cell lines. We conclude that our method efficiently reduces model complexity and helps recovering context-specific regulatory information.

  2. Global exponential stability of cellular neural networks with continuously distributed delays and impulses

    International Nuclear Information System (INIS)

    Wang Yixuan; Xiong Wanmin; Zhou Qiyuan; Xiao Bing; Yu Yuehua

    2006-01-01

    In this Letter cellular neural networks with continuously distributed delays and impulses are considered. Sufficient conditions for the existence and global exponential stability of a unique equilibrium point are established by using the fixed point theorem and differential inequality techniques. The results of this Letter are new and they complement previously known results

  3. Coupling regularizes individual units in noisy populations

    International Nuclear Information System (INIS)

    Ly Cheng; Ermentrout, G. Bard

    2010-01-01

The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula, assuming weak noise and coupling, for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
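The regularizing effect for coupled O-U processes is easy to reproduce numerically. The sketch below (with illustrative parameters, not those of the paper) diffusively couples a unit to a noisier partner and checks that its stationary variance drops below the uncoupled value σ₁²/2.

```python
import numpy as np

def coupled_ou_variance(sigma1=0.5, sigma2=0.7, c=1.0, dt=0.01,
                        n_steps=300_000, seed=0):
    """Euler-Maruyama for two diffusively coupled O-U processes; returns
    the stationary sample variance of the first (less noisy) unit."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    xs = np.empty(n_steps)
    sq = np.sqrt(dt)
    for i in range(n_steps):
        dw1, dw2 = rng.normal(0.0, sq, 2)
        x, y = (x + (-x + c * (y - x)) * dt + sigma1 * dw1,
                y + (-y + c * (x - y)) * dt + sigma2 * dw2)
        xs[i] = x
    return xs[n_steps // 10:].var()   # discard the transient

var_coupled = coupled_ou_variance()
var_uncoupled = 0.5**2 / 2            # analytic stationary variance sigma1^2/2
print(var_coupled, var_uncoupled)     # coupling lowers the unit's variance
```

For these linear processes the coupled variance can also be computed in closed form (about 0.093 here versus 0.125 uncoupled), so the simulation doubles as a consistency check.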

  4. Addressing population heterogeneity and distribution in epidemics models using a cellular automata approach.

    Science.gov (United States)

    López, Leonardo; Burguerner, Germán; Giovanini, Leonardo

    2014-04-12

The spread of an infectious disease is determined by biological and social factors. Models based on cellular automata are adequate for describing such natural systems, which consist of a massive collection of simple interacting objects. They characterize the time evolution of the global system as the emergent behaviour resulting from the interaction of the objects, whose behaviour is defined through a set of simple rules that encode the individual behaviour and the transmission dynamics. An epidemic is characterized through an individual-based model built upon cellular automata. In the proposed model, each individual of the population is represented by a cell of the automaton. This way of modeling an epidemic situation allows the characteristics of each individual to be defined individually, different scenarios to be established, and control strategies to be implemented. A cellular automata model to study the time evolution of heterogeneous populations through the various stages of disease was proposed, allowing the inclusion of individual heterogeneity, geographical characteristics and the social factors that determine the dynamics of the disease. Different assumptions made to build the classical model were evaluated, leading to the following results: i) for a low contact rate (as in a quarantine process or in low-density population areas) the number of infective individuals is lower than in areas where the contact rate is higher, and ii) different initial spatial distributions of infected individuals produce different epidemic dynamics, owing to their influence on the transition rate and the reproductive ratio of the disease. The contact rate and the spatial distribution have a central role in the spread of a disease. For low-density populations the spread is very low and the number of infected individuals is lower than in highly populated areas.
The spatial distribution of the population and the disease focus, as well as the geographical characteristics of the area, play a central role in the dynamics of the
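A minimal grid-based version of such an individual-based model can illustrate the role of the contact rate. The sketch below is a generic SIR cellular automaton with made-up parameters, not the authors' model: each cell is an individual, susceptible cells are infected per infected 4-neighbour with probability `p_infect`, and infected cells recover after a fixed period.

```python
import numpy as np

def run_sir_ca(p_infect, n=50, steps=60, recovery=5, seed=1):
    """Return the number of ever-infected cells after `steps` updates."""
    rng = np.random.default_rng(seed)
    state = np.zeros((n, n), dtype=int)      # 0 = S, 1 = I, 2 = R
    timer = np.zeros((n, n), dtype=int)      # time spent infected
    state[n // 2, n // 2] = 1                # single initial disease focus
    for _ in range(steps):
        infected = state == 1
        # count infected 4-neighbours (periodic boundary for simplicity)
        nbrs = (np.roll(infected, 1, 0) + np.roll(infected, -1, 0) +
                np.roll(infected, 1, 1) + np.roll(infected, -1, 1))
        p_cell = 1.0 - (1.0 - p_infect) ** nbrs   # >= 1 successful contact
        new_inf = (state == 0) & (rng.random((n, n)) < p_cell)
        timer[infected] += 1
        state[infected & (timer >= recovery)] = 2  # recovery
        state[new_inf] = 1
        timer[new_inf] = 0
    return int(np.sum(state > 0))

# High vs low contact rate (e.g. quarantine): far fewer individuals infected.
print(run_sir_ca(0.5), run_sir_ca(0.05))
```

Varying the initial placement of infected cells, or making `p_infect` a per-cell array, reproduces the kind of heterogeneity and spatial-distribution experiments the abstract describes.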

  5. QoE-Driven D2D Media Services Distribution Scheme in Cellular Networks

    Directory of Open Access Journals (Sweden)

    Mingkai Chen

    2017-01-01

Full Text Available Device-to-device (D2D) communication has been widely studied to improve network performance and is considered a potential technological component for next generation communication. Considering diverse users' demands, Quality of Experience (QoE) is recognized as a new measure of user satisfaction for media service transmissions in wireless communication. Furthermore, we aim at promoting the user's Mean Opinion Score (MOS) value to quantify and analyze user QoE in dynamic cellular networks. In this paper, we explore heterogeneous media service distribution in D2D communications underlaying cellular networks to improve the total users' QoE. We propose a novel media service scheme based on different QoE models that jointly solves the massive media content dissemination issue for cellular networks. Moreover, we also investigate the so-called Media Service Adaptive Update Scheme (MSAUS) framework to maximize users' QoE satisfaction, and we derive the popularity and priority functions of different media service QoE expressions. Then, we further design a Media Service Resource Allocation (MSRA) algorithm to schedule limited cellular network resources, which is based on the popularity function, to optimize the total users' QoE satisfaction and avoid D2D interference. In addition, numerical simulation results indicate that the proposed scheme is more effective in cellular network content delivery, which makes it suitable for various media service propagation.

  6. DESIGN OF STRUCTURAL ELEMENTS IN THE EVENT OF THE PRE-SET RELIABILITY, REGULAR LOAD AND BEARING CAPACITY DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Tamrazyan Ashot Georgievich

    2012-10-01

Full Text Available Accurate and adequate description of external influences and of the bearing capacity of the structural material requires the employment of probability theory methods. In this regard, a characteristic that describes the probability of failure-free operation is required. The characteristic of reliability means that the maximum stress caused by the action of the load will not exceed the bearing capacity. In this paper, the author presents a solution to the problem of calculation of structures, namely, the identification of the reliability of pre-set design parameters, in particular, cross-sectional dimensions. If the load distribution pattern is available, employment of the regularities of distribution functions makes it possible to find the pattern of distribution of maximum stresses over the structure. Similarly, we can proceed to the design of structures of pre-set rigidity, reliability and stability in the case of regular load distribution. We consider a design element (a monolithic concrete slab) whose maximum stress S depends linearly on the load q. Within a pre-set period of time, the probability that the stress will not exceed given values follows the Poisson law. The analysis demonstrates that the variability of the bearing capacity produces a stronger effect on the relative sizes of cross sections of a slab than the variability of loads. It is therefore particularly important to reduce the coefficient of variation of the load capacity. One of the methods contemplates the truncation of the bearing capacity distribution by pre-culling the construction material.

  7. Cellular distribution of inorganic mercury and its relation to cytotoxicity in bovine kidney cell cultures

    International Nuclear Information System (INIS)

    Bracken, W.M.; Sharma, R.P.; Bourcier, D.R.

    1984-01-01

A bovine kidney cell culture system was used to assess what relationship mercuric chloride (HgCl₂) uptake and subcellular distribution had to cytotoxicity. Twenty-four-hour incubations with 0.05-50 μM HgCl₂ elicited a concentration-related cytotoxicity. Cellular accumulation of ²⁰³Hg was also concentration-related, with 1.0 nmol/10⁶ cells at the IC50. Measurement of Hg uptake over the 24-h exposure period revealed a multiphasic process. Peak accumulation was attained by 1 h and was followed by extrusion and plateauing of intracellular Hg levels. Least-squares regression analysis of the cytotoxicity and cellular uptake data indicated a potential relationship between the Hg uptake and cytotoxicity. However, the subcellular distribution of Hg was not concentration-related. Mitochondria and soluble protein fractions accounted for greater than 65% of the cell-associated Hg at all concentrations. The remaining Hg was distributed between the microsomal (6-10%) and nuclear and cell debris (11-22%) fractions at all concentrations tested. Less than 20% of the total cell-associated Hg was bound with metallothionein-like protein. 31 references, 4 figures, 3 tables

  8. Secure Real-Time Monitoring and Management of Smart Distribution Grid using Shared Cellular Networks

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Ganem, Hervé; Jorguseski, Ljupco

    2017-01-01

... capabilities. Thanks to the advanced measurement devices, management framework, and secure communication infrastructure developed in the FP7 SUNSEED project, the Distribution System Operator (DSO) now has full observability of the energy flows at the medium/low voltage grid. Furthermore, the prosumers are able ..., where the smart grid ICT solutions are provided through shared cellular LTE networks.

  9. Cellular and subcellular distribution of BSH in human glioblastoma multiforme

    International Nuclear Information System (INIS)

    Neumann, M.; Gabel, D.

    2000-01-01

The cellular and subcellular distribution of mercaptoundecahydrododecaborate (BSH) in seven glioblastoma multiforme tissue sections from six patients who had received BSH prior to surgery was investigated by light, fluorescence and electron microscopy. With the use of specific antibodies against BSH, its localization in tissue sections was found predominantly (approx. 90%) in the cytoplasm of GFAP-positive cells of all but one patient. The latter was significantly younger (33 years, in contrast to 46-71 (mean 60) years). BSH could not be found to a significant amount in the cell nuclei of any of the tissue sections. In contrast, electron microscopy studies show BSH associated both with the cell membrane and with the chromatin in the nucleus. (author)

  10. Higher order total variation regularization for EIT reconstruction.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, the notation GT in the legend stands for ground truth, TV stands for the total variation method, and TGV stands for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.
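The edge-preserving (and staircase-promoting) behaviour of TV regularization is easy to demonstrate in one dimension. The following sketch is a generic 1D TV denoiser with a smoothed absolute value and illustrative parameters, unrelated to the paper's EIT/FEM setting: it minimizes ||u − f||² + λ·Σ|u[i+1] − u[i]| by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
true = np.concatenate([np.zeros(30), np.ones(30)])   # one sharp edge
f = true + 0.1 * rng.normal(size=60)                 # noisy measurement

def tv_denoise(f, lam=0.5, eps=1e-2, step=0.02, iters=5000):
    """Gradient descent on ||u-f||^2 + lam * sum sqrt(diff(u)^2 + eps)."""
    u = f.copy()
    for _ in range(iters):
        grad = 2.0 * (u - f)
        d = np.diff(u)
        g = lam * d / np.sqrt(d * d + eps)   # gradient of the smoothed |.|
        grad[1:] += g
        grad[:-1] -= g
        u -= step * grad
    return u

u_tv = tv_denoise(f)
print(abs(u_tv - true).mean(), abs(f - true).mean())
```

The TV estimate suppresses noise within the flat segments while keeping the jump nearly intact; a quadratic (Tikhonov) penalty would instead blur the edge, which is the trade-off motivating TGV-type penalties.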

  11. An entropy regularization method applied to the identification of wave distribution function for an ELF hiss event

    Science.gov (United States)

    Prot, Olivier; SantolíK, OndřEj; Trotignon, Jean-Gabriel; Deferaudy, Hervé

    2006-06-01

An entropy regularization algorithm (ERA) has been developed to compute the wave-energy density from electromagnetic field measurements. It is based on the wave distribution function (WDF) concept. To assess its suitability and efficiency, the algorithm is applied to experimental data that have already been analyzed using other inversion techniques. The FREJA satellite data that are used consist of six spectral matrices corresponding to six time-frequency points of an ELF hiss-event spectrogram. The WDF analysis is performed on these six points and the results are compared with those obtained previously. A statistical stability analysis confirms the stability of the solutions. The WDF computation is fast and requires no prespecified parameters. The regularization parameter has been chosen in accordance with Morozov's discrepancy principle. The Generalized Cross Validation and L-curve criteria are then tentatively used to provide a fully data-driven method. However, these criteria fail to determine a suitable value of the regularization parameter. Although the entropy regularization leads to solutions that agree fairly well with those already published, some differences are observed, and these are discussed in detail. The main advantage of the ERA is that it returns the WDF that exhibits the largest entropy and avoids the use of a priori models, which sometimes seem to be more accurate but without any justification.

  12. Lognormal Distribution of Cellular Uptake of Radioactivity: Statistical Analysis of α-Particle Track Autoradiography

    Science.gov (United States)

    Neti, Prasad V.S.V.; Howell, Roger W.

    2010-01-01

Recently, the distribution of radioactivity among a population of cells labeled with ²¹⁰Po was shown to be well described by a log-normal (LN) distribution function (J Nucl Med. 2006;47:1049–1058) with the aid of autoradiography. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these earlier data. Methods: The measured distributions of α-particle tracks per cell were subjected to statistical tests with Poisson, LN, and Poisson-lognormal (P-LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL of ²¹⁰Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log-normal. Conclusion: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:18483086
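The interplay of a lognormal activity distribution with Poisson counting statistics can be sketched as follows (illustrative parameters, not the paper's data): per-cell activities are drawn lognormally, while autoradiography observes Poisson track counts whose means are those activities, so the observed counts are overdispersed relative to a pure Poisson model.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n_cells = 1.0, 0.8, 100_000
activity = rng.lognormal(mu, sigma, n_cells)   # mean tracks per cell (LN)
tracks = rng.poisson(activity)                 # observed alpha-track counts

ln_mean = np.exp(mu + sigma**2 / 2)            # analytic lognormal mean
print(activity.mean(), tracks.mean(), ln_mean)
print(tracks.var() > tracks.mean())            # overdispersion: var > mean
```

The count mean matches the underlying LN mean, but the count variance exceeds it by the LN variance, which is the Poisson "blur" the abstract warns can distort the inferred activity distribution at low mean activities.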

  13. On the Meta Distribution of Coverage Probability in Uplink Cellular Networks

    KAUST Repository

    Elsawy, Hesham

    2017-04-07

This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP) for modeling the spatial locations of base stations (BSs), we obtain the percentiles of users that achieve a target uplink CP over an arbitrary, but fixed, realization of the PPP. To this end, the effects of the user activity factor (p) and the path-loss compensation factor on the uplink performance are analyzed. The results show that decreasing p and/or increasing the path-loss compensation factor reduces the CP variation around the spatially averaged value.

  14. Research on stress distribution regularity of cement sheaths of radial well based on ABAQUS

    Science.gov (United States)

    Shi, Jihui; Cheng, Yuanfang; Li, Xiaolong; Xiao, Wen; Li, Menglai

    2017-12-01

To ensure a desirable outcome of hydraulic fracturing based on ultra-short-radius radial systems (URRS), it is required to investigate the stress distribution regularity and stability of the cement sheath. On the basis of the theoretical model of the cement sheath stress distribution, a reservoir mechanical model was built using the finite element software ABAQUS, according to the physical properties of a certain oil reservoir of the Shengli oilfield. The stress distribution of the casing-cement-sheath-formation system under practical conditions was simulated, on the basis of which analyses were conducted from multiple points of view. Results show that the stress on the internal interface of the cement sheath exceeds that on the external interface, and fluctuates with higher amplitude, which means that the internal interface is the most failure-prone. The unevenness of the cement sheath stress distribution grows with increasing horizontal principal stress ratio, and so does the variation magnitude. This indicates that higher horizontal principal stress ratios are unfavourable for the structural stability of the cement sheath. Both the number of wellbores of the URRS and the physical properties of the material can affect the cement sheath stress distribution. It is suggested to optimize the number of radial wellbores and to use cement with a lower elastic modulus and a higher Poisson's ratio. Finally, the influence of the above factors was analysed with the help of grey correlation analysis.

  15. Effects of Initial Symmetry on the Global Symmetry of One-Dimensional Legal Cellular Automata

    Directory of Open Access Journals (Sweden)

    Ikuko Tanaka

    2015-09-01

Full Text Available To examine the development of pattern formation from the viewpoint of symmetry, we applied a two-dimensional discrete Walsh analysis to a one-dimensional cellular automata model under two types of regular initial conditions. The amount of symmetropy of cellular automata (CA) models under regular and random initial conditions corresponds to three of Wolfram's classes of CAs, identified as Classes II, III, and IV. Regular initial conditions fall into two groups. One group, which produces a broken regular pattern formation, has four types of symmetry, whereas the other, which produces a higher-hierarchy pattern formation, has only two types. Additionally, both final pattern formations show an increased amount of symmetropy as time passes. Moreover, the final pattern formations are affected by iterations of the base rules of CA models of chaotic dynamical systems. The growth design formations limit possibilities: the ratio of developing final pattern formations under a regular initial condition decreases in the order of Classes III, II, and IV. This might be related to differences in the degree of reference to surrounding conditions. These findings suggest that calculations of the symmetries of the structures of one-dimensional cellular automata models are useful for revealing rules of pattern generation for animal bodies.

  16. Quantifying the Sub-Cellular Distributions of Gold Nanospheres Uptaken by Cells through Stepwise, Site-Selective Etching.

    Science.gov (United States)

    Xia, Younan; Huo, Da

    2018-04-10

    A quantitative understanding of the sub-cellular distributions of nanoparticles uptaken by cells is important to the development of nanomedicine. With Au nanospheres as a model system, here we demonstrate, for the first time, how to quantify the numbers of nanoparticles bound to plasma membrane, accumulated in cytosol, and entrapped in lysosomes, respectively, through stepwise, site-selective etching. Our results indicate that the chance for nanoparticles to escape from lysosomes is insensitive to the presence of targeting ligand although ligand-receptor binding has been documented as a critical factor in triggering internalization. Furthermore, the presence of serum proteins is shown to facilitate the binding of nanoparticles to plasma membrane lacking the specific receptor. Collectively, these findings confirm the potential of stepwise etching in quantitatively analyzing the sub-cellular distributions of nanoparticles uptaken by cells in an effort to optimize the therapeutic effect. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Analysis of effect on microdose of 10B nonuniform distribution in cellular

    International Nuclear Information System (INIS)

    Xie Qin; Geng Changran; Tang Xiaobin; Chen Da

    2012-01-01

Boron neutron capture therapy (BNCT) is one of the effective ways to treat malignant melanoma and head-neck cancer. The intercellular nonuniform distributions of ¹⁰B in tumor cells impact the estimates of the inactivation dose. The α-Li Version 1.0 code was developed based on the Monte Carlo method to calculate the S values of cells induced by the α and ⁷Li particles that are the products of the ¹⁰B(n,α)⁷Li reaction. The calculation included two cell sizes, eight α-particle energies and three source distributions. Differences between the results of this code and an analytical algorithm of the MIRD committee were within 1%. On this basis, a total of 3420 cases were calculated and analyzed with different combinations of nucleus radius, cell radius, and source position. Finally, the cellular S values of ¹⁰B(n,α)⁷Li calculated in this paper can be used to compute doses with excellent precision for nonuniform ¹⁰B compound distributions at the intercellular scale. (authors)

  18. UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA

    Directory of Open Access Journals (Sweden)

    IONIŢĂ Elena

    2015-06-01

    This paper presents the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. Platonic and Archimedean polyhedra are modeled and unfolded using the 3ds Max program. This paper is intended as an example of descriptive geometry applications.

  19. A density distribution algorithm for bone incorporating local orthotropy, modal analysis and theories of cellular solids.

    Science.gov (United States)

    Impelluso, Thomas J

    2003-06-01

    An algorithm for bone remodeling is presented which allows for both a redistribution of density and a continuous change of principal material directions for the orthotropic material properties of bone. It employs a modal analysis to add density for growth and a local effective-strain-based analysis to redistribute density. General redistribution functions are presented. The model utilizes theories of cellular solids to relate density and strength. The code predicts the same general density distributions and local orthotropy as observed in reality.
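
    A generic sketch of the kind of strain-driven density update and cellular-solids stiffness law the abstract describes; the update rule and every constant here are hypothetical illustrations, not the paper's actual algorithm.

```python
import numpy as np

# Strain-energy-driven density redistribution with a cellular-solids
# stiffness law (Gibson-Ashby-type scaling E ~ rho^2). Update rule and
# constants are illustrative assumptions, not the paper's algorithm.

def remodel(rho, strain_energy, k=0.004, tau=5.0, rho_min=0.05, rho_max=1.0):
    """One remodeling step: grow where the mass-specific stimulus exceeds
    the set point k, resorb where it falls below."""
    stimulus = strain_energy / rho            # effective stimulus per unit mass
    return np.clip(rho + tau * (stimulus - k), rho_min, rho_max)

def stiffness(rho, E0=15e3):
    """Cellular-solid scaling: modulus grows as density squared (MPa)."""
    return E0 * rho**2

rho = np.full(10, 0.5)                        # uniform initial density field
U = np.linspace(0.001, 0.004, 10)             # hypothetical strain-energy field
for _ in range(200):
    rho = remodel(rho, U)
print(rho)
```

    At equilibrium the stimulus equals the set point, so the density field converges toward U/k: more heavily loaded sites end up denser and stiffer.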

  20. Dynamic PET scanning and compartmental model analysis to determine cellular level radiotracer distribution in vivo

    International Nuclear Information System (INIS)

    Smith, G.T.; Hubner, K.F.; Goodman, M.M.; Stubbs, J.B.

    1992-01-01

    Positron emission tomography (PET) has been used to measure tissue radiotracer concentration in vivo. The radiochemical distribution can be determined with compartmental model analysis. A two-compartment model describes the kinetics of N-13 ammonia (13NH3) in the myocardium. The model consists of a vascular space, Q1, and a space for 13NH3 bound within the tissue, Q2. Differential equations for the model can be written: Ẋ(t) = AX(t) + BU(t), Y(t) = CX(t) + DU(t) (1), where X(t) is a column vector [Q1(t); Q2(t)], U(t) is the arterial input activity measured from the left ventricular blood pool, and Y(t) is the measured tissue activity using PET. The matrices A, B, C, and D depend on physiological parameters describing the kinetics of 13NH3 in the myocardium. Estimated parameter matrices in Equation 1 have been validated in dog experiments by measuring myocardial perfusion with dynamic PET scanning and intravenous injection of 13NH3. Tracer concentrations for each compartment can be calculated by direct integration of Equation 1. If the cellular-level distribution of each compartment is known, the concentration of tracer within the intracellular and extracellular space can be determined. Applications of this type of modeling include parameter estimation for measurement of physiological processes, organ-level dosimetry, and determination of cellular radiotracer distribution
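
    The two-compartment kinetics of Equation 1 can be integrated directly, as the abstract notes; a minimal Euler-integration sketch follows. The rate constants K1, k2 and the input curve U(t) are hypothetical illustrations, not fitted values from the paper.

```python
import numpy as np

# Euler integration of the two-compartment 13NH3 model: Q1 is tracer in
# the vascular space, Q2 tracer bound in tissue, U(t) the arterial input.
# Rate constants and the input curve are illustrative assumptions.

def simulate(K1=0.7, k2=0.15, dt=0.1, t_end=120.0):
    n = int(t_end / dt)
    t = np.arange(n) * dt
    U = t * np.exp(-t / 6.0)                 # hypothetical bolus input curve
    Q1 = np.zeros(n)
    Q2 = np.zeros(n)
    for i in range(n - 1):
        dQ1 = K1 * U[i] - k2 * Q1[i]         # delivery minus binding
        dQ2 = k2 * Q1[i]                     # irreversible trapping in tissue
        Q1[i + 1] = Q1[i] + dt * dQ1
        Q2[i + 1] = Q2[i] + dt * dQ2
    Y = Q1 + Q2                              # PET measures total tissue activity
    return t, Q1, Q2, Y

t, Q1, Q2, Y = simulate()
```

    In matrix form this is A = [[-k2, 0], [k2, 0]], B = [K1, 0]ᵀ, C = [1, 1], D = 0, matching Equation 1.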

  1. Network-Assisted Distributed Fairness-Aware Interference Coordination for Device-to-Device Communication Underlaid Cellular Networks

    Directory of Open Access Journals (Sweden)

    Francis Boabang

    2017-01-01

    A device-to-device (D2D) communication underlaid cellular network is considered a key integration feature in future cellular networks. However, without properly designed interference management, the interference from D2D transmission tends to degrade the performance of cellular users and D2D pairs. In this work, we propose a network-assisted distributed interference mitigation scheme to address this issue. Specifically, the base station (BS) acts as a control agent that coordinates the cross-tier interference from D2D transmission through a taxation scheme. The co-tier interference is controlled by a noncooperative game among the D2D pairs. In general, the outcome of a noncooperative game is inefficient due to the selfishness of each player. In our game formulation, a reference user, who is the victim of co-tier interference, is factored into the payoff function of each player to obtain a fair and efficient outcome. The existence and uniqueness of the Nash equilibrium (NE) and the convergence of the proposed algorithm are characterized using variational inequality theory. Finally, we provide simulation results to evaluate the efficiency of the proposed algorithm.
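
    The taxation idea can be sketched as a generic pricing-based power control game: each D2D pair best-responds to a rate-minus-tax payoff. The gains, noise level, and tax prices below are illustrative assumptions, and the payoff is a standard pricing form, not the paper's exact formulation with the reference user.

```python
import numpy as np

# Iterative best response for a pricing-based power control game: pair i
# maximizes log(1 + G[i,i]*p_i / (noise + co-tier interference)) - tax_i*p_i.
# All numbers are illustrative; this is a generic sketch of the mechanism.

def best_response(G, tax, noise=1e-3, p_max=1.0, iters=100):
    n = len(tax)
    p = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            interf = noise + G[i] @ p - G[i, i] * p[i]   # co-tier interference
            # maximizer of log(1 + g*p/interf) - tax*p (water-filling form)
            p[i] = np.clip(1.0 / tax[i] - interf / G[i, i], 0.0, p_max)
    return p

rng = np.random.default_rng(1)
G = rng.uniform(0.01, 0.05, size=(3, 3))     # weak cross-link gains
np.fill_diagonal(G, 1.0)                     # strong direct links
tax = np.array([2.0, 3.0, 4.0])              # BS-imposed prices (taxation)
p = best_response(G, tax)
```

    Raising a pair's tax shrinks its transmit power, which is how the BS can steer the game's equilibrium toward protecting victims of interference.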

  2. Multi-omic data integration enables discovery of hidden biological regularities

    DEFF Research Database (Denmark)

    Ebrahim, Ali; Brunk, Elizabeth; Tan, Justin

    2016-01-01

    Rapid growth in the size and complexity of biological data sets has led to the 'Big Data to Knowledge' challenge. We develop advanced data integration methods for multi-level analysis of genomic, transcriptomic, ribosomal profiling, proteomic and fluxomic data. First, we show that pairwise integration of primary omics data reveals regularities that tie cellular processes together in Escherichia coli: the number of protein molecules made per mRNA transcript and the number of ribosomes required per translated protein molecule. Second, we show that genome-scale models, based on genomic and bibliomic data, enable quantitative synchronization of disparate data types. Integrating omics data with models enabled the discovery of two novel regularities: condition-invariant in vivo turnover rates of enzymes and the correlation of protein structural motifs and translational pausing. These regularities can...

  3. Cellular modeling of fault-tolerant multicomputers

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, G

    1987-01-01

    The work described was concerned with a novel method for investigating fault tolerance in large regular networks of computers. The motivation was to provide a technique useful for the rapid evaluation of highly reliable systems that exploit the low cost and ease of volume production of simple microcomputer components. First, a system model and simulator based upon cellular automata are developed. This model is characterized by its simplicity and ease of modification when adapting to new types of network. Second, in order to test and verify the predictive capabilities of the cellular system, a more detailed simulation is performed based upon an existing computational model, that of the Transputer. An example application is used to exercise various systems designed using the cellular model. Using this simulator, experimental results are obtained both for existing well-understood configurations and for more novel types also developed here. In all cases it was found that the cellular model and simulator successfully predicted the ranking in reliability improvement of the systems studied.

  4. Predictability in cellular automata.

    Science.gov (United States)

    Agapie, Alexandru; Andreica, Anca; Chira, Camelia; Giuclea, Marius

    2014-01-01

    Modelled as finite homogeneous Markov chains, probabilistic cellular automata with local transition probabilities in (0, 1) always possess a stationary distribution. This result alone is not very helpful when it comes to predicting the final configuration; one also needs a formula connecting the probabilities in the stationary distribution to some intrinsic feature of the lattice configuration. Previous results on asynchronous cellular automata have shown that such a feature really exists. It is the number of zero-one borders within the automaton's binary configuration. An exponential formula in the number of zero-one borders has been proved for the 1-D, 2-D and 3-D asynchronous automata with neighborhoods of three, five and seven, respectively. We perform computer experiments on a synchronous cellular automaton to check whether the empirical distribution also obeys that theoretical formula. The numerical results indicate a perfect fit for neighborhoods of three and five, which opens the way for a rigorous proof of the formula in this new, synchronous case.
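
    The border statistic behind the exponential formula is easy to illustrate: take the stationary weight of a binary configuration proportional to c raised to its number of zero-one borders. The constant c below is an assumption for illustration; the paper derives the exact dependence for each neighborhood size.

```python
import numpy as np
from itertools import product

# Sketch of the zero-one-border statistic: stationary probability of a
# configuration is taken proportional to c ** borders(x). The constant c
# is an illustrative assumption, not the paper's derived value.

def borders(x):
    """Number of adjacent 01/10 pairs in a 1-D binary configuration."""
    x = np.asarray(x)
    return int(np.sum(x[1:] != x[:-1]))

def stationary(n, c=0.5):
    """Normalized weights c**borders over all length-n configurations."""
    configs = list(product([0, 1], repeat=n))
    w = np.array([c ** borders(x) for x in configs], dtype=float)
    return configs, w / w.sum()

configs, pi = stationary(4)
```

    With c < 1, homogeneous configurations (fewest borders) carry the most stationary mass, which is the qualitative behavior the empirical distributions are checked against.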

  5. A Novel Kernel-Based Regularization Technique for PET Image Reconstruction

    Directory of Open Access Journals (Sweden)

    Abdelwahhab Boudjelal

    2017-06-01

    Positron emission tomography (PET) is an imaging technique that generates 3D detail of physiological processes at the cellular level. The technique requires a radioactive tracer, which decays and releases a positron that collides with an electron; consequently, annihilation photons are emitted, which can be measured. The purpose of PET is to use the measurement of photons to reconstruct the distribution of radioisotopes in the body. Currently, PET is undergoing a revamp, with advancements in data measurement instruments and in the computing methods used to create the images. These computational methods are required to solve the inverse problem of "image reconstruction from projection". This paper proposes a novel kernel-based regularization technique for maximum-likelihood expectation-maximization (κ-MLEM) to reconstruct the image. Compared to standard MLEM, the proposed algorithm is more robust and more effective in removing background noise while preserving edges; this suppresses image artifacts, such as out-of-focus slice blur.
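
    The baseline the paper builds on, the multiplicative MLEM update, can be sketched in a few lines; the optional smoothing kernel here is a simplified stand-in for kernel-based regularization, not the paper's exact κ-MLEM algorithm, and the toy system matrix is an assumption.

```python
import numpy as np

# Minimal MLEM iteration with an optional smoothing kernel applied to the
# image update (a simplified stand-in for kernel-based regularization).
# A is a toy system matrix, y the (noise-free) measured counts.

def mlem(A, y, n_iter=50, kernel=None):
    x = np.ones(A.shape[1])                   # nonnegative initial image
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # measured / predicted counts
        x = x * (A.T @ ratio) / sens          # multiplicative EM update
        if kernel is not None:
            x = kernel @ x                    # optional kernel smoothing
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(40, 20))
x_true = rng.uniform(0.5, 2.0, size=20)
y = A @ x_true                                # noise-free projections
x_hat = mlem(A, y)
```

    The multiplicative form keeps the image nonnegative automatically and monotonically decreases the Kullback-Leibler mismatch between measured and predicted counts.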

  6. Laplacian manifold regularization method for fluorescence molecular tomography

    Science.gov (United States)

    He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei

    2017-04-01

    Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods exploit the sparse nature of the target distribution. However, in addition to sparsity, spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient-projection-resolved Laplacian manifold regularization method for the joint model performed better than a comparison ℓ1-minimization method in both spatial aggregation and location accuracy.
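
    The joint objective can be illustrated with a plain proximal-gradient (ISTA-style) solver on a toy problem; this shows the model 0.5‖Ax − b‖² + λ₁‖x‖₁ + λ₂ xᵀLx, not the paper's specific gradient-projection or Barzilai-Borwein solver, and the chain-graph Laplacian and all constants are assumptions.

```python
import numpy as np

# Proximal-gradient sketch for the joint l1 + Laplacian-manifold model:
#   min_x 0.5*||Ax - b||^2 + lam1*||x||_1 + lam2 * x^T L x
# with L the Laplacian of a chain graph. Toy data; illustrative constants.

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def solve(A, b, L, lam1=0.05, lam2=0.1, n_iter=500):
    # step = 1 / Lipschitz constant of the smooth part A^T A + 2*lam2*L
    step = 1.0 / np.linalg.norm(A.T @ A + 2 * lam2 * L, 2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) + 2 * lam2 * (L @ x)
        x = soft(x - step * grad, step * lam1)   # gradient step + l1 prox
    return x

n = 20
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # chain Laplacian
L[0, 0] = L[-1, -1] = 1
rng = np.random.default_rng(2)
A = rng.normal(size=(30, n))
x_true = np.zeros(n)
x_true[5:8] = 1.0                    # sparse and spatially grouped target
b = A @ x_true
x_hat = solve(A, b, L)
```

    The ℓ1 term drives small coefficients to exactly zero, while the Laplacian term rewards solutions whose support is spatially aggregated, mirroring the two effects the abstract combines.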

  7. Accreting fluids onto regular black holes via Hamiltonian approach

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)

    2017-08-15

    We investigate the accretion of test fluids onto regular black holes such as Kehagias-Sfetsos black holes and regular black holes with a Dagum distribution function. We analyze the accretion process when different test fluids are falling onto these regular black holes. The accreting fluid is classified through the equation of state according to the features of the regular black holes. The behavior of the fluid flow and the existence of sonic points are checked for these regular black holes. It is noted that the three-velocity depends on the critical points, and the equation-of-state parameter on the phase space. (orig.)

  8. The UL24 protein of herpes simplex virus 1 affects the sub-cellular distribution of viral glycoproteins involved in fusion

    Energy Technology Data Exchange (ETDEWEB)

    Ben Abdeljelil, Nawel; Rochette, Pierre-Alexandre; Pearson, Angela, E-mail: angela.pearson@iaf.inrs.ca

    2013-09-15

    Mutations in UL24 of herpes simplex virus type 1 can lead to a syncytial phenotype. We hypothesized that UL24 affects the sub-cellular distribution of viral glycoproteins involved in fusion. In non-immortalized human foreskin fibroblasts (HFFs) we detected viral glycoproteins B (gB), gD, gH and gL present in extended blotches throughout the cytoplasm with limited nuclear membrane staining; however, in HFFs infected with a UL24-deficient virus (UL24X), staining for the viral glycoproteins appeared as long, thin streaks running across the cell. Interestingly, there was a decrease in co-localized staining of gB and gD with F-actin at late times in UL24X-infected HFFs. Treatment with chemical agents that perturbed the actin cytoskeleton hindered the formation of UL24X-induced syncytia in these cells. These data support a model whereby the UL24 syncytial phenotype results from a mislocalization of viral glycoproteins late in infection. - Highlights: • UL24 affects the sub-cellular distribution of viral glycoproteins required for fusion. • Sub-cellular distribution of viral glycoproteins varies in cell-type dependent manner. • Drugs targeting actin microfilaments affect formation of UL24-related syncytia in HFFs.

  9. Extraction protocol and liquid chromatography/tandem mass spectrometry method for determining micelle-entrapped paclitaxel at the cellular and subcellular levels: Application to a cellular uptake and distribution study.

    Science.gov (United States)

    Zheng, Nan; Lian, Bin; Du, Wenwen; Xu, Guobing; Ji, Jiafu

    2018-01-01

    Paclitaxel-loaded polymeric micelles (PTX-PM) are commonly used as tumor-targeted nanocarriers and display outstanding antitumor features in the clinic, but their accumulation and distribution in vitro have not been well investigated. This is probably due to the complexity of the micellar system and its low concentration at the cellular and subcellular levels. In this study, we developed an improved extraction method, a combination of mechanical disruption and liquid-liquid extraction (LLE), to extract the total PTX from micelles in the cell lysate and subcellular compartments. An ultra-performance liquid chromatography tandem mass spectrometry (UPLC-MS/MS) method was optimized to detect the low concentrations of PTX at the cellular and subcellular levels simultaneously, using docetaxel as the internal standard (IS). The method was shown to release PTX almost completely from micelles (≥95.93%) with a consistent and reproducible extraction recovery (≥75.04%). Good linearity was obtained at concentrations ranging from 0.2 to 20 ng/mL. The relative error (RE%) for accuracy varied from 0.68 to 7.56%, and the intra- and inter-day precision (relative standard deviation, RSD%) was less than 8.64% and 13.14%, respectively. This method was fully validated and successfully applied to the cellular uptake and distribution study of PTX-loaded PLGA-PEG micelles in human breast cancer cells (MCF-7). Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
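
    The alternating scheme described in the abstract can be sketched on toy graphs: a closed-form ranking-score update against a weighted mixture of Laplacians, alternated with a weight update that favors graphs the scores are smooth on. The weight rule below is a common multi-graph heuristic and the toy graphs are assumptions; this is not necessarily the exact objective minimized by MultiG-Rank.

```python
import numpy as np

# Multi-graph regularized ranking sketch: scores f minimize
#   ||f - y||^2 + gamma * f^T (sum_i w_i L_i) f,
# alternating a closed-form score update with an inverse-smoothness
# weight update (a common heuristic, assumed here for illustration).

def multi_graph_rank(Ls, y, gamma=1.0, r=2.0, n_iter=20):
    m, n = len(Ls), len(y)
    w = np.full(m, 1.0 / m)                       # start with uniform weights
    for _ in range(n_iter):
        Lmix = sum(wi * Li for wi, Li in zip(w, Ls))
        f = np.linalg.solve(np.eye(n) + gamma * Lmix, y)    # score update
        s = np.array([max(f @ Li @ f, 1e-12) for Li in Ls]) # smoothness per graph
        w = s ** (1.0 / (1.0 - r))                # favor graphs f is smooth on
        w /= w.sum()
    return f, w

def chain_laplacian(n, k=1):
    """Laplacian of a chain graph connecting nodes within distance k."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - k), min(n, i + k + 1)):
            if i != j:
                W[i, j] = 1.0
    return np.diag(W.sum(1)) - W

n = 12
Ls = [chain_laplacian(n, 1), chain_laplacian(n, 3)]  # two candidate graphs
y = np.zeros(n)
y[0] = 1.0                                           # one "query" domain
f, w = multi_graph_rank(Ls, y)
```

    Scores diffuse from the query along whichever mixture of graphs the alternating updates settle on, so items "near" the query on the learned manifold rank highest.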

  11. Fluid queues and regular variation

    NARCIS (Netherlands)

    O.J. Boxma (Onno)

    1996-01-01

    This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail

  12. Extreme values, regular variation and point processes

    CERN Document Server

    Resnick, Sidney I

    1987-01-01

    Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...

  13. An analysis of global robust stability of uncertain cellular neural networks with discrete and distributed delays

    International Nuclear Information System (INIS)

    Park, Ju H.

    2007-01-01

    This paper considers the robust stability analysis of cellular neural networks with discrete and distributed delays. Based on the Lyapunov stability theory and linear matrix inequality (LMI) technique, a novel stability criterion guaranteeing the global robust convergence of the equilibrium point is derived. The criterion can be solved easily by various convex optimization algorithms. An example is given to illustrate the usefulness of our results

  14. Spine labeling in MRI via regularized distribution matching.

    Science.gov (United States)

    Hojjat, Seyed-Parsa; Ayed, Ismail; Garvin, Gregory J; Punithakumar, Kumaradevan

    2017-11-01

    This study investigates an efficient (nearly real-time) two-stage spine labeling algorithm that removes the need for external training while being applicable to different types of MRI data and acquisition protocols. Based solely on the image being labeled (i.e., we do not use training data), the first stage aims at detecting potential vertebra candidates following the optimization of a functional containing two terms: (i) a distribution-matching term that encodes contextual information about the vertebrae via a density model learned from a very simple user input, which amounts to a point (mouse click) on a predefined vertebra; and (ii) a regularization constraint, which penalizes isolated candidates in the solution. The second stage removes false positives and identifies all vertebrae and discs by optimizing a geometric constraint, which embeds generic anatomical information on the interconnections between neighboring structures. Based on generic knowledge, our geometric constraint does not require external training. We performed quantitative evaluations of the algorithm over a data set of 90 mid-sagittal MRI images of the lumbar spine acquired from 45 different subjects. To assess the flexibility of the algorithm, we used both T1- and T2-weighted images for each subject. A total of 990 structures were automatically detected/labeled and compared to ground-truth annotations by an expert. On the T2-weighted data, we obtained an accuracy of 91.6% for the vertebrae and 89.2% for the discs. On the T1-weighted data, we obtained an accuracy of 90.7% for the vertebrae and 88.1% for the discs. Our algorithm removes the need for external training while being applicable to different types of MRI data and acquisition protocols. Using the current testing data, a subject-specific density model and generic anatomical information, our method can achieve competitive performance when applied to T1- and T2-weighted MRI images.
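
    The distribution-matching idea in the first stage can be illustrated with a standard histogram-similarity score: compare a candidate region's intensity histogram against the density model learned from the single user click. The Bhattacharyya coefficient used below is a common choice for such matching, and the synthetic intensities, window sizes and bin counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Distribution-matching sketch: score a candidate region by the
# Bhattacharyya coefficient between its intensity histogram and a model
# density learned from one clicked vertebra. Synthetic data; illustrative
# bin count and intensity ranges.

def histogram(values, bins=32, lo=0.0, hi=1.0):
    h, _ = np.histogram(values, bins=bins, range=(lo, hi))
    return h / max(h.sum(), 1)                 # normalized density estimate

def bhattacharyya(p, q):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return float(np.sum(np.sqrt(p * q)))

rng = np.random.default_rng(3)
model = histogram(rng.normal(0.6, 0.05, 500).clip(0, 1))         # clicked vertebra
vertebra_like = histogram(rng.normal(0.6, 0.05, 500).clip(0, 1)) # true candidate
background = histogram(rng.normal(0.2, 0.05, 500).clip(0, 1))    # dark background
print(bhattacharyya(model, vertebra_like), bhattacharyya(model, background))
```

    Regions whose histograms match the clicked vertebra's density model score near 1 and survive as candidates; mismatched background regions score near 0.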

  15. Expression and cellular distribution of ubiquitin in response to injury in the developing spinal cord of Monodelphis domestica.

    Directory of Open Access Journals (Sweden)

    Natassya M Noor

    Ubiquitin, an 8.5 kDa protein associated with the proteasome degradation pathway, has recently been identified as differentially expressed in the segment of cord caudal to the site of injury in the developing spinal cord. Here we describe ubiquitin expression and cellular distribution in the spinal cord up to postnatal day P35 in control opossums (Monodelphis domestica) and in response to complete spinal transection (T10) at P7, when axonal growth through the site of injury occurs, and at P28, when this is no longer possible. Cords were collected 1 or 7 days after injury, with age-matched controls, and segments rostral to the lesion were studied. Following spinal injury, ubiquitin levels (western blotting) appeared reduced compared to controls, especially one day after injury at P28. In contrast, after injury mRNA expression (qRT-PCR) was slightly increased at P7 but decreased at P28. Changes in the isoelectric point of separated ubiquitin indicated possible post-translational modifications. Cellular distribution demonstrated a developmental shift between the earliest (P8) and latest (P35) ages examined, from predominantly cytoplasmic immunoreactivity to nuclear expression; the staining level and the shift to nuclear staining were more pronounced following injury, except 7 days after transection at P28. After injury at P7, immunostaining increased in neurons, and additionally in oligodendrocytes at P28. Mass spectrometry showed two ubiquitin bands; the heavier was identified as a fusion product, likely to be a ubiquitin precursor. The apparent changes in ubiquitin expression and cellular distribution in development and in response to spinal injury suggest an intricate regulatory system that modulates these responses and, when better understood, may lead to potential therapeutic targets.

  16. The Evolution of Frequency Distributions: Relating Regularization to Inductive Biases through Iterated Learning

    Science.gov (United States)

    Reali, Florencia; Griffiths, Thomas L.

    2009-01-01

    The regularization of linguistic structures by learners has played a key role in arguments for strong innate constraints on language acquisition, and has important implications for language evolution. However, relating the inductive biases of learners to regularization behavior in laboratory tasks can be challenging without a formal model. In this…

  17. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  18. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-01-01

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  19. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  20. Fluid queues and regular variation

    NARCIS (Netherlands)

    Boxma, O.J.

    1996-01-01

    This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail gives rise to an even

  1. Spatial distribution patterns of energy deposition and cellular radiation effects in lung tissue following simulated exposure to alpha particles

    International Nuclear Information System (INIS)

    Hofmann, W.; Crawford-Brown, D.J.

    1990-01-01

    Randomly oriented sections of rat tissue have been digitised to provide the contours of tissue-air interfaces and the locations of individual cell nuclei in the alveolated region of the lung. Sources of alpha particles with varying irradiation geometries and densities are simulated to compute the resulting random pattern of cellular irradiation, i.e. the spatial coordinates, frequency, track length, and energy of traversals by the emitted alpha particles. Probabilities per unit track length, derived from experimental data on in vitro cellular inactivation and transformation, are then applied to the results of the alpha exposure simulations to yield an estimate of the number of both dead and viable transformed cells and their spatial distributions. If lung cancer risk is linearly related to the number of transformed cells, the carcinogenic risk for hot particles is always smaller than that for a uniform nuclide distribution of the same activity. (author)

  2. The Role of Instabilities on the Mechanical Response of Cellular Solids and Structures

    National Research Council Canada - National Science Library

    Kyriakides, S

    1997-01-01

    .... The relatively regular and periodic microstructure of these two-dimensional materials makes them excellent models for studying the mechanisms that govern the compressive response of cellular materials...

  3. Diverse Regular Employees and Non-regular Employment (Japanese)

    OpenAIRE

    MORISHIMA Motohiro

    2011-01-01

    Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...

  4. Quantitative SPECT reconstruction for brain distribution with a non-uniform attenuation using a regularizing method

    International Nuclear Information System (INIS)

    Soussaline, F.; Bidaut, L.; Raynaud, C.; Le Coq, G.

    1983-06-01

    An analytical solution to the SPECT reconstruction problem, in which the actual attenuation effect can be included, was developed using a regularizing iterative method (RIM). The potential of this approach in quantitative brain studies using a tracer for cerebrovascular disorders is now under evaluation. Mathematical simulations of a distributed activity in the brain surrounded by the skull, and physical phantom studies, were performed using a rotating-camera-based SPECT system, allowing calibration of the system and evaluation of the adapted method. In the simulation studies, the contrast obtained along a profile was less than 5%, the standard deviation 8%, and the quantitative accuracy 13%, for a uniform emission distribution of mean = 100 per pixel and a double attenuation coefficient of μ = 0.115 cm-1 and 0.5 cm-1. Clinical data obtained after injection of 123I (AMPI) were reconstructed using the RIM, with and without cerebrovascular diseases or lesion defects. Contour-finding techniques were used for delineation of the brain and the skull, and measured attenuation coefficients were assumed within these two regions. Using volumes of interest selected on homogeneous regions of one hemisphere and mirrored symmetrically, the statistical uncertainty for 300 K events in the tomogram was found to be 12%, and the index of symmetry was 4% for a normal distribution. These results suggest that quantitative SPECT reconstruction of brain distributions is feasible, and that, combined with an adapted tracer and an adequate model, physiopathological parameters could be extracted

  5. Electrically heated 3D-macro cellular SiC structures for ignition and combustion application

    International Nuclear Information System (INIS)

    Falgenhauer, Ralf; Rambacher, Patrick; Schlier, Lorenz; Volkert, Jochen; Travitzky, Nahum; Greil, Peter; Weclas, Miroslaw

    2017-01-01

Highlights: • 3D-printed macro-cellular SiC structure. • Directly integrated, electrically heated ignition element used in a combustion reactor. • Experimental investigation of the ignition process. - Abstract: The paper describes different aspects of porous combustion reactor operation, especially at cold-start conditions. Under cold-start conditions it is necessary to increase the internal energy of the combustion reactor, to accumulate enough energy inside its solid phase and to reach at least the ignition temperature on the reactor's inner surface. The most practicable method of preheating a cold porous reactor is to use its surface as a flame holder and to apply free-flame combustion as the heat source for the preheating process. This paper presents a new electrically heated ignition element that is integrated into a three-dimensional macro-cellular SiSiC reactor structure. For the development of the ignition element it was assumed that the element is made of the same material as the combustion reactor itself and is fully integrated within the three-dimensional macro-cellular structure of the combustion reactor. Additive manufacturing, such as three-dimensional (3D) printing, permits the production of regular SiSiC structures with constant strut thickness and a defined current flow path. To obtain a controlled temperature distribution on the ignition element it is necessary to control the current density distribution in the three-dimensional macro-cellular reactor structure. The ignition element is designed to act as an electrical resistance in an electric circuit, converting the flowing current into heat with the goal of reaching the highest temperature in the ignition region (glow plug). First experiments show that the ignition element integrated in a combustion reactor exhibits high dynamics and can be heated to temperatures well above 1000 °C in a very short time (approx. 800 ms) at a current of I = 150 A.
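The reported heat-up (above 1000 °C in roughly 800 ms at I = 150 A) can be sanity-checked with a simple adiabatic Joule-heating estimate, ΔT = I²Rt/(m·c_p). The strut mass, resistance and specific heat below are assumed order-of-magnitude values for a small SiSiC element, not figures from the paper.

```python
def joule_temperature_rise(current_a, resistance_ohm, time_s, mass_kg, cp_j_per_kg_k):
    """Adiabatic estimate: all electrical energy I^2 * R * t goes into
    heating the strut mass (losses to the surrounding structure are neglected)."""
    energy_j = current_a ** 2 * resistance_ohm * time_s
    return energy_j / (mass_kg * cp_j_per_kg_k)

# Assumed values (not from the paper): ~1 g ignition strut,
# ~40 mOhm resistance, SiC specific heat ~750 J/(kg K).
delta_t = joule_temperature_rise(current_a=150.0, resistance_ohm=0.04,
                                 time_s=0.8, mass_kg=0.001,
                                 cp_j_per_kg_k=750.0)
```

Under these assumptions the estimate lands in the same temperature regime as the reported measurement; a real element also loses heat to the surrounding structure, so this is only an upper-bound sketch.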

  6. Probabilistic cellular automata.

    Science.gov (United States)

    Agapie, Alexandru; Andreica, Anca; Giuclea, Marius

    2014-09-01

Cellular automata are binary lattices used for modeling complex dynamical systems. The automaton evolves iteratively from one configuration to another, using some local transition rule based on the number of ones in the neighborhood of each cell. With respect to the number of cells allowed to change per iteration, we speak of either synchronous or asynchronous automata. If randomness is involved to some degree in the transition rule, we speak of probabilistic automata; otherwise they are called deterministic. Whichever type of cellular automaton we are dealing with, the main theoretical challenge stays the same: starting from an arbitrary initial configuration, predict (with highest accuracy) the end configuration. If the automaton is deterministic, the outcome simplifies to one of two configurations, all zeros or all ones. If the automaton is probabilistic, the whole process is modeled by a finite homogeneous Markov chain, and the outcome is the corresponding stationary distribution. Based on our previous results for the asynchronous case, connecting the probability of a configuration in the stationary distribution to its number of zero-one borders, the article offers both numerical and theoretical insight into the long-term behavior of synchronous cellular automata.
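A minimal sketch of a synchronous probabilistic cellular automaton on a ring, in the spirit of the transition rules described above: every cell updates at once, becoming 1 with a probability that depends on the number of ones in its three-cell neighborhood. The rule probabilities are illustrative assumptions, not those of the article; the long-run average of the configuration approximates an expectation under the stationary distribution of the underlying Markov chain.

```python
import random

def step_sync(config, p):
    """Synchronous update: every cell changes simultaneously. A cell becomes
    1 with probability p[k], where k is the number of ones among the cell
    and its two ring neighbours."""
    n = len(config)
    return [1 if random.random() < p[config[(i - 1) % n] + config[i] + config[(i + 1) % n]]
            else 0
            for i in range(n)]

def simulate(n=64, steps=2000, seed=1):
    random.seed(seed)
    # Illustrative probabilistic rule: more ones nearby -> more likely to be 1.
    p = {0: 0.05, 1: 0.3, 2: 0.7, 3: 0.95}
    config = [random.randint(0, 1) for _ in range(n)]
    ones_fraction = []
    for _ in range(steps):
        config = step_sync(config, p)
        ones_fraction.append(sum(config) / n)
    # Average over the second half approximates the stationary expectation.
    burn = steps // 2
    return sum(ones_fraction[burn:]) / (steps - burn)

mean_ones = simulate()
```

Because the rule is probabilistic, the chain does not freeze at all-zeros or all-ones; instead the density of ones fluctuates around a stationary value estimated by `mean_ones`.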

  7. Variations and Regularities in the Hemispheric Distributions in Sunspot Groups of Various Classes

    Science.gov (United States)

    Gao, Peng-Xin

    2018-05-01

    The present study investigates the variations and regularities in the distributions in sunspot groups (SGs) of various classes in the northern and southern hemispheres from Solar Cycles (SCs) 12 to 23. Here, we use the separation scheme that was introduced by Gao, Li, and Li ( Solar Phys. 292, 124, 2017), which is based on A/U ( A is the corrected area of the SG, and U is the corrected umbral area of the SG), in order to separate SGs into simple SGs (A/U ≤ 4.5) and complex SGs (A/U > 6.2). The time series of Greenwich photoheliographic results from 1875 to 1976 (corresponding to complete SCs 12 - 20) and Debrecen photoheliographic data during the period 1974 - 2015 (corresponding to complete SCs 21 - 23) are used to show the distributions of simple and complex SGs in the northern and southern hemispheres. The main results we obtain are reported as follows: i) the larger of the maximum annual simple SG numbers in the two hemispheres and the larger of the maximum annual complex SG numbers in the two hemispheres occur in different hemispheres during SCs 12, 14, 18, and 19; ii) the relative changing trends of two curves - cumulative SG numbers in the northern and southern hemispheres - for simple SGs are different from those for complex SGs during SCs 12, 14, 18, and 21; and iii) there are discrepancies between the dominant hemispheres of simple and complex SGs for SCs 12, 14, 18, and 21.

  8. Numerical Study on Critical Wedge Angle of Cellular Detonation Reflections

    International Nuclear Information System (INIS)

    Gang, Wang; Kai-Xin, Liu; De-Liang, Zhang

    2010-01-01

The critical wedge angle (CWA) for the transition from regular reflection (RR) to Mach reflection (MR) of a cellular detonation wave is studied numerically by an improved space-time conservation element and solution element method together with a two-step chemical reaction model. The accuracy of the numerical method is verified by simulating cellular detonation reflections at a 19.3° wedge. Planar and cellular detonation reflections over 45°–55° wedges are also simulated. When the cellular detonation wave passes over a 50° wedge, the numerical results show a new phenomenon: RR and MR occur alternately. The transition process between RR and MR is investigated with local pressure contours. Numerical analysis shows that the cellular structure is the essential reason for the new phenomenon and that the CWA of detonation reflection is not a single angle but an angle range. (fundamental areas of phenomenology (including applications))

  9. Structural characterization of the packings of granular regular polygons.

    Science.gov (United States)

    Wang, Chuncheng; Dong, Kejun; Yu, Aibing

    2015-12-01

    By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.

  10. Increasing cellular coverage within integrated terrestrial/satellite mobile networks

    Science.gov (United States)

    Castro, Jonathan P.

    1995-01-01

When applying the hierarchical cellular concept, the satellite acts as a giant umbrella cell covering a region containing some terrestrial cells. If a mobile terminal traversing the region arrives at the boundary of regular cellular ground service, a network transition occurs and the satellite system continues the mobile coverage. To adequately assess the service boundaries of a mobile satellite system and a cellular network within an integrated environment, this paper provides an optimized scheme to predict when a network transition may be necessary. Under the assumption of a classified propagation phenomenon and lognormal shadowing, the study applies an analytical approach to estimate the location of a mobile terminal based on reception of the signal strength emitted by a base station.

  11. Likelihood ratio decisions in memory: three implied regularities.

    Science.gov (United States)

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
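The likelihood-ratio decision axis described above can be sketched for the simplest of the signal-detection models; the equal-variance Gaussian parameterization below (new items ~ N(0, 1), old items ~ N(d′, σ)) is a standard textbook setup used here for illustration, not the article's full four-model analysis.

```python
import math

def log_likelihood_ratio(x, d_prime=1.0, sigma_old=1.0):
    """Log of f_old(x) / f_new(x) for the Gaussian recognition model:
    new items ~ N(0, 1), old (studied) items ~ N(d_prime, sigma_old)."""
    log_old = -0.5 * ((x - d_prime) / sigma_old) ** 2 - math.log(sigma_old)
    log_new = -0.5 * x ** 2
    return log_old - log_new

def decide(x, criterion=0.0):
    """Call an item 'old' when the log likelihood ratio exceeds the criterion."""
    return "old" if log_likelihood_ratio(x) > criterion else "new"
```

With equal variances the log likelihood ratio is monotone in the memory-strength value x and crosses zero at x = d′/2, so the decision rule reduces to a criterion on strength; the unequal-variance models generate the mirror and z-ROC length effects discussed in the abstract.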

  12. Annotation of regular polysemy and underspecification

    DEFF Research Database (Denmark)

    Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria

    2013-01-01

We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...

  13. THE REGULARITIES OF THE SPACE-TEMPORAL DISTRIBUTION OF THE RADIATION BALANCE OF THE UNDERLYING SURFACE IN ARAKS BASIN ON MOUNTAINOUS TERRITORY OF THE REPUBLIC OF ARMENIA

    Directory of Open Access Journals (Sweden)

    V. G. Margaryan

    2017-12-01

    Full Text Available The regularities of the space-temporal distribution of the radiation balance of the underlying surface for the conditions of the mountainous territory of the Republic of Armenia were discussed and analyzed.

  14. Genus Ranges of 4-Regular Rigid Vertex Graphs.

    Science.gov (United States)

    Buck, Dorothy; Dolzhenko, Egor; Jonoska, Nataša; Saito, Masahico; Valencia, Karin

    2015-01-01

A rigid vertex of a graph is one that has a prescribed cyclic order of its incident edges. We study orientable genus ranges of 4-regular rigid vertex graphs. The (orientable) genus range is the set of genus values over all orientable surfaces into which a graph is embedded cellularly, where the embeddings of rigid vertex graphs are required to preserve the prescribed cyclic order of incident edges at every vertex. The genus ranges of 4-regular rigid vertex graphs are sets of consecutive integers, and we address two questions: which intervals of integers appear as genus ranges of such graphs, and what types of graphs realize a given genus range. For graphs with 2n vertices (n > 1), we prove that all intervals [a, b] with a < b ≤ n are realized as genus ranges. For graphs with 2n − 1 vertices (n ≥ 1), we prove that all intervals [a, b] with a < b ≤ n are realized as genus ranges. We also provide constructions of graphs that realize these ranges.

  15. Iterative Method of Regularization with Application of Advanced Technique for Detection of Contours

    International Nuclear Information System (INIS)

    Niedziela, T.; Stankiewicz, A.

    2000-01-01

This paper proposes a novel iterative method of regularization with application of an advanced technique for detection of contours. To eliminate noise, the properties of convolution of functions are utilized. The method can be implemented in a simple cellular neural network, which creates the possibility of extracting contours with automatic image-recognition equipment. (author)

  16. Energy Distribution of a Regular Black Hole Solution in Einstein-Nonlinear Electrodynamics

    Directory of Open Access Journals (Sweden)

    I. Radinschi

    2015-01-01

    Full Text Available A study about the energy momentum of a new four-dimensional spherically symmetric, static and charged, regular black hole solution developed in the context of general relativity coupled to nonlinear electrodynamics is presented. Asymptotically, this new black hole solution behaves as the Reissner-Nordström solution only for the particular value μ=4, where μ is a positive integer parameter appearing in the mass function of the solution. The calculations are performed by use of the Einstein, Landau-Lifshitz, Weinberg, and Møller energy momentum complexes. In all the aforementioned prescriptions, the expressions for the energy of the gravitating system considered depend on the mass M of the black hole, its charge q, a positive integer α, and the radial coordinate r. In all these pseudotensorial prescriptions, the momenta are found to vanish, while the Landau-Lifshitz and Weinberg prescriptions give the same result for the energy distribution. In addition, the limiting behavior of the energy for the cases r→∞, r→0, and q=0 is studied. The special case μ=4 and α=3 is also examined. We conclude that the Einstein and Møller energy momentum complexes can be considered as the most reliable tools for the study of the energy momentum localization of a gravitating system.

  17. Coordinate-invariant regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-01-01

    A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc

  18. Endpoint singularities in unintegrated parton distributions

    CERN Document Server

    Hautmann, F

    2007-01-01

    We examine the singular behavior from the endpoint region x -> 1 in parton distributions unintegrated in both longitudinal and transverse momenta. We identify and regularize the singularities by using the subtraction method, and compare this with the cut-off regularization method. The counterterms for the distributions with subtractive regularization are given in coordinate space by compact all-order expressions in terms of eikonal-line operators. We carry out an explicit calculation at one loop for the unintegrated quark distribution. We discuss the relation of the unintegrated parton distributions in subtractive regularization with the ordinary parton distributions.

  19. Interaction of HSP20 with a viral RdRp changes its sub-cellular localization and distribution pattern in plants.

    Science.gov (United States)

    Li, Jing; Xiang, Cong-Ying; Yang, Jian; Chen, Jian-Ping; Zhang, Heng-Mu

    2015-09-11

    Small heat shock proteins (sHSPs) perform a fundamental role in protecting cells against a wide array of stresses but their biological function during viral infection remains unknown. Rice stripe virus (RSV) causes a severe disease of rice in Eastern Asia. OsHSP20 and its homologue (NbHSP20) were used as baits in yeast two-hybrid (YTH) assays to screen an RSV cDNA library and were found to interact with the viral RNA-dependent RNA polymerase (RdRp) of RSV. Interactions were confirmed by pull-down and BiFC assays. Further analysis showed that the N-terminus (residues 1-296) of the RdRp was crucial for the interaction between the HSP20s and viral RdRp and responsible for the alteration of the sub-cellular localization and distribution pattern of HSP20s in protoplasts of rice and epidermal cells of Nicotiana benthamiana. This is the first report that a plant virus or a viral protein alters the expression pattern or sub-cellular distribution of sHSPs.

  20. Cellular dosimetry in nuclear medicine imaging: training

    International Nuclear Information System (INIS)

    Gardin, I.; Faraggi, M.; Stievenart, J.L.; Le Guludec, D.; Bok, B.

    1998-01-01

The radionuclides used in nuclear medicine imaging emit not only diagnostically useful photons but also low-energy electron emissions, which are responsible for dose heterogeneity at the cellular level. The mean dose delivered to the cell nucleus by the electron emissions of 99mTc, 123I, 111In, 67Ga and 201Tl has been calculated for a cell nucleus, a cytoplasmic and a cell membrane distribution of radioactivity. The model takes into account both the self-dose, which results from the radionuclide located in the target cell, and the cross-dose, which comes from the surrounding cells. The results obtained by cellular dosimetry (D_cel) have been compared with those obtained with conventional dosimetry (D_conv), assuming the same amount of radioactivity per cell. For cytoplasmic and cell membrane distributions of radioactivity, cellular dosimetry shows that the main contribution to the dose to the cell nucleus comes from the surrounding cells. On the other hand, for a cell nucleus distribution of radioactivity, the self-dose is not negligible and may be the main contribution. The comparison between cellular and conventional dosimetry shows that the D_cel/D_conv ratio ranges from 0.61 to 0.89 for cytoplasmic and cell membrane distributions of radioactivity, depending on the radionuclide and cell dimensions. Thus, conventional dosimetry slightly overestimates the mean dose to the cell nucleus. On the other hand, D_cel/D_conv ranges from 1.1 to 75 for a cell nucleus distribution of radioactivity. Conventional dosimetry may therefore strongly underestimate the absorbed dose to the nucleus when radioactivity is located in the nucleus. The study indicates that in nuclear medicine imaging, cellular dosimetry may lead to a better understanding of the biological effects of radiopharmaceuticals. (authors)

  1. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of the conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate them, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
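The core construction above, viewing the mesh as a graph and filtering in the spectrum of its Laplacian, can be sketched in a few lines. The band-pass kernel g(x) = x·exp(−x) and the path graph are illustrative choices, not the filters or meshes used in the paper.

```python
import numpy as np

def path_graph_laplacian(n):
    """Combinatorial Laplacian L = D - A of an undirected path graph,
    standing in here for a finite-element mesh viewed as a graph."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def graph_wavelet_operator(L, scale):
    """Spectral graph wavelet operator W_s = U g(s*Lambda) U^T with the
    simple band-pass kernel g(x) = x * exp(-x) (an illustrative choice)."""
    lam, U = np.linalg.eigh(L)
    g = scale * lam * np.exp(-scale * lam)
    return U @ np.diag(g) @ U.T

L = path_graph_laplacian(8)
W = graph_wavelet_operator(L, scale=1.0)
# g(0) = 0, so the constant (DC) component of a signal is annihilated:
band_passed = W @ np.ones(8)
```

Because the kernel vanishes at eigenvalue zero, constant signals pass to (numerically) zero while mid-frequency structure is retained, which is what gives the regularizer its localized smoothing behavior.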

  2. Regular distributive efficiency and the distributive liberal social contract.

    OpenAIRE

    Jean Mercier Ythier

    2009-01-01

We consider abstract social systems of private property, made of n individuals endowed with non-paternalistic interdependent preferences, who interact through exchanges on competitive markets and Pareto-efficient lump-sum transfers. The transfers follow from a distributive liberal social contract, defined as a redistribution of initial endowments such that the resulting market equilibrium allocation is both Pareto-efficient relative to individual interdependent preferences and unanimously we...

  3. Meteoroid velocity distribution derived from head echo data collected at Arecibo during regular world day observations

    Directory of Open Access Journals (Sweden)

    M. P. Sulzer

    2004-01-01

Full Text Available We report the observation and analysis of ionization flashes associated with the decay of meteoroids (so-called head echoes) detected by the Arecibo 430 MHz radar during regular ionospheric observations at the spring and autumn equinoxes. These two periods allow pointing well above and nearly into the ecliptic plane at dawn, when the event rate maximizes. The observation of many thousands of events allows a statistical interpretation of the results, which shows a strong tendency for the observed meteoroids to come from the apex, as has been previously reported (Chau and Woodman, 2004). The velocity distributions agree with Janches et al. (2003a) where they are directly comparable, but the azimuth scan used in these observations allows a new perspective. We have constructed a simple statistical model that takes meteor velocities as input and gives radar line-of-sight velocities as output, with the intent of explaining the fastest part of the velocity distribution. Since the speeds interpreted from the measurements are distributed fairly narrowly about nearly 60 km s⁻¹, double the speed of the Earth in its orbit, they are consistent with the interpretation that many of the meteoroids seen by the Arecibo radar are moving in orbits about the Sun with parameters similar to the Earth's, but in the retrograde direction. However, it is the directional information obtained from the beam-swinging radar experiment, together with the speed, that provides the evidence for this interpretation. Some aspects of the measured velocity distributions suggest that this is not a complete description even of the fast part of the distribution, and it certainly says nothing about the slow part first described in Janches et al. (2003a). Furthermore, we cannot conclude anything about the entire dust population, since there are probably selection effects that restrict the observations to a subset of the population.

  4. Clustering and cellular distribution characteristics of virus particles of Tomato spotted wilt virus and Tomato zonate spot virus in different plant hosts.

    Science.gov (United States)

    Zhang, Zhongkai; Zheng, Kuanyu; Dong, Jiahong; Fang, Qi; Hong, Jian; Wang, Xifeng

    2016-01-19

Tomato spotted wilt virus (TSWV) and Tomato zonate spot virus (TZSV), the two dominant species of thrip-transmitted tospoviruses, cause significant losses in crop yield in Yunnan and its neighboring provinces in China. TSWV and TZSV belong to different serogroups of tospoviruses but induce similar symptoms in the same host plant species, which makes diagnosis difficult. We used different electron microscopy preparation methods to investigate the clustering and cellular distribution of TSWV and TZSV in their host plant species. Negative staining of samples infected with TSWV and TZSV revealed that particles usually clustered in vesicles, as single particles (SP), double-particle clusters (DPC) or triple-particle clusters (TPC). In immunogold-labeling negative staining against proteins of TZSV, the antibodies against the Gn protein stained more strongly than those against the N protein. Ultrathin sectioning and high-pressure freezing (HPF) electron microscopy preparations revealed that TSWV particles were distributed in the cisternae of the endoplasmic reticulum (ER), in filamentous inclusions (FI) and in Golgi bodies in the mesophyll cells. The TSWV particles clustered as multiple-particle clusters (MPC) and were distributed in globular viroplasm or cisternae of the ER in top leaf cells. TZSV particles were distributed more abundantly in the swollen membranes of the ER in mesophyll cells than in phloem parenchyma cells, and were not observed in top leaf cells. However, TZSV virions were mainly present as single particles in the cytoplasm, with few clustering as MPC. In this study, we identified distinct cellular distribution patterns of TSWV and TZSV particles in the cytoplasm of different tissues and host plants. This is the first report of the specific clustering characteristics of tospovirus particles, as well as of the cellular distribution of TSWV particles in the FI and globular viroplasm and of TZSV particles inside the membrane of the ER. These results indicated that …

  5. Rapid establishment of a regular distribution of adult tropical Drosophila parasitoids in a multi-patch environment by patch defence behaviour.

    Science.gov (United States)

    de Jong, Peter W; Hemerik, Lia; Gort, Gerrit; van Alphen, Jacques J M

    2011-01-01

    Females of the larval parasitoid of Drosophila, Asobara citri, from sub-Saharan Africa, defend patches with hosts by fighting and chasing conspecific females upon encounter. Females of the closely related, palearctic species Asobara tabida do not defend patches and often search simultaneously in the same patch. The effect of patch defence by A. citri females on their distribution in a multi-patch environment was investigated, and their distributions were compared with those of A. tabida. For both species 20 females were released from two release-points in replicate experiments. Females of A. citri quickly reached a regular distribution across 16 patches, with a small variance/mean ratio per patch. Conversely, A. tabida females initially showed a clumped distribution, and after gradual dispersion, a more Poisson-like distribution across patches resulted (variance/mean ratio was closer to 1 and higher than for A. citri). The dispersion of A. tabida was most probably an effect of exploitation: these parasitoids increasingly made shorter visits to already exploited patches. We briefly discuss hypotheses on the adaptive significance of patch defence behaviour or its absence in the light of differences in the natural history of both parasitoid species, notably the spatial distribution of their hosts.
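The variance/mean ratio (index of dispersion) used above to distinguish regular from clumped distributions across patches is easy to compute; the patch counts below are invented for illustration, not data from the experiments.

```python
def dispersion_index(counts):
    """Variance/mean ratio of per-patch counts: ~1 for a Poisson (random)
    distribution, < 1 for a regular distribution, > 1 for a clumped one."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return var / mean

# Hypothetical counts of 20 parasitoids over 16 patches.
regular = [1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1]   # evenly spread
clumped = [0, 0, 7, 0, 0, 6, 0, 0, 0, 5, 0, 0, 0, 2, 0, 0]   # aggregated
```

Both lists contain the same total number of animals, yet the regular spread yields a ratio well below 1 and the aggregated one a ratio well above 1, mirroring the contrast between A. citri and the initially clumped A. tabida.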

  6. Selection of regularization parameter for l1-regularized damage detection

    Science.gov (United States)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
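The second strategy above (the discrepancy principle) can be sketched for a generic l1-regularized least-squares problem, solved here with plain iterative soft-thresholding (ISTA). The test matrix, noise level and candidate λ grid are illustrative assumptions, not the beam or frame models of the paper.

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Minimise 0.5*||Ax - y||^2 + lam*||x||_1 by iterative
    soft-thresholding, a standard solver for the l1 problem."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - y)              # gradient step on fidelity
        x = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)  # soft threshold
    return x

def select_lambda(A, y, lambdas, noise_std):
    """Discrepancy principle: pick the lambda whose residual norm is
    closest to the expected noise norm sqrt(m) * noise_std."""
    target = noise_std * np.sqrt(len(y))
    return min(lambdas,
               key=lambda lam: abs(np.linalg.norm(A @ ista(A, y, lam) - y) - target))

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[3, 17]] = [2.0, -1.5]                         # sparse "damage" vector
y = A @ x_true + 0.01 * rng.standard_normal(20)
lam_star = select_lambda(A, y, [0.001, 0.01, 0.1, 1.0], noise_std=0.01)
```

In line with the abstract, a whole range of λ values near `lam_star` typically identifies the sparse support correctly; too small a λ overfits the noise, and a very large one drives the solution to zero.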

  7. Cellular vs. organ approaches to dose estimates

    International Nuclear Information System (INIS)

    Adelstein, S.J.; Kassis, A.I.; Sastry, K.S.R.

    1986-01-01

    The cellular distribution of tissue-incorporated radionuclides has generally been neglected in the dosimetry of internal emitters. Traditional dosimetry assumes homogeneous distribution of radionuclides in organs of interest, while presuming that the ranges of particulate radiations are large relative to typical cell diameters. The macroscopic distribution of dose thus calculated has generally served as a sufficient approximation for the energy deposited within radiosensitive sites. However, with the increasing utilization of intracellular agents, such as thallium-201, it has become necessary to examine the microscopic distribution of energy at the cellular level. This is particularly important in the instance of radionuclides that decay by electron capture or by internal conversion with the release of Auger and Coster-Kronig electrons. In many instances, these electrons are released as a dense shower of low-energy particles with ranges of subcellular dimensions. The high electron density in the immediate vicinity of the decaying atom produces a focal deposition of energy that far exceeds the average dose taken over several cell diameters. These studies point out the increasing need to take into account the microscopic distribution of dose on the cellular level as radionuclides distributed in cells become more commonplace, especially if the decay involves electron capture or internal conversion. As radiotracers are developed for the measurement of intracellular functions these factors should be given greater consideration. 16 references, 5 figures, 5 tables

  8. Clustered, regularly interspaced short palindromic repeat (CRISPR) diversity and virulence factor distribution in avian Escherichia coli.

    Science.gov (United States)

    Fu, Qiang; Su, Zhixin; Cheng, Yuqiang; Wang, Zhaofei; Li, Shiyu; Wang, Heng'an; Sun, Jianhe; Yan, Yaxian

In order to investigate the diverse characteristics of clustered, regularly interspaced short palindromic repeat (CRISPR) arrays and the distribution of virulence factor genes in avian Escherichia coli, 80 E. coli isolates obtained from chickens, comprising avian pathogenic E. coli (APEC) and avian fecal commensal E. coli (AFEC), were identified. Using multiplex polymerase chain reaction (PCR), five genes were used for phylogenetic typing, and the strains were examined for CRISPR arrays to study their genetic relatedness. The strains were further analyzed for CRISPR loci and virulence factor genes to determine a possible association between their CRISPR elements and their potential virulence. The strains were divided into five phylogenetic groups: A, B1, B2, D and E. It was confirmed that two types of CRISPR arrays, CRISPR1 and CRISPR2, which contain up to 246 distinct spacers, were amplified in most of the strains. Further classification of the isolates was achieved by sorting them into nine CRISPR clusters based on their spacer profiles, which suggests a candidate typing method for E. coli. Several significant differences in invasion-associated gene distribution were found between the APEC and AFEC isolates. Our results identified the distribution of 11 virulence genes and the CRISPR diversity in the 80 strains. It was demonstrated that, with the exception of iucD and aslA, there was no sharp demarcation in gene distribution between the pathogenic (APEC) and commensal (AFEC) strains, while the total number of identified CRISPR spacers may have a positive correlation with the potential pathogenicity of the E. coli isolates. Copyright © 2016. Published by Elsevier Masson SAS.

  9. Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation

    International Nuclear Information System (INIS)

    Bardsley, Johnathan M; Goldes, John

    2009-01-01

    In image processing applications, image intensity is often measured via the counting of incident photons emitted by the object of interest. In such cases, image data noise is accurately modeled by a Poisson distribution. This motivates the use of Poisson maximum likelihood estimation for image reconstruction. However, when the underlying model equation is ill-posed, regularization is needed. Regularized Poisson likelihood estimation has been studied extensively by the authors, though a problem of high importance remains: the choice of the regularization parameter. We will present three statistically motivated methods for choosing the regularization parameter, and numerical examples will be presented to illustrate their effectiveness
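
    The setup described above can be sketched in a few lines. The following is an illustrative example only (not the authors' algorithms or data): a 1-D Poisson deblurring problem solved by one-step-late MAP-EM with a Tikhonov penalty, where the regularization parameter is selected by a discrepancy-principle-style rule; all sizes, names, and the alpha grid are assumptions.

    ```python
    import numpy as np

    # Hypothetical 1-D deblurring problem with Poisson-distributed counts.
    rng = np.random.default_rng(0)
    n = 50
    t = np.arange(n)
    A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)  # Gaussian blur
    A /= A.sum(axis=1, keepdims=True)                          # ill-conditioned forward model
    x_true = np.zeros(n)
    x_true[20:30] = 100.0
    b = rng.poisson(A @ x_true).astype(float)

    def map_em(alpha, iters=500):
        """One-step-late MAP-EM for  min_x sum(Ax - b log Ax) + alpha/2 ||x||^2."""
        x = np.ones(n)
        s = A.sum(axis=0)                  # A^T 1
        for _ in range(iters):
            x = x * (A.T @ (b / np.maximum(A @ x, 1e-12))) / (s + alpha * x)
        return x

    # Discrepancy-principle-style selection: decrease alpha until the
    # chi^2-like weighted residual falls to the expected noise level (~ n).
    for alpha in [1e0, 1e-1, 1e-2, 1e-3, 1e-4]:
        x = map_em(alpha)
        Ax = A @ x
        discrepancy = np.sum((Ax - b) ** 2 / np.maximum(Ax, 1e-12))
        if discrepancy <= n:
            break
    ```

    The paper's statistically motivated selection rules are more refined; this sketch only shows the generic structure of scanning a parameter grid against a residual criterion.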

  10. Some regularities in the distribution of kenophytes in the Polish Carpathians and their foreland

    Directory of Open Access Journals (Sweden)

    Zając Maria

    2015-03-01

    Full Text Available The Polish Carpathians and their northern foreland are a rewarding area for research on kenophyte distribution. The study, using the cartogram method, showed that the number of kenophyte species decreases with increasing altitude; only a few kenophytes were found in the lower forest zone. This regularity also applies to species that reach higher altitudes in the mountains of their native ranges. A number of species migrated into the Carpathians along rivers and streams: river valleys generate many open habitats, which are easily colonized by kenophytes owing to the lack of competition. In the Carpathians, towns were traditionally founded in the mountain valleys, which further favoured kenophyte propagation. The arrangement of mountain ranges in the Polish Carpathians and their foreland hindered the migration of some species and made it possible to reconstruct likely migration routes into the study area. Tracing these migration routes was possible only for those species that have not yet occupied the whole available area. Additionally, the study identified the most dangerous invasive species in the Polish Carpathians and their foreland.

  11. Distinct cellular and subcellular distributions of G protein-coupled receptor kinase and arrestin isoforms in the striatum.

    Directory of Open Access Journals (Sweden)

    Evgeny Bychkov

    Full Text Available G protein-coupled receptor kinases (GRKs) and arrestins mediate desensitization of G protein-coupled receptors (GPCRs). Arrestins also mediate G protein-independent signaling via GPCRs. Since GRKs and arrestins demonstrate no strict receptor specificity, their functions in the brain may depend on their cellular complement, expression level, and subcellular targeting. However, the cellular expression and subcellular distribution of GRKs and arrestins in the brain are largely unknown. We show that the GRK isoforms GRK2 and GRK5 are similarly expressed in direct and indirect pathway neurons in the rat striatum. Arrestin-2 and arrestin-3 are also expressed in neurons of both pathways. Cholinergic interneurons are enriched in GRK2, arrestin-3, and GRK5. Parvalbumin-positive interneurons express more GRK2 and less arrestin-2 than medium spiny neurons. The subcellular distribution of GRK5 in human striatal neurons is altered by its phosphorylation: the unphosphorylated enzyme preferentially localizes to synaptic membranes, whereas phosphorylated GRK5 is found in plasma membrane and cytosolic fractions. Both GRK isoforms are abundant in the nucleus of human striatal neurons, whereas the proportion of both arrestins in the nucleus is equally low. However, the overall higher expression of arrestin-2 yields a nuclear concentration high enough to mediate nuclear functions. These data suggest cell type- and subcellular compartment-dependent differences in GRK/arrestin-mediated desensitization and signaling.

  12. Application of a cellular automaton for the evolution of etched nuclear tracks

    International Nuclear Information System (INIS)

    Cruz-Trujillo, Leonardo de la; Hernández-Hernández, C.; Vázquez-López, C.; Zendejas-Leal, B.E.; Golzarri, I.; Espinosa, G.

    2013-01-01

    In the present work, the first application of cellular automata to the growth of etched nuclear tracks is demonstrated. The simplest case, in which conical etched tracks are gradually formed, is presented, as well as a general case with a time-varying etch rate V_t. It is shown that the cellular automaton consists of an image pattern of the latent nuclear track as input cells, 16 rules for updating states, the Moore neighborhood and an algorithm of four states. - Highlights: ► We model the evolution of an etched nuclear track using cellular automata (CA). ► A cellular automaton of a conical track has 4 states and 16 transition rules. ► The CA of general tracks requires a non-regular mesh and the L(t) and V_b parameters
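
    A Moore-neighborhood growth automaton of the kind used here can be sketched very compactly. The following is an illustrative two-state version only, in which the etched region grows by one cell per step from a seed cell; the paper's actual automaton has four states and 16 update rules, which are not reproduced.

    ```python
    import numpy as np

    # Two-state Moore-neighbourhood CA: a cell becomes 'etched' (1) if any
    # of its 8 neighbours is etched; etched cells stay etched.
    def step(grid):
        g = grid.astype(bool)
        p = np.pad(g, 1)
        neigh = np.zeros(g.shape, dtype=int)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di or dj:                  # the 8 Moore neighbours
                    neigh += p[1+di:1+di+g.shape[0], 1+dj:1+dj+g.shape[1]]
        return (g | (neigh > 0)).astype(int)

    grid = np.zeros((21, 21), dtype=int)
    grid[10, 10] = 1                          # latent-track seed cell
    for _ in range(5):
        grid = step(grid)
    # after t steps the etched region is a (2t+1) x (2t+1) Chebyshev ball
    ```

    The linear growth of the etched front with step number is what makes a conical track profile emerge when successive layers are stacked in time.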

  13. A local cellular model for growth on quasicrystals

    International Nuclear Information System (INIS)

    Chidyagwai, Prince; Reiter, Clifford A.

    2005-01-01

    The growth of real-valued cellular automata using a deterministic algorithm on 2-dimensional quasicrystalline structures is investigated. Quasicrystals are intermediate between the rigid organization of crystals and disorganized random structures. Since the quasicrystalline structures may be highly symmetric or not, we are able to obtain both highly organized and relatively random growth patterns. This deterministic growth produces dendritic, sector, stellar, regular polygonal, round, and random DLA-like structures

  14. Dynamics of coherent states in regular and chaotic regimes of the non-integrable Dicke model

    Science.gov (United States)

    Lerma-Hernández, S.; Chávez-Carlos, J.; Bastarrachea-Magnani, M. A.; López-del-Carpio, B.; Hirsch, J. G.

    2018-04-01

    The quantum dynamics of initial coherent states is studied in the Dicke model and correlated with the dynamics, regular or chaotic, of their classical limit. Analytical expressions for the survival probability, i.e. the probability of finding the system in its initial state at time t, are provided in the regular regions of the model. The results for regular regimes are compared with those of the chaotic ones. It is found that initial coherent states in regular regions have a much longer equilibration time than those located in chaotic regions. The properties of the distributions of the initial coherent states in the Hamiltonian eigenbasis are also studied. It is found that for regular states the components with non-negligible contributions are organized in sequences of energy levels distributed according to Gaussian functions. In the case of chaotic coherent states, the energy components do not have a simple structure and the number of participating energy levels is larger than in the regular cases.
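
    For reference, the survival probability has a standard closed form in the Hamiltonian eigenbasis (notation ours): writing the initial coherent state as a superposition of eigenstates with amplitudes c_k,

    ```latex
    |\Psi(0)\rangle=\sum_{k}c_{k}\,|E_{k}\rangle ,\qquad
    S(t)=\bigl|\langle\Psi(0)|\Psi(t)\rangle\bigr|^{2}
        =\Bigl|\sum_{k}|c_{k}|^{2}\,e^{-iE_{k}t}\Bigr|^{2}.
    ```

    The Gaussian organization of the weights |c_k|^2 over sequences of levels, reported for regular states, is what makes an analytical evaluation of this sum tractable.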

  15. Geometrical bucklings for two-dimensional regular polygonal regions using the finite Fourier transformation

    International Nuclear Information System (INIS)

    Mori, N.; Kobayashi, K.

    1996-01-01

    A two-dimensional neutron diffusion equation is solved for regular polygonal regions by the finite Fourier transformation, and geometrical bucklings are calculated for regular polygons with 3 to 10 sides. In the case of the regular triangular region, it is found that a simple and rigorous analytic solution is obtained for the geometrical buckling and the distribution of the neutron current along the outer boundary. (author)
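
    For context, the geometrical buckling is the fundamental eigenvalue of the Helmholtz problem on the region with zero flux on the outer boundary. For the equilateral triangle of side a, the closed form below is the classical Lamé result quoted from the standard literature, not taken from the paper, and should be checked against the paper's boundary treatment:

    ```latex
    \nabla^{2}\phi + B_{g}^{2}\,\phi = 0,\qquad \phi\big|_{\partial\Omega}=0;
    \qquad\text{equilateral triangle of side }a:\quad
    B_{g}^{2}=\frac{16\pi^{2}}{3a^{2}}.
    ```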

  16. Effect of solid distribution on elastic properties of open-cell cellular solids using numerical and experimental methods.

    Science.gov (United States)

    Zargarian, A; Esfahanian, M; Kadkhodapour, J; Ziaei-Rad, S

    2014-09-01

    Effect of solid distribution between edges and vertices of three-dimensional cellular solids with an open-cell structure was investigated both numerically and experimentally. Finite element analysis (FEA) with continuum elements and appropriate periodic boundary conditions was employed to calculate the elastic properties of cellular solids using the tetrakaidecahedral (Kelvin) unit cell. Relative densities between 0.01 and 0.1 and various values of solid fractions were considered. In order to validate the numerical model, three scaffolds with a relative density of 0.08, but different amounts of solid in the vertices, were fabricated via a 3-D printing technique. Good agreement was observed between the numerical simulation and the experimental results. Results of the numerical simulation showed that, at low relative densities, the elastic properties are sensitive to the solid fraction in the vertices. By fitting a curve to the data obtained from the numerical simulation and considering the relative density and solid fraction in vertices, empirical relations were derived for Young's modulus and Poisson's ratio. Copyright © 2014 Elsevier Ltd. All rights reserved.
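
    Fitting such an empirical relation is a one-line regression on log-log axes. The sketch below fits a plain Gibson-Ashby-style power law E/Es = C * rho^n to synthetic data; the density values, C and n are made up for illustration, and the paper's own expression additionally uses a quadratic term in the exponent rather than a single constant n.

    ```python
    import numpy as np

    rho = np.array([0.01, 0.02, 0.04, 0.06, 0.08, 0.10])   # relative density
    E_rel = 0.3 * rho ** 2.1                               # hypothetical E/Es data

    # log(E/Es) = log C + n log(rho): a straight line in log-log space.
    n_fit, logC_fit = np.polyfit(np.log(rho), np.log(E_rel), 1)
    C_fit = np.exp(logC_fit)
    ```

    With real simulation data the residuals of this straight-line fit reveal whether a density-dependent exponent (the paper's quadratic term) is needed.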

  17. Regular graph construction for semi-supervised learning

    International Nuclear Information System (INIS)

    Vega-Oliveros, Didier A; Berton, Lilian; Eberle, Andre Mantini; Lopes, Alneu de Andrade; Zhao, Liang

    2014-01-01

    Semi-supervised learning (SSL) stands out for using a small amount of labeled points for data clustering and classification. In this scenario, graph-based methods allow the analysis of local and global characteristics of the available data by identifying classes or groups regardless of data distribution and by representing submanifolds in Euclidean space. Most methods used in the literature for SSL classification pay little attention to graph construction. However, regular graphs can obtain better classification accuracy than traditional methods such as k-nearest neighbor (kNN), since kNN favours the generation of hubs and is not appropriate for high-dimensional data. Nevertheless, methods commonly used for generating regular graphs have a high computational cost. We tackle this problem by introducing an alternative method for the generation of regular graphs with better runtime performance than the methods usually found in the area. Our technique is based on the preferential selection of vertices according to topological measures, such as closeness, generating a regular graph at the end of the process. Experiments using the local and global consistency method for label propagation show that our method provides better or equal classification rates in comparison with kNN
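
    The local-and-global-consistency label propagation used for evaluation has a well-known closed form, F = (I - alpha*S)^(-1) Y. The sketch below runs it on a dense Gaussian affinity graph for brevity, not on the authors' regular graph; the toy data, bandwidth, and alpha are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),    # class 0 cluster
                   rng.normal(3.0, 0.3, (20, 2))])   # class 1 cluster
    labels = np.full(40, -1)
    labels[0], labels[20] = 0, 1                     # one labeled point per class

    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / 0.5)                            # Gaussian affinity
    np.fill_diagonal(W, 0.0)
    Dinv = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
    S = Dinv @ W @ Dinv                              # symmetric normalization

    Y = np.zeros((40, 2))
    Y[0, 0] = Y[20, 1] = 1.0                         # seed label matrix
    alpha = 0.9
    F = np.linalg.solve(np.eye(40) - alpha * S, Y)   # closed-form propagation
    pred = F.argmax(axis=1)
    ```

    Swapping W for a sparse regular graph changes only the affinity-construction step; the propagation itself is unchanged, which is why graph construction alone can drive the accuracy differences the paper reports.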

  18. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    Science.gov (United States)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2018-02-01

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV, spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem-mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. The algorithm is found to outperform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  19. Clustering, randomness, and regularity in cloud fields: 2. Cumulus cloud fields

    Science.gov (United States)

    Zhu, T.; Lee, J.; Weger, R. C.; Welch, R. M.

    1992-12-01

    During the last decade a major controversy has been brewing concerning the proper characterization of cumulus convection. The prevailing view has been that cumulus clouds form in clusters, in which cloud spacing is closer than that found for the overall cloud field and which maintain their identity over many cloud lifetimes. This "mutual protection hypothesis" of Randall and Huffman (1980) has been challenged by the "inhibition hypothesis" of Ramirez and Bras (1990), which strongly suggests that the spatial distribution of cumuli must tend toward a regular distribution. A dilemma has resulted because observations have been reported to support both hypotheses. The present work reports a detailed analysis of cumulus cloud field spatial distributions based upon Landsat, Advanced Very High Resolution Radiometer, and Skylab data. Both nearest-neighbor and point-to-cloud cumulative distribution function statistics are investigated. The results show unequivocally that when both large and small clouds are included in the cloud field distribution, the cloud field always has a strong clustering signal. The strength of clustering is largest at cloud diameters of about 200-300 m, diminishing with increasing cloud diameter. In many cases, clusters of small clouds are found which are not closely associated with large clouds. As the small clouds are eliminated from consideration, the cloud field typically tends towards regularity. Thus it would appear that the "inhibition hypothesis" of Ramirez and Bras (1990) has been verified for the large clouds. However, these results are based upon the analysis of point processes. A more exact analysis is also made which takes into account the cloud size distributions. Since distinct clouds are by definition nonoverlapping, cloud size effects place a restriction upon the possible locations of clouds in the cloud field. The net effect of this analysis is that the large clouds appear to be randomly distributed, with only weak tendencies towards
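
    A minimal version of the nearest-neighbor statistic behind such clustered/random/regular classifications is the Clark-Evans ratio R: R < 1 indicates clustering, R near 1 spatial randomness, R > 1 regularity. The sketch below applies it to a synthetic random pattern, not cloud data, and ignores edge corrections for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, area = 200, 1.0
    pts = rng.random((n, 2))                         # uniform points on the unit square

    # Mean nearest-neighbour distance of the observed pattern.
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    r_mean = d.min(axis=1).mean()

    # Expected value under complete spatial randomness (intensity n/area).
    r_expected = 0.5 / np.sqrt(n / area)
    R = r_mean / r_expected                          # ~1 for a random pattern
    ```

    Applied to cloud centroids this point-process statistic is exactly what the paper refines by additionally accounting for finite cloud sizes.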

  20. Toward robust high resolution fluorescence tomography: a hybrid row-action edge preserving regularization

    Science.gov (United States)

    Behrooz, Ali; Zhou, Hao-Min; Eftekhar, Ali A.; Adibi, Ali

    2011-02-01

    Depth-resolved localization and quantification of fluorescence distribution in tissue, called Fluorescence Molecular Tomography (FMT), is highly ill-conditioned, as depth information must be extracted from a limited number of surface measurements. Inverse solvers resort to regularization algorithms that penalize the Euclidean norm of the solution to overcome ill-posedness. While these regularization algorithms offer good accuracy, their smoothing effects produce continuous distributions that lack the high-frequency edge-type features of the actual fluorescence distribution and hence limit the resolution offered by FMT. We propose an algorithm that penalizes the total variation (TV) norm of the solution to preserve sharp transitions and high-frequency components in the reconstructed fluorescence map while overcoming ill-posedness. The hybrid algorithm is composed of two levels: 1) an Algebraic Reconstruction Technique (ART), performed on FMT data for fast recovery of a smooth solution that serves as an initial guess for the iterative TV regularization; 2) a time-marching TV regularization algorithm, inspired by the Rudin-Osher-Fatemi TV image restoration, performed on the initial guess to further enhance the resolution and accuracy of the reconstruction. The performance of the proposed method in resolving fluorescent tubes inserted in a liquid tissue phantom imaged by a non-contact CW trans-illumination FMT system is studied and compared to conventional regularization schemes. The proposed method is observed to perform better in resolving fluorescence inclusions at greater depths.
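
    The first stage, ART, is just a relaxed Kaczmarz sweep over the rows of the system matrix. The sketch below illustrates it on a small random consistent system, not FMT data; the problem size and relaxation value are assumptions.

    ```python
    import numpy as np

    # Relaxed Kaczmarz/ART for Ax = b: cyclically project the iterate onto
    # the hyperplane defined by each row.
    def art(A, b, sweeps=500, relax=1.0):
        x = np.zeros(A.shape[1])
        for _ in range(sweeps):
            for i in range(A.shape[0]):
                ai = A[i]
                x = x + relax * (b[i] - ai @ x) / (ai @ ai) * ai
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(30, 20))
    x_true = rng.normal(size=20)
    x_rec = art(A, A @ x_true)      # consistent system: ART converges to x_true
    ```

    In the hybrid scheme this smooth ART output becomes the initial guess handed to the time-marching TV stage.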

  1. Formation factor of regular porous pattern in poly-α-methylstyrene film

    International Nuclear Information System (INIS)

    Yang Ruizhuang; Xu Jiajing; Gao Cong; Ma Shuang; Chen Sufen; Luo Xuan; Fang Yu; Li Bo

    2015-01-01

    Regular poly-α-methylstyrene (PAMS) porous films with micron-sized cells were prepared by casting the solution under high-humidity conditions. In this paper, the effects of the molecular weight of PAMS, the PAMS concentration, humidity, temperature, volatile solvents and the thickness of the solution layer on the formation of a regular porous pattern in PAMS film are discussed. The results show that these factors significantly affect the pore size and the pore distribution. Capillary forces and Benard-Marangoni convection are the main driving forces moving the water droplets and arranging the pores regularly. (authors)

  2. A cardiac electrical activity model based on a cellular automata system in comparison with neural network model.

    Science.gov (United States)

    Khan, Muhammad Sadiq Ali; Yousuf, Sidrah

    2016-03-01

    Cardiac electrical activity is distributed in the three dimensions of cardiac tissue (the myocardium) and evolves over time. Indicators of heart disease can occur randomly at any time of day, so heart rate, conduction and each electrical event of the cardiac cycle should be monitored non-invasively to assess regular ("action potential") and irregular ("arrhythmia") rhythms. Many heart diseases can be examined through automata models such as cellular automata. This paper deals with the different states of cardiac rhythm using cellular automata, in comparison with a neural network model, and provides fast and highly effective simulation of the contraction of cardiac muscle in the atria resulting from the genesis of an electrical spark or wave. The formulated model, named "States of Automaton Proposed Model for CEA (Cardiac Electrical Activity)", uses a cellular automata methodology to represent the three conduction states of cardiac tissue: (i) resting (relaxed and excitable), (ii) ARP (excited but absolutely refractory, i.e. unable to excite neighboring cells) and (iii) RRP (excited but relatively refractory, i.e. able to excite neighboring cells). The results indicate efficient modeling of the action potential during the pumping of blood in the cardiac cycle, with a small computational burden.
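
    A three-state excitable-medium automaton of this kind can be sketched with Greenberg-Hastings-style rules. This is a generic illustration of the resting/excited/refractory cycle, not the authors' exact model: 0 = resting (excitable), 1 = excited and able to excite neighbours (RRP-like), 2 = absolutely refractory (ARP-like).

    ```python
    import numpy as np

    def step(g):
        p = np.pad(g, 1)
        excited_nb = np.zeros(g.shape, dtype=bool)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di or dj:                  # Moore neighbourhood
                    excited_nb |= p[1+di:1+di+g.shape[0], 1+dj:1+dj+g.shape[1]] == 1
        new = np.where(g == 1, 2, 0)          # excited -> refractory, refractory -> resting
        new[(g == 0) & excited_nb] = 1        # resting cells fire next to excited ones
        return new

    g = np.zeros((15, 15), dtype=int)
    g[7, 7] = 1                               # point stimulus; a ring wave expands
    for _ in range(3):
        g = step(g)
    ```

    The refractory band trailing the excited ring is what prevents the wave from re-exciting tissue it has just passed, the same mechanism that underlies unidirectional conduction in the cardiac model.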

  3. Connection machine: a computer architecture based on cellular automata

    Energy Technology Data Exchange (ETDEWEB)

    Hillis, W D

    1984-01-01

    This paper describes the connection machine, a programmable computer based on cellular automata. The essential idea behind the connection machine is that a regular locally-connected cellular array can be made to behave as if the processing cells are connected into any desired topology. When the topology of the machine is chosen to match the topology of the application program, the result is a fast, powerful computing engine. The connection machine was originally designed to implement knowledge retrieval operations in artificial intelligence programs, but the hardware and the programming techniques are apparently applicable to a much larger class of problems. A machine with 100000 processing cells is currently being constructed. 27 references.

  4. CAC DPLB MCN: A Distributed Load Balancing Scheme in Multimedia Mobile Cellular Networks

    Directory of Open Access Journals (Sweden)

    Sharma Abhijit

    2016-11-01

    Full Text Available The problem of non-uniform traffic demand in different cells of a cellular network may lead to a gross imbalance in system performance, so that users in hot cells suffer from low throughput. In this paper, an effective and simple load balancing scheme, CAC_DPLB_MCN, is proposed that can effectively reduce overall call blocking. The model handles multimedia traffic as well as time-varying geographical traffic distributions. The proposed scheme uses the concept of cell-tiering, thereby creating a fractional frequency reuse environment. A message-exchange-based distributed scheme, instead of a centralized one, is used, which allows the proposed scheme to be implemented in a multiple-hot-cell environment as well. Furthermore, the concept of dynamic pricing is used to serve the best interests of the users as well as the service providers. The performance of the proposed scheme is compared with two other existing schemes in terms of call blocking probability and bandwidth utilization. Simulation results show that the proposed scheme can reduce call blocking significantly in highly congested cells with the highest bandwidth utilization. The use of dynamic pricing also makes the scheme useful for increasing the revenue of the service providers in contrast with the compared schemes.

  5. The regularized monotonicity method: detecting irregular indefinite inclusions

    DEFF Research Database (Denmark)

    Garde, Henrik; Staboulis, Stratos

    2018-01-01

    inclusions, where the conductivity distribution has both more and less conductive parts relative to the background conductivity; one such method is the monotonicity method of Harrach, Seo, and Ullrich. We formulate the method for irregular indefinite inclusions, meaning that we make no regularity assumptions...

  6. Effect of inhomogeneous activity distributions and airway geometry on cellular doses in radon lung dosimetry

    International Nuclear Information System (INIS)

    Szoke, Istvan; Balashazy, Imre; Farkas, Arpad; Hofmann, Werner

    2007-01-01

    The human tracheobronchial system has a very complex structure comprising cylindrical airway ducts connected by airway bifurcation units. The deposition of inhaled aerosols within the airways exhibits a very inhomogeneous pattern. The formation of deposition hot spots near the carinal ridge has been confirmed by experimental and computational fluid and particle dynamics (CFPD) methods. In spite of these observations, current radon lung dosimetry models apply infinitely long cylinders as models of the airway system and assume uniform deposition of the inhaled radon progeny along the airway walls. The aim of this study is to investigate the effect of airway geometry and non-uniform activity distributions within bronchial bifurcations on cellular dose distributions. To answer these questions, the nuclear doses of the bronchial epithelium were calculated in three different irradiation situations. (1) First, CFPD methods were applied to calculate the distribution of the deposited alpha-emitting nuclides in a numerically constructed idealized airway bifurcation. (2) Second, the deposited radionuclides were randomly distributed along the surface of the above-mentioned geometry. (3) Finally, calculations were made in cylindrical geometries corresponding to the parent and daughter branches of the bifurcation geometry, assuming a random nuclide activity distribution. In all three models, the same 218Po and 214Po surface activities per tissue volume were assumed. Two conclusions can be drawn from this analysis: (i) average nuclear doses are very similar in all three cases (minor differences can be attributed to differences in the linear energy transfer (LET) spectra) and (ii) dose distributions are significantly different in all three cases, with the highest doses at the carinal ridge in case 3. (authors)

  7. Numerical simulations of cellular detonation diffraction in a stable gaseous mixture

    Directory of Open Access Journals (Sweden)

    Jian Li

    2016-09-01

    Full Text Available In this paper, the diffraction phenomenon of gaseous cellular detonations emerging from a confined tube into a sudden open space is simulated using the reactive Euler equations with a two-step Arrhenius chemistry model. Both two-dimensional and axisymmetric configurations are used for modeling cylindrical and spherical expansions, respectively. The chemical parameters are chosen for a stable gaseous explosive mixture in which the cellular detonation structure is highly regular. Adaptive mesh refinement (AMR) is used to resolve the detonation wave structure and its evolution during the transmission. The numerical results show that the critical channel width and critical diameter over the detonation cell size are about 13±1 and 25±1, respectively. These numerical findings are comparable with the experimental observation and confirm again that the critical channel width and critical diameter differ essentially by a factor close to 2, equal to the geometrical scaling based on front curvature theory. Unlike unstable mixtures where instabilities manifested in the detonation front structure play a significant role during the transmission, the present numerical results and the observed geometrical scaling provide again evidence that the failure of detonation diffraction in stable mixtures with a regular detonation cellular pattern is dominantly caused by the global curvature due to the wave divergence resulting in the global decoupling of the reaction zone with the expanding shock front.

  8. Numerical calculations of effective elastic properties of two cellular structures

    International Nuclear Information System (INIS)

    Tuncer, Enis

    2005-01-01

    Young's moduli of regular two-dimensional truss-like and eye-shaped structures are simulated using the finite element method. The structures are idealizations of soft polymeric materials used in ferro-electret applications. In the simulations, the length scales of the smallest representative units are varied, which changes the dimensions of the cell walls in the structures. A power-law expression with a quadratic as the exponent term is proposed for the effective Young's moduli of the systems as a function of the solid volume fraction. The data are divided into three regions with respect to the volume fraction: low, intermediate and high. The parameters of the proposed power-law expression in each region are later represented as a function of the structural parameters, the unit-cell dimensions. The expression presented can be used to predict a structure/property relationship in materials with similar cellular structures. The contribution of the cell-wall thickness to the elastic properties becomes significant at concentrations >0.15. The cell-wall thickness is the most significant factor in predicting the effective Young's modulus of regular cellular structures at high volume fractions of solid. At lower concentrations of solid, the eye-shaped structure yields a lower Young's modulus than a truss-like structure with similar anisotropy. Comparison of the numerical results with experimental data for poly(propylene) shows good agreement regarding the influence of cell-wall thickness on the elastic properties of thin cellular films

  9. Agent-Based Modeling of Mitochondria Links Sub-Cellular Dynamics to Cellular Homeostasis and Heterogeneity.

    Directory of Open Access Journals (Sweden)

    Giovanni Dalmasso

    Full Text Available Mitochondria are semi-autonomous organelles that supply energy for cellular biochemistry through oxidative phosphorylation. Within a cell, hundreds of mobile mitochondria undergo fusion and fission events to form a dynamic network. These morphological and mobility dynamics are essential for maintaining mitochondrial functional homeostasis, and alterations both impact and reflect cellular stress states. Mitochondrial homeostasis is further dependent on production (biogenesis) and the removal of damaged mitochondria by selective autophagy (mitophagy). While mitochondrial function, dynamics, biogenesis and mitophagy are highly-integrated processes, it is not fully understood how systemic control in the cell is established to maintain homeostasis, or respond to bioenergetic demands. Here we used agent-based modeling (ABM) to integrate molecular and imaging knowledge sets, and simulate population dynamics of mitochondria and their response to environmental energy demand. Using high-dimensional parameter searches we integrated experimentally-measured rates of mitochondrial biogenesis and mitophagy, and using sensitivity analysis we identified parameter influences on population homeostasis. By studying the dynamics of cellular subpopulations with distinct mitochondrial masses, our approach uncovered system properties of mitochondrial populations: (1) mitochondrial fusion and fission activities rapidly establish mitochondrial sub-population homeostasis, and total cellular levels of mitochondria alter fusion and fission activities and subpopulation distributions; (2) restricting the directionality of mitochondrial mobility does not alter morphology subpopulation distributions, but increases network transmission dynamics; and (3) maintaining mitochondrial mass homeostasis and responding to bioenergetic stress requires the integration of mitochondrial dynamics with the cellular bioenergetic state. Finally, (4) our model suggests sources of, and stress conditions

  10. Cellular metabolism

    International Nuclear Information System (INIS)

    Hildebrand, C.E.; Walters, R.A.

    1977-01-01

    Progress is reported on the following research projects: chromatin structure; the use of circular synthetic polydeoxynucleotides as substrates for the study of DNA repair enzymes; human cellular kinetic response following exposure to DNA-interactive compounds; histone phosphorylation and chromatin structure in cell proliferation; photoaddition products induced in chromatin by uv light; pollutants and genetic information transfer; altered RNA metabolism as a function of cadmium accumulation and intracellular distribution in cultured cells; and thymidylate chromophore destruction by water free radicals

  11. Manifold regularized multitask feature learning for multimodality disease classification.

    Science.gov (United States)

    Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang

    2015-02-01

    Multimodality-based methods have shown great advantages in classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Recently, multitask feature selection methods have typically been used for joint selection of common features across multiple modalities. However, one disadvantage of existing multimodality-based methods is that they ignore the useful data distribution information in each modality, which is essential for subsequent classification. Accordingly, in this paper we propose a manifold regularized multitask feature learning method to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information in each modality. Specifically, we denote the feature learning on each modality as a single task, and use a group-sparsity regularizer to capture the intrinsic relatedness among multiple tasks (i.e., modalities) and jointly select the common features from multiple tasks. Furthermore, we introduce a new manifold-based Laplacian regularizer to preserve the data distribution information from each task. Finally, we use the multikernel support vector machine method to fuse multimodality data for eventual classification. We also extend our method to the semisupervised setting, where only partial data are labeled. We evaluate our method using the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The experimental results demonstrate that our proposed method can not only achieve improved classification performance, but also help to discover the disease-related brain regions useful for disease diagnosis. © 2014 Wiley Periodicals, Inc.
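
    The objective described in the abstract can be summarized schematically. The following generic form is a sketch in our own notation, not the paper's exact formulation: it combines a per-modality loss over the M modalities, the group-sparsity term that couples feature selection across modalities, and the graph-Laplacian term that preserves each modality's data distribution:

    ```latex
    \min_{W}\;\sum_{m=1}^{M}\bigl\|y - X_{m}w_{m}\bigr\|_{2}^{2}
    \;+\;\lambda_{1}\,\|W\|_{2,1}
    \;+\;\lambda_{2}\sum_{m=1}^{M} w_{m}^{\top}X_{m}^{\top}L_{m}X_{m}w_{m},
    \qquad
    \|W\|_{2,1}=\sum_{j}\Bigl(\sum_{m}W_{jm}^{2}\Bigr)^{1/2},
    ```

    where w_m is the m-th column of W and L_m is the graph Laplacian built from the samples of modality m; the L_{2,1} norm zeroes out entire feature rows jointly across tasks.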

  12. An improved multi-value cellular automata model for heterogeneous bicycle traffic flow

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Sheng [College of Civil Engineering and Architecture, Zhejiang University, Hangzhou, 310058 China (China); Qu, Xiaobo [Griffith School of Engineering, Griffith University, Gold Coast, 4222 Australia (Australia); Xu, Cheng [Department of Transportation Management Engineering, Zhejiang Police College, Hangzhou, 310053 China (China); College of Transportation, Jilin University, Changchun, 130022 China (China); Ma, Dongfang, E-mail: mdf2004@zju.edu.cn [Ocean College, Zhejiang University, Hangzhou, 310058 China (China); Wang, Dianhai [College of Civil Engineering and Architecture, Zhejiang University, Hangzhou, 310058 China (China)

    2015-10-16

    This letter develops an improved multi-value cellular automata model for heterogeneous bicycle traffic flow taking the higher maximum speed of electric bicycles into consideration. The update rules of both regular and electric bicycles are improved, with maximum speeds of two and three cells per second respectively. Numerical simulation results for deterministic and stochastic cases are obtained. The fundamental diagrams and multiple states effects under different model parameters are analyzed and discussed. Field observations were made to calibrate the slowdown probabilities. The results imply that the improved extended Burgers cellular automata (IEBCA) model is more consistent with the field observations than previous models and greatly enhances the realism of the bicycle traffic model. - Highlights: • We proposed an improved multi-value CA model with higher maximum speed. • Update rules are introduced for heterogeneous bicycle traffic with maximum speed 2 and 3 cells/s. • Simulation results of the proposed model are consistent with field bicycle data. • Slowdown probabilities of both regular and electric bicycles are calibrated.
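
    The deterministic core of a multi-value (Burgers) cellular automaton, which the IEBCA model extends with speed-2 regular bicycles, speed-3 electric bicycles and stochastic slowdown, can be sketched as follows. Only the basic speed-1 rule is shown, and the lane length and cell capacity are assumed values, not figures from the letter.

```python
import numpy as np

rng = np.random.default_rng(1)

N_CELLS, CAP, STEPS = 100, 3, 200            # cells, bicycles per cell (assumed), steps
u = rng.integers(0, CAP + 1, size=N_CELLS)   # occupancy on a periodic lane

def bca_step(u, cap):
    """One deterministic Burgers-CA update: each cell sends forward as many
    bicycles as it holds, limited by the free space in the next cell."""
    move = np.minimum(u, cap - np.roll(u, -1))   # outflow of every cell
    return u - move + np.roll(move, 1)           # leave current cell, enter next

total = u.sum()
for _ in range(STEPS):
    u = bca_step(u, CAP)

assert u.sum() == total                          # bicycles are conserved
mean_flow = np.minimum(u, CAP - np.roll(u, -1)).mean()   # bicycles per cell per step
```

    Sweeping the initial density and recording `mean_flow` reproduces the triangular fundamental diagram of the basic model; the heterogeneous-speed rules sharpen and shift its free-flow branch.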

  13. An improved multi-value cellular automata model for heterogeneous bicycle traffic flow

    International Nuclear Information System (INIS)

    Jin, Sheng; Qu, Xiaobo; Xu, Cheng; Ma, Dongfang; Wang, Dianhai

    2015-01-01

    This letter develops an improved multi-value cellular automata model for heterogeneous bicycle traffic flow taking the higher maximum speed of electric bicycles into consideration. The update rules of both regular and electric bicycles are improved, with maximum speeds of two and three cells per second respectively. Numerical simulation results for deterministic and stochastic cases are obtained. The fundamental diagrams and multiple states effects under different model parameters are analyzed and discussed. Field observations were made to calibrate the slowdown probabilities. The results imply that the improved extended Burgers cellular automata (IEBCA) model is more consistent with the field observations than previous models and greatly enhances the realism of the bicycle traffic model. - Highlights: • We proposed an improved multi-value CA model with higher maximum speed. • Update rules are introduced for heterogeneous bicycle traffic with maximum speed 2 and 3 cells/s. • Simulation results of the proposed model are consistent with field bicycle data. • Slowdown probabilities of both regular and electric bicycles are calibrated

  14. The relationship between synchronization and percolation for regular networks

    Science.gov (United States)

    Li, Zhe; Ren, Tao; Xu, Yanjie; Jin, Jianyu

    2018-02-01

    Synchronization and percolation are two essential phenomena in complex dynamical networks. They have been studied widely, but previously treated as unrelated. In this paper, the relationship between synchronization and percolation is revealed for regular networks. First, we establish a bridge between synchronization and percolation by using the eigenvalues of the Laplacian matrix to describe synchronizability and the eigenvalues of the adjacency matrix to describe the percolation threshold. Then, we propose a method for finding this relationship in regular networks based on the network topology. In particular, if the degree distribution of the network follows a delta function, we show that only the eigenvalues of the adjacency matrix need to be calculated. Finally, several examples demonstrate how to apply the proposed method to discover the relationship between synchronization and percolation for regular networks.
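
    For a network whose degree distribution is a delta function, both quantities can be read directly off the two spectra: the Laplacian eigenratio λ_N/λ_2 characterizes synchronizability, and the inverse of the largest adjacency eigenvalue is a standard estimate of the percolation threshold. A minimal sketch on a ring lattice (sizes and degrees are arbitrary choices, not taken from the paper):

```python
import numpy as np

def ring_lattice(n, k):
    """Adjacency matrix of a ring where each node links to its k nearest
    neighbours on each side (a 2k-regular network)."""
    A = np.zeros((n, n))
    for shift in range(1, k + 1):
        A += np.eye(n, k=shift) + np.eye(n, k=-shift)          # local links
        A += np.eye(n, k=n - shift) + np.eye(n, k=-(n - shift))  # wrap-around
    return A

A = ring_lattice(50, 2)                      # 4-regular ring of 50 nodes
L = np.diag(A.sum(1)) - A                    # graph Laplacian
lap = np.sort(np.linalg.eigvalsh(L))
adj = np.sort(np.linalg.eigvalsh(A))

sync_ratio = lap[-1] / lap[1]    # eigenratio: smaller => easier to synchronize
perc_threshold = 1.0 / adj[-1]   # inverse largest adjacency eigenvalue
```

    For any connected 2k-regular graph the largest adjacency eigenvalue equals the degree, so here `perc_threshold` is exactly 1/4 — the kind of shortcut the delta-function degree distribution makes possible.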

  15. CELLULAR DISTRIBUTION STUDIES OF THE NITRIC OXIDE-GENERATING ANTINEOPLASTIC PRODRUG JS-K, FORMULATED IN PLURONIC P123 MICELLES

    Science.gov (United States)

    Kaur, Imit; Terrazas, Moises; Kosak, Ken M.; Kern, Steven E.; Boucher, Kenneth M.; Shami, Paul J.

    2014-01-01

    Objective Nitric oxide (NO) possesses anti-tumor activity. It induces differentiation and apoptosis in acute myeloid leukemia (AML) cells. The NO prodrug O2-(2,4-dinitrophenyl)1-[(4-ethoxycarbonyl)piperazin-1-yl]diazen-1-ium-1,2-diolate, or JS-K, has potent antileukemic activity. JS-K is also active in vitro and in vivo against multiple myeloma, prostate cancer, non-small cell lung cancer, glioma and liver cancer. Using the Pluronic® P123 polymer, we have developed a micelle formulation for JS-K in order to increase its solubility and stability. The goal of the current study was to investigate the cellular distribution of JS-K in AML cells. Methods We investigated the intracellular distribution of JS-K (free drug) and JS-K formulated in P123 micelles (P123/JS-K) using HL-60 AML cells. We also studied the S-glutathionylating effects of JS-K on proteins in the cytoplasmic and nuclear cellular fractions. Key findings Both free JS-K and P123/JS-K accumulate primarily in the nucleus. Both free JS-K and P123/JS-K induced S-glutathionylation of nuclear proteins, although the effect produced was more pronounced with P123/JS-K. Minimal S-glutathionylation of cytoplasmic proteins was observed. Conclusions We conclude that a micelle formulation of JS-K increases its accumulation in the nucleus. Post-translational protein modification through S-glutathionylation may contribute to JS-K’s anti-leukemic properties. PMID:23927471

  16. Skeletal muscle cellularity and glycogen distribution in the hypermuscular Compact mice

    Directory of Open Access Journals (Sweden)

    T. Kocsis

    2014-07-01

    Full Text Available The TGF-beta member myostatin acts as a negative regulator of skeletal muscle mass. The Compact mice were selected for high protein content and hypermuscularity, and carry a naturally occurring 12-bp deletion in the propeptide region of the myostatin precursor. We aimed to investigate the cellular characteristics and the glycogen distribution of the Compact tibialis anterior (TA) muscle by quantitative histochemistry and spectrophotometry. We found that the deficiency in myostatin resulted in significantly increased weight of the investigated hindlimb muscles compared to wild type. Although the average glycogen content of the individual fibers remained unchanged, the total amount of glycogen in the Compact TA muscle increased two-fold, which can be explained by the presence of more fibers in Compact compared to wild-type muscle. Moreover, the ratio of the most glycolytic IIB fibers, whose glycogen content was the highest among the fast fibers, significantly increased in the Compact TA muscle. In summary, myostatin deficiency elevated the total amount of glycogen in the TA muscle but did not increase the glycogen content of the individual fibers, despite the marked glycolytic shift observed in Compact mice.

  17. Distance-regular graphs

    NARCIS (Netherlands)

    van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime

    2016-01-01

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,

  18. Changes in weight loss, body composition and cardiovascular disease risk after altering macronutrient distributions during a regular exercise program in obese women

    Directory of Open Access Journals (Sweden)

    Roberts Mike D

    2010-11-01

    Full Text Available Abstract Background This study investigated the impact of different macronutrient distributions and varying caloric intakes, along with regular exercise, on metabolic and physiological changes related to weight loss. Methods One hundred forty-one sedentary, obese women (38.7 ± 8.0 yrs, 163.3 ± 6.9 cm, 93.2 ± 16.5 kg, 35.0 ± 6.2 kg·m-2, 44.8 ± 4.2% fat) were randomized to either a no diet + no exercise control group (CON), a no diet + exercise control (ND), or one of four diet + exercise groups (high-energy diet [HED]; very low carbohydrate, high protein diet [VLCHP]; low carbohydrate, moderate protein diet [LCMP]; and high carbohydrate, low protein diet [HCLP]), in addition to beginning a supervised resistance training program three times per week. After 0, 1, 10 and 14 weeks, all participants completed testing sessions which included anthropometric, body composition, energy expenditure, fasting blood samples, and aerobic and muscular fitness assessments. Data were analyzed using repeated measures ANOVA with an alpha of 0.05, with LSD post-hoc analysis when appropriate. Results All dieting groups exhibited adequate compliance with their prescribed diet regimen, as energy and macronutrient amounts and distributions were close to prescribed amounts. The groups that followed a diet and exercise program showed significantly greater anthropometric (waist circumference and body mass) and body composition (DXA fat mass and % fat) changes. Caloric restriction initially reduced energy expenditure, which successfully returned to baseline values after 10 weeks of dieting and exercising. Significant fitness improvements (aerobic capacity and maximal strength) occurred in all exercising groups. No significant changes occurred in lipid panel constituents, but serum insulin and HOMA-IR values decreased in the VLCHP group. Significant reductions in serum leptin occurred in all caloric restriction + exercise groups after 14 weeks, which were unchanged in other non

  19. Regular expressions cookbook

    CERN Document Server

    Goyvaerts, Jan

    2009-01-01

    This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a
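
    As a flavor of the recipe style (this example is mine, not taken from the book): a Python pattern that validates an ISO-style date and extracts its parts with named groups.

```python
import re

# Validate YYYY-MM-DD and capture the parts with named groups.
iso_date = re.compile(r"^(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})$")

m = iso_date.match("2009-01-01")
print(m.group("year"), m.group("month"), m.group("day"))   # → 2009 01 01
print(iso_date.match("2009-1-1"))                          # → None (two digits required)
```

    Named groups make the extraction self-documenting, one of the recurring themes in recipe-style regex writing.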

  20. A new approach to nonlinear constrained Tikhonov regularization

    KAUST Repository

    Ito, Kazufumi

    2011-09-16

    We present a novel approach to nonlinear constrained Tikhonov regularization from the viewpoint of optimization theory. A second-order sufficient optimality condition is suggested as a nonlinearity condition to handle the nonlinearity of the forward operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a general class of parameter identification problems, for which (new) source and nonlinearity conditions are derived and the structural property of the nonlinearity term is revealed. A number of examples including identifying distributed parameters in elliptic differential equations are presented. © 2011 IOP Publishing Ltd.
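
    The a posteriori discrepancy principle mentioned above is easy to illustrate for classical linear, unconstrained Tikhonov regularization: shrink α until the residual matches the noise level δ. This is a hedged sketch with a made-up smoothing operator, not the paper's nonlinear constrained setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned toy forward operator: a discretized Gaussian smoothing kernel.
n = 40
t = np.linspace(0, 1, n)
K = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.01) / n
x_true = np.sin(2 * np.pi * t)
noise_level = 1e-3
b = K @ x_true + noise_level * rng.standard_normal(n)

def tikhonov(K, b, alpha):
    """Minimize ||Kx - b||^2 + alpha ||x||^2 (identity regularization operator)."""
    return np.linalg.solve(K.T @ K + alpha * np.eye(K.shape[1]), K.T @ b)

# Discrepancy principle: decrease alpha until the residual matches delta.
delta = noise_level * np.sqrt(n)
alpha = 1.0
while np.linalg.norm(K @ tikhonov(K, b, alpha) - b) > 1.1 * delta and alpha > 1e-14:
    alpha *= 0.5
x = tikhonov(K, b, alpha)
```

    The choice `1.1 * delta` is the usual safety factor τ > 1 in the discrepancy principle; a geometric sweep over α is crude but enough to show the mechanism.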

  1. Regular Topographic Patterning of Karst Depressions Suggests Landscape Self-Organization

    Science.gov (United States)

    Quintero, C.; Cohen, M. J.

    2017-12-01

    Thousands of wetland depressions that are commonly host to cypress domes dot the sub-tropical limestone landscape of South Florida. The origin of these depression features has been the topic of debate. Here we build upon the work of previous surveyors of this landscape to analyze the morphology and spatial distribution of depressions on the Big Cypress landscape. We took advantage of the availability of high-resolution Light Detection and Ranging (LiDAR) technology and ArcMap GIS software to analyze the structure and regularity of landscape features with methods unavailable to past surveyors. Six 2.25 km2 LiDAR plots within the preserve were selected for remote analysis, and one depression feature within each plot was selected for more intensive sediment and water depth surveying. Depression features on the Big Cypress landscape were found to show strong evidence of regular spatial patterning. Periodicity, a feature of regularly patterned landscapes, is apparent in both variograms and radial spectrum analyses. Size class distributions of the identified features indicate constrained feature sizes, while Average Nearest Neighbor analyses support the inference of dispersed features with non-random spacing. The presence of regular patterning on this landscape strongly implies biotic reinforcement of spatial structure by way of a scale-dependent feedback. In characterizing the structure of this wetland landscape we add to the growing body of work dedicated to documenting how water, life and geology may interact to shape the natural landscapes we see today.

  2. LL-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    1980-01-01

    Culik II and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular

  3. Statistical regularities in the rank-citation profile of scientists.

    Science.gov (United States)

    Petersen, Alexander M; Stanley, H Eugene; Succi, Sauro

    2011-01-01

    Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile c(i)(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each c(i)(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different c(i)(r) profiles, our results demonstrate the utility of the β(i) scaling parameter in conjunction with h(i) for quantifying individual publication impact. We show that the total number of citations C(i) tallied from a scientist's N(i) papers scales as [Formula: see text]. Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress.

  4. Multiview Hessian regularization for image annotation.

    Science.gov (United States)

    Liu, Weifeng; Tao, Dacheng

    2013-07-01

    The rapid development of computer hardware and Internet technology makes large-scale data-dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) has therefore received intensive attention in recent years and has been successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smooths the conditional distribution for classification along the manifold encoded in the graph Laplacian. However, LR biases the classification function toward a constant function, which can result in poor generalization. In addition, LR was developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address the above two problems in LR-based image annotation. In particular, mHR optimally combines multiple HR terms, each of which is obtained from a particular view of the instances, and steers the classification function so that it varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR.

  5. Cellular telephone use among primary school children in Germany

    International Nuclear Information System (INIS)

    Boehler, Eva; Schuez, Joachim

    2004-01-01

    Background: There is some concern about potential health risks of cellular telephone use to children. We assessed, in a population-based sample, how many children own a cellular telephone and how often they use it. Methods: We carried out a cross-sectional study among children in their fourth elementary school year, with a median age of 10 years. The study was carried out in Mainz (Germany), a city with about 200,000 inhabitants. The study base comprised all 37 primary schools in Mainz and its immediate surroundings. Altogether, 1933 children from 34 primary schools took part in the survey (participation rate of 87.8%). Results: Roughly a third of all children (n = 671, 34.7%) reported owning a cellular telephone. Overall, 119 (6.2%) children used a cellular telephone for making calls at least once a day, 123 (6.4%) used it several times a week and 876 (45.3%) children used it only once in a while. The remaining 805 (41.6%) children had never used a cellular telephone. The probability of owning a cellular telephone was associated with older age, being male, having no siblings, providing complete information on height and weight, more time spent watching TV and playing computer games, being picked up from school by their parents by car (instead of walking or cycling), and going to bed late. The proportion of cellular telephone owners was somewhat higher in classes with more children from socially disadvantaged families. Conclusions: Our study shows that both ownership and regular use of a cellular telephone are already quite frequent among children in the fourth grade of primary school. With regard to potential long-term effects, we recommend follow-up studies with children.

  6. The patterning of retinal horizontal cells: normalizing the regularity index enhances the detection of genomic linkage

    Directory of Open Access Journals (Sweden)

    Patrick W. Keeley

    2014-10-01

    Full Text Available Retinal neurons are often arranged as non-random distributions called mosaics, as their somata minimize proximity to neighboring cells of the same type. The horizontal cells serve as an example of such a mosaic, but little is known about the developmental mechanisms that underlie their patterning. To identify genes involved in this process, we have used three different spatial statistics to assess the patterning of the horizontal cell mosaic across a panel of genetically distinct recombinant inbred strains. To avoid the confounding effect of cell density, which varies two-fold across these different strains, we computed the real/random regularity ratio, expressing the regularity of a mosaic relative to a randomly distributed simulation of similarly sized cells. To test whether this latter statistic better reflects the variation in biological processes that contribute to horizontal cell spacing, we subsequently compared the genetic linkage for each of these two traits, the regularity index and the real/random regularity ratio, each computed from the distribution of nearest neighbor (NN) distances and from the Voronoi domain (VD) areas. Finally, we compared each of these analyses with another index of patterning, the packing factor. Variation in the regularity indexes, as well as their real/random regularity ratios, and the packing factor, mapped quantitative trait loci (QTL) to the distal ends of Chromosomes 1 and 14. For the NN and VD analyses, we found that the degree of linkage was greater when using the real/random regularity ratio rather than the respective regularity index. Using informatic resources, we narrow the list of prospective genes positioned at these two intervals to a small collection of six genes that warrant further investigation to determine their potential role in shaping the patterning of the horizontal cell mosaic.
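
    The real/random regularity ratio described above is straightforward to compute: divide the NN regularity index of the real mosaic by the average index of density-matched random simulations. A sketch on synthetic points, where a jittered grid merely stands in for a horizontal cell mosaic:

```python
import numpy as np

rng = np.random.default_rng(2)

def nn_distances(pts):
    """Distance from each point to its nearest neighbour (brute force)."""
    D = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(D, np.inf)
    return D.min(1)

def regularity_index(pts):
    """Mean NN distance divided by its standard deviation."""
    d = nn_distances(pts)
    return d.mean() / d.std()

# A jittered grid stands in for a real, regularly spaced mosaic.
g = np.linspace(0.05, 0.95, 10)
grid = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)
mosaic = grid + 0.01 * rng.standard_normal(grid.shape)

ri_real = regularity_index(mosaic)
# Normalize by density-matched random simulations of the same cell count.
ri_rand = np.mean([regularity_index(rng.random((len(mosaic), 2)))
                   for _ in range(20)])
ratio = ri_real / ri_rand
```

    A random field yields a regularity index near 1.9 regardless of density, so the ratio expresses how much more regular the mosaic is than chance — which is the point of normalizing before mapping QTL.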

  7. On the regularity of the extinction probability of a branching process in varying and random environments

    International Nuclear Information System (INIS)

    Alili, Smail; Rugh, Hans Henrik

    2008-01-01

    We consider a supercritical branching process in a time-dependent environment ξ. We assume that the offspring distributions depend regularly (C^k or real-analytically) on real parameters λ. We show that the extinction probability q_λ(ξ), given the environment ξ, 'inherits' this regularity whenever the offspring distributions satisfy a condition of contraction-type. Our proof makes use of the Poincaré metric on the complex unit disc and a real-analytic implicit function theorem.

  8. Selecting protein families for environmental features based on manifold regularization.

    Science.gov (United States)

    Jiang, Xingpeng; Xu, Weiwei; Park, E K; Li, Guangrong

    2014-06-01

    Recently, statistics and machine learning methods have been developed to identify functional or taxonomic features associated with environmental conditions or physiological status. Proteins (or other functional and taxonomic entities) that are important to environmental features can potentially be used as biosensors. A major challenge is how the distribution of protein and gene functions embodies the adaptation of microbial communities across environments and host habitats. In this paper, we propose a novel regularization method for linear regression to address this challenge. The approach is inspired by locally linear embedding (LLE), and we call it manifold-constrained regularization for linear regression (McRe). The novel regularization procedure also has potential to be used in solving other linear systems. We demonstrate the efficiency and performance of the approach on both simulated and real data.
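
    A manifold-constrained penalty for linear regression can be sketched in closed form: add a graph-Laplacian term, built from sample neighborhoods, to an ordinary ridge objective. The graph construction and λ values below are illustrative guesses, not the McRe procedure itself (which derives its graph from LLE-style reconstruction weights).

```python
import numpy as np

rng = np.random.default_rng(3)

n, d = 60, 8
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Symmetric kNN graph Laplacian over the samples (the manifold constraint).
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = np.zeros((n, n))
for i in range(n):
    A[i, np.argsort(D2[i])[1:6]] = 1.0    # 5 nearest neighbours, skip self
A = np.maximum(A, A.T)
L = np.diag(A.sum(1)) - A

# Solve min_w ||Xw - y||^2 + lam_ridge ||w||^2 + lam_m (Xw)^T L (Xw):
# the last term forces predictions to vary smoothly over the sample graph.
lam_ridge, lam_m = 0.1, 0.01
w = np.linalg.solve(
    X.T @ X + lam_ridge * np.eye(d) + lam_m * X.T @ L @ X,
    X.T @ y)
```

    Because the penalty is quadratic in w, the regularized problem stays a linear system — which is why such a procedure can, as the abstract notes, be reused in other linear settings.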

  9. Thermal expansion behavior in fabricated cellular structures

    International Nuclear Information System (INIS)

    Oruganti, R.K.; Ghosh, A.K.; Mazumder, J.

    2004-01-01

    Thermal expansion behavior of cellular structures is of interest in applications where undesirable deformation and failure are caused by thermal expansion mismatch. This report describes the role of processing-induced effects and metallurgical aspects of melt-processed cellular structures, such as a bi-material structure designed to contract on heating, as well as uni-material structures of regular and stochastic topology. This bi-material structure utilized the principle of internal geometric constraints to alter the expansion behavior of the internal ligaments to create overall contraction of the structure. Homogenization design method was used to design the structure, and fabrication was by direct metal deposition by laser melting of powder in another part of a joint effort. The degree of porosity and grain size in the fabricated structure are characterized and related to the laser deposition parameters. The structure was found to contract upon heating over a short range of temperature subsequent to which normal expansion ensued. Also examined in this report are uni-material cellular structures, in which internal constraints arise from residual stress variations caused by the fabrication process, and thereby alter their expansion characteristics. A simple analysis of thermal strain of this material supports the observed thermal expansion behavior

  10. Feature selection and multi-kernel learning for adaptive graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-20

    Nonnegative matrix factorization (NMF), a popular part-based representation technique, does not capture the intrinsic local geometric structure of the data space. Graph regularized NMF (GNMF) was recently proposed to avoid this limitation by regularizing NMF with a nearest neighbor graph constructed from the input data set. However, GNMF has two main bottlenecks. First, using the original feature space directly to construct the graph is not necessarily optimal because of the noisy and irrelevant features and nonlinear distributions of data samples. Second, one possible way to handle the nonlinear distribution of data samples is by kernel embedding. However, it is often difficult to choose the most suitable kernel. To solve these bottlenecks, we propose two novel graph-regularized NMF methods, AGNMFFS and AGNMFMK, by introducing feature selection and multiple-kernel learning to the graph regularized NMF, respectively. Instead of using a fixed graph as in GNMF, the two proposed methods learn the nearest neighbor graph that is adaptive to the selected features and learned multiple kernels, respectively. For each method, we propose a unified objective function to conduct feature selection/multi-kernel learning, NMF and adaptive graph regularization simultaneously. We further develop two iterative algorithms to solve the two optimization problems. Experimental results on two challenging pattern classification tasks demonstrate that the proposed methods significantly outperform state-of-the-art data representation methods.
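
    The baseline the proposal builds on — graph-regularized NMF with a fixed nearest neighbor graph — can be sketched with the standard GNMF multiplicative updates; the adaptive-graph, feature-selection and multi-kernel pieces are what the paper adds on top. Sizes, λ and the 0/1 affinity below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)

m, n, k, lam = 30, 50, 5, 0.1
X = np.abs(rng.standard_normal((m, n)))   # nonnegative data, columns = samples

# Fixed kNN affinity graph over the n data columns (what GNMF uses; the
# adaptive variants learn this graph instead).
D2 = ((X.T[:, None, :] - X.T[None, :, :]) ** 2).sum(-1)
Wg = np.zeros((n, n))
for i in range(n):
    Wg[i, np.argsort(D2[i])[1:6]] = 1.0
Wg = np.maximum(Wg, Wg.T)
Dg = np.diag(Wg.sum(1))

U = np.abs(rng.standard_normal((m, k)))
V = np.abs(rng.standard_normal((n, k)))
eps = 1e-9
for _ in range(200):
    # Multiplicative updates for X ~ U V^T with graph Laplacian term on V.
    U *= (X @ V) / (U @ V.T @ V + eps)
    V *= (X.T @ U + lam * Wg @ V) / (V @ U.T @ U + lam * Dg @ V + eps)

err = np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)
```

    The multiplicative form keeps U and V nonnegative throughout; the λ-weighted terms pull neighboring columns of V together, encoding the local geometry that plain NMF ignores.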

  11. High-resolution seismic data regularization and wavefield separation

    Science.gov (United States)

    Cao, Aimin; Stump, Brian; DeShon, Heather

    2018-04-01

    We present a new algorithm, non-equispaced fast antileakage Fourier transform (NFALFT), for irregularly sampled seismic data regularization. Synthetic tests from 1-D to 5-D show that the algorithm may efficiently remove leaked energy in the frequency wavenumber domain, and its corresponding regularization process is accurate and fast. Taking advantage of the NFALFT algorithm, we suggest a new method (wavefield separation) for the detection of the Earth's inner core shear wave with irregularly distributed seismic arrays or networks. All interfering seismic phases that propagate along the minor arc are removed from the time window around the PKJKP arrival. The NFALFT algorithm is developed for seismic data, but may also be used for other irregularly sampled temporal or spatial data processing.

  12. An iterative method for Tikhonov regularization with a general linear regularization operator

    NARCIS (Netherlands)

    Hochstenbach, M.E.; Reichel, L.

    2010-01-01

    Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan

  13. Sleep and the price of plasticity: from synaptic and cellular homeostasis to memory consolidation and integration.

    Science.gov (United States)

    Tononi, Giulio; Cirelli, Chiara

    2014-01-08

    Sleep is universal, tightly regulated, and its loss impairs cognition. But why does the brain need to disconnect from the environment for hours every day? The synaptic homeostasis hypothesis (SHY) proposes that sleep is the price the brain pays for plasticity. During a waking episode, learning statistical regularities about the current environment requires strengthening connections throughout the brain. This increases cellular needs for energy and supplies, decreases signal-to-noise ratios, and saturates learning. During sleep, spontaneous activity renormalizes net synaptic strength and restores cellular homeostasis. Activity-dependent down-selection of synapses can also explain the benefits of sleep on memory acquisition, consolidation, and integration. This happens through the offline, comprehensive sampling of statistical regularities incorporated in neuronal circuits over a lifetime. This Perspective considers the rationale and evidence for SHY and points to open issues related to sleep and plasticity. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. L1-norm locally linear representation regularization multi-source adaptation learning.

    Science.gov (United States)

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore the success of supervised DAL in this "small sample" regime needs the effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we here use the geometric intuition of the manifold assumption to extend the established frameworks of existing model-based DAL methods for function learning by incorporating additional information about the geometric structure of the target marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. Accordingly, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework that exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, for robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with nonlinear learning problems, we generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets covering faces, visual video and objects. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. High-strength cellular ceramic composites with 3D microarchitecture.

    Science.gov (United States)

    Bauer, Jens; Hengsbach, Stefan; Tesari, Iwiza; Schwaiger, Ruth; Kraft, Oliver

    2014-02-18

    To enhance the strength-to-weight ratio of a material, one may try to either improve the strength or lower the density, or both. The lightest solid materials have a density in the range of 1,000 kg/m(3); only cellular materials, such as technical foams, can reach considerably lower values. However, compared with corresponding bulk materials, their specific strength generally is significantly lower. Cellular topologies may be divided into bending- and stretching-dominated ones. Technical foams are structured randomly and behave in a bending-dominated way, which is less weight efficient, with respect to strength, than stretching-dominated behavior, such as in regular braced frameworks. Cancellous bone and other natural cellular solids have an optimized architecture. Their basic material is structured hierarchically and consists of nanometer-size elements, providing a benefit from size effects in the material strength. Designing cellular materials with a specific microarchitecture would allow one to exploit the structural advantages of stretching-dominated constructions as well as size-dependent strengthening effects. In this paper, we demonstrate that such materials may be fabricated. Applying 3D laser lithography, we produced and characterized micro-truss and -shell structures made from alumina-polymer composite. Size-dependent strengthening of alumina shells has been observed, particularly when applied with a characteristic thickness below 100 nm. The presented artificial cellular materials reach compressive strengths up to 280 MPa with densities well below 1,000 kg/m(3).

  16. Regular Expression Pocket Reference

    CERN Document Server

    Stubblebine, Tony

    2007-01-01

    This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular expressions…
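    As a tiny illustration of the kind of syntax the book documents, here is a capture-group pattern in Python's re module (one of the APIs covered); the pattern and example line are mine, not from the book.

```python
import re

# Parse "key = value" lines: (\w+) and (.+?) are capture groups,
# \s* absorbs optional whitespace, and ^/$ anchor the whole line.
line_re = re.compile(r"^\s*(\w+)\s*=\s*(.+?)\s*$")

m = line_re.match("  timeout = 30 ")
key, value = m.group(1), m.group(2)
```

    The lazy quantifier `.+?` keeps trailing whitespace out of the value, so `value` here is `"30"` rather than `"30 "`.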

  17. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features, because manifold regularization alone is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.

  18. Convergence and fluctuations of Regularized Tyler estimators

    KAUST Repository

    Kammoun, Abla

    2015-10-26

    This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate; second, being derivatives of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major difficulty in using RTEs in practice is the choice of the regularization parameter ρ: while a high value of ρ is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes in which the number of observations n and/or their size N grow together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results exist for the regime of n going to infinity with N fixed, even though the investigation of this regime has usually predated the analysis of the more difficult case of N and n both large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the regularization parameter.

  19. Convergence and fluctuations of Regularized Tyler estimators

    KAUST Repository

    Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim

    2015-01-01

    This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate; second, being derivatives of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major difficulty in using RTEs in practice is the choice of the regularization parameter ρ: while a high value of ρ is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes in which the number of observations n and/or their size N grow together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results exist for the regime of n going to infinity with N fixed, even though the investigation of this regime has usually predated the analysis of the more difficult case of N and n both large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the regularization parameter.
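    The estimator itself is computed by a simple fixed-point iteration. The sketch below implements the standard regularized Tyler recursion in NumPy (a generic textbook form, not the authors' code); note how the regularization term keeps every eigenvalue of the iterate at least as large as the regularization parameter, which is the conditioning guarantee described above.

```python
import numpy as np

def rte(X, rho, iters=50):
    # Regularized Tyler fixed point:
    #   R <- (1 - rho) * (N / n) * sum_i x_i x_i^T / (x_i^T R^{-1} x_i) + rho * I
    # X holds n observations of size N as rows; 0 < rho <= 1.
    n, N = X.shape
    R = np.eye(N)
    for _ in range(iters):
        q = np.einsum("ij,jk,ik->i", X, np.linalg.inv(R), X)  # x_i^T R^{-1} x_i
        R = (1 - rho) * (N / n) * (X.T / q) @ X + rho * np.eye(N)
    return R
```

    Since the weighted sample term is positive semidefinite, the smallest eigenvalue of every iterate is at least rho, so the estimate is well conditioned by construction.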

  20. Total variation regularization for fMRI-based prediction of behavior

    Science.gov (United States)

    Michel, Vincent; Gramfort, Alexandre; Varoquaux, Gaël; Eger, Evelyn; Thirion, Bertrand

    2011-01-01

    While medical imaging typically provides massive amounts of data, the extraction of relevant information for predictive diagnosis remains a difficult challenge. Functional MRI (fMRI) data, which provide an indirect measure of task-related or spontaneous neuronal activity, are classically analyzed in a mass-univariate procedure yielding statistical parametric maps. This analysis framework disregards some important principles of brain organization: population coding, and distributed and overlapping representations. Multivariate pattern analysis, i.e., the prediction of behavioural variables from brain activation patterns, better captures this structure. To cope with the high dimensionality of the data, the learning method has to be regularized. However, the spatial structure of the image is not taken into account in standard regularization methods, so the extracted features are often hard to interpret. More informative and interpretable results can be obtained with the ℓ1 norm of the image gradient, a.k.a. its Total Variation (TV), as the regularizer. We apply this method to fMRI data for the first time and show that TV regularization is well suited to the purpose of brain mapping while being a powerful tool for brain decoding. Moreover, this article presents the first use of TV regularization for classification. PMID:21317080
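    Concretely, the TV penalty is just the ℓ1 norm of a discrete image gradient. A minimal NumPy version of the anisotropic form (illustrative only; the paper works with 3-D brain volumes):

```python
import numpy as np

def total_variation(img):
    # Anisotropic TV: l1 norm of the discrete image gradient, i.e. the
    # sum of absolute differences between neighbouring pixels.
    dx = np.abs(np.diff(img, axis=1)).sum()
    dy = np.abs(np.diff(img, axis=0)).sum()
    return float(dx + dy)
```

    Flat images score zero while piecewise-constant images score only by their total edge contrast, which is why TV regularization favours blocky, spatially coherent and therefore interpretable weight maps.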

  1. Regularities of the vertical distribution of uranium-molybdenum mineralization

    International Nuclear Information System (INIS)

    Konstantinov, V.M.; Kazantsev, V.V.; Protasov, V.N.

    1980-01-01

    The geological structure of one of the ore fields of the uranium-molybdenum formation, pertaining to the northern framing of a large volcano-tectonic depression, is studied. The main uranium deposits are related to necks formed by neck facies of brown liparites. Three zones are singled out within the limits of the ore field. In the upper one there are small ore bodies with a low uranium content represented by phenolite-chlorite, pitchblende 3-coffinite 3-jordizite and calcinite-sulphide associations; in the middle one, the main ore bodies formed by pitchblende 1-chlorite, molybdenite 2 (jordizite)-pitchblende 2-hydromica and coffinite 2-pyrite associations; in the lower one, thin veinlets formed by coffinite-molybdenite 1-chlorite, brannerite-pyrite and pitchblende 1-chlorite associations. Dimensions of the ore deposits depend on the neck sizes: in small necks the middle zone and, rarely, the lower one are of industrial interest; in the large ones, the upper, middle and, probably, lower zones. The regularities found can be extended to other deposits of the uranium-molybdenum formation [ru

  2. Chronic cranial window with access port for repeated cellular manipulations, drug application, and electrophysiology

    Directory of Open Access Journals (Sweden)

    Christopher Joel Roome

    2014-11-01

    Chronic cranial windows have been instrumental in advancing optical studies in vivo, permitting long-term, high-resolution imaging in various brain regions. However, once a window is attached it is difficult to regain access to the brain under the window for cellular manipulations. Here we describe a simple device that combines long-term in vivo optical imaging with direct brain access via glass or quartz pipettes and metal, glass, or quartz electrodes for cellular manipulations such as dye or drug injections and electrophysiological stimulation or recording, while keeping the craniotomy sterile. Our device comprises a regular cranial window glass coverslip with a drilled access hole later sealed with biocompatible silicone. This chronic cranial window with access port is cheap, easy to manufacture, can be mounted just like a regular chronic cranial window, and is self-sealing after retraction of the pipette or electrode. We demonstrate that multiple injections can be performed through the silicone port by repetitively bolus-loading calcium-sensitive dye into mouse barrel cortex and recording spontaneous cellular activity over a period of weeks. As an example of its utility for electrophysiological recording, we describe how simple removal of the silicone seal can permit patch pipette access for whole-cell patch clamp recordings in vivo. During these chronic experiments we did not observe any infections under the window or impairment of animal health.

  3. The geometry of continuum regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-03-01

    This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations

  4. Regular expression containment

    DEFF Research Database (Denmark)

    Henglein, Fritz; Nielsen, Lasse

    2011-01-01

    We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression.

  5. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.

    1980-01-01

    There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) First do all algebra exactly as in D = 4; (2) Then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules, needed for superconformal anomalies, are discussed. Problems associated with renormalizability and higher-order loops are also discussed.
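    For orientation, rule (2) above means evaluating momentum integrals with the standard dimensional-regularization master formula (the generic Euclidean result, quoted here for context rather than taken from this paper):

```latex
\int \frac{d^{D}k}{(2\pi)^{D}}\,\frac{1}{\left(k^{2}+m^{2}\right)^{n}}
  = \frac{\Gamma\!\left(n-\tfrac{D}{2}\right)}{(4\pi)^{D/2}\,\Gamma(n)}
    \left(m^{2}\right)^{\tfrac{D}{2}-n}
```

    The poles of Γ(n − D/2) as D → 4 encode the usual logarithmic divergences that the continuation in D regularizes.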

  6. The long Tramp from Cellular Pathology to Molecular Pathology

    Directory of Open Access Journals (Sweden)

    Hans Guski

    2017-05-01

    Derivatives: The observation of the principal identity of biologically meaningful elements can be agglutinated into a 'general theory of life' and its manifestations. All of the investigated elements possess the same regularities, which are altered, destroyed or newly built by external influences such as disease and physical or psychological forces. Not all magnification levels at which these elements display are of the same significance. Virchow already suggested that 'smaller elements' (molecules) might be responsible for changes that are visible 'in larger elements' (at the cellular level). The reflection on these ideas can be associated with the implementation of molecular techniques, which were developed in the 20th century and are still ongoing today. Perspectives: Thus, cellular and molecular pathology can be integrated under one umbrella. This umbrella will lead to newly man-made structures, such as artificial DNA and gene components or functional chip implantations.

  7. Stress analysis of two-dimensional cellular materials with thick cell struts

    International Nuclear Information System (INIS)

    Lim, Do Hyung; Kim, Han Sung; Kim, Young Ho; Kim, Yoon Hyuk; Al-Hassani, S.T.S.

    2008-01-01

    Finite element analyses (FEA) were performed to thoroughly validate the collapse criteria for cellular materials presented in our previous companion paper. The maximum stress (von Mises stress) on the cell strut surface and the plastic collapse stress were computed for two-dimensional (2D) cellular materials with thick cell struts. The results from the FEA were compared with those from the authors' theoretical criteria, and the two were in good agreement. The results indicate that when bending moment, axial and shear forces are considered, the maximum stress on the strut surface takes significantly different values in the tensile and compressive parts of the cell wall as well as in the two loading directions. Therefore, for the initial yielding of ductile cellular materials and the fracture of brittle cellular materials, in which the maximum stress on the strut surface is evaluated, it is necessary to consider not only the bending moment but also the axial and shear forces. In addition, this study shows that for regular cellular materials with identical strut geometry for all struts, initial yielding and plastic collapse under a biaxial state of stress occur not only in the inclined cell struts but also in the vertical struts. These FEA results support the theoretical conclusion of our previous companion paper that an anisotropic 2D cellular material has a truncated yield surface not only in the compressive quadrant but also in the tensile quadrant.

  8. Optimizing Cellular Networks Enabled with Renewal Energy via Strategic Learning.

    Science.gov (United States)

    Sohn, Insoo; Liu, Huaping; Ansari, Nirwan

    2015-01-01

    An important issue in the cellular industry is the rising energy cost and carbon footprint due to the rapid expansion of the cellular infrastructure. Greening cellular networks has thus attracted attention. Among the promising green cellular network techniques, the renewable energy-powered cellular network has drawn increasing attention as a critical element towards reducing carbon emissions due to massive energy consumption in the base stations deployed in cellular networks. Game theory is a branch of mathematics that is used to evaluate and optimize systems with multiple players with conflicting objectives and has been successfully used to solve various problems in cellular networks. In this paper, we model the green energy utilization and power consumption optimization problem of a green cellular network as a pilot power selection strategic game and propose a novel distributed algorithm based on a strategic learning method. The simulation results indicate that the proposed algorithm achieves correlated equilibrium of the pilot power selection game, resulting in optimum green energy utilization and power consumption reduction.

  9. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

    Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed, allowing, in one way or another, the incorporation of further prior information into the problem. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems: the constrained perturbation regularization algorithm (COPRA) for random matrices, and COPRA for linear discrete ill-posed problems. In both, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix, so that the new modified model provides a more stable solution when used to estimate the original signal by minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two proposed algorithms select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard) and independent and identically distributed (i.i.d.) entries; the second addresses linear discrete ill-posed problems.
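    The RLS criterion itself has a closed-form solution via the normal equations. A minimal NumPy sketch of generic ridge-type regularized least squares (not the COPRA parameter-selection method itself):

```python
import numpy as np

def ridge(A, b, lam):
    # Regularized least squares: argmin_x ||A x - b||^2 + lam * ||x||^2,
    # solved in closed form via the normal equations (A^T A + lam I) x = A^T b.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

    Methods like COPRA then concern how to choose lam automatically; in this sketch it is simply a free input, and increasing it shrinks the solution norm at the cost of bias.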

  10. A formal power series expansion-regularization approach for Lévy stable distributions: the symmetric case with α = 2/M (M positive integer)

    Science.gov (United States)

    Crisanto-Neto, J. C.; da Luz, M. G. E.; Raposo, E. P.; Viswanathan, G. M.

    2016-09-01

    In practice, the Lévy α-stable distribution is usually expressed in terms of the Fourier integral of its characteristic function. Indeed, known closed-form expressions are relatively scarce given the huge parameter space: 0 < α ≤ 2 (Lévy index), −1 ≤ β ≤ 1 (skewness), σ > 0 (scale), and −∞ < μ < ∞ (shift). Hence, systematic efforts have been made towards the development of proper methods for analytically solving the mentioned integral. As a further contribution in this direction, here we propose a new way to tackle the problem. We consider an approach in which one first solves the Fourier integral through a formal (thus not necessarily convergent) series representation. Then, one uses (if necessary) a pertinent sum-regularization procedure on the resulting divergent series, so as to obtain an exact formula for the distribution, which is amenable to direct numerical calculation. As a concrete study, we address the centered, symmetric, unshifted and unscaled distribution (β = 0, μ = 0, σ = 1), with α = α_M = 2/M, M = 1, 2, 3, … Conceivably, the present protocol could be applied to other sets of parameter values.
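    The starting point, the Fourier inversion of the symmetric characteristic function exp(−|t|^α), can also be evaluated by brute-force quadrature, which makes a handy numerical check against any closed-form result. A simple trapezoidal sketch (my illustration, not the paper's series method):

```python
import numpy as np

def stable_pdf(x, alpha, t_max=50.0, n=200_001):
    # Symmetric alpha-stable density (beta = 0, mu = 0, sigma = 1) via the
    # Fourier inversion integral p(x) = (1/pi) int_0^inf cos(t x) e^{-t^alpha} dt,
    # evaluated with the trapezoidal rule on a truncated uniform grid.
    t = np.linspace(0.0, t_max, n)
    y = np.cos(t * x) * np.exp(-t ** alpha)
    dt = t[1] - t[0]
    return float((y.sum() - 0.5 * (y[0] + y[-1])) * dt / np.pi)
```

    For α = 1 this reproduces the Cauchy density 1/(π(1 + x²)), and for α = 2 a Gaussian with variance 2, the two classical closed-form cases.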

  11. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well-known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization, by external variables.

  12. Regular Single Valued Neutrosophic Hypergraphs

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Malik

    2016-12-01

    In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.

  13. An improved cellular automaton method to model multispecies biofilms.

    Science.gov (United States)

    Tang, Youneng; Valocchi, Albert J

    2013-10-01

    Biomass-spreading rules used in previous cellular automaton methods to simulate multispecies biofilms introduced extensive mixing between different biomass species or resulted in spatially discontinuous biomass concentrations and distributions; this caused results based on the cellular automaton methods to deviate from experimental results and from those of the more computationally intensive continuous method. To overcome these problems, we propose new biomass-spreading rules in this work: excess biomass spreads by pushing a line of grid cells that lie on the shortest path from the source grid cell to the destination grid cell, and the fractions of different biomass species in the grid cells on the path change due to the spreading. To evaluate the new rules, three two-dimensional simulation examples are used to compare the biomass distribution computed using the continuous method and three cellular automaton methods, one based on the new rules and the other two based on rules presented in two previous studies. The relationship between the biomass species is syntrophic in one example and competitive in the other two. Simulation results generated using the cellular automaton method based on the new rules agree much better with the continuous method than do results using the other two cellular automaton methods. The new biomass-spreading rules are no more complex to implement than the existing rules. Copyright © 2013 Elsevier Ltd. All rights reserved.
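    The pushing idea can be illustrated in one dimension: when a grid cell's biomass exceeds its capacity, the excess displaces biomass along the line of cells toward free space. A deliberately simplified sketch (the paper's rule operates in 2-D along shortest paths and additionally tracks species fractions):

```python
import numpy as np

def spread_excess(row, cap):
    # 1-D sketch of biomass spreading: sweep along the row and push any
    # biomass above the capacity `cap` into the next cell on the path.
    row = np.asarray(row, dtype=float).copy()
    for i in range(len(row) - 1):
        if row[i] > cap:
            row[i + 1] += row[i] - cap   # excess moves one cell along the path
            row[i] = cap
    return row
```

    Total biomass is conserved; only its spatial distribution changes, which is the property that keeps the concentration field continuous instead of teleporting biomass to distant cells.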

  14. Efficient Analysis of Systems Biology Markup Language Models of Cellular Populations Using Arrays.

    Science.gov (United States)

    Watanabe, Leandro; Myers, Chris J

    2016-08-19

    The Systems Biology Markup Language (SBML) has been widely used for modeling biological systems. Although SBML has been successful in representing a wide variety of biochemical models, the core standard lacks the structure for representing large, complex, regular systems in a standard way, such as whole-cell and cellular population models. Such models require a large number of variables to represent certain of their aspects, such as the chromosome in a whole-cell model and the many identical cell models in a cellular population. While SBML core is not designed to handle these types of models efficiently, the proposed SBML arrays package can represent such regular structures more easily. However, in order to take full advantage of the package, analysis needs to be aware of the arrays structure: when the array constructs within a model are expanded, some of the advantages of using arrays are lost. This paper describes a more efficient way to simulate arrayed models. To illustrate the proposed method, this paper uses a population of repressilator and genetic toggle switch circuits as examples. Results show that there are memory benefits to this approach, with a modest cost in runtime.

  15. Referent 3D tumor model at cellular level in radionuclide therapy

    International Nuclear Information System (INIS)

    Spaic, R.; Ilic, R.D.; Petrovic, B.J.

    2002-01-01

    Aim: Conventional internal dosimetry has many limitations because of tumor dose nonuniformity. The best approach to calculating the absorbed dose at the cellular level for different tumors in radionuclide therapy is the Monte Carlo method. The purpose of this study is to introduce a referent 3D tumor model at the cellular level for Monte Carlo simulation studies in radionuclide therapy. Material and Methods: The referent 3D tumor model at the cellular level was defined for the moment when a tumor becomes detectable and therapy can start. In accordance with the tumor growth rate at that moment, it was a sphere with a radius of 10,000 μm. In the tumor there are cells or clusters of cells, which are randomly distributed spheres. The distribution of cells/clusters of cells can be calculated from histology data, but it was assumed here to be normal with mean and standard deviation of 100 ± 50 μm. The second parameter chosen to define the referent tumor is the volume density of cells (30%). In this referent tumor there is no necrosis. Stroma is defined as the space between spheres, with the same material concentration as in the spheres. Results: The referent tumor defined in this way has about 2.2 × 10⁵ randomly distributed cells or clusters of cells. Using this referent 3D tumor model, and for a given concentration of radionuclides (1:100) and energy of beta emitters (1000 keV) homogeneously distributed in labeled cells, the absorbed dose for all cells was calculated. Simulations were done using the FOTELP Monte Carlo code, modified for this purpose. Results of absorbed dose in cells are given as numerical values (1D distribution) and as images (2D or 3D distributions). Conclusion: The geometrical module for Monte Carlo simulation studies can be standardized by introducing a referent 3D tumor model at the cellular level. This referent 3D tumor model gives the most realistic presentation of different tumors at the moment of their detectability.
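    The geometry described above is easy to prototype. A hypothetical NumPy sketch that samples cluster centres uniformly inside a 10,000 μm referent sphere with normally distributed cluster sizes (illustrative only: no overlap checking, no volume-density constraint, and the particle transport itself is done by codes such as FOTELP):

```python
import numpy as np

def sample_cells(n, tumor_r=10_000.0, mean_r=100.0, sd_r=50.0, seed=0):
    # Cell/cluster centres uniform inside a sphere of radius tumor_r (um),
    # with normally distributed sizes folded to be positive.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal((n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)          # uniform directions
    r = tumor_r * rng.random(n) ** (1.0 / 3.0)             # uniform in volume
    centres = v * r[:, None]
    radii = np.abs(rng.normal(mean_r, sd_r, n))
    return centres, radii
```

    The cube-root transform of the uniform radius is what makes the centres uniform in volume rather than clustered near the origin.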

  16. The cellular basis of organ ageing

    NARCIS (Netherlands)

    Knook, D.L.

    1978-01-01

    Ageing is associated with declines in the functional capacities of several organs. General causes for the decline can be divided into: 1. intrinsic cellular causes and 2. extracellular causes, e.g., changes in blood circulation and distribution. For the first group of causes, there is evidence for a

  17. Total variation regularization in measurement and image space for PET reconstruction

    KAUST Repository

    Burger, M

    2014-09-18

    © 2014 IOP Publishing Ltd. The aim of this paper is to test and analyse a novel technique for image reconstruction in positron emission tomography, which is based on (total variation) regularization on both the image space and the projection space. We formulate our variational problem considering both total variation penalty terms on the image and on an idealized sinogram to be reconstructed from a given Poisson distributed noisy sinogram. We prove existence, uniqueness and stability results for the proposed model and provide some analytical insight into the structures favoured by joint regularization. For the numerical solution of the corresponding discretized problem we employ the split Bregman algorithm and extensively test the approach in comparison to standard total variation regularization on the image. The numerical results show that an additional penalty on the sinogram performs better on reconstructing images with thin structures.

  18. Postnatal odorant exposure induces peripheral olfactory plasticity at the cellular level

    OpenAIRE

    Cadiou, Hervé; Aoude, Imad; Tazir, Bassim; Molinas, Adrien; Forbes Fenech, Claire; Meunier, Nicolas; Grosmaitre, Xavier

    2014-01-01

    Mammalian olfactory sensory neurons (OSNs) form the primary elements of the olfactory system. Inserted in the olfactory mucosa lining the nasal cavity, they are exposed to the environment and their lifespan is brief. Several reports indicate that OSNs are regularly regenerated throughout life and that the odorant environment affects the olfactory epithelium. However, little is known about the impact of the odorant environment on OSNs at the cellular level, and more precisely in the context of...

  19. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....

  20. In vitro cellular uptake of evodiamine and rutaecarpine using a microemulsion.

    Science.gov (United States)

    Zhang, Yong-Tai; Huang, Zhe-Bin; Zhang, Su-Juan; Zhao, Ji-Hui; Wang, Zhi; Liu, Ying; Feng, Nian-Ping

    2012-01-01

    To investigate the cellular uptake of evodiamine and rutaecarpine in a microemulsion in comparison with aqueous suspensions and tinctures. A microemulsion was prepared using the dropwise addition method. Mouse skin fibroblasts were cultured in vitro to investigate the optimal conditions for evodiamine and rutaecarpine uptake with different drug concentrations and administration times. Under optimal conditions, the cellular uptake of microemulsified drugs was assayed and compared to tinctures and aqueous suspensions. Rhodamine B labeling and laser scanning confocal microscopy (LSCM) were used to explore the distribution of fluorochrome transferred with the microemulsion in fibroblasts. Cellular morphology was also investigated, using optical microscopy to evaluate microemulsion-induced cellular toxicity. The maximum cellular drug uptake was obtained with a 20% (v/v) microemulsion concentration and an 8 hour administration time. Drug uptake by mouse skin fibroblasts was lowest when the drugs were loaded in the microemulsion. After incubation with rhodamine B-labeled microemulsion for 8 hours, the highest fluorescence intensity was achieved, and the fluorochrome was primarily distributed in the cytochylema. No obvious cellular morphologic changes were observed with the administration of either the microemulsion or the aqueous suspension; in the tincture group, however, massive cellular necrocytosis was observed. The lower cellular uptake with the microemulsion may be due to the fact that most of the drug loaded in the microemulsion vehicle was transported via the intercellular space, while a small quantity of free drug (released from the vehicle) was ingested through transmembrane transport. Mouse skin fibroblasts rarely endocytosed evodiamine and rutaecarpine with a microemulsion as the vehicle. The microemulsion had no obvious effect on cellular morphology, suggesting there is little or no cellular toxicity associated with the administration of microemulsion on

  1. On the number of spanning trees in random regular graphs

    DEFF Research Database (Denmark)

    Greenhill, Catherine; Kwan, Matthew; Wind, David Kofoed

    2014-01-01

    Let d >= 3 be a fixed integer. We give an asymptotic formula for the expected number of spanning trees in a uniformly random d-regular graph with n vertices. (The asymptotics are as n -> infinity, restricted to even n if d is odd.) We also obtain the asymptotic distribution of the number of spanning...
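
    The expected count above is asymptotic; for any concrete graph, the number of spanning trees is exact by Kirchhoff's matrix-tree theorem. A minimal illustration (not from the paper) on the complete graph K4, which is 3-regular:

```python
import numpy as np

def count_spanning_trees(adj):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees equals
    any cofactor of the graph Laplacian L = D - A."""
    A = np.array(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    # Delete one row and the matching column, then take the determinant.
    return round(np.linalg.det(L[1:, 1:]))

# K4 is 3-regular on 4 vertices; Cayley's formula gives 4**(4-2) = 16 trees.
K4 = [[0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
print(count_spanning_trees(K4))  # → 16
```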

  2. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  3. Iron Oxide Nanoparticles Stimulate Extra-Cellular Matrix Production in Cellular Spheroids

    Directory of Open Access Journals (Sweden)

    Megan Casco

    2017-01-01

    Full Text Available Nanotechnologies have been integrated into drug delivery and non-invasive imaging applications, and into nanostructured scaffolds for the manipulation of cells. The objective of this work was to determine how the physico-chemical properties of magnetic nanoparticles (MNPs) and their spatial distribution in cellular spheroids stimulate cells to produce an extracellular matrix (ECM). The MNP concentration (0.03 mg/mL, 0.1 mg/mL and 0.3 mg/mL), type (magnetoferritin), shape (nanorod, 85 nm × 425 nm) and incorporation method were studied to determine each of their effects on the specific stimulation of four ECM proteins (collagen I, collagen IV, elastin and fibronectin) in primary rat aortic smooth muscle cells. Results demonstrated that as MNP concentration increased there was up to a 6.32-fold increase in collagen production over no-MNP samples. Semi-quantitative immunohistochemistry (IHC) results demonstrated that MNP type had the greatest influence on elastin production, with a 56.28% positive area stain compared to controls, and that MNP shape favored elastin stimulation with a 50.19% positive area stain. Finally, there were no adverse effects of MNPs on cellular contractile ability. This study provides insight into the stimulation of ECM production in cells and tissues, which is important because the ECM plays a critical role in regulating cellular functions.

  4. Dose domain regularization of MLC leaf patterns for highly complex IMRT plans

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Dan; Yu, Victoria Y.; Ruan, Dan; Cao, Minsong; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States); O’Connor, Daniel [Department of Mathematics, University of California Los Angeles, Los Angeles, California 90095 (United States)

    2015-04-15

    Purpose: The advent of automated beam orientation and fluence optimization enables more complex intensity modulated radiation therapy (IMRT) planning using an increasing number of fields to exploit the expanded solution space. This has created a challenge in converting complex fluences to robust multileaf collimator (MLC) segments for delivery. A novel method to regularize the fluence map and simplify MLC segments is introduced to maximize delivery efficiency, accuracy, and plan quality. Methods: In this work, we implemented a novel approach to regularize optimized fluences in the dose domain. The treatment planning problem was formulated in an optimization framework to minimize the segmentation-induced dose distribution degradation subject to a total variation regularization to encourage piecewise smoothness in fluence maps. The optimization problem was solved using a first-order primal-dual algorithm known as the Chambolle-Pock algorithm. Plans for 2 GBM, 2 head and neck, and 2 lung patients were created using 20 automatically selected and optimized noncoplanar beams. The fluence was first regularized using Chambolle-Pock and then stratified into equal steps, and the MLC segments were calculated using a previously described level reducing method. Isolated apertures with sizes smaller than preset thresholds of 1–3 bixels, which are square units of an IMRT fluence map from MLC discretization, were removed from the MLC segments. Performance of the dose domain regularized (DDR) fluences was compared to direct stratification and direct MLC segmentation (DMS) of the fluences using level reduction without dose domain fluence regularization. Results: For all six cases, the DDR method increased the average planning target volume dose homogeneity (D95/D5) from 0.814 to 0.878 while maintaining equivalent dose to organs at risk (OARs). Regularized fluences were more robust to MLC sequencing, particularly to the stratification and small aperture removal. The maximum and
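
    The total-variation regularization and Chambolle-Pock solver named above can be sketched on a toy 1-D problem; the piecewise-constant "fluence profile", weights, and iteration count below are illustrative assumptions, not the paper's clinical setup.

```python
import numpy as np

def tv_denoise_cp(b, lam, n_iter=500):
    """1-D total-variation denoising, min_x 0.5||x - b||^2 + lam*||Dx||_1,
    via the Chambolle-Pock primal-dual algorithm (D = finite differences)."""
    x = b.copy(); x_bar = b.copy()
    p = np.zeros(len(b) - 1)        # dual variable, one entry per difference
    sigma = tau = 0.5               # valid: sigma*tau*||D||^2 <= 1, as ||D||^2 <= 4
    for _ in range(n_iter):
        # Dual step: ascent, then projection onto the l-inf ball of radius lam.
        p = np.clip(p + sigma * np.diff(x_bar), -lam, lam)
        # Primal step: descend through D^T p, then apply the prox of the data term.
        Dt_p = np.concatenate(([-p[0]], p[:-1] - p[1:], [p[-1]]))
        x_new = (x - tau * Dt_p + tau * b) / (1 + tau)
        x_bar = 2 * x_new - x
        x = x_new
    return x

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 2.0, 1.0], 50)          # piecewise-constant profile
noisy = clean + 0.3 * rng.standard_normal(clean.size)
rec = tv_denoise_cp(noisy, lam=1.0)
obj = lambda x: 0.5 * np.sum((x - noisy) ** 2) + 1.0 * np.sum(np.abs(np.diff(x)))
```

With the weight lam controlling the piecewise-smoothness trade-off, the recovered profile attains a lower TV objective than the noisy input while staying close to the clean signal.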

  5. The Effect of Regular Physical Education in the Transformation Motor Development of Children with Special Needs

    Directory of Open Access Journals (Sweden)

    Danilo Bojanić

    2016-02-01

    Full Text Available The aim of the research is to determine the level of quantitative changes in the motor abilities of pupils with special needs under the influence of the kinetic activity of regular physical education teaching. The survey was conducted on students of the Centre for children and youth with special needs in Mostar, the city of Los Rosales in Mostar and day care facilities for children with special needs in Niksic. The sample comprised 46 boys, who were involved in regular physical education for a period of one school year. The level of quantitative and qualitative changes in motor skills under the influence of kinesiology operators within regular school physical education classes was estimated by applying appropriate tests of motor skills, selected in accordance with the degree of mental ability and biological age. The manifest variables applied in this experiment were processed using standard descriptive methods in order to determine their distribution functions and basic parameters. Comparing the central and dispersion parameters of the initial and final measurements, it is evident that the applied program of physical education and sport contributed to changing the central and dispersion parameters, and that the distribution at the final measurement was closer to the normal distribution of results.

  6. Degradable gene delivery systems based on Pluronics-modified low-molecular-weight polyethylenimine: preparation, characterization, intracellular trafficking, and cellular distribution

    Directory of Open Access Journals (Sweden)

    Ding X

    2012-02-01

    homogeneous distribution in the cytoplasm; those with a lower hydrophilic-lipophilic balance value prefer to localize in the nucleus. Conclusion: This Pluronic-polyethyleneimine system may be worth exploring as components in the cationic copolymers as the DNA or small interfering RNA/microRNA delivery system in the near future. Keywords: Pluronics, gene transfer, nonviral vectors, transfection efficiency, cellular uptake

  7. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China) and School of Life Sciences and Technology, Xidian University, Xi' an 710071 (China)

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, the quality of the reconstructed bioluminescent source obtained by regularization methods depends crucially on the choice of the regularization parameters. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an ℓ2 data fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach requires only the computation of the residual and regularized solution norms. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. 
Simulation experiments were used to illustrate why multispectral data were used
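
    The model-function rule above consumes only the residual norm and regularized-solution norm at each trial parameter. A generic sketch of those two quantities for a plain Tikhonov problem (a stand-in; the BLT system matrix is not reproduced here):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min_x ||Ax - b||^2 + lam*||x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 30))                   # stand-in system matrix
b = A @ rng.standard_normal(30) + 0.1 * rng.standard_normal(40)

lams = [1e-3, 1e-2, 1e-1, 1.0, 10.0]
res = [np.linalg.norm(A @ tikhonov(A, b, l) - b) for l in lams]  # residual norms
sol = [np.linalg.norm(tikhonov(A, b, l)) for l in lams]          # solution norms
# As lam grows, the residual norm increases while the solution norm shrinks:
# exactly the trade-off that parameter-choice rules such as the model-function
# approach (or the L-curve) exploit when updating lam iteratively.
```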

  8. Intractable problems in reversible cellular automata

    International Nuclear Information System (INIS)

    Vatan, F.

    1988-01-01

    The billiard ball model, a classical mechanical system in which all parameters are real variables, can perform all digital computations. An eight-state, 11-neighbor reversible cellular automaton (an entirely discrete system in which all parameters are integer variables) can simulate this model. One of the natural problems for this system is to determine the shape of a container so that an initial specific distribution of gas molecules eventually leads to a predetermined distribution. This problem is PSPACE-complete. Related intractable and decidable problems are discussed as well
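
    The eight-state, 11-neighbor automaton itself is intricate, but the reversibility at the heart of the abstract can be demonstrated with a much simpler second-order construction (an illustrative stand-in, not the billiard-ball rule):

```python
import random

def step(state, prev):
    """One update of a second-order (Fredkin-style) reversible CA on a ring:
    next = rule(neighborhood) XOR previous.  Because XOR is an involution,
    previous = rule(neighborhood) XOR next, so time runs backwards exactly."""
    n = len(state)
    return [(state[(i - 1) % n] ^ state[i] ^ state[(i + 1) % n]) ^ prev[i]
            for i in range(n)]

random.seed(0)
s0 = [random.randint(0, 1) for _ in range(64)]
s1 = [random.randint(0, 1) for _ in range(64)]

a, b = s0, s1
for _ in range(100):                 # run 100 steps forward
    a, b = b, step(b, a)
for _ in range(100):                 # run the same update with time reversed
    b, a = a, step(a, b)
print((a, b) == (s0, s1))            # → True: the initial state is recovered
```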

  9. Stochastic fluctuations and distributed control of gene expression impact cellular memory.

    Directory of Open Access Journals (Sweden)

    Guillaume Corre

    Full Text Available Despite the stochastic noise that characterizes all cellular processes, cells are able to maintain and transmit to their daughter cells a stable level of gene expression. In order to better understand this phenomenon, we investigated the temporal dynamics of gene expression variation using a double reporter gene model. We compared cell clones with transgenes coding for highly stable mRNA and fluorescent proteins with clones expressing destabilized mRNAs and proteins. Both types of clones displayed strong heterogeneity of reporter gene expression levels. However, cells expressing stable gene products produced daughter cells with similar levels of reporter proteins, while in cell clones with short mRNA and protein half-lives the epigenetic memory of the gene expression level was completely suppressed. Computer simulations also confirmed the role of mRNA and protein stability in the conservation of constant gene expression levels over several cell generations. These data indicate that the conservation of a stable phenotype in a cellular lineage may largely depend on the slow turnover of mRNAs and proteins.
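
    The dependence of expression memory on product turnover can be caricatured with a discrete noisy-relaxation model; the rates, noise amplitude, and "generation" lag below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def simulate(decay, n_steps=20000, seed=2):
    """Discrete caricature of expression noise: constant production,
    first-order degradation removing a fraction `decay` per step
    (short half-life = large decay), plus additive noise."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    level = 100.0                    # start at the steady-state mean
    for t in range(n_steps):
        level = (1.0 - decay) * level + decay * 100.0 + rng.standard_normal()
        x[t] = level
    return x

def autocorr(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

stable = simulate(decay=0.01)        # long half-life (slow turnover)
unstable = simulate(decay=0.9)       # short half-life (fast turnover)
# Correlation across a "generation" of 50 steps survives only for slow turnover.
print(autocorr(stable, 50), autocorr(unstable, 50))
```

The slow-turnover trajectory keeps a large autocorrelation over the generation lag, mirroring the memory effect reported above, while the fast-turnover trajectory decorrelates almost completely.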

  10. Effective field theory dimensional regularization

    International Nuclear Information System (INIS)

    Lehmann, Dirk; Prezeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Greens functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed

  11. Effective field theory dimensional regularization

    Science.gov (United States)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Greens functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.

  12. Hierarchical regular small-world networks

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan

    2008-01-01

    Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log₂N²), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. This suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)

  13. 75 FR 76006 - Regular Meeting

    Science.gov (United States)

    2010-12-07

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...

  14. Context-Specific Metabolic Model Extraction Based on Regularized Least Squares Optimization.

    Directory of Open Access Journals (Sweden)

    Semidán Robaina Estévez

    Full Text Available Genome-scale metabolic models have proven highly valuable in investigating cell physiology. Recent advances include the development of methods to extract context-specific models capable of describing metabolism under more specific scenarios (e.g., cell types). Yet, none of the existing computational approaches allows for a fully automated model extraction and determination of a flux distribution independent of user-defined parameters. Here we present RegrEx, a fully automated approach that relies solely on context-specific data and ℓ1-norm regularization to extract a context-specific model and to provide a flux distribution that maximizes its correlation to data. Moreover, the publicly available implementation of RegrEx was used to extract 11 context-specific human models using publicly available RNAseq expression profiles, Recon1 and also Recon2, the most recent human metabolic model. The comparison of the performance of RegrEx and its contending alternatives demonstrates that the proposed method extracts models for which both the structure, i.e., reactions included, and the flux distributions are in concordance with the employed data. These findings are supported by validation and comparison of method performance on additional data not used in context-specific model extraction. Therefore, our study sets the ground for applications of other regularization techniques in large-scale metabolic modeling.
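
    RegrEx itself is not reproduced here, but the ℓ1-norm regularization at its core can be sketched with iterative soft thresholding (ISTA) on a toy underdetermined system; the matrix sizes, sparse "flux distribution", and penalty weight are illustrative assumptions:

```python
import numpy as np

def ista(A, d, lam, n_iter=3000):
    """l1-regularized least squares, min_v 0.5*||A v - d||^2 + lam*||v||_1,
    solved by iterative soft thresholding (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    v = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = v - A.T @ (A @ v - d) / L                           # gradient step
        v = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)   # soft threshold
    return v

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 60))        # underdetermined "model"
v_true = np.zeros(60)
v_true[[3, 17, 42]] = [2.0, -1.5, 1.0]   # sparse ground-truth "flux distribution"
d = A @ v_true                           # context-specific "data"

lam = 0.5
v = ista(A, d, lam)
obj = lambda w: 0.5 * np.sum((A @ w - d) ** 2) + lam * np.sum(np.abs(w))
```

The ℓ1 penalty drives most coordinates toward exactly zero, which is what yields a compact, context-specific reaction set in the metabolic-modeling setting.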

  15. arXiv Describing dynamical fluctuations and genuine correlations by Weibull regularity

    CERN Document Server

    Nayak, Ranjit K.; Sarkisyan-Grinbaum, Edward K.; Tasevsky, Marek

    The Weibull parametrization of the multiplicity distribution is used to describe the multidimensional local fluctuations and genuine multiparticle correlations measured by OPAL in the large statistics $e^{+}e^{-} \to Z^{0} \to hadrons$ sample. The data are found to be well reproduced by the Weibull model up to higher orders. The Weibull predictions are compared to the predictions by the two other models, namely by the negative binomial and modified negative binomial distributions, which mostly failed to fit the data. The Weibull regularity, which is found to reproduce the multiplicity distributions along with the genuine correlations, looks to be the optimal model to describe the multiparticle production process.

  16. Wave dynamics of regular and chaotic rays

    International Nuclear Information System (INIS)

    McDonald, S.W.

    1983-09-01

    In order to investigate general relationships between waves and rays in chaotic systems, I study the eigenfunctions and spectrum of a simple model, the two-dimensional Helmholtz equation in a stadium boundary, for which the rays are ergodic. Statistical measurements are performed so that the apparent randomness of the stadium modes can be quantitatively contrasted with the familiar regularities observed for the modes in a circular boundary (with integrable rays). The local spatial autocorrelation of the eigenfunctions is constructed in order to indirectly test theoretical predictions for the nature of the Wigner distribution corresponding to chaotic waves. A portion of the large-eigenvalue spectrum is computed and reported in an appendix; the probability distribution of successive level spacings is analyzed and compared with theoretical predictions. The two principal conclusions are: 1) waves associated with chaotic rays may exhibit randomly situated localized regions of high intensity; 2) the Wigner function for these waves may depart significantly from being uniformly distributed over the surface of constant frequency in the ray phase space
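
    The contrast between regular and chaotic level statistics can be checked numerically with a textbook surrogate (not the paper's stadium computation): independent levels give Poisson-like spacings, while a GOE random matrix reproduces the level repulsion expected for chaotic rays.

```python
import numpy as np

rng = np.random.default_rng(4)

# "Integrable" spectrum: independent levels, spacings follow a Poisson law.
levels = np.sort(rng.uniform(0.0, 1000.0, 1000))
s_poisson = np.diff(levels)
s_poisson /= s_poisson.mean()

# "Chaotic" spectrum: eigenvalues of a GOE random matrix (Wigner surmise).
M = rng.standard_normal((1000, 1000))
M = (M + M.T) / 2.0
eig = np.linalg.eigvalsh(M)
bulk = eig[400:600]            # central bulk, where the level density is nearly flat
s_goe = np.diff(bulk)
s_goe /= s_goe.mean()

# Level repulsion: chaotic spectra strongly suppress near-degenerate spacings.
frac = lambda s: np.mean(s < 0.1)
print(frac(s_goe), frac(s_poisson))
```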

  17. Definition and evolution of quantum cellular automata with two qubits per cell

    International Nuclear Information System (INIS)

    Karafyllidis, Ioannis G.

    2004-01-01

    Studies of quantum computer implementations suggest cellular quantum computer architectures. These architectures can simulate the evolution of quantum cellular automata, which can possibly simulate both quantum and classical physical systems and processes. It is however known that except for the trivial case, unitary evolution of one-dimensional homogeneous quantum cellular automata with one qubit per cell is not possible. Quantum cellular automata that comprise two qubits per cell are defined and their evolution is studied using a quantum computer simulator. The evolution is unitary and its linearity manifests itself as a periodic structure in the probability distribution patterns

  18. Design and evaluation of cellular power converter architectures

    Science.gov (United States)

    Perreault, David John

    Power electronic technology plays an important role in many energy conversion and storage applications, including machine drives, power supplies, frequency changers and UPS systems. Increases in performance and reductions in cost have been achieved through the development of higher performance power semiconductor devices and integrated control devices with increased functionality. Manufacturing techniques, however, have changed little. High power is typically achieved by paralleling multiple die in a single package, producing the physical equivalent of a single large device. Consequently, both the device package and the converter in which the device is used continue to require large, complex mechanical structures, and relatively sophisticated heat transfer systems. An alternative to this approach is the use of a cellular power converter architecture, which is based upon the parallel connection of a large number of quasi-autonomous converters, called cells, each of which is designed for a fraction of the system rating. The cell rating is chosen such that single-die devices in inexpensive packages can be used, and the cell fabricated with an automated assembly process. The use of quasi-autonomous cells means that system performance is not compromised by the failure of a cell. This thesis explores the design of cellular converter architectures with the objective of achieving improvements in performance, reliability, and cost over conventional converter designs. New approaches are developed and experimentally verified for highly distributed control of cellular converters, including methods for ripple cancellation and current-sharing control. The performance of these techniques is quantified, and their dynamics are analyzed. Cell topologies suitable to the cellular architecture are investigated, and their use for systems in the 5-500 kVA range is explored. 
The design, construction, and experimental evaluation of a 6 kW cellular switched-mode rectifier is also addressed

  19. Continuum-regularized quantum gravity

    International Nuclear Information System (INIS)

    Chan Huesum; Halpern, M.B.

    1987-01-01

    The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)

  20. Preference mapping of lemon lime carbonated beverages with regular and diet beverage consumers.

    Science.gov (United States)

    Leksrisompong, P P; Lopetcharat, K; Guthrie, B; Drake, M A

    2013-02-01

    The drivers of liking of lemon-lime carbonated beverages were investigated with regular and diet beverage consumers. Ten beverages were selected from a category survey of commercial beverages using a D-optimal procedure. Beverages were subjected to consumer testing (n = 101 regular beverage consumers, n = 100 diet beverage consumers). Segmentation of consumers was performed on overall liking scores followed by external preference mapping of selected samples. Diet beverage consumers liked 2 diet beverages more than regular beverage consumers. There were no differences in the overall liking scores between diet and regular beverage consumers for other products except for a sparkling beverage sweetened with juice which was more liked by regular beverage consumers. Three subtle but distinct consumer preference clusters were identified. Two segments had evenly distributed diet and regular beverage consumers but one segment had a greater percentage of regular beverage consumers (P beverage consumers) did not have a large impact on carbonated beverage liking. Instead, mouthfeel attributes were major drivers of liking when these beverages were tested in a blind tasting. Preference mapping of lemon-lime carbonated beverage with diet and regular beverage consumers allowed the determination of drivers of liking of both populations. The understanding of how mouthfeel attributes, aromatics, and basic tastes impact liking or disliking of products was achieved. Preference drivers established in this study provide product developers of carbonated lemon-lime beverages with additional information to develop beverages that may be suitable for different groups of consumers. © 2013 Institute of Food Technologists®

  1. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  2. Geometric continuum regularization of quantum field theory

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1989-01-01

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs

  3. SAR image regularization with fast approximate discrete minimization.

    Science.gov (United States)

    Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc

    2009-07-01

    Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modelization provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the α-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to joint regularization of the amplitude and interferometric phase in urban area SAR images.

  4. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    Science.gov (United States)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
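
    The bias argument above can be made concrete with scalar thresholding rules: the prox of the ℓ1 penalty shrinks every value by the threshold, while the firm threshold (the prox of the minimax-concave penalty, a parameterized non-convex regularizer whose scalar objective remains convex for a > 1) leaves large values untouched. A small sketch under those assumptions:

```python
def soft(y, lam):
    """Prox of lam*|x| (l1 penalty): shrinks every value by lam,
    so large coefficients are systematically under-estimated."""
    return 0.0 if abs(y) <= lam else (y - lam if y > 0 else y + lam)

def firm(y, lam, a):
    """Prox of the minimax-concave penalty (MCP) for 0.5*(x - y)**2 + MCP(x).
    The scalar objective stays convex for a > 1, yet values beyond a*lam
    are returned unbiased."""
    if abs(y) <= lam:
        return 0.0
    if abs(y) <= a * lam:
        return (a / (a - 1)) * (y - lam if y > 0 else y + lam)
    return y

lam, a = 0.5, 2.0
print(soft(3.0, lam))       # → 2.5  (shrunk by lam)
print(firm(3.0, lam, a))    # → 3.0  (non-convex penalty removes the bias)
print(firm(0.3, lam, a))    # → 0.0  (small values are still set to zero)
```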

  5. On Optimal Geographical Caching in Heterogeneous Cellular Networks

    NARCIS (Netherlands)

    Serbetci, Berksan; Goseling, Jasper

    2017-01-01

    In this work we investigate optimal geographical caching in heterogeneous cellular networks where different types of base stations (BSs) have different cache capacities. Users request files from a content library according to a known probability distribution. The performance metric is the total hit

  6. On the possible dynamical realization of the Pauli–Villars regularization

    Energy Technology Data Exchange (ETDEWEB)

    Kirillov, A. A.; Savelova, E. P., E-mail: ka98@mail.ru [Society, and Man, Dubna International University for Nature (Russian Federation)

    2015-12-15

    The problem of free-particle scattering on virtual wormholes is considered. It is shown that, for all types of relativistic fields, this scattering leads to the appearance of additional very heavy particles, which play the role of auxiliary fields in the invariant scheme of Pauli–Villars regularization. A nonlinear correction that describes the back reaction of particles to the vacuum distribution of virtual wormholes is also obtained.

  7. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    Science.gov (United States)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  8. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels in the training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
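Correntropy objectives of this kind are commonly optimized by half-quadratic (iteratively reweighted) schemes: samples with large residuals receive exponentially small weights, so outliers barely influence the fit. The sketch below is a generic regression variant of that idea, not the authors' exact alternating algorithm; `mcc_ridge` and its parameters are hypothetical names.

```python
import numpy as np

def mcc_ridge(X, y, sigma=1.0, lam=0.1, iters=20):
    """Robust linear predictor under a correntropy (Welsch) loss.

    Maximizing sum_i exp(-(y_i - x_i.w)^2 / (2 sigma^2)) - lam ||w||^2
    via the half-quadratic trick reduces to iteratively reweighted
    ridge regression with Gaussian-kernel sample weights."""
    n, d = X.shape
    w = np.linalg.lstsq(X, y, rcond=None)[0]      # warm start from OLS
    for _ in range(iters):
        r = y - X @ w
        a = np.exp(-r**2 / (2 * sigma**2))        # per-sample weights
        w = np.linalg.solve(X.T @ (a[:, None] * X) + lam * np.eye(d),
                            X.T @ (a * y))
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.0, -2.0]) + 0.05 * rng.normal(size=100)
y[:5] += 50.0                                     # gross label outliers
w = mcc_ridge(X, y, sigma=3.0)
```

The outliers end up with weights near zero, so the recovered coefficients stay close to the true ones despite the corrupted labels.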

  10. Regularities of radiorace formation in yeasts

    International Nuclear Information System (INIS)

    Korogodin, V.I.; Bliznik, K.M.; Kapul'tsevich, Yu.G.; Petin, V.G.; Akademiya Meditsinskikh Nauk SSSR, Obninsk. Nauchno-Issledovatel'skij Inst. Meditsinskoj Radiologii)

    1977-01-01

    Two strains of diploid yeast, Saccharomyces ellipsoides Megri 139-B, isolated under natural conditions, and Saccharomyces cerevisiae 5a x 3Bα, heterozygous at genes ade 1 and ade 2, were exposed to ⁶⁰Co γ-quanta. The proportions of four cell types were determined: saltant cells forming colonies with altered morphology, nonviable cells, respiration-deficient mutants, and recombinants at genes ade 1 and ade 2. A regularity was revealed in the distribution of these cell types among the colonies: the higher the content of cells of any one type, the higher the content of cells carrying the other hereditary changes

  11. Additive Cellular Automata and Volume Growth

    Directory of Open Access Journals (Sweden)

    Thomas B. Ward

    2000-08-01

    Full Text Available Abstract: A class of dynamical systems associated to rings of S-integers in rational function fields is described. General results about these systems give a rather complete description of the well-known dynamics in one-dimensional additive cellular automata with prime alphabet, including simple formulæ for the topological entropy and the number of periodic configurations. For these systems the periodic points are uniformly distributed along some subsequence with respect to the maximal measure, and in particular are dense. Periodic points may be constructed arbitrarily close to a given configuration, and rationality of the dynamical zeta function is characterized. Throughout the emphasis is to place this particular family of cellular automata into the wider context of S-integer dynamical systems, and to show how the arithmetic of rational function fields determines their behaviour. Using a covering space the dynamics of additive cellular automata are related to a form of hyperbolicity in completions of rational function fields. This expresses the topological entropy of the automata directly in terms of volume growth in the covering space.
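The additivity that underpins these results (and yields the simple entropy and periodic-point formulæ) is easy to verify numerically for a rule-90-type automaton over the prime alphabet {0, 1}; the function name below is ours:

```python
import numpy as np

def step(x):
    """One update of the additive CA x_i -> x_{i-1} + x_{i+1} (mod 2)
    on a cyclic lattice (a 'rule 90'-type automaton)."""
    return (np.roll(x, 1) + np.roll(x, -1)) % 2

rng = np.random.default_rng(2)
a = rng.integers(0, 2, size=16)
b = rng.integers(0, 2, size=16)

# Additivity: the update commutes with pointwise addition mod 2 (XOR).
lhs = step((a + b) % 2)
rhs = (step(a) + step(b)) % 2
```

Because the update is a group endomorphism of (Z/2)^n, the whole orbit structure decomposes algebraically, which is what connects these automata to the S-integer dynamical systems of the paper.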

  12. Inter-cellular transport of ran GTPase.

    Directory of Open Access Journals (Sweden)

    Deepak Khuperkar

    Full Text Available Ran, a member of the Ras-GTPase superfamily, has a well-established role in regulating the transport of macromolecules across the nuclear envelope (NE. Ran has also been implicated in mitosis, cell cycle progression, and NE formation. Over-expression of Ran is associated with various cancers, although the molecular mechanism underlying this phenomenon is unclear. Serendipitously, we found that Ran possesses the ability to move from cell-to-cell when transiently expressed in mammalian cells. Moreover, we show that the inter-cellular transport of Ran is GTP-dependent. Importantly, Ran displays a similar distribution pattern in the recipient cells as that in the donor cell and co-localizes with the Ran binding protein Nup358 (also called RanBP2. Interestingly, leptomycin B, an inhibitor of CRM1-mediated export, or siRNA mediated depletion of CRM1, significantly impaired the inter-cellular transport of Ran, suggesting a function for CRM1 in this process. These novel findings indicate a possible role for Ran beyond nucleo-cytoplasmic transport, with potential implications in inter-cellular communication and cancers.

  13. Cellular autofluorescence imaging for early diagnosis of cancers

    Science.gov (United States)

    Steenkeste, Karine; Deniset, Ariane; Lecart, Sandrine; Leveque-Fort, Sandrine; Fontaine-Aupart, Marie-Pierre; Ferlicot, Sophie; Eschwege, Pascal

    2005-08-01

    Urinary cytology is employed in the diagnostic guidelines for bladder cancer in anatomo-pathological laboratories, mostly for its ability to diagnose cancers not detectable by cystoscopy, but also because it is a non-invasive and undemanding technique for the regular follow-up of the most exposed populations. Cystoscopy may fail to detect such cancers mainly because of their localization either in the bladder or in the upper urinary tract and the prostate. However, urinary cytology lacks sensitivity, especially for the detection of low-grade, low-stage tumors, owing to the inherent limitations of morphological criteria in distinguishing low-grade tumor cells from normal urothelial cells. For this purpose, we developed, in addition to urinary cytology, an original screening of these cytological slides using spectrally resolved and time-resolved fluorescence as a contrast factor, without changing any parameters of the cytological slide preparation. The method takes advantage of a femtosecond Ti:sapphire laser, continuously tunable over the spectral range 700-950 nm, allowing the observation of most endogenous cellular chromophores by two-photon excitation. A commercial confocal microscope was also used, allowing excitation of the samples between 458 nm and 633 nm. We observed that the fluorescence emission is differentially distributed in normal and pathological urothelial cells. Spectrally and time-resolved measurements confirmed this difference over roughly one hundred cases, attesting to the high accuracy of this non-invasive technique.

  14. Effect of temperature gradient and crystallization rate on morphological peculiarities of cellular-dendrite structure in iron-nickel alloys

    International Nuclear Information System (INIS)

    Kralina, A.A.; Vorontsov, V.B.

    1977-01-01

    The cellular and dendritic structure of Fe-Ni single crystals (31 and 45 wt% Ni) grown by the Bridgman method has been studied by metallography. The growth rates at which the crystallization front becomes unstable and splits into cells have been determined for three temperature gradients. The transition from cells to dendrites occurs gradually, through changes in the regular cell structure and the formation of secondary and tertiary branches. The dependences of cell diameter and inter-dendrite spacing on crystallization rate and temperature gradient are discussed in terms of the development of solute substructures in the sequence: cells - cellular dendrites - dendrites

  15. Probabilistic cellular automata: Some statistical mechanical considerations

    International Nuclear Information System (INIS)

    Lebowitz, J.L.; Maes, C.; Speer, E.R.

    1990-01-01

    Spin systems evolving in continuous or discrete time under the action of stochastic dynamics are used to model phenomena as diverse as the structure of alloys and the functioning of neural networks. While in some cases the dynamics are secondary, designed to produce a specific stationary measure whose properties one is interested in studying, there are other cases in which the only available information is the dynamical rule. Prime examples of the former are computer simulations, via Glauber dynamics, of equilibrium Gibbs measures with a specified interaction potential. Examples of the latter include various types of majority rule dynamics used as models for pattern recognition and for error-tolerant computations. The present note discusses ways in which techniques found useful in equilibrium statistical mechanics can be applied to a particular class of models of the latter types. These are cellular automata with noise: systems in which the spins are updated stochastically at integer times, simultaneously at all sites of some regular lattice. These models were first investigated in detail in the Soviet literature of the late sixties and early seventies. They are now generally referred to as Stochastic or Probabilistic Cellular Automata (PCA), and may be considered to include deterministic automata (CA) as special limits. 16 refs., 3 figs
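A minimal PCA of the kind described, a noisy majority rule on a ring with synchronous updates, can be sketched as follows (a toy illustration, not a model from the Soviet literature the note surveys; names are ours):

```python
import numpy as np

def pca_step(s, eps, rng):
    """Noisy majority rule on a ring: each site adopts the majority of
    its 3-site neighbourhood, then flips independently with prob. eps.
    All sites update simultaneously, i.e. a probabilistic cellular
    automaton; eps = 0 recovers a deterministic CA as a special limit."""
    neigh = np.roll(s, 1) + s + np.roll(s, -1)
    new = (neigh >= 2).astype(int)              # majority of 3
    flips = rng.random(s.size) < eps
    return np.where(flips, 1 - new, new)

rng = np.random.default_rng(3)
s = np.ones(50, dtype=int)
for _ in range(10):
    s = pca_step(s, eps=0.0, rng=rng)           # noise-free limit
```

In the noise-free limit the consensus configuration is an absorbing state; with eps > 0 the dynamics defines a stochastic kernel whose stationary measures are exactly the objects studied with equilibrium techniques.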

  16. Characteristic Analysis of Mixed Traffic Flow of Regular and Autonomous Vehicles Using Cellular Automata

    Directory of Open Access Journals (Sweden)

    Yangzexi Liu

    2017-01-01

    Full Text Available The technology of autonomous vehicles is expected to revolutionize the operation of road transport systems. The penetration rate of autonomous vehicles will be low at the early stage of their deployment. It is a challenge to explore the effects of autonomous vehicles and their penetration on heterogeneous traffic flow dynamics. This paper aims to investigate this issue. An improved cellular automaton was employed as the modeling platform for our study. In particular, two sets of rules for lane changing were designed to address mild and aggressive lane changing behavior. With extensive simulation studies, we obtained some promising results. First, the introduction of autonomous vehicles to road traffic could considerably improve traffic flow, particularly the road capacity and free-flow speed. And the level of improvement increases with the penetration rate. Second, the lane-changing frequency between neighboring lanes evolves with traffic density along a fundamental-diagram-like curve. Third, the impacts of autonomous vehicles on the collective traffic flow characteristics are mainly related to their smart maneuvers in lane changing and car following, and it seems that the car-following impact is more pronounced.
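The paper's improved automaton (with its two lane-changing rule sets) is not reproduced here; a minimal single-lane Nagel-Schreckenberg ring with a per-vehicle dawdling probability, 0 for an "autonomous" vehicle and positive for a regular driver, illustrates the mixed-fleet modeling idea under our own simplifying assumptions:

```python
import numpy as np

def nasch_step(pos, vel, vmax, p_slow, road_len, rng):
    """One Nagel-Schreckenberg update on a single-lane ring road.
    p_slow is per-vehicle: 0.0 mimics an autonomous vehicle that never
    dawdles, a positive value a regular (human-driven) vehicle."""
    order = np.argsort(pos)                      # vehicles by position
    leader = np.roll(order, -1)                  # vehicle ahead of each
    gaps = np.empty(len(pos), dtype=int)
    gaps[order] = (pos[leader] - pos[order] - 1) % road_len
    vel = np.minimum(vel + 1, vmax)              # accelerate
    vel = np.minimum(vel, gaps)                  # keep a safe distance
    vel = np.maximum(vel - (rng.random(len(pos)) < p_slow), 0)  # dawdle
    return (pos + vel) % road_len, vel

rng = np.random.default_rng(4)
road_len, n = 100, 20
pos = np.sort(rng.choice(road_len, size=n, replace=False))
vel = np.zeros(n, dtype=int)
vmax = np.full(n, 5)
p_slow = np.where(np.arange(n) < n // 2, 0.3, 0.0)  # 50% autonomous fleet
for _ in range(100):
    pos, vel = nasch_step(pos, vel, vmax, p_slow, road_len, rng)
```

Because each vehicle's speed is capped by the gap to its leader, vehicles never collide or overtake; sweeping the autonomous share in `p_slow` is the kind of penetration-rate experiment the paper performs.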

  17. Regularities of Multifractal Measures

    Indian Academy of Sciences (India)

    First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...

  18. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

    We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...
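The principle of choosing regularization strength by validation error can be shown with a much simpler stand-in than the paper's iterative gradient scheme: a grid search over the ridge parameter of a linear model (all names and data below are ours):

```python
import numpy as np

def val_error(lam, Xtr, ytr, Xval, yval):
    """Validation MSE of a ridge solution trained with parameter lam."""
    d = Xtr.shape[1]
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(d), Xtr.T @ ytr)
    return np.mean((Xval @ w - yval) ** 2)

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 8))
w_true = rng.normal(size=8)
y = X @ w_true + 0.5 * rng.normal(size=60)
Xtr, ytr, Xval, yval = X[:40], y[:40], X[40:], y[40:]

# Adapt the regularization parameter by minimizing validation error.
grid = 10.0 ** np.arange(-4, 4)
errs = [val_error(l, Xtr, ytr, Xval, yval) for l in grid]
lam_best = grid[int(np.argmin(errs))]
```

The paper's scheme replaces this discrete search by iterative updates of (possibly many) regularization parameters, but the objective, validation error as a function of the regularizer, is the same.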

  19. Sparse regularization for EIT reconstruction incorporating structural information derived from medical imaging.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Mueller-Lisse, Ullrich; Moeller, Knut

    2016-06-01

    Electrical impedance tomography (EIT) reconstructs the conductivity distribution of a domain using electrical data on its boundary. This is an ill-posed inverse problem usually solved on a finite element mesh. For this article, a special regularization method incorporating structural information of the targeted domain is proposed and evaluated. Structural information was obtained either from computed tomography images or from preliminary EIT reconstructions by a modified k-means clustering. The proposed regularization method integrates this structural information into the reconstruction as a soft constraint preferring sparsity at the group level. A first evaluation with Monte Carlo simulations indicated that the proposed solver is more robust to noise and the resulting images show fewer artifacts. This finding is supported by real data analysis. The structure based regularization has the potential to balance structural a priori information with data driven reconstruction. It is robust to noise, reduces artifacts and produces images that reflect anatomy and are thus easier to interpret for physicians.
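Group-level sparsity of the kind used here typically enters a solver through the proximal operator of the group-lasso penalty, which shrinks each group of coefficients toward zero and eliminates weak groups entirely. A minimal sketch (generic, not the authors' solver; names are ours):

```python
import numpy as np

def group_shrink(x, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||x_g||_2:
    each group is scaled toward zero, and groups whose norm is <= lam
    vanish, which is how group-level sparsity enters a reconstruction."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
z = group_shrink(x, groups, lam=1.0)
```

Here the first group (norm 5) survives with a uniform shrinkage, while the second group (norm about 0.14) is zeroed out as a whole, mirroring how anatomical regions are kept or suppressed together.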

  20. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
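The flavor of condition-number regularization can be conveyed with a simplified eigenvalue-clipping stand-in for the paper's maximum-likelihood estimator (the clipping rule below is our own crude choice of floor, not the paper's optimal one):

```python
import numpy as np

def cond_reg_cov(S, kappa_max):
    """Shrink the spectrum of a sample covariance S so the estimate's
    condition number is at most kappa_max. Simplified stand-in for the
    ML estimator of Won et al.: we just clip eigenvalues into a band."""
    vals, vecs = np.linalg.eigh(S)
    # Choose a floor u and clip the spectrum into [u, kappa_max * u].
    u = max(vals.max() / kappa_max, vals.min())
    clipped = np.clip(vals, u, kappa_max * u)
    return vecs @ np.diag(clipped) @ vecs.T

rng = np.random.default_rng(6)
X = rng.normal(size=(12, 30))        # "large p, small n": p=30 > n=12
S = np.cov(X, rowvar=False)          # rank-deficient: cond is infinite
S_reg = cond_reg_cov(S, kappa_max=50.0)
```

The sample covariance here is singular, so it is not even invertible; after regularization the estimate is positive definite with condition number bounded by the prescribed `kappa_max`.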

  1. Automated and Adaptable Quantification of Cellular Alignment from Microscopic Images for Tissue Engineering Applications

    Science.gov (United States)

    Xu, Feng; Beyazoglu, Turker; Hefner, Evan; Gurkan, Umut Atakan

    2011-01-01

    Cellular alignment plays a critical role in functional, physical, and biological characteristics of many tissue types, such as muscle, tendon, nerve, and cornea. Current efforts toward regeneration of these tissues include replicating the cellular microenvironment by developing biomaterials that facilitate cellular alignment. To assess the functional effectiveness of the engineered microenvironments, one essential criterion is quantification of cellular alignment. Therefore, there is a need for rapid, accurate, and adaptable methodologies to quantify cellular alignment for tissue engineering applications. To address this need, we developed an automated method, binarization-based extraction of alignment score (BEAS), to determine cell orientation distribution in a wide variety of microscopic images. This method combines a sequenced application of median and band-pass filters, locally adaptive thresholding approaches and image processing techniques. Cellular alignment score is obtained by applying a robust scoring algorithm to the orientation distribution. We validated the BEAS method by comparing the results with the existing approaches reported in literature (i.e., manual, radial fast Fourier transform-radial sum, and gradient based approaches). Validation results indicated that the BEAS method resulted in statistically comparable alignment scores with the manual method (coefficient of determination R2=0.92). Therefore, the BEAS method introduced in this study could enable accurate, convenient, and adaptable evaluation of engineered tissue constructs and biomaterials in terms of cellular alignment and organization. PMID:21370940
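A common way to turn an orientation distribution into a single alignment score is the mean resultant length of the doubled angles from circular statistics; this is a generic score in the same spirit as BEAS, not the authors' exact scoring algorithm:

```python
import numpy as np

def alignment_score(angles_deg):
    """Alignment of axial orientation data (angles taken mod 180 deg):
    length of the mean resultant vector of the doubled angles.
    1.0 = perfectly aligned cells, 0.0 = no preferred orientation.
    (Generic circular-statistics score, not the exact BEAS formula.)"""
    theta = 2 * np.deg2rad(np.asarray(angles_deg))
    return float(np.hypot(np.mean(np.cos(theta)), np.mean(np.sin(theta))))

aligned = [44.0, 45.0, 46.0, 45.5]            # cells near one orientation
uniform = np.arange(0.0, 180.0, 10.0)         # evenly spread orientations
```

Doubling the angles makes 0 and 180 degrees equivalent, which is essential for cell orientations since a cell's long axis has no head or tail.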

  2. Effects of nicotine on cellular proliferation, cell cycle phase distribution, and macromolecular synthesis in human promyelocytic HL-60 leukaemia cells

    International Nuclear Information System (INIS)

    Konno, S.; Wu, J.M.; Chiao, J.W.

    1986-01-01

    Addition of nicotine causes a dose- and time-dependent inhibition of cell growth in human promyelocytic HL-60 leukemia cells, with 4 mM nicotine resulting in a 50% inhibition of cellular proliferation after 48-50 h. Accompanying the anticellular effect of nicotine is a significant change in the cell cycle distribution of HL-60 cells. For example, treatment with 4 mM nicotine for 20 h causes an increase in the proportion of G1-phase cells (from 49% to 57%) and a significant decrease in the proportion of S-phase cells (from 41% to 32%). These results suggest that nicotine causes partial cell arrest in the G1 phase, which may in part account for its effects on cell growth. To determine whether nicotine changes the cellular uptake/transport of macromolecular precursors, HL-60 cells were treated with 216 mM nicotine for 30 h, at the end of which time the cells were labelled with ( 3 H)thymidine, ( 3 H)uridine, ( 14 C)lysine and ( 35 S)methionine, and the trichloroacetic acid-soluble and -insoluble radioactivities from each of the labelling conditions were determined. These studies show that nicotine mainly affects the de novo synthesis of proteins. (author)

  3. Temporal organization of cellular self-replication

    Science.gov (United States)

    Alexandrov, Victor; Pugatch, Rami

    Recent experiments demonstrate that single cells grow exponentially in time. A coarse-grained model of cellular self-replication is presented based on a novel concept: the cell is viewed as a self-replicating queue. This allows a more fundamental look into various temporal organizations and, importantly, the inherent non-Markovianity of noise distributions. As an example, the distribution of doubling times can be inferred and compared to single-cell experiments in bacteria. We observe data collapse upon scaling by the average doubling time for different environments and present an inherent task-allocation trade-off. Support from the Simons Center for Systems Biology, IAS, Princeton.

  4. Seasonal variations in hepatic Cd and Cu concentrations and in the sub-cellular distribution of these metals in juvenile yellow perch (Perca flavescens)

    International Nuclear Information System (INIS)

    Kraemer, Lisa D.; Campbell, Peter G.C.; Hare, Landis

    2006-01-01

    perch with high hepatic metal concentrations. - In fish from metal-contaminated sites, seasonal variations in hepatic Cd and Cu concentrations were greater than in fish from reference sites, and homeostatic control of sub-cellular metal distribution was compromised

  5. Studies on factors of disorder and regularity in the street view. Part 1. ; Studies on disorder and regularity in the central business district. Gairo keikan no ranzatsu・seizensei yoin ni kansuru kenkyu. 1. ; Chushin shigaichi ni okeru ranzatsu・seizensei ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Matsunoto, N; Teranishi, N; Senda, M [Nagoya Institute of Technology, Nagoya (Japan). Faculty of Engineering

    1991-11-30

    This study aims to identify the visual confusion in spatial views through the everyday human conception of regularity and disorder, and to elucidate the physical factors that cause disorder and regularity in street views. Disorder factors include additions annexed to the view afterwards; regularity factors include the flat objects that decide the pattern of a view. Many factors can act as either disorder or regularity factors depending on their size, how they look, and how they are affected by surrounding objects. The degree of disorder in a street view is largely determined by distribution patterns such as the spread and convergence of the disorder factors. Disorder is intensified more by the spread of factors widely over an entire area, or by the presence of objects that give disorderly impressions, than by the sheer number of disorder factors. Regularity is easily affected by the disorder factors, being lowered as the disorder intensifies. The deciding factors for disorder or regularity are the kinds of objects, how they are laid out, and what surrounds them; the factors governing their degree are the size of the area over which they are distributed, their number, how they look, and where they are positioned. 24 refs., 9 figs., 6 tabs.

  6. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p, small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  7. Production, properties, and applications of hydrocolloid cellular solids.

    Science.gov (United States)

    Nussinovitch, Amos

    2005-02-01

    Many common synthetic and edible materials are, in fact, cellular solids. When classifying the structure of cellular solids, a few variables, such as open vs. closed cells, flexible vs. brittle cell walls, cell-size distribution, cell-wall thickness, cell shape, the uniformity of the structure of the cellular solid and the different scales of length are taken into account. Compressive stress-strain relationships of most cellular solids can be easily identified according to their characteristic sigmoid shape, reflecting three deformation mechanisms: (i) elastic distortion under small strains, (ii) collapse and/or fracture of the cell walls, and (iii) densification. Various techniques are used to produce hydrocolloid (gum) cellular solids. The products of these include (i) sponges, obtained when the drying gel contains the occasionally produced gas bubbles; (ii) sponges produced by the immobilization of microorganisms; (iii) solid foams produced by drying foamed solutions or gels containing oils, and (iv) hydrocolloid sponges produced by enzymatic reactions. The porosity of the manufactured cellular solid is subject to change and depends on its composition and the processing technique. The porosity is controlled by a range of methods and the resulting surface structures can be investigated by microscopy and analyzed using fractal methods. Models used to describe stress-strain behaviors of hydrocolloid cellular solids as well as multilayered products and composites are discussed in detail in this manuscript. Hydrocolloid cellular solids have numerous purposes, simple and complex, ranging from dried texturized fruits to carriers of vitamins and other essential micronutrients. They can also be used to control the acoustic response of specific dry food products, and have a great potential for future use in countless different fields, from novel foods and packaging to medicine and medical care, daily commodities, farming and agriculture, and the environmental, chemical

  8. A cellular automata model of bone formation.

    Science.gov (United States)

    Van Scoy, Gabrielle K; George, Estee L; Opoku Asantewaa, Flora; Kerns, Lucy; Saunders, Marnie M; Prieto-Langarica, Alicia

    2017-04-01

    Bone remodeling is an elegantly orchestrated process by which osteocytes, osteoblasts and osteoclasts function as a syncytium to maintain or modify bone. On the microscopic level, bone consists of cells that create, destroy and monitor the bone matrix. These cells interact in a coordinated manner to maintain a tightly regulated homeostasis. It is this regulation that is responsible for the observed increase in bone gain in the dominant arm of a tennis player and the observed increase in bone loss associated with spaceflight and osteoporosis. The manner in which these cells interact to bring about a change in bone quality and quantity has yet to be fully elucidated. But efforts to understand the multicellular complexity can ultimately lead to eradication of metabolic bone diseases such as osteoporosis and improved implant longevity. Experimentally validated mathematical models that simulate functional activity and offer eventual predictive capabilities offer tremendous potential in understanding multicellular bone remodeling. Here we undertake the initial challenge to develop a mathematical model of bone formation validated with in vitro data obtained from osteoblastic bone cells induced to mineralize and quantified at 26 days of culture. A cellular automata model was constructed to simulate the in vitro characterization. Permutation tests were performed to compare the distribution of the mineralization in the cultures and the distribution of the mineralization in the mathematical models. The results of the permutation test show the distribution of mineralization from the characterization and mathematical model come from the same probability distribution, therefore validating the cellular automata model. Copyright © 2017 Elsevier Inc. All rights reserved.
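A permutation test of the kind used here to compare the simulated and observed mineralization distributions can, for tiny samples, be run exactly by enumerating every relabelling of the pooled data (a generic sketch of the method, not the authors' implementation; names are ours):

```python
from itertools import combinations

def exact_perm_test(x, y):
    """Exact two-sample permutation test on the difference of means:
    enumerate every relabelling of the pooled data and report the
    fraction with an effect at least as extreme as the observed one."""
    pooled = list(x) + list(y)
    n = len(x)
    obs = abs(sum(x) / len(x) - sum(y) / len(y))
    count = total = 0
    for idx in combinations(range(len(pooled)), n):
        g1 = [pooled[i] for i in idx]
        g2 = [pooled[i] for i in range(len(pooled)) if i not in idx]
        diff = abs(sum(g1) / len(g1) - sum(g2) / len(g2))
        if diff >= obs - 1e-12:
            count += 1
        total += 1
    return count / total

p = exact_perm_test([1.0, 2.0], [10.0, 11.0])   # clearly separated samples
```

With two samples of size two, only 2 of the 6 relabellings reproduce the observed separation, so the exact p-value is 1/3; for realistic sample sizes one draws random permutations instead of enumerating all of them.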

  9. Salt-body Inversion with Minimum Gradient Support and Sobolev Space Norm Regularizations

    KAUST Repository

    Kazei, Vladimir

    2017-05-26

    Full-waveform inversion (FWI) is a technique which solves the ill-posed seismic inversion problem of fitting our model data to the measured ones from the field. FWI is capable of providing high-resolution estimates of the model, and of handling wave propagation of arbitrary complexity (visco-elastic, anisotropic); yet, it often fails to retrieve high-contrast geological structures, such as salt. One of the reasons for the FWI failure is that the updates at earlier iterations are too smooth to capture the sharp edges of the salt boundary. We compare several regularization approaches, which promote sharpness of the edges. Minimum gradient support (MGS) regularization focuses the inversion on blocky models, even more than the total variation (TV) does. However, both approaches try to invert undesirable high wavenumbers in the model too early for a model of complex structure. Therefore, we apply the Sobolev space norm as a regularizing term in order to maintain a balance between sharp and smooth updates in FWI. We demonstrate the application of these regularizations on a Marmousi model, enriched by a chunk of salt. The model turns out to be too complex in some parts to retrieve its full velocity distribution, yet the salt shape and contrast are retrieved.
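The contrast between TV and MGS can be seen on a one-dimensional toy model: a sharp step and a smooth ramp with the same total contrast have identical TV, but MGS, which essentially counts where the gradient is "switched on", penalizes the ramp far more. A minimal sketch with our own discretization and a hypothetical focusing parameter `beta`:

```python
import numpy as np

def tv(m):
    """Total variation: sum of absolute jumps of the 1-D model m."""
    return float(np.sum(np.abs(np.diff(m))))

def mgs(m, beta=0.1):
    """Minimum gradient support functional: each term approaches 1
    wherever the gradient is nonzero, almost regardless of magnitude."""
    g2 = np.diff(m) ** 2
    return float(np.sum(g2 / (g2 + beta**2)))

step = np.concatenate([np.zeros(10), np.ones(10)])   # one sharp jump
ramp = np.linspace(0.0, 1.0, 20)                     # same contrast, smooth
```

Since MGS favors the step while TV is indifferent, an MGS-regularized inversion is pushed toward blockier, salt-like models, which is the behavior compared against TV and Sobolev-norm damping in the paper.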

  10. Mixed Total Variation and L1 Regularization Method for Optical Tomography Based on Radiative Transfer Equation

    Directory of Open Access Journals (Sweden)

    Jinping Tang

    2017-01-01

    Full Text Available Optical tomography is an emerging and important molecular imaging modality. The aim of optical tomography is to reconstruct optical properties of human tissues. In this paper, we focus on reconstructing the absorption coefficient based on the radiative transfer equation (RTE. It is an ill-posed parameter identification problem. Regularization methods have been broadly applied to reconstruct the optical coefficients, such as the total variation (TV regularization and the L1 regularization. In order to better reconstruct the piecewise constant and sparse coefficient distributions, TV and L1 norms are combined as the regularization. The forward problem is discretized with the discontinuous Galerkin method on the spatial space and the finite element method on the angular space. The minimization problem is solved by a Jacobian-based Levenberg-Marquardt type method which is equipped with a split Bregman algorithm for the L1 regularization. We use the adjoint method to compute the Jacobian matrix which dramatically improves the computation efficiency. By comparing with the other imaging reconstruction methods based on TV and L1 regularizations, the simulation results show the validity and efficiency of the proposed method.
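The workhorse of split Bregman treatments of an L1 term is the closed-form soft-thresholding (shrinkage) step, shown here in isolation as a generic sketch rather than the authors' full iteration:

```python
import numpy as np

def shrink(x, lam):
    """Soft-thresholding: the closed-form shrinkage that split Bregman
    applies elementwise to handle the L1 term at each iteration.
    shrink(x, lam) = sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

d = shrink(np.array([3.0, -0.5, 1.0]), 1.0)
```

Entries smaller in magnitude than the threshold are set exactly to zero, which is what produces sparse (and, through the TV term, piecewise constant) coefficient reconstructions.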

  11. Co(III)EDTA as extra-cellular marker in μPIXE-analysis of rat cardiomyocytes

    International Nuclear Information System (INIS)

    Quaedackers, J.A.; Queens, R.M.G.J.; Mutsaers, P.H.A.; Voigt, M.J.A. de; Vusse, G.J. van der

    1998-01-01

    In previous studies no clear difference was found between the intra- and extra-cellular compartments in nuclear microprobe elemental distribution maps of freeze-dried cryo-sections of heart tissue. Probably due to artefacts during sample preparation, the intra-cellular and extra-cellular contents of elements are mixed up. In this article a method using NaCo(III)EDTA as an extra-cellular marker was applied to deconvolute the total ion content into extra- and intra-cellular contributions. The method was applied to both normoxic heart tissue and low-flow ischemic heart tissue. Intra-cellular ion concentrations calculated from the corrected ion contents of the normoxic tissue agree well with literature values. Moreover, a clear elevation of the intra-cellular sodium and chlorine concentrations was found in low-flow ischemic tissue. (orig.)

  12. Experimental design for dynamics identification of cellular processes.

    Science.gov (United States)

    Dinh, Vu; Rundell, Ann E; Buzzard, Gregery T

    2014-03-01

    We address the problem of using nonlinear models to design experiments to characterize the dynamics of cellular processes by using the approach of the Maximally Informative Next Experiment (MINE), which was introduced in W. Dong et al. (PLoS ONE 3(8):e3105, 2008) and independently in M.M. Donahue et al. (IET Syst. Biol. 4:249-262, 2010). In this approach, existing data is used to define a probability distribution on the parameters; the next measurement point is the one that yields the largest model output variance with this distribution. Building upon this approach, we introduce the Expected Dynamics Estimator (EDE), which is the expected value using this distribution of the output as a function of time. We prove the consistency of this estimator (uniform convergence to true dynamics) even when the chosen experiments cluster in a finite set of points. We extend this proof of consistency to various practical assumptions on noisy data and moderate levels of model mismatch. Through the derivation and proof, we develop a relaxed version of MINE that is more computationally tractable and robust than the original formulation. The results are illustrated with numerical examples on two nonlinear ordinary differential equation models of biomolecular and cellular processes.
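The MINE selection rule itself is compact: given samples drawn from the current parameter distribution, score each candidate measurement point by the variance of the model output over those samples and take the maximizer. A toy sketch under stated assumptions (a hypothetical one-parameter exponential-decay model, not the paper's ODE systems):

```python
import math
import random

def mine_next_point(model, param_samples, candidate_times):
    """Maximally Informative Next Experiment (sketch): among the
    candidate measurement times, return the one with the largest
    model-output variance over the current parameter distribution."""
    def output_variance(t):
        outs = [model(p, t) for p in param_samples]
        mean = sum(outs) / len(outs)
        return sum((o - mean) ** 2 for o in outs) / len(outs)
    return max(candidate_times, key=output_variance)

# Hypothetical toy model: exponential decay y = exp(-k*t), with the
# uncertain rate k represented by samples from a uniform distribution.
rng = random.Random(0)
param_samples = [[rng.uniform(0.5, 1.5)] for _ in range(500)]
model = lambda p, t: math.exp(-p[0] * t)
t_next = mine_next_point(model, param_samples, [0.0, 0.5, 1.0, 2.0, 4.0])
```

At t = 0 all parameter values give the same output (variance zero), and for large t all outputs decay toward zero, so the most informative measurement lies at an intermediate time.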

  13. Regular-, irregular-, and pseudo-character processing in Chinese: The regularity effect in normal adult readers

    Directory of Open Access Journals (Sweden)

    Dustin Kai Yan Lau

    2014-03-01

    Full Text Available Background Unlike alphabetic languages, Chinese uses a logographic script. However, the pronunciation of many character’s phonetic radical has the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999. Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007. Pseudocharacters can be pronounced by direct derivation from the sound of its phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered as irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014, would also be present in normal adult Chinese readers. Method Participants. Thirty (50% female native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% of name agreement in Chinese were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character as repeated measures (F1 or between subject

  14. Frequent cellular phone use modifies hypothalamic-pituitary-adrenal axis response to a cellular phone call after mental stress in healthy children and adolescents: A pilot study.

    Science.gov (United States)

    Geronikolou, Styliani A; Chamakou, Aikaterini; Mantzou, Aimilia; Chrousos, George; Kanaka-Gantenbein, Christina

    2015-12-01

    The hypothalamic-pituitary-adrenal (HPA) axis is the main "gate-keeper" of the organism's response to every somatic or mental stress. This prospective study aims to investigate the HPA-axis response to a cellular phone call after mental stress in healthy children and adolescents, and to assess the possible predictive role of baseline endocrine markers in this response. Two groups of healthy school-age children aged 11-14 (12.5±1.5) years were included in the study: the first comprising occasional users of a cellular phone (Group A), the second those who use one regularly (Group B). Blood samples were obtained from all participants at 8.00 am after a 12-hour overnight fast for determination of thyroid hormone, glucose, insulin, and cortisol levels. The participants performed the Trier Social Stress Test for Children (TSST-C) (5 min oral task followed by a 5 min arithmetic task). Salivary cortisol samples were obtained at baseline, 10 and 20 min after the TSST-C, and 10 and 20 min after a 5 minute cellular phone call. Significant changes in salivary cortisol levels were noted between 10 and 20 min after the cellular phone call, with different responses between the two groups. Baseline thyroid hormone levels seem to predict the cortisol response to mental stress mainly in Group A, while HOMA had no impact on the salivary cortisol response at any phase of the test in either group. The HPA axis response to a cellular phone call after mental stress in children and adolescents follows a different pattern in frequent users than in occasional users, one that seems to be influenced by baseline thyroid hormone levels. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Supporting Regularized Logistic Regression Privately and Efficiently

    Science.gov (United States)

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely-used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
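For reference, the underlying statistical model (before any privacy protection) is ordinary L2-regularized logistic regression. A minimal single-machine sketch; the paper's cryptographic, multi-institution protocol is not reproduced here:

```python
import math

def train_logreg_l2(X, y, lam=0.1, lr=0.5, epochs=2000):
    """Batch gradient descent for L2-regularized logistic regression:
    minimizes mean log-loss + (lam/2)*||w||^2 (bias left unregularized)."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw = [lam * wi for wi in w]   # gradient of the L2 penalty
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = 1.0 / (1.0 + math.exp(-z)) - yi
            gw = [g + err * xj / n for g, xj in zip(gw, xi)]
            gb += err / n
        w = [wi - lr * gi for wi, gi in zip(w, gw)]
        b -= lr * gb
    return w, b

# Tiny one-feature example, classes separated around x = 1.5.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]
w, b = train_logreg_l2(X, y)
```

The regularization term lam shrinks the weights, trading a slightly smoother decision boundary for stability; it is this penalized objective that the privacy-preserving protocol must evaluate across institutions.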

  16. Supporting Regularized Logistic Regression Privately and Efficiently.

    Science.gov (United States)

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely-used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.

  17. Supporting Regularized Logistic Regression Privately and Efficiently.

    Directory of Open Access Journals (Sweden)

    Wenfa Li

    Full Text Available As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely-used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.

  18. Quantification of fetal heart rate regularity using symbolic dynamics

    Science.gov (United States)

    van Leeuwen, P.; Cysarz, D.; Lange, S.; Geue, D.; Groenemeyer, D.

    2007-03-01

    Fetal heart rate complexity was examined on the basis of RR interval time series obtained in the second and third trimester of pregnancy. In each fetal RR interval time series, short term beat-to-beat heart rate changes were coded in 8-bit binary sequences. Redundancies of the 2^8 different binary patterns were reduced by two different procedures. The complexity of these sequences was quantified using the approximate entropy (ApEn), resulting in discrete ApEn values which were used for classifying the sequences into 17 pattern sets. Also, the sequences were grouped into 20 pattern classes with respect to identity after rotation or inversion of the binary value. There was a specific, nonuniform distribution of the sequences in the pattern sets and this differed from the distribution found in surrogate data. In the course of gestation, the number of sequences increased in seven pattern sets, decreased in four and remained unchanged in six. Sequences that occurred less often over time, both regular and irregular, were characterized by patterns reflecting frequent beat-to-beat reversals in heart rate. They were also predominant in the surrogate data, suggesting that these patterns are associated with stochastic heart beat trains. Sequences that occurred more frequently over time were relatively rare in the surrogate data. Some of these sequences had a high degree of regularity and corresponded to prolonged heart rate accelerations or decelerations which may be associated with directed fetal activity or movement or baroreflex activity. Application of the pattern classes revealed that those sequences with a high degree of irregularity correspond to heart rate patterns resulting from complex physiological activity such as fetal breathing movements. The results suggest that the development of the autonomic nervous system and the emergence of fetal behavioral states lead to increases in not only irregular but also regular heart rate patterns. Using symbolic dynamics to
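The first coding step described above can be sketched directly: each beat-to-beat change is mapped to a binary symbol, and consecutive symbols are grouped into 8-bit words (at most 2^8 = 256 distinct patterns). A minimal illustration with our own variable names; the paper's redundancy-reduction procedures are omitted:

```python
def binary_patterns(rr, word_len=8):
    """Code beat-to-beat changes of an RR-interval series as binary words.

    Each symbol is 1 if the RR interval increases (heart rate slows)
    and 0 otherwise; consecutive symbols are grouped into overlapping
    word_len-bit words, giving at most 2**word_len distinct patterns.
    """
    symbols = [1 if b > a else 0 for a, b in zip(rr, rr[1:])]
    words = []
    for i in range(len(symbols) - word_len + 1):
        bits = symbols[i:i + word_len]
        words.append(sum(bit << (word_len - 1 - k) for k, bit in enumerate(bits)))
    return words

# A strictly alternating RR series yields only the two alternating words.
rr = [800, 810, 800, 810, 800, 810, 800, 810, 800, 810]
pats = binary_patterns(rr)
```

Counting how often each word occurs yields the pattern distribution whose complexity the ApEn analysis then quantifies.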

  19. Regularity effect in prospective memory during aging

    Directory of Open Access Journals (Sweden)

    Geoffrey Blondelle

    2016-10-01

    Full Text Available Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, shifting, and binding), short-term memory, and retrospective episodic memory, to identify those involved in PM according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling regular activities only involved planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical

  20. J-regular rings with injectivities

    OpenAIRE

    Shen, Liang

    2010-01-01

    A ring $R$ is called a J-regular ring if $R/J(R)$ is von Neumann regular, where $J(R)$ is the Jacobson radical of $R$. It is proved that if $R$ is J-regular, then (i) $R$ is right n-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) $R$ is right FP-injective if and only if $R$ is right (J, R)-FP-injective. Some known results are improved.

  1. Iterative Regularization with Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2007-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.

  2. Iterative regularization with minimum-residual methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2006-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.

  3. Three-dimensional analysis of cellular microstructures by computer simulation

    International Nuclear Information System (INIS)

    Hanson, K.; Morris, J.W. Jr.

    1977-06-01

    For microstructures of the "cellular" type (isotropic growth from a distribution of nuclei which form simultaneously), it is possible to construct an efficient code which completely analyzes the microstructure in three dimensions. Such a computer code for creating and storing the connected graph was constructed.
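With isotropic growth from nuclei that all form simultaneously and grow at equal rates, every point ends up in the cell of its nearest nucleus, i.e. the microstructure is a Voronoi tessellation. A 2D grid sketch of that construction (the paper works in three dimensions and also stores the connected graph, which is omitted here):

```python
def grow_cells(nuclei, width, height):
    """Label a grid by isotropic growth from simultaneously formed nuclei.

    Equal growth rates and simultaneous nucleation mean each grid point
    belongs to the cell of its nearest nucleus: a discrete Voronoi
    tessellation. Returns grid[y][x] = index of the owning nucleus.
    """
    def nearest(x, y):
        return min(range(len(nuclei)),
                   key=lambda i: (nuclei[i][0] - x) ** 2 + (nuclei[i][1] - y) ** 2)
    return [[nearest(x, y) for x in range(width)] for y in range(height)]

# Two nuclei partition an 8x8 grid into two cells.
grid = grow_cells([(1, 1), (6, 6)], 8, 8)
```

From such a labeled volume one can extract cell sizes, face counts and neighbor relations, the quantities a full 3D analysis of cellular microstructures would report.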

  4. Results on three predictions for July 2012 federal elections in Mexico based on past regularities.

    Directory of Open Access Journals (Sweden)

    H Hernández-Saldaña

    Full Text Available The Presidential Election in Mexico of July 2012 was the third time that PREP, the Previous Electoral Results Program, has operated. PREP gives voting outcomes based on the electoral certificates of each polling station that arrive at capture centers. In the previous ones, some statistical regularities had been observed; three of them were selected to make predictions and were published in arXiv:1207.0078 [physics.soc-ph]. Using the database made public in July 2012, two of the predictions were completely fulfilled, while the third one was measured and confirmed using the database obtained upon request to the electoral authorities. The first two predictions confirmed by actual measures are: (ii) The Partido Revolucionario Institucional, PRI, is a sprinter and has a better performance in polling stations arriving late to capture centers during the process. (iii) The distribution of the vote of this party is well described by a smooth function named a Daisy model. A Gamma distribution, but compatible with a Daisy model, fits the distribution as well. The third prediction confirms that errare humanum est, since the error distributions of all the self-consistency variables appeared as a central power law with lateral lobes, as in the 2000 and 2006 electoral processes. The three measured regularities appeared no matter the political environment.

  5. Results on three predictions for July 2012 federal elections in Mexico based on past regularities.

    Science.gov (United States)

    Hernández-Saldaña, H

    2013-01-01

    The Presidential Election in Mexico of July 2012 was the third time that PREP, the Previous Electoral Results Program, has operated. PREP gives voting outcomes based on the electoral certificates of each polling station that arrive at capture centers. In the previous ones, some statistical regularities had been observed; three of them were selected to make predictions and were published in arXiv:1207.0078 [physics.soc-ph]. Using the database made public in July 2012, two of the predictions were completely fulfilled, while the third one was measured and confirmed using the database obtained upon request to the electoral authorities. The first two predictions confirmed by actual measures are: (ii) The Partido Revolucionario Institucional, PRI, is a sprinter and has a better performance in polling stations arriving late to capture centers during the process. (iii) The distribution of the vote of this party is well described by a smooth function named a Daisy model. A Gamma distribution, but compatible with a Daisy model, fits the distribution as well. The third prediction confirms that errare humanum est, since the error distributions of all the self-consistency variables appeared as a central power law with lateral lobes, as in the 2000 and 2006 electoral processes. The three measured regularities appeared no matter the political environment.

  6. Regular pipeline maintenance of gas pipeline using technical operational diagnostics methods

    Energy Technology Data Exchange (ETDEWEB)

    Volentic, J [Gas Transportation Department, Slovensky plynarensky priemysel, Slovak Gas Industry, Bratislava (Slovakia)

    1998-12-31

    Slovensky plynarensky priemysel (SPP) operated 17 487 km of gas pipelines in 1995. The length of the long-line pipelines reached 5 191 km; the distribution network was 12 296 km. The international transit system of long-line gas pipelines comprised 1 939 km of pipelines of various dimensions. The described scale of the transport and distribution system represents multibillion investments stored in the ground, which are exposed to environmental influences and to pipeline operational stresses. In spite of all technical and maintenance arrangements which have to be performed on operating gas pipelines, gradual ageing takes place anyway, expressed in degradation processes both in the steel tube and in the anti-corrosion coating. Within a certain time horizon, a consistent and regular application of methods and means of in-service technical diagnostics and rehabilitation of existing pipeline systems makes it possible to save substantial investment funds, postponing the need for funds for a complex or partial reconstruction or a new construction of a specific gas section. The purpose of this presentation is to report on the implementation of the programme of in-service technical diagnostics of gas pipelines within the framework of regular maintenance of SPP s.p. Bratislava high pressure gas pipelines. (orig.) 6 refs.

  7. Regular pipeline maintenance of gas pipeline using technical operational diagnostics methods

    Energy Technology Data Exchange (ETDEWEB)

    Volentic, J. [Gas Transportation Department, Slovensky plynarensky priemysel, Slovak Gas Industry, Bratislava (Slovakia)

    1997-12-31

    Slovensky plynarensky priemysel (SPP) operated 17 487 km of gas pipelines in 1995. The length of the long-line pipelines reached 5 191 km; the distribution network was 12 296 km. The international transit system of long-line gas pipelines comprised 1 939 km of pipelines of various dimensions. The described scale of the transport and distribution system represents multibillion investments stored in the ground, which are exposed to environmental influences and to pipeline operational stresses. In spite of all technical and maintenance arrangements which have to be performed on operating gas pipelines, gradual ageing takes place anyway, expressed in degradation processes both in the steel tube and in the anti-corrosion coating. Within a certain time horizon, a consistent and regular application of methods and means of in-service technical diagnostics and rehabilitation of existing pipeline systems makes it possible to save substantial investment funds, postponing the need for funds for a complex or partial reconstruction or a new construction of a specific gas section. The purpose of this presentation is to report on the implementation of the programme of in-service technical diagnostics of gas pipelines within the framework of regular maintenance of SPP s.p. Bratislava high pressure gas pipelines. (orig.) 6 refs.

  8. Regularities development of entrepreneurial structures in regions

    Directory of Open Access Journals (Sweden)

    Julia Semenovna Pinkovetskaya

    2012-12-01

    Full Text Available We consider regularities and tendencies for three types of entrepreneurial structures: small enterprises, medium enterprises, and individual entrepreneurs. The aim of the research was to confirm the possibility of describing indicators of aggregate entrepreneurial structures with normal law distribution functions. We present the methodological approach proposed by the author and the results of constructing density distribution functions for the main indicators of various objects: the Russian Federation, its regions, as well as aggregates of entrepreneurial structures specialized in certain forms of economic activity. All the developed functions, as shown by logical and statistical analysis, are of high quality and approximate the original data well. In general, the proposed methodological approach is versatile and can be used in further studies of aggregates of entrepreneurial structures. The results can be applied to a wide range of problems, such as justifying the need for personnel and financial resources at the federal, regional and municipal levels, as well as forming plans and forecasts for the development of entrepreneurship and the improvement of this sector of the economy.

  9. Higher derivative regularization and chiral anomaly

    International Nuclear Information System (INIS)

    Nagahama, Yoshinori.

    1985-02-01

    A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)

  10. Regularization parameter estimation for underdetermined problems by the χ 2 principle with application to 2D focusing gravity inversion

    International Nuclear Information System (INIS)

    Vatankhah, Saeed; Ardestani, Vahid E; Renaut, Rosemary A

    2014-01-01

    The χ² principle generalizes the Morozov discrepancy principle to the augmented residual of the Tikhonov regularized least squares problem. For weighting of the data fidelity by a known Gaussian noise distribution on the measured data, when the stabilizing, or regularization, term is considered to be weighted by unknown inverse covariance information on the model parameters, the minimum of the Tikhonov functional becomes a random variable that follows a χ²-distribution with m+p−n degrees of freedom for the model matrix G of size m×n, m⩾n, and regularizer L of size p×n. Then, a Newton root-finding algorithm, employing the generalized singular value decomposition, or singular value decomposition when L = I, can be used to find the regularization parameter α. Here the result and algorithm are extended to the underdetermined case, m<n, and the χ² and unbiased predictive risk estimators of the regularization parameter are used for the first time in this context. For a simulated underdetermined data set with noise, these regularization parameter estimation methods, as well as the generalized cross validation method, are contrasted with the use of the L-curve and the Morozov discrepancy principle. Experiments demonstrate the efficiency and robustness of the χ² principle and unbiased predictive risk estimator, moreover showing that the L-curve and Morozov discrepancy principle are outperformed in general by the other three techniques. Furthermore, the minimum support stabilizer is of general use for the χ² principle when implemented without the desirable knowledge of the mean value of the model. (paper)
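The Newton root-finding step is easiest to see in a diagonalized setting. For a diagonal model matrix with singular values s_i, unit noise variance and L = I, the Tikhonov functional at its minimizer reduces to J(α) = Σ d_i² α²/(s_i² + α²), and the χ² principle chooses α so that J(α) equals the expected degrees of freedom. A sketch under those simplifying assumptions (not the paper's GSVD-based implementation):

```python
def chi2_regularization_param(s, d, dof, alpha0=1.0, tol=1e-12, iters=100):
    """Newton root-finding for the chi-squared principle (diagonal sketch).

    With singular values s, data d (unit noise variance) and L = I,
    the Tikhonov functional at its minimizer is
        J(alpha) = sum_i d_i**2 * alpha**2 / (s_i**2 + alpha**2),
    an increasing function of alpha; we solve J(alpha) = dof by Newton.
    """
    a = alpha0
    for _ in range(iters):
        J = sum(di ** 2 * a ** 2 / (si ** 2 + a ** 2) for si, di in zip(s, d))
        dJ = sum(2 * di ** 2 * a * si ** 2 / (si ** 2 + a ** 2) ** 2
                 for si, di in zip(s, d))
        step = (J - dof) / dJ
        a = max(a - step, 1e-12)   # guard against overshooting below zero
        if abs(step) < tol:
            break
    return a

# Two singular values and data chosen so the exact root is alpha^2 = 1/7.
alpha = chi2_regularization_param([1.0, 1.0], [2.0, 2.0], dof=1.0)
```

Because J is monotone in α, the Newton iteration converges quickly from any reasonable starting value, which is what makes the principle cheap compared with scanning an L-curve.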

  11. SPET reconstruction with a non-uniform attenuation coefficient using an analytical regularizing iterative method

    International Nuclear Information System (INIS)

    Soussaline, F.; LeCoq, C.; Raynaud, C.; Kellershohn

    1982-01-01

    The potential of the Regularizing Iterative Method (RIM), when used in brain studies, is evaluated. RIM is designed to provide fast and accurate reconstruction of tomographic images when non-uniform attenuation is to be accounted for. As indicated by phantom studies, this method improves the contrast and the signal-to-noise ratio as compared to those obtained with the Filtered Back Projection (FBP) technique. Preliminary results obtained in brain studies using isopropyl-amphetamine I-123 (AMPI-123) are very encouraging in terms of quantitative regional cellular activity. However, the clinical usefulness of this mathematically accurate reconstruction procedure remains to be demonstrated, by comparing quantitative data in heart or liver studies where control values can be obtained.

  12. Simulation of electrochemical processes in cardiac tissue based on cellular automaton

    International Nuclear Information System (INIS)

    Avdeev, S A; Bogatov, N M

    2014-01-01

    A new class of cellular automata using a special accumulative function for the distribution of nonuniformity is presented. The use of this automaton type to simulate excitable media, applied to electrochemical processes in human cardiac tissue, is shown.
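As a generic illustration of an excitable-medium cellular automaton (the classic Greenberg-Hastings rule, not the paper's accumulative-function construction): a resting cell fires when a neighbour is excited, an excited cell becomes refractory, and a refractory cell recovers, producing the travelling excitation waves characteristic of cardiac tissue models.

```python
def gh_step(grid):
    """One synchronous update of a Greenberg-Hastings automaton.

    States: 0 = resting, 1 = excited, 2 = refractory. A resting cell
    becomes excited if any von Neumann neighbour is excited; excited
    cells become refractory; refractory cells return to rest.
    Periodic boundary conditions keep the sketch self-contained.
    """
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = grid[y][x]
            if s == 1:
                nxt[y][x] = 2
            elif s == 2:
                nxt[y][x] = 0
            else:
                nbrs = [grid[(y + dy) % h][(x + dx) % w]
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                nxt[y][x] = 1 if 1 in nbrs else 0
    return nxt

# A single excited cell spreads a ring of excitation outward.
g0 = [[0] * 5 for _ in range(5)]
g0[2][2] = 1
g1 = gh_step(g0)
```

Repeated application of `gh_step` propagates the excitation front while the refractory tail prevents immediate re-excitation, the mechanism underlying spiral waves in cardiac simulations.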

  13. 75 FR 53966 - Regular Meeting

    Science.gov (United States)

    2010-09-02

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...

  14. Work and family life of childrearing women workers in Japan: comparison of non-regular employees with short working hours, non-regular employees with long working hours, and regular employees.

    Science.gov (United States)

    Seto, Masako; Morimoto, Kanehisa; Maruyama, Soichiro

    2006-05-01

    This study assessed the working and family life characteristics, and the degree of domestic and work strain, of female workers with different employment statuses and weekly working hours who are rearing children. Participants were the mothers of preschoolers in a large Japanese city. We classified the women into three groups according to the hours they worked and their employment conditions: non-regular employees working less than 30 h a week (n=136); non-regular employees working 30 h or more per week (n=141); and regular employees working 30 h or more a week (n=184). We compared the groups on the subjective value of work, financial difficulties, childcare and housework burdens, psychological effects, and strains such as work and family strain, work-family conflict, and work dissatisfaction. Regular employees were more likely to report job pressures and inflexible work schedules and to experience more strain related to work and family than non-regular employees. Non-regular employees were more likely to be facing financial difficulties. In particular, non-regular employees working longer hours tended to encounter socioeconomic difficulties and often lacked support from family and friends. Female workers with children may have different social backgrounds and different stressors according to their working hours and work status.

  15. Phase-Type Models of Channel-Holding Times in Cellular Communication Systems

    DEFF Research Database (Denmark)

    Christensen, Thomas Kaare; Nielsen, Bo Friis; Iversen, Villy Bæk

    2004-01-01

    In this paper, we derive the distribution of the channel-holding time when both cell-residence and call-holding times are phase-type distributed. Furthermore, the distribution of the number of handovers, the conditional channel-holding time distributions, and the channel-holding time when cell-residence times are correlated are derived. All distributions are of phase type, making them very general and flexible. The channel-holding times are of importance in performance evaluation and simulation of cellular mobile communication systems.
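The structure is easy to exercise by simulation: a phase-type variate is the absorption time of a Markov chain, and the channel-holding time is the minimum of the call-holding and cell-residence times (itself phase type). A sketch restricted to sequential, Erlang-like phase-type distributions, with rates chosen for illustration:

```python
import random

def sample_phase_type(rates, rng):
    """Absorption time of a sequential phase-type distribution:
    exponential sojourns with the given rates, visited in order
    (an Erlang distribution is the equal-rates special case)."""
    return sum(rng.expovariate(r) for r in rates)

def sample_channel_holding(cell_rates, call_rates, rng):
    """Channel-holding time: the channel is released when the call ends
    or the mobile leaves the cell, whichever comes first, i.e. the
    minimum of two independent phase-type variates."""
    return min(sample_phase_type(cell_rates, rng),
               sample_phase_type(call_rates, rng))

rng = random.Random(42)
# Cell residence ~ Erlang(2) with rate 0.5 per stage (mean 4 units);
# call holding ~ exponential with rate 1 (mean 1 unit).
samples = [sample_channel_holding([0.5, 0.5], [1.0], rng) for _ in range(5000)]
mean_holding = sum(samples) / len(samples)
```

For these rates the analytic mean of the minimum works out to 8/9 ≈ 0.89, so the simulated `mean_holding` should fall close to that value.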

  16. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method over regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
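
    The idea can be sketched in a toy setting: run gradient descent on the pure data term and project each iterate onto a convex set of admissible solutions, rather than adding a penalty term. The sketch below uses the nonnegative orthant as a stand-in for the set of regularized candidates and a small synthetic least-squares problem; the matrix and numbers are illustrative, not from the paper:

```python
def solve_projected(A, b, steps=500, lr=0.1):
    """Gradient descent on the data term ||Ax - b||^2 alone, with each
    iterate projected onto a convex set of admissible solutions
    (here: the nonnegative orthant, standing in for the regularized set)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # residual r = A x - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        # gradient g = 2 A^T r
        g = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [x[j] - lr * g[j] for j in range(n)]
        x = [max(v, 0.0) for v in x]  # projection step replaces the penalty term
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, -2.0, 1.0]
x = solve_projected(A, b)
# unconstrained minimizer is (5/3, -4/3); the projected solution is (1, 0)
```
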

  17. Geometric regularizations and dual conifold transitions

    International Nuclear Information System (INIS)

    Landsteiner, Karl; Lazaroiu, Calin I.

    2003-01-01

    We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)

  18. Adaptive regularization

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.

    1994-01-01

    Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory for iterative estimation of weight decay parameters. The basic idea is to do a gradient descent…

  19. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
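
    A toy illustration of the diversification "pressure" induced by an L2 regularizer: for the minimum-variance portfolio with a budget constraint (used here as a simple stand-in for the paper's expected-shortfall formulation), the regularized weights are proportional to (Σ + λI)⁻¹1, normalized to sum to one, and increasing λ pushes the weights toward the uniform portfolio. The covariance numbers are hypothetical:

```python
def min_variance_weights(cov, lam):
    """Closed form for the L2-regularized minimum-variance portfolio with a
    budget constraint: w is proportional to (Sigma + lam*I)^{-1} applied to
    the all-ones vector, then normalized so the weights sum to 1."""
    a, b = cov[0][0] + lam, cov[0][1]
    c, d = cov[1][0], cov[1][1] + lam
    det = a * d - b * c
    # (Sigma + lam*I)^{-1} @ [1, 1], up to the 1/det factor absorbed below
    w = [(d - b) / det, (a - c) / det]
    s = sum(w)
    return [wi / s for wi in w]

cov = [[0.04, 0.01], [0.01, 0.09]]
w0 = min_variance_weights(cov, 0.0)  # unregularized
w1 = min_variance_weights(cov, 1.0)  # strong diversification "pressure"
```

    With λ = 0 the low-variance asset dominates; with large λ the weights approach 50/50, which is the stability-improving diversification effect the abstract describes.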

  20. Regularizing portfolio optimization

    Science.gov (United States)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  1. Tessellating the Sphere with Regular Polygons

    Science.gov (United States)

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.

  2. Mechanical behavior of regular open-cell porous biomaterials made of diamond lattice unit cells.

    Science.gov (United States)

    Ahmadi, S M; Campoli, G; Amin Yavari, S; Sajadi, B; Wauthle, R; Schrooten, J; Weinans, H; Zadpoor, A A

    2014-06-01

    Cellular structures with highly controlled micro-architectures are promising materials for orthopedic applications that require bone-substituting biomaterials or implants. The availability of additive manufacturing techniques has enabled manufacturing of biomaterials made of one or multiple types of unit cells. The diamond lattice unit cell is one of the relatively new types of unit cells that are used in manufacturing of regular porous biomaterials. As opposed to many other types of unit cells, there is currently no analytical solution that could be used for prediction of the mechanical properties of cellular structures made of the diamond lattice unit cells. In this paper, we present new analytical solutions and closed-form relationships for predicting the elastic modulus, Poisson's ratio, critical buckling load, and yield (plateau) stress of cellular structures made of the diamond lattice unit cell. The mechanical properties predicted using the analytical solutions are compared with those obtained using finite element models. A number of solid and porous titanium (Ti6Al4V) specimens were manufactured using selective laser melting. A series of experiments were then performed to determine the mechanical properties of the matrix material and cellular structures. The experimentally measured mechanical properties were compared with those obtained using analytical solutions and finite element (FE) models. It has been shown that, for small apparent density values, the mechanical properties obtained using analytical and numerical solutions are in agreement with each other and with experimental observations. The properties estimated using an analytical solution based on the Euler-Bernoulli theory markedly deviated from experimental results for large apparent density values. The mechanical properties estimated using FE models and another analytical solution based on the Timoshenko beam theory better matched the experimental observations.

  3. Intelligent control system Cellular Robotics Approach to Nuclear Plant control and maintenance

    International Nuclear Information System (INIS)

    Fukuda, Toshio; Sekiyama, Kousuke; Xue Guoqing; Ueyama, Tsuyoshi.

    1994-01-01

    This paper presents the concept of the Cellular Robotic System (CEBOT) and describes a strategy for distributed sensing, control and planning as a cellular robotics approach to nuclear plant control and maintenance. A decentralized system is effective in a large plant, and CEBOT possesses desirable features for realizing nuclear plant control and maintenance because of its flexibility and adaptability. Related ongoing research on self-organizing manipulators and communication issues is also mentioned. (author)

  4. Computing aggregate properties of preimages for 2D cellular automata.

    Science.gov (United States)

    Beer, Randall D

    2017-11-01

    Computing properties of the set of precursors of a given configuration is a common problem underlying many important questions about cellular automata. Unfortunately, such computations quickly become intractable in dimension greater than one. This paper presents an algorithm-incremental aggregation-that can compute aggregate properties of the set of precursors exponentially faster than naïve approaches. The incremental aggregation algorithm is demonstrated on two problems from the two-dimensional binary Game of Life cellular automaton: precursor count distributions and higher-order mean field theory coefficients. In both cases, incremental aggregation allows us to obtain new results that were previously beyond reach.
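
    For scale, the naïve baseline the paper improves on can be written down directly: enumerate every 4x4 binary grid and tally which 2x2 centre patch it evolves to under the Game of Life rule (the centre cells are exactly those whose 3x3 neighbourhoods lie fully inside the grid). This brute-force enumeration, not the paper's incremental aggregation algorithm, is the exponential approach it is designed to beat:

```python
from itertools import product
from collections import Counter

def life_cell(grid, r, c):
    """One Game of Life update for cell (r, c); the 3x3 neighbourhood
    of (r, c) must lie inside the grid."""
    n = sum(grid[r + dr][c + dc]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)) - grid[r][c]
    return 1 if n == 3 or (grid[r][c] and n == 2) else 0

def preimage_counts():
    """Tally, over all 2^16 4x4 grids, the 2x2 centre patch each grid
    evolves to: counts[patch] is the number of precursors of that patch."""
    counts = Counter()
    for bits in product((0, 1), repeat=16):
        grid = [bits[0:4], bits[4:8], bits[8:12], bits[12:16]]
        image = tuple(life_cell(grid, r, c) for r in (1, 2) for c in (1, 2))
        counts[image] += 1
    return counts

counts = preimage_counts()
```

    Already at this tiny size the enumeration touches 65,536 grids; doubling the patch side squares the work, which is why aggregate-only algorithms matter.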

  5. Diagrammatic methods in phase-space regularization

    International Nuclear Information System (INIS)

    Bern, Z.; Halpern, M.B.; California Univ., Berkeley

    1987-11-01

    Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the nogrowth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)

  6. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  7. A self-adapting and altitude-dependent regularization method for atmospheric profile retrievals

    Directory of Open Access Journals (Sweden)

    M. Ridolfi

    2009-03-01

    MIPAS is a Fourier transform spectrometer, operating onboard the ENVISAT satellite since July 2002. The online retrieval algorithm produces geolocated profiles of temperature and of volume mixing ratios of six key atmospheric constituents: H2O, O3, HNO3, CH4, N2O and NO2. In the validation phase, oscillations beyond the error bars were observed in several profiles, particularly in CH4 and N2O.

    To tackle this problem, a Tikhonov regularization scheme has been implemented in the retrieval algorithm. The applied regularization is however rather weak in order to preserve the vertical resolution of the profiles.

    In this paper we present a self-adapting and altitude-dependent regularization approach that detects whether the analyzed observations contain information about small-scale profile features, and determines the strength of the regularization accordingly. The objective of the method is to smooth out artificial oscillations as much as possible, while preserving the fine detail features of the profile when related information is detected in the observations.

    The proposed method is checked for self-consistency, its performance is tested on MIPAS observations and compared with that of some other regularization schemes available in the literature. In all the considered cases the proposed scheme achieves a good performance, thanks to its altitude dependence and to the constraints employed, which are specific to the inversion problem under consideration. The proposed method is generally applicable to iterative Gauss-Newton algorithms for the retrieval of vertical distribution profiles from atmospheric remote sounding measurements.

  8. Error performance analysis in downlink cellular networks with interference management

    KAUST Repository

    Afify, Laila H.

    2015-05-01

    Modeling aggregate network interference in cellular networks has recently gained immense attention both in academia and industry. While stochastic geometry based models have succeeded in accounting for the cellular network geometry, they mostly abstract away many important wireless communication system aspects (e.g., modulation techniques, signal recovery techniques). Recently, a novel stochastic geometry model, based on the Equivalent-in-Distribution (EiD) approach, succeeded in capturing the aforementioned communication system aspects and extending the analysis to averaged error performance, however at the expense of increased modeling complexity. Inspired by the EiD approach, the analysis developed in [1] takes into consideration the key system parameters, while providing a simple tractable analysis. In this paper, we extend this framework to study the effect of different interference management techniques in downlink cellular networks. The accuracy of the proposed analysis is verified via Monte Carlo simulations.

  9. Temporal regularity of the environment drives time perception

    OpenAIRE

    van Rijn, H; Rhodes, D; Di Luca, M

    2016-01-01

    It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stimulus…

  10. A Markov Process Inspired Cellular Automata Model of Road Traffic

    OpenAIRE

    Wang, Fa; Li, Li; Hu, Jianming; Ji, Yan; Yao, Danya; Zhang, Yi; Jin, Xuexiang; Su, Yuelong; Wei, Zheng

    2008-01-01

    To provide a more accurate description of the driving behaviors in vehicle queues, a model named the Markov-Gap cellular automata model is proposed in this paper. It views the variation of the gap between two consecutive vehicles as a Markov process whose stationary distribution corresponds to the observed distribution of practical gaps. The multiformity of this Markov process gives the model enough flexibility to describe various driving behaviors. Two examples are given to show how to specialize it…
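
    The core modeling device, a Markov chain on inter-vehicle gaps whose stationary distribution is meant to match observed gap statistics, can be sketched with a hypothetical three-state gap process (gap shrinks / holds / grows). The transition probabilities below are illustrative, not calibrated to any traffic data:

```python
def stationary(P, iters=200):
    """Power iteration for the stationary distribution of a row-stochastic
    transition matrix P: start uniform and repeatedly apply pi <- pi P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical 3-state gap process: state 0 = gap shrinks, 1 = holds, 2 = grows
P = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.4, 0.5]]
pi = stationary(P)  # the model would fit P so that pi matches observed gaps
```
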

  11. The uniqueness of the regularization procedure

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1981-01-01

    On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)

  12. Entanglement in coined quantum walks on regular graphs

    International Nuclear Information System (INIS)

    Carneiro, Ivens; Loo, Meng; Xu, Xibai; Girerd, Mathieu; Kendon, Viv; Knight, Peter L

    2005-01-01

    Quantum walks, both discrete (coined) and continuous time, form the basis of several recent quantum algorithms. Here we use numerical simulations to study the properties of discrete, coined quantum walks. We investigate the variation in the entanglement between the coin and the position of the particle by calculating the entropy of the reduced density matrix of the coin. We consider both dynamical evolution and asymptotic limits for coins of dimensions from two to eight on regular graphs. For low coin dimensions, quantum walks which spread faster (as measured by the mean square deviation of their distribution from uniform) also exhibit faster convergence towards the asymptotic value of the entanglement between the coin and particle's position. For high-dimensional coins, the DFT coin operator is more efficient at spreading than the Grover coin. We study the entanglement of the coin on regular finite graphs such as cycles, and also show that on complete bipartite graphs, a quantum walk with a Grover coin is always periodic with period four. We generalize the 'glued trees' graph used by Childs et al (2003 Proc. STOC, pp 59-68) to higher branching rate (fan out) and verify that the scaling with branching rate and with tree depth is polynomial
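
    The entanglement measure used in the abstract, the entropy of the reduced density matrix of the coin, can be computed directly for a small Hadamard walk on a cycle. The sketch below is an illustrative implementation, not the authors' code; the cycle size, step count, and initial coin state are arbitrary choices:

```python
import math

def hadamard_walk_coin_entropy(n_sites, steps):
    """Discrete coined quantum walk on an n-site cycle with the Hadamard
    coin; returns the entropy (in bits) of the reduced coin density matrix."""
    h = 1 / math.sqrt(2)
    # amp[x][c]: amplitude at site x with coin state c (0: move left, 1: right)
    amp = [[0j, 0j] for _ in range(n_sites)]
    amp[0] = [1j * h, h]  # a common symmetric initial coin state at site 0
    for _ in range(steps):
        # coin flip: apply the Hadamard coin at every site
        flipped = [[h * (a0 + a1), h * (a0 - a1)] for a0, a1 in amp]
        # conditional shift: coin 0 moves left, coin 1 moves right
        new = [[0j, 0j] for _ in range(n_sites)]
        for x in range(n_sites):
            new[(x - 1) % n_sites][0] += flipped[x][0]
            new[(x + 1) % n_sites][1] += flipped[x][1]
        amp = new
    # reduced coin density matrix: trace out the position register
    r00 = sum(abs(a[0]) ** 2 for a in amp)
    r11 = sum(abs(a[1]) ** 2 for a in amp)
    r01 = sum(a[0] * a[1].conjugate() for a in amp)
    # eigenvalues of the 2x2 Hermitian matrix [[r00, r01], [r01*, r11]]
    tr, det = r00 + r11, r00 * r11 - abs(r01) ** 2
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    eigs = [(tr + disc) / 2, (tr - disc) / 2]
    return -sum(p * math.log2(p) for p in eigs if p > 1e-12)

S = hadamard_walk_coin_entropy(16, 20)  # between 0 (product) and 1 (maximal)
```
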

  13. Learning regularization parameters for general-form Tikhonov

    International Nuclear Information System (INIS)

    Chung, Julianne; Español, Malena I

    2017-01-01

    Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
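
    In the simultaneously diagonalizable case the Tikhonov solution reduces to per-component filtering, x_i = σ_i b_i / (σ_i² + λ), so a parameter can be learned by minimizing the average error over training pairs. The sketch below grid-searches a single λ on one synthetic training pair; it is a toy analogue of the empirical Bayes risk minimization idea, not the authors' algorithm, and all numbers are made up:

```python
def tikhonov_filter(sigmas, b, lam):
    """Tikhonov solution in diagonal (SVD) coordinates:
    x_i = sigma_i * b_i / (sigma_i^2 + lam)."""
    return [s * bi / (s * s + lam) for s, bi in zip(sigmas, b)]

def learn_lambda(sigmas, training, grid):
    """Pick the grid value minimizing mean squared error over training
    pairs (b_observed, x_true) -- a toy stand-in for minimizing the
    empirical Bayes risk."""
    def risk(lam):
        total = 0.0
        for b, x_true in training:
            x = tikhonov_filter(sigmas, b, lam)
            total += sum((xi - ti) ** 2 for xi, ti in zip(x, x_true))
        return total / len(training)
    return min(grid, key=risk)

sigmas = [1.0, 0.5, 0.1]
# synthetic pair: b_i = sigma_i * x_i + fixed perturbation, x_true known
training = [([1.05, 0.95, 0.15], [1.0, 2.0, 1.0])]
grid = [10.0 ** k for k in range(-6, 1)]
best = learn_lambda(sigmas, training, grid)
```

    With noise present, the learned λ is strictly positive: the unfiltered component b_3/σ_3 amplifies the perturbation by 1/0.1, and the selected λ trades that amplification against bias.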

  14. 5 CFR 551.421 - Regular working hours.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Regular working hours. 551.421 Section... Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...

  15. Regular extensions of some classes of grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular…

  16. Regular non-twisting S-branes

    International Nuclear Information System (INIS)

    Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.

    2004-01-01

    We construct a family of time and angular dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general lorentzian symmetry. Several generalizations of this regular solution are derived which include a charged S-brane and an additional dilatonic field. (author)

  17. 5G and Cellular Networks in the Smart Grid

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Jorguseski, Ljupco; Zhang, Haibin

    2018-01-01

    Wireless cellular networks will help Distribution System Operators (DSOs) to achieve observability below the substation level, which is needed to ensure stable operation in the smart grid. Both existing and upcoming cellular technologies are considered as candidates for helping to enable the smart grid. In the present chapter, we present the main features of the non-3GPP technologies IEEE 802.11ah, SigFox and LoRa, and the main features of past, current and future 3GPP technologies, namely releases … (High rate), 12-14 (IoT extensions) and 15-16 (5G). Additionally, we present the challenges and possible solutions for ensuring end-to-end security in smart grid systems.

  18. Near-Regular Structure Discovery Using Linear Programming

    KAUST Repository

    Huang, Qixing

    2014-06-02

    Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements, as well as in the structured arrangement of the elements. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.

  19. Regular Expression Matching and Operational Semantics

    Directory of Open Access Journals (Sweden)

    Asiri Rathnayake

    2011-08-01

    Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
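
    A compact way to see "matching on the fly" without building a DFA first is the Brzozowski-derivative technique, which is related to, though distinct from, the abstract machines derived in the paper: after each input character, rewrite the expression into the language of what remains to be matched, and accept if the final expression is nullable:

```python
# Regular expressions as nested tuples:
#   ('char', c) | ('cat', r, s) | ('alt', r, s) | ('star', r) | ('eps',) | ('fail',)
FAIL = ('fail',)

def nullable(r):
    """Does r match the empty string?"""
    tag = r[0]
    if tag == 'eps':  return True
    if tag == 'star': return True
    if tag == 'cat':  return nullable(r[1]) and nullable(r[2])
    if tag == 'alt':  return nullable(r[1]) or nullable(r[2])
    return False      # 'char' and 'fail'

def deriv(r, c):
    """Brzozowski derivative: the language of what r can still match
    after consuming character c."""
    tag = r[0]
    if tag == 'char':
        return ('eps',) if r[1] == c else FAIL
    if tag == 'cat':
        left = ('cat', deriv(r[1], c), r[2])
        return ('alt', left, deriv(r[2], c)) if nullable(r[1]) else left
    if tag == 'alt':
        return ('alt', deriv(r[1], c), deriv(r[2], c))
    if tag == 'star':
        return ('cat', deriv(r[1], c), r)
    return FAIL       # 'eps' and 'fail'

def match(r, s):
    for c in s:
        r = deriv(r, c)
    return nullable(r)

ab = ('cat', ('star', ('char', 'a')), ('char', 'b'))  # the regex a*b
```

    Like the machines in the paper, this never builds an automaton up front; unlike them, it rewrites the expression itself rather than threading continuations.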

  20. Tetravalent one-regular graphs of order 4p2

    DEFF Research Database (Denmark)

    Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan

    2014-01-01

    A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p2, where p is a prime, are classified.

  1. GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2012-01-01

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.

  2. Coupled pulsating and cellular structure in the propagation of globally planar detonations in free space

    International Nuclear Information System (INIS)

    Han, Wenhu; Gao, Yang; Wang, Cheng; Law, Chung K.

    2015-01-01

    The globally planar detonation in free space is numerically simulated, with particular interest to understand and quantify the emergence and evolution of the one-dimensional pulsating instability and the two-dimensional cellular structure which is inherently also affected by pulsating instability. It is found that the pulsation includes three stages: rapid decay of the overdrive, approach to the Chapman-Jouguet state and emergence of weak pulsations, and the formation of strong pulsations; while evolution of the cellular structure also exhibits distinct behavior at these three stages: no cell formation, formation of small-scale, irregular cells, and formation of regular cells of a larger scale. Furthermore, the average shock pressure in the detonation front consists of fine-scale oscillations reflecting the collision dynamics of the triple-shock structure and large-scale oscillations affected by the global pulsation. The common stages of evolution between the cellular structure and the pulsating behavior, as well as the existence of shock-front pressure oscillation, suggest highly correlated mechanisms between them. Detonations with period doubling, period quadrupling, and chaotic amplitudes were also observed and studied for progressively increasing activation energies

  3. Cationic liposome/DNA complexes: from structure to interactions with cellular membranes.

    Science.gov (United States)

    Caracciolo, Giulio; Amenitsch, Heinz

    2012-10-01

    Gene-based therapeutic approaches are based upon the concept that, if a disease is caused by a mutation in a gene, then adding back the wild-type gene should restore regular function and attenuate the disease phenotype. To deliver the gene of interest, both viral and nonviral vectors are used. Viruses are efficient, but their application is impeded by detrimental side-effects. Among nonviral vectors, cationic liposomes are the most promising candidates for gene delivery. They form stable complexes with polyanionic DNA (lipoplexes). Despite several advantages over viral vectors, the transfection efficiency (TE) of lipoplexes is too low compared with those of engineered viral vectors. This is due to lack of knowledge about the interactions between complexes and cellular components. Rational design of efficient lipoplexes therefore requires deeper comprehension of the interactions between the vector and the DNA as well as the cellular pathways and mechanisms involved. The importance of the lipoplex structure in biological function is revealed in the application of synchrotron small-angle X-ray scattering in combination with functional TE measurements. According to current understanding, the structure of lipoplexes can change upon interaction with cellular membranes and such changes affect the delivery efficiency. Recently, a correlation between the mechanism of gene release from complexes, the structure, and the physical and chemical parameters of the complexes has been established. Studies aimed at correlating structure and activity of lipoplexes are reviewed herein. This is a fundamental step towards rational design of highly efficient lipid gene vectors.

  4. Application of Turchin's method of statistical regularization

    Science.gov (United States)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of the Turchin's method of statistical regularization based on the Bayesian approach to the regularization strategy.

  5. Cellular gravity

    NARCIS (Netherlands)

    F.C. Gruau; J.T. Tromp (John)

    1999-01-01

    We consider the problem of establishing gravity in cellular automata. In particular, when cellular automata states can be partitioned into empty, particle, and wall types, with the latter enclosing rectangular areas, we desire rules that will make the particles fall down and pile up on…

  6. Spatial distribution

    DEFF Research Database (Denmark)

    Borregaard, Michael Krabbe; Hendrichsen, Ditte Katrine; Nachman, Gøsta Støger

    2008-01-01

    Living organisms are distributed over the entire surface of the planet. The distribution of the individuals of each species is not random; on the contrary, they are strongly dependent on the biology and ecology of the species, and vary over different spatial scales. The structure of whole populations reflects the location and fragmentation pattern of the habitat types preferred by the species, and the complex dynamics of migration, colonization, and population growth taking place over the landscape. Within these, individuals are distributed among each other in regular or clumped patterns, depending on the nature of intraspecific interactions between them: while the individuals of some species repel each other and partition the available area, others form groups of varying size, determined by the fitness of each group member. The spatial distribution pattern of individuals again strongly…

  7. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  8. Spatially-Variant Tikhonov Regularization for Double-Difference Waveform Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Youzuo [Los Alamos National Laboratory; Huang, Lianjie [Los Alamos National Laboratory; Zhang, Zhigang [Los Alamos National Laboratory

    2011-01-01

    Double-difference waveform inversion is a potential tool for quantitative monitoring of geologic carbon storage. It jointly inverts time-lapse seismic data for changes in reservoir geophysical properties. Because of the ill-posedness of waveform inversion, it is a great challenge to obtain reservoir changes accurately and efficiently, particularly when using time-lapse seismic reflection data. Regularization techniques can be utilized to address the issue of ill-posedness. The regularization parameter controls the smoothness of the inversion results. A constant regularization parameter is normally used in waveform inversion, and an optimal regularization parameter has to be selected. The resulting inversion results are then a trade-off among regions with different smoothness or noise levels; the images are over-regularized in some regions and under-regularized in others. In this paper, we employ a spatially-variant parameter in the Tikhonov regularization scheme used in double-difference waveform tomography to improve the inversion accuracy and robustness. We compare the results obtained using a spatially-variant parameter with those obtained using a constant regularization parameter and those produced without any regularization. We observe that, with the spatially-variant regularization scheme, the target regions are well reconstructed while the noise is reduced in the other regions. We show that the spatially-variant regularization scheme provides the flexibility to regularize local regions based on a priori information without increasing computational cost or memory requirements.
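    The idea of a spatially-variant regularization parameter can be illustrated on a toy linear inverse problem. The sketch below applies plain Tikhonov damping to a synthetic 1-D deblurring problem, not the paper's double-difference waveform tomography; sizes, blur width and all parameter values are illustrative assumptions. It shows how weakening the damping only inside a target region sharpens the reconstruction there.

```python
import numpy as np

# Toy 1-D inversion illustrating spatially-variant Tikhonov regularization.
# A sketch of the idea only; the paper applies it to seismic tomography.

n = 100
x_true = np.zeros(n)
x_true[20:30] = 1.0          # a "target region" with a sharp anomaly
x_true[60:90] = 0.2          # a smooth background region

# Forward operator: a simple Gaussian blur matrix (illustrative)
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
b = A @ x_true + 0.001 * rng.standard_normal(n)

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + sum_i lam_i * x_i^2 (lam may vary per cell)."""
    return np.linalg.solve(A.T @ A + np.diag(lam), A.T @ b)

x_const = tikhonov(A, b, np.full(n, 0.05))   # one global parameter
lam_var = np.full(n, 0.2)                    # strong damping in background
lam_var[15:35] = 0.005                       # weak damping in target region
x_var = tikhonov(A, b, lam_var)

# Compare reconstruction error inside the target region
err_const = np.linalg.norm(x_const[15:35] - x_true[15:35])
err_var = np.linalg.norm(x_var[15:35] - x_true[15:35])
print(round(err_const, 4), round(err_var, 4))
```

With noise this small, the bias of the constant parameter dominates, so the spatially-variant scheme recovers the target anomaly more accurately.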

  9. The biocompatibility of fluorescent nanodiamonds and their mechanism of cellular uptake

    International Nuclear Information System (INIS)

    Vaijayanthimala, Vairakkannu; Tzeng, Yan-Kai; Chang, Huan-Cheng; Li, Chung-Leung

    2009-01-01

    The labeling of cells with fluorescent nanoparticles is promising for various biomedical applications. The objective of this study is to evaluate the biocompatibility and the mechanism of the cellular uptake of fluorescent nanodiamonds (FNDs) in cancer cells (HeLa) and pre-adipocytes (3T3-L1). With flow cytometry and the use of a battery of metabolic and cytoskeletal inhibitors, we found that FND uptake in both cell types occurs by energy-dependent clathrin-mediated endocytosis. In addition, the surface charge of the FND influences its cellular uptake, as the uptake of poly-L-lysine-coated FNDs is better than that of oxidative-acid-purified FNDs at the same concentration in regular medium with or without serum. We also confirm that the proliferative potential of FND-treated and untreated cells does not exhibit any significant differences when measured in bulk cultures and, more stringently, at clonal cell density. Further biocompatibility studies indicate that the in vitro differentiation of 3T3-L1 pre-adipocytes and 489-2 osteoprogenitors is not affected by the FND treatment. Our results show that FNDs are biocompatible and ideal candidates for potential applications in human stem cell research.

  10. The biocompatibility of fluorescent nanodiamonds and their mechanism of cellular uptake

    Energy Technology Data Exchange (ETDEWEB)

    Vaijayanthimala, Vairakkannu; Tzeng, Yan-Kai; Chang, Huan-Cheng [Institute of Atomic and Molecular Sciences, Academia Sinica, Taipei 106, Taiwan (China); Li, Chung-Leung, E-mail: hcchang@po.sinica.edu.t, E-mail: chungL@gate.sinica.edu.t [Genomics Research Center, Academia Sinica, Taipei 115, Taiwan (China)

    2009-10-21

    The labeling of cells with fluorescent nanoparticles is promising for various biomedical applications. The objective of this study is to evaluate the biocompatibility and the mechanism of the cellular uptake of fluorescent nanodiamonds (FNDs) in cancer cells (HeLa) and pre-adipocytes (3T3-L1). With flow cytometry and the use of a battery of metabolic and cytoskeletal inhibitors, we found that FND uptake in both cell types occurs by energy-dependent clathrin-mediated endocytosis. In addition, the surface charge of the FND influences its cellular uptake, as the uptake of poly-L-lysine-coated FNDs is better than that of oxidative-acid-purified FNDs at the same concentration in regular medium with or without serum. We also confirm that the proliferative potential of FND-treated and untreated cells does not exhibit any significant differences when measured in bulk cultures and, more stringently, at clonal cell density. Further biocompatibility studies indicate that the in vitro differentiation of 3T3-L1 pre-adipocytes and 489-2 osteoprogenitors is not affected by the FND treatment. Our results show that FNDs are biocompatible and ideal candidates for potential applications in human stem cell research.

  11. Homotopic non-local regularized reconstruction from sparse positron emission tomography measurements

    International Nuclear Information System (INIS)

    Wong, Alexander; Liu, Chenyi; Wang, Xiao Yu; Fieguth, Paul; Bie, Hongxia

    2015-01-01

    Positron emission tomography scanners collect measurements of a patient's in vivo radiotracer distribution. The system detects pairs of gamma rays emitted indirectly by a positron-emitting radionuclide (tracer), which is introduced into the body on a biologically active molecule, and the tomograms must be reconstructed from projections. The reconstruction of tomograms from the acquired PET data is an inverse problem that requires regularization. The use of tightly packed discrete detector rings, although it improves the signal-to-noise ratio, is often associated with the high cost of positron emission tomography systems. Thus a sparse reconstruction, capable of overcoming the noise while allowing for a reduced number of detectors, would have a great deal to offer. In this study, we introduce and investigate the potential of a homotopic non-local regularization reconstruction framework for effectively reconstructing positron emission tomograms from such sparse measurements. Results obtained using the proposed approach are compared with traditional filtered back-projection as well as expectation maximization reconstruction with total variation regularization. A new reconstruction method was developed for the purpose of improving the quality of positron emission tomography reconstruction from sparse measurements. We illustrate that promising reconstruction performance can be achieved with the proposed approach even at low sampling fractions, which allows for the use of significantly fewer detectors and has the potential to reduce scanner costs
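    For reference, the classical expectation-maximization (MLEM) baseline that such regularized methods are compared against can be sketched in a few lines. The toy system matrix and sizes below are hypothetical; the proposed homotopic non-local regularization itself is not reproduced here.

```python
import numpy as np

# Minimal MLEM (expectation-maximization) reconstruction sketch for a toy
# emission-tomography system. Baseline only, not the paper's method.

rng = np.random.default_rng(1)
n_pix, n_det = 16, 24            # hypothetical sizes for a tiny system
A = rng.random((n_det, n_pix))   # hypothetical system matrix
x_true = rng.random(n_pix)
y = rng.poisson(A @ x_true * 50.0)   # noisy projection counts

x = np.ones(n_pix)               # uniform, strictly positive initial estimate
sens = A.T @ np.ones(n_det)      # sensitivity image A^T 1
for _ in range(200):
    proj = A @ x
    # Multiplicative MLEM update: x <- x/s * A^T (y / Ax)
    x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens

# A well-known MLEM property: total forward-projected counts match the data
print(np.allclose((A @ x).sum(), y.sum()))
```

The count-preservation check follows algebraically from the update rule, which makes it a convenient sanity test for any MLEM implementation.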

  12. Statistical Evidence for the Preference of Frailty Distributions with Regularly-Varying-at-Zero Densities

    DEFF Research Database (Denmark)

    Missov, Trifon I.; Schöley, Jonas

    According to this criterion, admissible distributions are, for example, the gamma, the beta, the truncated normal, the log-logistic and the Weibull, while distributions like the log-normal and the inverse Gaussian do not satisfy this condition. In this article we show that models with admissible frailty distributions and a Gompertz baseline provide a better fit to adult human mortality data than the corresponding models with non-admissible frailty distributions. We implement estimation procedures for mixture models with a Gompertz baseline and frailty that follows a gamma, truncated normal, log-normal, or inverse Gaussian ...

  13. Manifold Regularized Correlation Object Tracking.

    Science.gov (United States)

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2018-05-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.
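    The block-circulant structure mentioned above is what makes correlation filters cheap: circular shifts of a base sample diagonalize in the Fourier basis, so the ridge regression has a closed form. Below is a minimal single-channel sketch of that plain correlation-filter building block; the manifold regularization term and the semisupervised sample sets of the paper are omitted, and all sizes are illustrative.

```python
import numpy as np

# Plain ridge-regression correlation filter in the Fourier domain.
# This is the classical baseline that manifold-regularized trackers extend.

def train_filter(x, y, lam=1e-2):
    """Closed-form filter H = conj(X)Y / (conj(X)X + lam); valid because
    circular shifts of x diagonalize in the Fourier basis."""
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)

def respond(h, z):
    """Correlation response of filter h to a new patch z."""
    return np.real(np.fft.ifft2(h * np.fft.fft2(z)))

rng = np.random.default_rng(4)
target = rng.random((32, 32))

# Desired response: a periodic Gaussian peak centred at the origin
yy, xx = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
d = np.minimum(yy, 32 - yy) ** 2 + np.minimum(xx, 32 - xx) ** 2
label = np.exp(-d / (2 * 2.0 ** 2))

h = train_filter(target, label)
# A circularly shifted copy of the target should peak at the shift
resp = respond(h, np.roll(target, (5, 7), axis=(0, 1)))
peak = np.unravel_index(np.argmax(resp), resp.shape)
print(peak)   # the response peak localizes the (5, 7) shift
```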

  14. From recreational to regular drug use

    DEFF Research Database (Denmark)

    Järvinen, Margaretha; Ravn, Signe

    2011-01-01

    This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...

  15. Pavlovian Prisoner's Dilemma in one-dimensional cellular automata: analytical results, the quasi-regular phase, spatio-temporal patterns and parameter space exploration

    OpenAIRE

    Pereira, Marcelo Alves; Martinez, Alexandre Souto

    2009-01-01

    The Prisoner's Dilemma (PD) game is used in several fields due to the emergence of cooperation among selfish players. Here, we have considered a one-dimensional lattice, where each cell represents a player that can cooperate or defect. This one-dimensional geometry allows us to retrieve the results obtained for regular lattices and to keep track of the system's spatio-temporal evolution. Players play PD with their neighbors and update their state using the Pavlovian Evolutionary Strategy. If t...
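    A minimal version of such a lattice can be simulated directly. In the sketch below, the payoff values and the aspiration threshold of the win-stay/lose-shift ("Pavlovian") rule are illustrative assumptions, not the parameters explored in the paper.

```python
import numpy as np

# 1-D Prisoner's Dilemma cellular automaton with a simple win-stay/lose-shift
# update rule. Payoffs and aspiration level are illustrative assumptions.

R, S, T, P = 1.0, 0.0, 1.4, 0.1   # reward, sucker, temptation, punishment
ASPIRATION = 1.0                   # hypothetical aspiration threshold

def payoff(a, b):
    """Row player's payoff; 1 = cooperate, 0 = defect."""
    return {(1, 1): R, (1, 0): S, (0, 1): T, (0, 0): P}[(a, b)]

def step(state):
    n = len(state)
    new = state.copy()
    for i in range(n):
        left, right = state[(i - 1) % n], state[(i + 1) % n]
        total = payoff(state[i], left) + payoff(state[i], right)
        if total < ASPIRATION:         # dissatisfied: switch action
            new[i] = 1 - state[i]
    return new

rng = np.random.default_rng(2)
state = rng.integers(0, 2, size=200)   # random initial actions on a ring
history = [state]
for _ in range(100):
    history.append(step(history[-1]))

print("final cooperator fraction:", history[-1].mean())
```

Stacking `history` as a 2-D array gives exactly the kind of spatio-temporal pattern the one-dimensional geometry makes easy to visualize.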

  16. Simulation of a plane wavefront propagating in cardiac tissue using a cellular automata model

    International Nuclear Information System (INIS)

    Barbosa, Carlos R Hall

    2003-01-01

    We present a detailed description of a cellular automata model for the propagation of action potential in a planar cardiac tissue, which is very fast and easy to use. The model incorporates anisotropy in the electrical conductivity and a spatial variation of the refractory time. The transmembrane potential distribution is directly derived from the cell states, and the intracellular and extracellular potential distributions are calculated for the particular case of a plane wavefront. Once the potential distributions are known, the associated current densities are calculated by Ohm's law, and the magnetic field is determined at a plane parallel to the cardiac tissue by applying the law of Biot and Savart. The results obtained for propagation speed and for magnetic field amplitude with the cellular automata model are compared with values predicted by the bidomain formulation, for various angles between wavefront propagation and fibre direction, characterizing excellent agreement between the models
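    The flavour of such a model can be conveyed by a Greenberg-Hastings-style excitable-medium automaton. This generic sketch omits the paper's anisotropic conductivity, spatially varying refractory times, and field calculations, and all sizes are illustrative.

```python
import numpy as np

# Greenberg-Hastings-style excitable-medium cellular automaton: a minimal
# stand-in for action-potential propagation (not the paper's cardiac model).

REST, EXCITED = 0, 1
REFRACTORY_STEPS = 3              # states 2..4 count down the refractory period

def step(grid):
    new = grid.copy()
    excited = grid == EXCITED
    # any excited 4-neighbour? (non-periodic edges)
    nb = np.zeros_like(grid, dtype=bool)
    nb[1:, :] |= excited[:-1, :]
    nb[:-1, :] |= excited[1:, :]
    nb[:, 1:] |= excited[:, :-1]
    nb[:, :-1] |= excited[:, 1:]
    new[(grid == REST) & nb] = EXCITED            # excitation spreads
    new[grid == EXCITED] = 2                      # enter refractory period
    refr = (grid >= 2) & (grid < 1 + REFRACTORY_STEPS)
    new[refr] = grid[refr] + 1                    # count down refractoriness
    new[grid == 1 + REFRACTORY_STEPS] = REST      # recover
    return new

grid = np.zeros((40, 40), dtype=int)
grid[:, 0] = EXCITED              # plane wavefront launched from the left edge

for t in range(10):
    grid = step(grid)

# The front advances one column per step, so after 10 steps it is at column 10
front = np.max(np.where(grid == EXCITED)[1])
print(front)   # prints 10
```

The refractory tail behind the front is what prevents re-excitation and gives the wave its direction, the same qualitative mechanism at work in cardiac-tissue automata.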

  17. Regular variation on measure chains

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel; Vitovec, J.

    2010-01-01

    Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475

  18. New regular black hole solutions

    International Nuclear Information System (INIS)

    Lemos, Jose P. S.; Zanchin, Vilson T.

    2011-01-01

    In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, whose exterior region is Reissner-Nordstroem, and with a charged thin layer in between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.

  19. Manifold Regularized Correlation Object Tracking

    OpenAIRE

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2017-01-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...

  20. On geodesics in low regularity

    Science.gov (United States)

    Sämann, Clemens; Steinbauer, Roland

    2018-02-01

    We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.

  1. Flaw tolerance vs. performance: A tradeoff in metallic glass cellular structures

    International Nuclear Information System (INIS)

    Chen, Wen; Liu, Ze; Robinson, Hannah Mae; Schroers, Jan

    2014-01-01

    Stochastic cellular structures are prevalent in nature and engineering materials alike. They are difficult to manipulate and study systematically and almost always contain imperfections. To design and characterize various degrees of imperfections in perfect periodic, stochastic and natural cellular structures, we fabricate a broad range of metallic glass cellular structures from perfectly periodic to highly stochastic by using a novel artificial microstructure approach based on thermoplastic replication of metallic glasses. For these cellular structures, precisely controlled imperfections are implemented and their effects on the mechanical response are evaluated. It is found that the mechanical performance of the periodic structures is generally superior to that of the stochastic structures. However, the stochastic structures experience a much higher tolerance to flaws than the periodic structure, especially in the plastic regime. The different flaw tolerance is explained by the stress distribution within the various structures, which leads to an overall 'strain-hardening' behavior of the stochastic structure compared to a 'strain-softening' behavior in the periodic structure. Our findings reveal how structure, 'strain-hardening' and flaw tolerance are microscopically related in structural materials

  2. Manifold Regularized Reinforcement Learning.

    Science.gov (United States)

    Li, Hongliang; Liu, Derong; Wang, Ding

    2018-04-01

    This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.

  3. The cellular and subcellular localization of zinc transporter 7 in the mouse spinal cord

    Science.gov (United States)

    The present work addresses the cellular and subcellular localization of the zinc transporter 7 (ZNT7, SLC30a7) protein and the distribution of zinc ions (Zn2+) in the mouse spinal cord. Our results indicated that the ZNT7 immunoreactive neurons were widely distributed in the Rexed’s laminae of the g...

  4. NAD(H) and NADP(H) Redox Couples and Cellular Energy Metabolism.

    Science.gov (United States)

    Xiao, Wusheng; Wang, Rui-Sheng; Handy, Diane E; Loscalzo, Joseph

    2018-01-20

    The nicotinamide adenine dinucleotide (NAD+)/reduced NAD+ (NADH) and NADP+/reduced NADP+ (NADPH) redox couples are essential for maintaining cellular redox homeostasis and for modulating numerous biological events, including cellular metabolism. Deficiency or imbalance of these two redox couples has been associated with many pathological disorders. Recent Advances: Newly identified biosynthetic enzymes and newly developed genetically encoded biosensors enable us to understand better how cells maintain compartmentalized NAD(H) and NADP(H) pools. The concept of redox stress (oxidative and reductive stress) reflected by changes in NAD(H)/NADP(H) has increasingly gained attention. The emerging roles of NAD+-consuming proteins in regulating cellular redox and metabolic homeostasis are active research topics. The biosynthesis and distribution of cellular NAD(H) and NADP(H) are highly compartmentalized. It is critical to understand how cells maintain the steady levels of these redox couple pools to ensure their normal functions and simultaneously avoid inducing redox stress. In addition, it is essential to understand how NAD(H)- and NADP(H)-utilizing enzymes interact with other signaling pathways, such as those regulated by hypoxia-inducible factor, to maintain cellular redox homeostasis and energy metabolism. Additional studies are needed to investigate the inter-relationships among compartmentalized NAD(H)/NADP(H) pools and how these two dinucleotide redox couples collaboratively regulate cellular redox states and cellular metabolism under normal and pathological conditions. Furthermore, recent studies suggest the utility of using pharmacological interventions or nutrient-based bioactive NAD+ precursors as therapeutic interventions for metabolic diseases. Thus, a better understanding of the cellular functions of NAD(H) and NADP(H) may facilitate efforts to address a host of pathological disorders effectively. Antioxid. Redox Signal. 28, 251-272.

  5. Intracellular fate of Ureaplasma parvum entrapped by host cellular autophagy.

    Science.gov (United States)

    Nishiumi, Fumiko; Ogawa, Michinaga; Nakura, Yukiko; Hamada, Yusuke; Nakayama, Masahiro; Mitobe, Jiro; Hiraide, Atsushi; Sakai, Norio; Takeuchi, Makoto; Yoshimori, Tamotsu; Yanagihara, Itaru

    2017-06-01

    Genital mycoplasmas, including Ureaplasma spp., are among the smallest human pathogenic bacteria and are associated with preterm birth. Electron microscopic observation of U. parvum showed that these prokaryotes have a regular, spherical shape with a mean diameter of 146 nm. U. parvum was internalized into HeLa cells by clathrin-mediated endocytosis and survived for at least 14 days around the perinuclear region. Intracellular U. parvum reached endosomes in HeLa cells labeled with EEA1, Rab7, and LAMP-1 within 1 to 3 hr. After 3 hr of infection, U. parvum induced the cytosolic accumulation of galectin-3 and was subsequently entrapped by the autophagy marker LC3. However, experiments using atg7-/- MEF cells showed that autophagy alone was inadequate for the complete elimination of U. parvum. U. parvum also colocalized with the recycling endosome marker Rab11. Furthermore, the exosomes purified from infected HeLa cell culture medium included U. parvum. In these purified exosomes, the ureaplasma lipoprotein multiple banded antigen as well as host cellular annexin A2, CD9, and CD63 were detected. This research has successfully shown that Ureaplasma spp. utilize host cellular membrane compartments, possibly to evade the host immune system. © 2017 The Authors. MicrobiologyOpen published by John Wiley & Sons Ltd.

  6. Scaling Non-Regular Shared-Memory Codes by Reusing Custom Loop Schedules

    Directory of Open Access Journals (Sweden)

    Dimitrios S. Nikolopoulos

    2003-01-01

    In this paper we explore the idea of customizing and reusing loop schedules to improve the scalability of non-regular numerical codes on shared-memory architectures with non-uniform memory access latency. The main objective is to implicitly set up affinity links between threads and data, by devising loop schedules that achieve balanced work distribution within irregular data spaces and reusing them as much as possible along the execution of the program for better memory access locality. This transformation provides a great deal of flexibility in optimizing locality, without compromising the simplicity of the shared-memory programming paradigm. In particular, the programmer does not need to explicitly distribute data between processors. The paper presents practical examples from real applications and experiments showing the efficiency of the approach.

  7. Efficient rare-event simulation for multiple jump events in regularly varying random walks and compound Poisson processes

    NARCIS (Netherlands)

    B. Chen (Bohan); J. Blanchet; C.H. Rhee (Chang-Han); A.P. Zwart (Bert)

    2017-01-01

    We propose a class of strongly efficient rare event simulation estimators for random walks and compound Poisson processes with a regularly varying increment/jump-size distribution in a general large deviations regime. Our estimator is based on an importance sampling strategy that hinges

  8. Learning Sparse Visual Representations with Leaky Capped Norm Regularizers

    OpenAIRE

    Wangni, Jianqiao; Lin, Dahua

    2017-01-01

    Sparsity inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper we investigate the usage of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly as opposed to those above, therefore imposes strong sparsity and...

  9. 3D cellular structures and co-cultures formed through the contactless magnetic manipulation of cells on adherent surfaces.

    Science.gov (United States)

    Abdel Fattah, Abdel Rahman; Mishriki, Sarah; Kammann, Tobias; Sahu, Rakesh P; Geng, Fei; Puri, Ishwar K

    2018-02-27

    A magnet array is employed to manipulate diamagnetic cells contained in a paramagnetic medium, demonstrating for the first time the contactless bioprinting of three-dimensional (3D) cellular structures and co-cultures of breast cancer (MCF-7) and endothelial (HUVEC) cells at prescribed locations on tissue-culture-treated well plates. Sequential seeding of different cell lines and the spatial displacement of the magnet array creates co-cultured cellular structures within a well without using physically intrusive well inserts. Both monotypic and co-culture experiments produce morphologically rich 3D cell structures that are otherwise absent in regular monolayer cell cultures. The magnetic contactless bioprinting of cells provides further insight into cell behaviour, invasion strategies and transformations that are useful for potential applications in drug screening, 3D cell culture formation and tissue engineering.

  10. Regularized inversion of controlled source and earthquake data

    International Nuclear Information System (INIS)

    Ramachandran, Kumar

    2012-01-01

    Estimation of the seismic velocity structure of the Earth's crust and upper mantle from travel-time data has advanced greatly in recent years. Forward modelling trial-and-error methods have been superseded by tomographic methods which allow more objective analysis of large two-dimensional and three-dimensional refraction and/or reflection data sets. The fundamental purpose of travel-time tomography is to determine the velocity structure of a medium by analysing the time it takes for a wave generated at a source point within the medium to arrive at a distribution of receiver points. Tomographic inversion of first-arrival travel-time data is a nonlinear problem since both the velocity of the medium and the ray paths in the medium are unknown. The solution for such a problem is typically obtained by repeated application of linearized inversion. Regularization of the nonlinear problem reduces the ill-posedness inherent in the tomographic inversion due to the under-determined nature of the problem and the inconsistencies in the observed data. This paper discusses the theory of regularized inversion for joint inversion of controlled source and earthquake data, and results from synthetic data testing and application to real data. The results obtained from tomographic inversion of synthetic data and real data from the northern Cascadia subduction zone show that the velocity model and hypocentral parameters can be efficiently estimated using this approach. (paper)
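    The repeated linearized inversion with regularization that underlies such schemes can be sketched on a tiny synthetic problem. The forward model below is an arbitrary smooth function standing in for a travel-time solver, and the damped least-squares update is the generic Levenberg-Marquardt-style step, not the paper's specific regularization.

```python
import numpy as np

# Regularized iterative linearized inversion of a small nonlinear forward
# problem, mimicking the repeated-linearization strategy of tomography.

def g(m):
    """Hypothetical nonlinear forward model mapping model m to data."""
    return np.array([m[0] ** 2 + m[1], np.sin(m[0]) + m[1] ** 2, m[0] * m[1]])

def jacobian(m):
    """Analytic Jacobian of g at m."""
    return np.array([[2 * m[0], 1.0],
                     [np.cos(m[0]), 2 * m[1]],
                     [m[1], m[0]]])

m_true = np.array([1.2, 0.7])
d_obs = g(m_true)                 # noise-free synthetic data

m = np.array([0.5, 0.5])          # starting model
lam = 1e-2                        # damping (regularization) parameter
for _ in range(20):
    J = jacobian(m)
    r = d_obs - g(m)
    # Damped least-squares model update: (J^T J + lam I) dm = J^T r
    dm = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    m = m + dm

print(np.round(m, 4))
```

The damping term keeps each linearized step well conditioned; in a real tomographic setting `J` would come from ray tracing and the damping/smoothing operator encodes the regularization discussed above.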

  11. Cellular MR Imaging

    Directory of Open Access Journals (Sweden)

    Michel Modo

    2005-07-01

    Full Text Available Cellular MR imaging is a young field that aims to visualize targeted cells in living organisms. In order to provide a different signal intensity of the targeted cell, they are either labeled with MR contrast agents in vivo or prelabeled in vitro. Either (ultrasmall superparamagnetic iron oxide [(USPIO] particles or (polymeric paramagnetic chelates can be used for this purpose. For in vivo cellular labeling, Gd3+- and Mn2+- chelates have mainly been used for targeted hepatobiliary imaging, and (USPIO-based cellular imaging has been focused on imaging of macrophage activity. Several of these magneto-pharmaceuticals have been FDA-approved or are in late-phase clinical trials. As for prelabeling of cells in vitro, a challenge has been to induce a sufficient uptake of contrast agents into nonphagocytic cells, without affecting normal cellular function. It appears that this issue has now largely been resolved, leading to an active research on monitoring the cellular biodistribution in vivo following transplantation or transfusion of these cells, including cell migration and trafficking. New applications of cellular MR imaging will be directed, for instance, towards our understanding of hematopoietic (immune cell trafficking and of novel guided (stem cell-based therapies aimed to be translated to the clinic in the future.

  12. Exclusion of children with intellectual disabilities from regular ...

    African Journals Online (AJOL)

    This study investigated why teachers exclude children with intellectual disability (ID) from regular classrooms in Nigeria. Participants were 169 regular teachers randomly selected from Oyo and Ogun states. A questionnaire was used to collect data; results revealed that 57.4% of regular teachers could not cope with children with ID ...

  13. On infinite regular and chiral maps

    OpenAIRE

    Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán

    2015-01-01

    We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.

  14. 29 CFR 779.18 - Regular rate.

    Science.gov (United States)

    2010-07-01

    ... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  15. Continuum regularized Yang-Mills theory

    International Nuclear Information System (INIS)

    Sadun, L.A.

    1987-01-01

    Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions

  16. Programmable cellular arrays. Faults testing and correcting in cellular arrays

    International Nuclear Information System (INIS)

    Cercel, L.

    1978-03-01

    A review of some recent research on programmable cellular arrays in computing and digital information processing systems is presented, covering both combinational and sequential arrays, either with fully arbitrary behaviour or realizing better implementations of specialized blocks such as arithmetic units, counters, comparators, control systems, memory blocks, etc. The paper also presents applications of cellular arrays in microprogramming, in the implementation of a specialized computer for matrix operations, and in the modeling of universal computing systems. The last section deals with problems of fault testing and correction in cellular arrays. (author)

  17. Performance analysis of a handoff scheme for two-tier cellular CDMA networks

    Directory of Open Access Journals (Sweden)

    Ahmed Hamad

    2011-07-01

    A two-tier model is used in cellular networks to improve the Quality of Service (QoS), namely to reduce the blocking probability of new calls and the forced termination probability of ongoing calls. One tier, the microcells, is used for slow or stationary users, and the other, the macrocell, is used for high-speed users. In Code-Division Multiple-Access (CDMA) cellular systems, soft handoffs are supported, which provides ways for further QoS improvement. In this paper, we introduce such a way, namely a channel borrowing scheme used in conjunction with a First-In-First-Out (FIFO) queue in the macrocell tier. A multidimensional Markov chain modeling the resulting system is established, and an iterative technique to find the steady-state probability distribution is utilized. This distribution is then used to find the performance measures of interest: new call blocking probability and forced termination probability.
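    As a point of comparison for such Markov-chain models, the steady-state blocking probability of a single cell with no borrowing and no queue reduces to the classical Erlang-B formula, computed here with its numerically stable recursion (the channel count and offered load are illustrative, not values from the paper).

```python
# Erlang-B blocking probability for a single cell with c channels and
# offered load a = lambda/mu (in Erlangs). The paper analyses a richer
# multidimensional chain with channel borrowing and a FIFO queue; this
# classical single-tier formula is only the simplest point of comparison.

def erlang_b(c: int, a: float) -> float:
    """Blocking probability via the stable recursion
    B(0) = 1,  B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

# Example: 10 channels, 5 Erlangs of offered load
print(round(erlang_b(10, 5.0), 4))   # -> 0.0184
```

The recursion avoids the factorials of the textbook closed form and stays accurate for large channel counts, which is why it is the standard way to evaluate Erlang-B numerically.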

  18. Regular-chaos transition of the energy spectrum and electromagnetic transition intensities in 44V nucleus using the framework of the nuclear shell model

    International Nuclear Information System (INIS)

    Hamoudi, A.K.; Abdul Majeed Al-Rahmani, A.

    2012-01-01

    The spectral fluctuations and the statistics of electromagnetic transition intensities and electromagnetic moments in the 44 V nucleus are studied within the framework of the interacting shell model, using the FPD6 realistic effective interaction in the isospin formalism for four particles moving in the fp model space with a 40 Ca core. To look for a regular-chaos transition in 44 V, we perform shell model calculations with various interaction strengths β applied to the off-diagonal matrix elements of the FPD6. The nearest-neighbors level spacing distribution P(s) and the distribution of electromagnetic transition intensities [such as B(M1) and B(E2) transitions] are found to show regular dynamics at β=0, chaotic dynamics at β⩾0.3 and an intermediate situation at 0<β<0.3. For the Δ3 statistic we have found regular dynamics at β=0, chaotic dynamics at β⩾0.4 and an intermediate situation at 0<β<0.4. It is also found that the statistics of the squares of the M1 and E2 moments, which are consistent with a Porter-Thomas distribution, show no dependence on the interaction strength β.

  19. Regularity effect in prospective memory during aging

    OpenAIRE

    Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique

    2016-01-01

    Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...

  20. 47 CFR 22.970 - Unacceptable interference to part 90 non-cellular 800 MHz licensees from cellular radiotelephone...

    Science.gov (United States)

    2010-10-01

    ...-cellular 800 MHz licensees from cellular radiotelephone or part 90-800 MHz cellular systems. 22.970 Section... MOBILE SERVICES Cellular Radiotelephone Service § 22.970 Unacceptable interference to part 90 non-cellular 800 MHz licensees from cellular radiotelephone or part 90-800 MHz cellular systems. (a) Definition...

  1. 20 CFR 226.14 - Employee regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...

  2. Measuring time series regularity using nonlinear similarity-based sample entropy

    International Nuclear Information System (INIS)

    Xie Hongbo; He Weixing; Liu Hui

    2008-01-01

    Sample Entropy (SampEn), a measure quantifying regularity and complexity, is believed to be effective for analyzing diverse settings that include both deterministic chaotic and stochastic processes, and is particularly useful for physiological signals involving relatively small amounts of data. However, the similarity definition of vectors is based on the Heaviside function, whose hard, discontinuous boundary may cause problems for the validity and accuracy of SampEn. The Sigmoid function is a smoothed, continuous version of the Heaviside function. To overcome the problems SampEn encounters, a modified SampEn (mSampEn) based on the nonlinear Sigmoid function is proposed. The performance of mSampEn was tested on independent identically distributed (i.i.d.) uniform random numbers, the MIX stochastic model, the Rossler map, and the Henon map. The results showed that mSampEn was superior to SampEn in several respects, including a well-defined entropy for small parameter values, better relative consistency, robustness to noise, and less dependence on record length when characterizing time series generated from either deterministic or stochastic systems with different regularities.
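
    The difference between the hard and smoothed similarity criteria is easy to sketch in code. The following is a minimal SampEn-style estimator in which the Heaviside match condition can be swapped for a Sigmoid; the steepness parameter k and this exact functional form are our assumptions, not necessarily the paper's definition.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2, sigmoid=False, k=20.0):
    # SampEn with either the hard Heaviside similarity (classic SampEn)
    # or a smoothed sigmoid similarity (a sketch of the mSampEn idea).
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def avg_similarity(m):
        # all length-m templates and their pairwise Chebyshev distances
        templates = np.array([x[i:i + m] for i in range(len(x) - m)])
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        if sigmoid:
            # logistic 1/(1+exp(k(d-tol))), written via tanh for stability
            sim = 0.5 * (1.0 - np.tanh(0.5 * k * (d - tol)))
        else:
            sim = (d <= tol).astype(float)             # hard boundary
        n = len(templates)
        return (sim.sum() - np.trace(sim)) / (n * (n - 1))  # drop self-matches

    return -np.log(avg_similarity(m + 1) / avg_similarity(m))

rng = np.random.default_rng(0)
noise = rng.standard_normal(500)
print(sample_entropy(noise), sample_entropy(noise, sigmoid=True))
```

    With the hard boundary, template pairs just outside the tolerance contribute nothing; the sigmoid version weights them smoothly, which is what keeps the estimate better behaved for small tolerance parameters.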

  3. Heterogeneous cellular networks

    CERN Document Server

    Hu, Rose Qingyang

    2013-01-01

    A timely publication providing coverage of radio resource management, mobility management and standardization in heterogeneous cellular networks. The topic of heterogeneous cellular networks has gained momentum in industry and the research community, attracting the attention of standardization bodies such as 3GPP LTE and IEEE 802.16j, whose objectives include increasing the capacity and coverage of cellular networks. This book focuses on recent progress, covering related topics including scenarios of heterogeneous network deployment, interference management i

  4. Cellular content of biomolecules in sub-seafloor microbial communities

    DEFF Research Database (Denmark)

    Braun, Stefan; Morono, Yuki; Becker, Kevin W.

    2016-01-01

    Microbial biomolecules, typically from the cell envelope, can provide crucial information about distribution, activity, and adaptations of sub-seafloor microbial communities. However, when cells die these molecules can be preserved in the sediment on timescales that are likely longer than the lifetime of their microbial sources. Here we provide for the first time measurements of the cellular content of biomolecules in sedimentary microbial cells. We separated intact cells from sediment matrices in samples from surficial, deeply buried, organic-rich, and organic-lean marine sediments by density... content. We find that the cellular content of biomolecules in the marine subsurface is up to four times lower than previous estimates. Our approach will facilitate and improve the use of biomolecules as proxies for microbial abundance in environmental samples and ultimately provide better global estimates...

  5. Regular algebra and finite machines

    CERN Document Server

    Conway, John Horton

    2012-01-01

    World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regulator algebras, context-free languages, and commutative regular algebras.

  6. 39 CFR 6.1 - Regular meetings, annual meeting.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...

  7. QoE-Driven D2D Media Services Distribution Scheme in Cellular Networks

    OpenAIRE

    Chen, Mingkai; Wang, Lei; Chen, Jianxin; Wei, Xin

    2017-01-01

    Device-to-device (D2D) communication has been widely studied to improve network performance and is considered a potential technological component for the next generation of communication. Considering diverse user demand, Quality of Experience (QoE) is recognized as a new measure of user satisfaction for media service transmissions in wireless communication. Furthermore, we aim at promoting the user's Mean Opinion Score (MOS) value to quantify and analyze user QoE in the dynamic cellular netw...

  8. From Matched Spatial Filtering towards the Fused Statistical Descriptive Regularization Method for Enhanced Radar Imaging

    Directory of Open Access Journals (Sweden)

    Shkvarko Yuriy

    2006-01-01

    Full Text Available We address a new approach to solve the ill-posed nonlinear inverse problem of high-resolution numerical reconstruction of the spatial spectrum pattern (SSP) of the backscattered wavefield sources distributed over the remotely sensed scene. An array or synthesized array radar (SAR) that employs digital data signal processing is considered. By exploiting the idea of combining the statistical minimum risk estimation paradigm with numerical descriptive regularization techniques, we address a new fused statistical descriptive regularization (SDR) strategy for enhanced radar imaging. Pursuing such an approach, we establish a family of SDR-related SSP estimators that encompass a manifold of existing beamforming techniques, ranging from the traditional matched filter to robust and adaptive spatial filtering and minimum variance methods.

  9. The influence of nonuniform micro-distribution of alpha emitter on microdosimetry in cells

    International Nuclear Information System (INIS)

    Tian Yuan; Zhang Liang'an; Dai Guangfu

    2007-01-01

    Objective: To study the influence of nonuniform micro-distribution of alpha emitters on cellular S values in radioimmunotherapy. Methods: Emission of alpha particles was randomly simulated by the Monte Carlo method; the incident and exit energies were calculated with an interpolation technique based on the range-energy relationship of alpha particles and the analytical Continuous Slowing Down Approximation (CSDA) model, so that the energy deposited in the target region could be obtained. Taking 213 Po as an example, cellular S values were calculated for various cell dimensions and possible micro-distributions of radioactivity, such as linear increase, linear decrease, exponential increase and exponential decrease. Results: Cell-to-cell S values for the uniform distribution showed no difference from Hamacher's results. S values of the different micro-distributions differ from each other. This indicates that different micro-distributions of radioactivity significantly change the average chord length of alpha particles traveling in the target region, as well as the average stopping power over the chord, which is the primary reason for the differences in S values. Conclusions: Nonuniform micro-distributions have a remarkable influence on cellular S values and hence should be taken into consideration in cellular absorbed dose estimation, especially in microdosimetry. (authors)
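
    The role of the average chord length can be checked with a small Monte Carlo experiment. The sketch below estimates the mean chord length of a spherical target by sampling chords between pairs of uniform surface points, for which the analytic mean is 4r/3; the radius value is arbitrary, and the paper's CSDA energy bookkeeping is omitted.

```python
import numpy as np

# Monte Carlo estimate of the mean chord length of a spherical target,
# a toy version of the chord sampling behind cellular S-value work.
# Chords join two independent uniform points on the sphere surface;
# their mean length is analytically 4r/3.
rng = np.random.default_rng(42)
r = 5.0          # "cell" radius, arbitrary units
n = 200_000

def surface_points(n):
    # uniform points on the sphere via normalized Gaussian vectors
    v = rng.standard_normal((n, 3))
    return r * v / np.linalg.norm(v, axis=1, keepdims=True)

chords = np.linalg.norm(surface_points(n) - surface_points(n), axis=1)
print(chords.mean(), 4 * r / 3)   # Monte Carlo estimate vs analytic value
```

    A nonuniform source distribution biases which chords are actually traversed, shifting this average and hence the S value, which is the effect the paper quantifies.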

  10. 47 CFR 22.909 - Cellular markets.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Cellular markets. 22.909 Section 22.909... Cellular Radiotelephone Service § 22.909 Cellular markets. Cellular markets are standard geographic areas used by the FCC for administrative convenience in the licensing of cellular systems. Cellular markets...

  11. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-01-01

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  12. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-04-19

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  13. Automating InDesign with Regular Expressions

    CERN Document Server

    Kahrel, Peter

    2006-01-01

    If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.
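
    The flavor of such find-and-change patterns can be shown outside InDesign as well. Here is the same idea in Python's re module (InDesign exposes GREP patterns through its own scripting interface); the sample text is made up.

```python
import re

# The kind of change plain search-and-replace cannot do: re-order
# "Lastname, Firstname" pairs to "Firstname Lastname" using capture groups.
text = "Conway,  John;  Kahrel, Peter"
swapped = re.sub(r"(\w+),\s+(\w+)", r"\2 \1", text)
print(swapped)   # "John Conway;  Peter Kahrel"
```

    Capture groups are what make reordering possible; a literal find/change can only substitute fixed strings.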

  14. Optimal behaviour can violate the principle of regularity.

    Science.gov (United States)

    Trimmer, Pete C

    2013-07-22

    Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled as irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory--based on axioms including transitivity, regularity and the independence of irrelevant alternatives--is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision.

  15. Dimensional regularization in configuration space

    International Nuclear Information System (INIS)

    Bollini, C.G.; Giambiagi, J.J.

    1995-09-01

    Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum space Green functions. For this transformation, Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs
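
    The D-dimensional transform pair behind this construction can be written down explicitly. As a standard illustration (Euclidean signature; a textbook formula, not quoted from the paper), the Fourier transform of a massless power-law propagator is

```latex
\int \frac{d^{D}p}{(2\pi)^{D}}\,
     \frac{e^{\,i p\cdot x}}{(p^{2})^{\alpha}}
  \;=\;
  \frac{1}{(4\pi)^{D/2}}\,
  \frac{\Gamma\!\left(\frac{D}{2}-\alpha\right)}{\Gamma(\alpha)}
  \left(\frac{4}{x^{2}}\right)^{\frac{D}{2}-\alpha}.
```

    The x-space singularity at the origin is thus moderated by analytic continuation in the dimension (the abstract's ν), and ultraviolet divergences reappear as poles of the Gamma functions; for α = 1, D = 4 the formula reduces to the familiar massless propagator 1/(4π²x²).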

  16. Matrix regularization of 4-manifolds

    OpenAIRE

    Trzetrzelewski, M.

    2012-01-01

    We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...

  17. Regular Breakfast and Blood Lead Levels among Preschool Children

    Directory of Open Access Journals (Sweden)

    Needleman Herbert

    2011-04-01

    Full Text Available Abstract Background Previous studies have shown that fasting increases lead absorption in the gastrointestinal tract of adults. Regular meals/snacks are recommended as a nutritional intervention for lead poisoning in children, but epidemiological evidence of links between fasting and blood lead levels (B-Pb) is rare. The purpose of this study was to examine the association between eating a regular breakfast and B-Pb among children using data from the China Jintan Child Cohort Study. Methods Parents completed a questionnaire regarding children's breakfast-eating habit (regular or not), demographics, and food frequency. Whole blood samples were collected from 1,344 children for the measurement of B-Pb and micronutrients (iron, copper, zinc, calcium, and magnesium). B-Pb and other measures were compared between children with and without regular breakfast. Linear regression modeling was used to evaluate the association between regular breakfast and log-transformed B-Pb. The association between regular breakfast and risk of lead poisoning (B-Pb ≥ 10 μg/dL) was examined using logistic regression modeling. Results Median B-Pb among children who ate breakfast regularly and those who did not were 6.1 μg/dL and 7.2 μg/dL, respectively. Eating breakfast was also associated with greater blood zinc levels. Adjusting for other relevant factors, the linear regression model revealed that eating breakfast regularly was significantly associated with lower B-Pb (beta = -0.10 units of log-transformed B-Pb) compared with children who did not eat breakfast regularly (p = 0.02). Conclusion The present study provides some initial human data supporting the notion that eating a regular breakfast might reduce B-Pb in young children. To our knowledge, this is the first human study exploring the association between breakfast frequency and B-Pb in young children.

  18. On the equivalence of different regularization methods

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1985-01-01

    The R̂ (R-circumflex) operation preceded by the regularization procedure is discussed. Some arguments are given according to which the results may depend on the method of regularization introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)

  19. A study of the biological effects of rare earth elements at cellular level using nuclear techniques

    International Nuclear Information System (INIS)

    Feng Zhihui; Wang Xi; Zhang Sunxi; An Lizhi; Zhang Jingxia; Yao Huiying

    2001-01-01

    Objective: To investigate the biological effects and mechanisms of action of the rare earth elements La, Gd and Ce on cultured rat cells. Methods: The biological effects of La3+ on cultured rat cells, the subcellular distribution of La, Gd and Ce, the inflow of 45Ca2+ into the cells, and total cellular calcium were measured by isotopic tracing, Proton Induced X-Ray Emission analysis (PIXE) and the techniques of biochemistry and cellular biology. Results: La3+ at concentrations of 10^-10 (or 10^-9) to 10^-6 mol/L significantly increased the incorporation of 3H-TdR into DNA, total cellular protein, and the activity of mitochondrial succinic dehydrogenase. Cell cycle analysis showed that the proportion of cells in S phase increased accordingly under La3+ at the above range of concentrations. These values were significantly decreased, however, when the concentration of La3+ was raised to 10^-4 to 10^-3 mol/L. It was further found that La, Gd and Ce distributed mostly in the nuclei, and then in membranes. Gd and Ce also promoted the inflow of 45Ca2+ into the cells and increased the total calcium content in cells. Conclusions: 1) La3+ over the wide concentration range of 10^-10 (or 10^-9) to 10^-6 mol/L promotes proliferation of cultured rat cells, but at even higher concentrations (10^-4 to 10^-3 mol/L) shows cellular toxicity, with a striking dose-effect relationship. 2) La, Gd and Ce can enter the cells and mainly distribute in the nuclei. 3) Gd and Ce can promote the inflow of extracellular Ca2+ into the cells and increase total cellular calcium

  20. Damage identification method for continuous girder bridges based on spatially-distributed long-gauge strain sensing under moving loads

    Science.gov (United States)

    Wu, Bitao; Wu, Gang; Yang, Caiqian; He, Yi

    2018-05-01

    A novel damage identification method for concrete continuous girder bridges based on spatially-distributed long-gauge strain sensing is presented in this paper. First, the way the long-gauge strain influence line of a continuous girder bridge changes with the location of vehicles on the bridge is studied. Based on this behavior, a calculation method for the distribution of the area of the long-gauge strain history is investigated. Second, a numerical simulation of damage identification based on the distribution of the area of the long-gauge strain history is conducted, and the results indicate that this method is effective for identifying damage and is not affected by the speed, axle number or weight of vehicles. Finally, a test on a real highway bridge is conducted, and the experimental results also show that this method is very effective for identifying damage in continuous girder bridges, while the local element stiffness distribution can be revealed at the same time. This identified information is useful for the maintenance of continuous girder bridges on highways.
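
    The area-of-history feature can be sketched numerically. Below, two synthetic long-gauge strain histories are compared by their areas computed with the trapezoidal rule; the triangular histories and the 15% amplitude increase standing in for local damage are invented for illustration, not taken from the paper.

```python
import numpy as np

# Sketch: the area under a long-gauge strain time history as a
# damage-sensitive feature. The triangular histories are synthetic;
# local damage is mimicked by a 15% larger strain amplitude.
t = np.linspace(0.0, 10.0, 1001)                 # vehicle crossing time (s)
healthy = np.interp(t, [0, 5, 10], [0, 100, 0])  # microstrain history
damaged = 1.15 * healthy                         # stiffness loss -> larger strain

def area(y, x):
    # trapezoidal rule, written out to avoid version-specific numpy names
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

area_h = area(healthy, t)
area_d = area(damaged, t)
print(area_h, area_d / area_h)   # approximately 500 and 1.15 here
```

    Because the area aggregates the whole crossing, the ratio is insensitive to vehicle speed (which only rescales t), which is consistent with the robustness the abstract reports.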

  1. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig

    2017-10-18

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2-regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
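
    The ℓ2-regularized least-squares problem that the BPR solution converges to has the familiar closed form x = (AᵀA + γI)⁻¹Aᵀy. The sketch below solves it on random data for a few regularizer values; the data and the γ grid are illustrative, and the paper's MSE-based selection rule is not reproduced.

```python
import numpy as np

# Sketch of l2 (Tikhonov/ridge) regularized least squares:
#   min_x ||A x - y||^2 + gamma ||x||^2
# solved via its closed form. Data and gamma grid are illustrative only.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
y = A @ x_true + 0.1 * rng.standard_normal(50)

def ridge(A, y, gamma):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ y)

for gamma in (1e-3, 1e-1, 1e1):
    x = ridge(A, y, gamma)
    print(gamma, np.linalg.norm(A @ x - y), np.linalg.norm(x))
```

    Increasing γ trades residual fit for a smaller solution norm; navigating that trade-off is exactly what any regularization parameter selection rule, including the proposed BPR procedure, has to do.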

  2. Cellular image classification

    CERN Document Server

    Xu, Xiang; Lin, Feng

    2017-01-01

    This book introduces new techniques for cellular image feature extraction, pattern recognition and classification. The authors use the antinuclear antibodies (ANAs) in patient serum as the subjects and the Indirect Immunofluorescence (IIF) technique as the imaging protocol to illustrate the applications of the described methods. Throughout the book, the authors provide evaluations for the proposed methods on two publicly available human epithelial (HEp-2) cell datasets: ICPR2012 dataset from the ICPR'12 HEp-2 cell classification contest and ICIP2013 training dataset from the ICIP'13 Competition on cells classification by fluorescent image analysis. First, the reading of imaging results is significantly influenced by one’s qualification and reading systems, causing high intra- and inter-laboratory variance. The authors present a low-order LP21 fiber mode for optical single cell manipulation and imaging staining patterns of HEp-2 cells. A focused four-lobed mode distribution is stable and effective in optical...

  3. Energy Cost Minimization in Heterogeneous Cellular Networks with Hybrid Energy Supplies

    Directory of Open Access Journals (Sweden)

    Bang Wang

    2016-01-01

    Full Text Available The ever increasing data demand has led to a significant increase of energy consumption in cellular mobile networks. Recent advancements in heterogeneous cellular networks and green-energy-supplied base stations provide promising solutions for the cellular communications industry. In this article, we first review the motivations, challenges and approaches to the energy cost minimization problem for such green heterogeneous networks. Owing to the diversity of mobile traffic and renewable energy, the energy cost minimization problem involves both temporal and spatial optimization of resource allocation. We next present a new solution to illustrate how to combine the optimization of temporal green energy allocation and spatial mobile traffic distribution. The whole optimization problem is decomposed into four subproblems, and correspondingly our proposed solution is divided into four parts: energy consumption estimation, green energy allocation, user association, and green energy reallocation. Simulation results demonstrate that our proposed algorithm can significantly reduce the total energy cost.

  4. Modeling and Optimization of Inventory-Distribution Routing Problem for Agriculture Products Supply Chain

    OpenAIRE

    Liao, Li; Li, Jianfeng; Wu, Yaohua

    2013-01-01

    Mathematical models of the inventory-distribution routing problem for a two-echelon agriculture products distribution network are established. They are based on two management modes, franchise chain and regular chain, and feature one-to-many distribution, interval periodic ordering, inventory-dependent demand, deterioration treatment costs of agriculture products, vehicle start-up costs and so forth. A heuristic adaptive genetic algorithm is then presented for the franchise chain model. For the regular chain model,...

  5. Cellular oxido-reductive proteins of Chlamydomonas reinhardtii control the biosynthesis of silver nanoparticles

    Directory of Open Access Journals (Sweden)

    Barwal Indu

    2011-12-01

    Full Text Available Abstract Background Elucidation of the molecular mechanism of silver nanoparticle (SNP) biosynthesis is important for controlling SNP size, shape and monodispersity, and is of prime importance for commercialization and for developing methodology to control the shape and size (uniform distribution) of SNPs. The unicellular alga Chlamydomonas reinhardtii was exploited as a model system to elucidate the role of cellular proteins in SNP biosynthesis. Results Synthesis of silver nanoparticles mediated by C. reinhardtii cell-free extract (in vitro) and by in vivo cells reveals SNPs of size ranges 5 ± 1 to 15 ± 2 nm and 5 ± 1 to 35 ± 5 nm, respectively. In vivo biosynthesized SNPs were localized in the peripheral cytoplasm and at one side of the flagella root, the site of the ATP transport pathway and its synthesis-related enzymes. This provides evidence for the involvement of oxidoreductive proteins in the biosynthesis and stabilization of SNPs. Alteration of the size distribution and a decrease in the synthesis rate of SNPs in protein-depleted fractions confirmed the involvement of cellular proteins in SNP biosynthesis. Spectroscopic and SDS-PAGE analyses indicate the association of various proteins with C. reinhardtii-mediated in vivo and in vitro biosynthesized SNPs. Using MALDI-MS-MS, we identified various cellular proteins associated with biosynthesized (in vivo and in vitro) SNPs, such as ATP synthase, superoxide dismutase, carbonic anhydrase, ferredoxin-NADP+ reductase, histone, etc. However, these proteins did not associate with pre-synthesized silver nanoparticles on incubation in vitro. Conclusion The present study indicates the involvement of molecular machinery and various cellular proteins in the biosynthesis of silver nanoparticles. In this report, the study is mainly focused towards understanding the role of diverse cellular proteins in the synthesis and capping of silver

  6. MRI reconstruction with joint global regularization and transform learning.

    Science.gov (United States)

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach improves MRI reconstruction performance when compared to algorithms which use either the patchwise transform learning or the global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Biomechanics of cellular solids.

    Science.gov (United States)

    Gibson, Lorna J

    2005-03-01

    Materials with a cellular structure are widespread in nature and include wood, cork, plant parenchyma and trabecular bone. Natural cellular materials are often mechanically efficient: the honeycomb-like microstructure of wood, for instance, gives it an exceptionally high performance index for resisting bending and buckling. Here we review the mechanics of a wide range of natural cellular materials and examine their role in lightweight natural sandwich structures (e.g. iris leaves) and natural tubular structures (e.g. plant stems or animal quills). We also describe two examples of engineered biomaterials with a cellular structure, designed to replace or regenerate tissue in the body.

  8. Asymmetric segregation of damaged cellular components in spatially structured multicellular organisms.

    Directory of Open Access Journals (Sweden)

    Charlotte Strandkvist

    Full Text Available The asymmetric distribution of damaged cellular components has been observed in species ranging from fission yeast to humans. To study the potential advantages of damage segregation, we have developed a mathematical model describing ageing mammalian tissue, that is, a multicellular system of somatic cells that do not rejuvenate at cell division. To illustrate the applicability of the model, we specifically consider damage incurred by mutations to mitochondrial DNA, which are thought to be implicated in the mammalian ageing process. We show analytically that the asymmetric distribution of damaged cellular components reduces the overall damage level and increases the longevity of the cell population. Motivated by the experimental reports of damage segregation in human embryonic stem cells, dividing symmetrically with respect to cell-fate, we extend the model to consider spatially structured systems of cells. Imposing spatial structure reduces, but does not eliminate, the advantage of asymmetric division over symmetric division. The results suggest that damage partitioning could be a common strategy for reducing the accumulation of damage in a wider range of cell types than previously thought.

  9. Strictly-regular number system and data structures

    DEFF Research Database (Denmark)

    Elmasry, Amr Ahmed Abd Elmoneim; Jensen, Claus; Katajainen, Jyrki

    2010-01-01

    We introduce a new number system that we call the strictly-regular system, which efficiently supports the operations: digit-increment, digit-decrement, cut, concatenate, and add. Compared to other number systems, the strictly-regular system has distinguishable properties. It is superior to the re...

  10. Analysis of regularized Navier-Stokes equations, 2

    Science.gov (United States)

    Ou, Yuh-Roung; Sritharan, S. S.

    1989-01-01

    A practically important regularization of the Navier-Stokes equations was analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutions was studied. Local as well as global invariant manifolds were found. Regularity properties of these manifolds are analyzed.

  11. Imaging the lipidome: omega-alkynyl fatty acids for detection and cellular visualization of lipid-modified proteins.

    Science.gov (United States)

    Hannoush, Rami N; Arenas-Ramirez, Natalia

    2009-07-17

    Fatty acylation or lipid modification of proteins controls their cellular activation and diverse roles in physiology. It mediates protein-protein and protein-membrane interactions and plays an important role in regulating cellular signaling pathways. Currently, there is a need for visualizing lipid modifications of proteins in cells. Herein we report novel chemical probes based on omega-alkynyl fatty acids for biochemical detection and cellular imaging of lipid-modified proteins. Our study shows that omega-alkynyl fatty acids of varying chain length are metabolically incorporated onto cellular proteins. Using fluorescence imaging, we describe the subcellular distribution of lipid-modified proteins across a panel of different mammalian cell lines and during cell division. Our results demonstrate that this methodology is a useful diagnostic tool for analyzing the lipid content of cellular proteins and for studying the dynamic behavior of lipid-modified proteins in various disease or physiological states.

  12. On the de Vaucouleurs density-radius relation and the cellular intermediate large-scale structure of the universe

    International Nuclear Information System (INIS)

    Ruffini, R.

    1989-01-01

    In the present interpretation of the de Vaucouleurs mass density relation within the framework of universal fractal and cellular structure, attention is given to the relationship of such structure to the conventionally assumed cosmological conditions of three-dimensional homogeneity and isotropy. It is noted to be possible that the degree of regularity of the fractal structure will allow the definition, for distances smaller than the upper cutoff, of a Hubble ratio; this would be a function of the distance, however, and is clearly not directly related to a cosmological interpretation. 44 refs

  13. Regularization of the Coulomb scattering problem

    International Nuclear Information System (INIS)

    Baryshevskii, V.G.; Feranchuk, I.D.; Kats, P.B.

    2004-01-01

The exact solution of the Schroedinger equation for the Coulomb potential is used within the scope of both stationary and time-dependent scattering theories in order to find the parameters which determine the regularization of the Rutherford cross section when the scattering angle tends to zero but the distance r from the center remains finite. The angular distribution of the particles scattered in the Coulomb field is studied at a rather large but finite distance r from the center. It is shown that the standard asymptotic representation of the wave functions is inapplicable when small scattering angles are considered. The unitary property of the scattering matrix is analyzed and the 'optical' theorem for this case is discussed. The total and transport cross sections for scattering of a particle by the Coulomb center prove to be finite and are calculated in analytical form. It is shown that the effects under consideration can be important for the observed characteristics of transport processes in semiconductors which are determined by electron and hole scattering by the field of charged impurity centers.

  14. Regularities, Natural Patterns and Laws of Nature

    Directory of Open Access Journals (Sweden)

    Stathis Psillos

    2014-02-01

    Full Text Available  The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology.  Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.

  15. Cellular Reflectarray Antenna

    Science.gov (United States)

    Romanofsky, Robert R.

    2010-01-01

    The cellular reflectarray antenna is intended to replace conventional parabolic reflectors that must be physically aligned with a particular satellite in geostationary orbit. These arrays are designed for specified geographical locations, defined by latitude and longitude, each called a "cell." A particular cell occupies nominally 1,500 square miles (3,885 sq. km), but this varies according to latitude and longitude. The cellular reflectarray antenna designed for a particular cell is simply positioned to align with magnetic North, and the antenna surface is level (parallel to the ground). A given cellular reflectarray antenna will not operate in any other cell.

  16. Consistent Partial Least Squares Path Modeling via Regularization.

    Science.gov (United States)

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
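
    The ridge-type remedy described above can be illustrated with a minimal sketch of plain ridge regression on two nearly collinear predictors (the data, variable names and penalty value are illustrative, not taken from the PLSc algorithm itself):

    ```python
    import numpy as np

    def ridge_coefficients(X, y, lam):
        """Ridge estimate (X'X + lam*I)^(-1) X'y: the penalty lam
        stabilizes the solve when columns of X are nearly collinear."""
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

    # Two nearly collinear predictors, as in the multicollinearity
    # scenario above; the true coefficients are (1, 1).
    rng = np.random.default_rng(0)
    x1 = rng.normal(size=100)
    X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=100)])
    y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=100)

    ols = ridge_coefficients(X, y, 0.0)    # unregularized: unstable
    ridge = ridge_coefficients(X, y, 1.0)  # penalized: shrunk and stable
    ```

    Shrinkage trades a little bias for a large reduction in variance; the norm of the ridge solution never exceeds that of the unregularized one.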

  17. Consistent Partial Least Squares Path Modeling via Regularization

    Directory of Open Access Journals (Sweden)

    Sunho Jung

    2018-02-01

Full Text Available Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.

  18. Regularization of the Boundary-Saddle-Node Bifurcation

    Directory of Open Access Journals (Sweden)

    Xia Liu

    2018-01-01

Full Text Available In this paper we treat a particular class of planar Filippov systems which consist of two smooth systems that are separated by a discontinuity boundary. In such systems one vector field undergoes a saddle-node bifurcation while the other vector field is transversal to the boundary. The boundary-saddle-node (BSN) bifurcation occurs at a critical value when the saddle-node point is located on the discontinuity boundary. We derive a local topological normal form for the BSN bifurcation and study its local dynamics by applying the classical Filippov’s convex method and a novel regularization approach. In fact, by the regularization approach a given Filippov system is approximated by a piecewise-smooth continuous system. Moreover, the regularization process produces a singular perturbation problem where the original discontinuous set becomes a center manifold. Thus, the regularization enables us to make use of the established theories for continuous systems and slow-fast systems to study the local behavior around the BSN bifurcation.

  19. Regularities in the behavior of radioactive aerosols in the near-earth atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Makhonko, K.P.; Avramenko, A.S.; Martynenko, V.P.; Volokitin, A.A.; Rabotnova, F.A.

    1979-10-01

    The relationship is considered between the power of nuclear explosions and mean annual magnitudes of surface concentrations and atmospheric fallout of long-lived isotopes which are products of these explosions. The role of Chinese nuclear explosions in pollution of the atmosphere over the territory of the USSR is demonstrated. Regularities are discussed about the annual course of concentrations of products of atmospheric nuclear explosions, features in concentration distribution over the territory of the USSR as well as the impact of the amount and type of atmospheric precipitation upon the formation of radioactive fallout.

  20. Linearizable cellular automata

    International Nuclear Information System (INIS)

    Nobe, Atsushi; Yura, Fumitaka

    2007-01-01

The initial value problem for a class of reversible elementary cellular automata with periodic boundaries is reduced to an initial-boundary value problem for a class of linear systems on the finite commutative ring Z_2. Moreover, a family of such linearizable cellular automata is given.
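
    Rule 90 with periodic boundaries is a standard example of a linearizable elementary cellular automaton (used here as an assumed illustration, not necessarily the exact class treated in the paper): its update x_i' = x_{i-1} XOR x_{i+1} is addition over Z_2, so one time step equals multiplication by a circulant 0/1 matrix mod 2.

    ```python
    import numpy as np

    def rule90_step(state):
        """One update of elementary CA rule 90 with periodic boundary:
        x_i' = x_{i-1} XOR x_{i+1}, i.e. addition over Z_2."""
        return np.roll(state, 1) ^ np.roll(state, -1)

    def transition_matrix(n):
        """The same update as a linear map over Z_2: a circulant
        0/1 matrix with ones on the two off-diagonals (mod n)."""
        M = np.zeros((n, n), dtype=int)
        for i in range(n):
            M[i, (i - 1) % n] = 1
            M[i, (i + 1) % n] = 1
        return M

    state = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    direct = rule90_step(state)                  # CA update
    linear = transition_matrix(8) @ state % 2    # matrix-vector update mod 2
    ```

    Because the dynamics is linear, t steps reduce to the matrix power M^t mod 2, which is what makes the initial value problem tractable as a linear system.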

  1. Low-Complexity Regularization Algorithms for Image Deblurring

    KAUST Repository

    Alanazi, Abdulrahman

    2016-11-01

    Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in the RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product so as to reduce the computations. Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problems by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square root regularized total variation (SRTV). Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM and the restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. 
However, we developed algorithms that also work
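
    The regularized least-squares (RLS) step at the heart of such methods can be sketched in one dimension; the blur kernel, signal and regularization parameter below are illustrative, not taken from the thesis:

    ```python
    import numpy as np

    def blur_matrix(n, kernel):
        """Circulant matrix H implementing 1-D convolution with kernel."""
        H = np.zeros((n, n))
        k = len(kernel) // 2
        for i in range(n):
            for j, w in enumerate(kernel):
                H[i, (i + j - k) % n] = w
        return H

    def rls_deblur(H, y, lam):
        """Regularized least squares: argmin_x ||Hx - y||^2 + lam*||x||^2,
        solved in closed form as x = (H'H + lam*I)^(-1) H'y."""
        n = H.shape[1]
        return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

    n = 64
    x_true = np.zeros(n)
    x_true[20:30] = 1.0                       # a simple 1-D "image"
    H = blur_matrix(n, [0.25, 0.5, 0.25])     # mild known blur (non-blind case)
    y = H @ x_true + 0.01 * np.random.default_rng(1).normal(size=n)
    x_hat = rls_deblur(H, y, 0.1)             # deblurred estimate
    ```

    A larger lam suppresses noise amplification at the cost of a smoother, more biased estimate; choosing a near-optimal lam is exactly the parameter-selection problem the abstract addresses.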

  2. Improvements in GRACE Gravity Fields Using Regularization

    Science.gov (United States)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in the regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins, like the Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. 
The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or

  3. Electromagnetic cellular interactions.

    Science.gov (United States)

    Cifra, Michal; Fields, Jeremy Z; Farhadi, Ashkan

    2011-05-01

Chemical and electrical interaction within and between cells is well established. Just the opposite is true about cellular interactions via other physical fields. The most probable candidate for another form of cellular interaction is the electromagnetic field. We review theories and experiments on how cells can generate and detect electromagnetic fields generally, and on whether the cell-generated electromagnetic field can mediate cellular interactions. We do not limit ourselves here to specialized electro-excitable cells. Rather, we describe physical processes that are of a more general nature and probably present in almost every type of living cell. The spectral range included is broad, from kHz to the visible part of the electromagnetic spectrum. We show that there is a rather large number of theories on how cells can generate and detect electromagnetic fields, and we discuss experimental evidence on electromagnetic cellular interactions in the modern scientific literature. Although small, this body of evidence is continuously accumulating. Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. Power Control for D2D Underlay Cellular Networks with Imperfect CSI

    KAUST Repository

    Memmi, Amen

    2017-02-09

Device-to-Device communications underlying the cellular infrastructure is a technology that has recently been proposed as a promising solution to enhance cellular network capabilities. However, interference is the major challenge since the same resources are shared by both systems. Therefore, interference management techniques are required to keep the interference under control. In this work, in order to mitigate interference, we consider centralized and distributed power control algorithms in a one-cell random network model. Unlike previous works, we assume that the channel state information may be imperfect and include estimation errors. We evaluate how this uncertainty impacts performance. In the centralized approach, we derive the optimal powers that maximize the coverage probability and the rate of the cellular user while scheduling as many D2D links as possible. These powers are computed at the base station (BS) and then delivered to the users, and hence the name

  5. ANALYSIS OF REGULARITIES IN DISTRIBUTION OF EARTHQUAKES BY FOCAL DISPLACEMENT IN THE KURIL-OKHOTSK REGION BEFORE THE CATASTROPHIC SIMUSHIR EARTHQUAKE OF 15 NOVEMBER 2006

    Directory of Open Access Journals (Sweden)

    Timofei K. Zlobin

    2012-01-01

Full Text Available The catastrophic Simushir earthquake occurred on 15 November 2006 in the Kuril-Okhotsk region near the Middle Kuril Islands, a transition zone between the Eurasian continent and the Pacific Ocean. It was followed by numerous strong earthquakes. It is established that the catastrophic earthquake was prepared at a site of increased relative effective pressure located at the border of a low-pressure area (Figure 1). Based on data from GlobalCMT (Harvard), earthquake focal mechanisms were reconstructed, and tectonic stresses, the seismotectonic setting and the earthquake distribution pattern were studied to analyze the field of stresses in the region before the Simushir earthquake (Figures 2 and 3; Table 1). Five areas of various types of movement were determined; three of them stretch along the Kuril Islands. It is established that seismodislocations in earthquake focal areas are regularly distributed: in each of the determined areas, displacements of a specific type (shear or reverse shear) are concentrated, giving evidence of the alternation of zones of horizontal stretching and compression. The presence of the horizontal stretching and compression zones can be explained by a subduction model (Figure 4), and detailed studies of the state of stresses of the Kuril region confirm such zones (Figure 5). The established features of tectonic stresses before the catastrophic Simushir earthquake of 15 November 2006 contribute to studies of earthquake forecasting problems. The state of stresses and the geodynamic conditions suggesting occurrence of new earthquakes can be assessed from data on the distribution of horizontal compression, stretching and shear areas of the Earth’s crust and the upper mantle in the Kuril region.

  6. Gravitational lensing and ghost images in the regular Bardeen no-horizon spacetimes

    International Nuclear Information System (INIS)

    Schee, Jan; Stuchlík, Zdeněk

    2015-01-01

We study deflection of light rays and gravitational lensing in the regular Bardeen no-horizon spacetimes. Flatness of these spacetimes in the central region implies the existence of interesting optical effects related to photons crossing the gravitational field of the no-horizon spacetimes with low impact parameters. These effects occur due to the existence of a critical impact parameter giving maximal deflection of light rays in the Bardeen no-horizon spacetimes. We give the critical impact parameter as a function of the specific charge of the spacetimes, and discuss 'ghost' direct and indirect images of Keplerian discs, generated by photons with low impact parameters. The ghost direct images can occur only for large inclination angles of distant observers, while ghost indirect images can occur also for small inclination angles. We determine the range of the frequency shift of photons generating the ghost images and determine the distribution of the frequency shift across these images. We compare them to those of the standard direct images of the Keplerian discs. The difference of the ranges of the frequency shift on the ghost and direct images could serve as a quantitative measure of the Bardeen no-horizon spacetimes. The regions of the Keplerian discs giving the ghost images are determined as a function of the specific charge of the no-horizon spacetimes. For comparison we construct direct and indirect (ordinary and ghost) images of Keplerian discs around Reissner-Nordström naked singularities, demonstrating a clear qualitative difference to the ghost direct images in the regular Bardeen no-horizon spacetimes. The optical effects related to the low impact parameter photons thus give a clear signature of the regular Bardeen no-horizon spacetimes, as no similar phenomena could occur in the black hole or naked singularity spacetimes. Similar direct ghost images have to occur in any regular no-horizon spacetimes having a nearly flat central region.

  7. Deterministic automata for extended regular expressions

    Directory of Open Access Journals (Sweden)

    Syzdykov Mirzakhmet

    2017-12-01

Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of “overriding” the source NFA (an NFA not defined by the subset construction rules) is used. The past work described only the algorithm for the AND-operator (the intersection of regular languages); in this paper the construction for the MINUS-operator (and the complement) is shown.
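
    The extended operators above are classically realized directly on DFAs by the product construction (intersection) and by flipping the accepting states of a complete DFA (complement); subtraction then follows as L1 − L2 = L1 ∩ complement(L2). A minimal sketch, with a dict-based DFA encoding assumed purely for illustration:

    ```python
    def intersect_dfa(d1, d2):
        """Product construction: run both DFAs in lockstep; a pair
        state accepts iff both components accept."""
        states = [(s1, s2) for s1 in d1["states"] for s2 in d2["states"]]
        delta = {(p, a): (d1["delta"][(p[0], a)], d2["delta"][(p[1], a)])
                 for p in states for a in d1["alphabet"]}
        return {"states": states, "alphabet": d1["alphabet"],
                "start": (d1["start"], d2["start"]),
                "accept": {p for p in states
                           if p[0] in d1["accept"] and p[1] in d2["accept"]},
                "delta": delta}

    def complement_dfa(d):
        """Complement of a complete DFA: flip the accepting states."""
        return {**d, "accept": set(d["states"]) - set(d["accept"])}

    def accepts(dfa, word):
        s = dfa["start"]
        for a in word:
            s = dfa["delta"][(s, a)]
        return s in dfa["accept"]

    # L1: even number of 1s; L2: word ends in 1 (alphabet {0, 1}).
    L1 = {"states": ["e", "o"], "alphabet": "01", "start": "e",
          "accept": {"e"},
          "delta": {("e", "0"): "e", ("e", "1"): "o",
                    ("o", "0"): "o", ("o", "1"): "e"}}
    L2 = {"states": ["n", "y"], "alphabet": "01", "start": "n",
          "accept": {"y"},
          "delta": {("n", "0"): "n", ("n", "1"): "y",
                    ("y", "0"): "n", ("y", "1"): "y"}}
    both = intersect_dfa(L1, L2)   # even number of 1s AND ends in 1
    ```

    The product DFA has |Q1|·|Q2| states, which is why direct constructions that avoid the full product (as in the paper's NFA-overriding approach) can be attractive.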

  8. Regularities of intermediate adsorption complex relaxation

    International Nuclear Information System (INIS)

    Manukova, L.A.

    1982-01-01

The experimental data characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N2 system at 77 K are given. The molecular-beam method has been used in the investigation. Analytical expressions are obtained for the change, during relaxation, of the full and specific rates of transition from the intermediate state into the 'non-reversible' state, of desorption into the gas phase, and of accumulation of particles in the intermediate state.

  9. An Analysis of Dynamic Instability on TC-Like Vortex Using the Regularization-Based Eigenmode Linear Superposition Method

    Directory of Open Access Journals (Sweden)

    Shuang Liu

    2018-01-01

Full Text Available In this paper, the eigenmode linear superposition (ELS) method based on regularization is used to discuss the distributions of all eigenmodes and the role of their instability in the intensity and structure change of a TC-like vortex. Results show that the regularization approach can overcome the ill-posed problem that occurs in solving for the mode weight coefficients when the ELS method is applied to analyze the impacts of dynamic instability on the intensity and structure change of the TC-like vortex. The generalized cross-validation (GCV) method and the L-curve method are used to determine the regularization parameters, and the results of the two approaches are compared. It is found that the results based on the GCV method are closer to the given initial condition in the solution of the inverse problem of the vortex system. Then, the instability characteristics of the hollow vortex taken as the basic state are examined based on the linear barotropic shallow-water equations. It is shown that the wavenumber distribution of system instability obtained from the ELS method is consistent with that of the numerical analysis based on the normal mode. On the other hand, the evolution of the hollow vortex is discussed using the product of each eigenmode and its corresponding weight coefficient. Results show that the intensity and structure change of the system are mainly affected by the dynamic instability in the early stage of disturbance development, and the most unstable mode has a dominant role in the growth rate and the horizontal distribution of intense disturbance in the near-core region. Moreover, the wave structure of the most unstable mode possesses typical characteristics of mixed vortex Rossby-inertia-gravity waves (VRIGWs).
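
    The GCV criterion used above for choosing the regularization parameter can be sketched for a generic Tikhonov-regularized least-squares problem (the toy data and grid are illustrative; the ELS problem itself has the same structure with a different design matrix):

    ```python
    import numpy as np

    def gcv_score(X, y, lam):
        """Generalized cross-validation score for Tikhonov-regularized
        least squares: ||y - A y||^2 / (n - tr A)^2, where
        A = X (X'X + lam*I)^(-1) X' is the influence matrix."""
        n, p = X.shape
        A = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
        resid = y - A @ y
        return (resid @ resid) / (n - np.trace(A)) ** 2

    # Choose lambda by minimizing GCV over a log-spaced grid.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(50, 5))
    y = X @ np.ones(5) + 0.1 * rng.normal(size=50)
    grid = np.logspace(-4, 2, 25)
    best = min(grid, key=lambda lam: gcv_score(X, y, lam))
    ```

    Unlike the L-curve, GCV requires no explicit noise-level estimate, which is one reason the two selectors can give different parameters on the same inverse problem.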

  10. Representing and computing regular languages on massively parallel networks

    Energy Technology Data Exchange (ETDEWEB)

    Miller, M.I.; O' Sullivan, J.A. (Electronic Systems and Research Lab., of Electrical Engineering, Washington Univ., St. Louis, MO (US)); Boysam, B. (Dept. of Electrical, Computer and Systems Engineering, Rensselaer Polytechnic Inst., Troy, NY (US)); Smith, K.R. (Dept. of Electrical Engineering, Southern Illinois Univ., Edwardsville, IL (US))

    1991-01-01

This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum-entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the practically important result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structure of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor, consisting of 1024 mesh-connected bit-serial processing elements, for performing automated segmentation of electron-micrograph images.

  11. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin

    2014-01-01

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse

  12. Cuttlebone-like V2O5 Nanofibre Scaffolds - Advances in Structuring Cellular Solids

    Science.gov (United States)

    Knöller, Andrea; Runčevski, Tomče; Dinnebier, Robert E.; Bill, Joachim; Burghard, Zaklina

    2017-02-01

The synthesis of ceramic materials combining high porosity and permeability with good mechanical stability is challenging, as optimising the latter requires compromises regarding the first two properties. Nonetheless, significant progress can be made in this direction by taking advantage of the structural design principles evolved by nature. Natural cellular solids achieve good mechanical stability via a defined hierarchical organisation of the building blocks they are composed of. Here, we report the first synthetic, ceramic-based scaffold whose architecture closely mimics that of cuttlebone, a structural biomaterial whose porosity exceeds that of most other natural cellular solids, whilst preserving an excellent mechanical strength. The nanostructured, single-component scaffold, obtained by ice-templated assembly of V2O5 nanofibres, features a highly sophisticated and elaborate architecture of equally spaced lamellas, which are regularly connected by pillars as lamella support. It displays an unprecedented porosity of 99.8 %, complemented by an enhanced mechanical stability. This novel bioinspired, functional material not only displays mechanical characteristics similar to natural cuttlebone, but the multifunctionality of the V2O5 nanofibres also renders possible applications, including catalysts, sensors and electrodes for energy storage.

  13. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
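
    The idea of using combination coefficients as a similarity measure that regularizes ranking scores can be illustrated with a simplified graph-smoothing variant (this closed-form sketch is an assumption for illustration, not the paper's iterative joint algorithm; W plays the role of the learned coefficients):

    ```python
    import numpy as np

    def regularized_scores(W, r, lam):
        """Smooth initial relevance scores r over a similarity graph W:
        argmin_f ||f - r||^2 + lam * f' L f  with Laplacian L = D - W,
        whose closed form is f = (I + lam*L)^(-1) r."""
        L = np.diag(W.sum(axis=1)) - W
        n = len(r)
        return np.linalg.solve(np.eye(n) + lam * L, r)

    # Objects 0 and 1 are similar, object 2 is isolated; smoothing
    # pulls the scores of 0 and 1 together and leaves 2 untouched.
    W = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
    r = np.array([1.0, 0.0, 0.5])
    f = regularized_scores(W, r, 10.0)
    ```

    The Laplacian penalty f' L f = Σ w_ij (f_i − f_j)² is exactly the mechanism by which similar objects are pushed toward similar ranking scores.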

  14. 20 CFR 226.35 - Deductions from regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Deductions from regular annuity rate. 226.35... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...

  15. In vitro studies of cellular response to DNA damage induced by boron neutron capture therapy

    International Nuclear Information System (INIS)

    Perona, M.; Pontiggia, O.; Carpano, M.; Thomasz, L.; Thorp, S.; Pozzi, E.; Simian, M.; Kahl, S.; Juvenal, G.; Pisarev, M.; Dagrosa, A.

    2011-01-01

    The aim of these studies was to evaluate the mechanisms of cellular response to DNA damage induced by BNCT. Thyroid carcinoma cells were incubated with 10 BPA or 10 BOPP and irradiated with thermal neutrons. The surviving fraction, the cell cycle distribution and the expression of p53 and Ku70 were analyzed. Different cellular responses were observed for each irradiated group. The decrease of Ku70 in the neutrons +BOPP group could play a role in the increase of sensitization to radiation.

  16. Regularization theory for ill-posed problems selected topics

    CERN Document Server

    Lu, Shuai

    2013-01-01

This monograph is a valuable contribution to the highly topical and extremely productive field of regularisation methods for inverse and ill-posed problems. The author is an internationally outstanding and accepted mathematician in this field. In his book he offers a well-balanced mixture of basic and innovative aspects. He demonstrates new, differentiated viewpoints and important examples for applications. The book demonstrates the current developments in the field of regularization theory, such as multiparameter regularization and regularization in learning theory. The book is written for graduate and PhD students

  17. Student perception of the educational environment in regular and bridging nursing programs in Saudi Arabia using the Dundee Ready Educational Environment Measure.

    Science.gov (United States)

    Al Nozha, Omar Mansour; Fadel, Hani T

    2017-01-01

Taibah University offers regular nursing (RNP) and nursing bridging (NBP) bachelor programs. We evaluated student perception of the learning environment as one means of quality assurance. To assess nursing student perception of their educational environment, to compare the perceptions of regular and bridging students, and to compare the perceptions of students in the old and new curricula. Cross-sectional survey. College of Nursing at Taibah University, Madinah, Saudi Arabia. The Dundee Ready Educational Environment Measure (DREEM) instrument was distributed to over 714 nursing students to assess perception of the educational environment. Independent samples t test and Pearson's chi square were used to compare the programs and curricula. The DREEM inventory score. Of 714 students, 271 (38%) were RNP students and 443 (62%) were NBP students. The mean (standard deviation) DREEM score was 111 (25). No significant differences were observed between the programs except for the domain "academic self-perceptions" being higher in RNP students (P < .05). Nursing students generally perceived their learning environment as more positive than negative. Regular students were more positive than bridging students. Students who experienced the new curriculum were more positive towards learning. The cross-sectional design and unequal gender and study level distributions may limit generalizability of the results. Longitudinal, large-scale studies with more even distributions of participant characteristics are needed.

  18. 20 CFR 226.34 - Divorced spouse regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Divorced spouse regular annuity rate. 226.34... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.34 Divorced spouse regular annuity rate. The regular annuity rate of a divorced spouse is equal to...

  19. Chimeric mitochondrial peptides from contiguous regular and swinger RNA.

    Science.gov (United States)

    Seligmann, Hervé

    2016-01-01

    Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying DNA's protein-coding potential by 24. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses, assuming abrupt switches between regular and swinger transcription, detect chimeric peptides encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in results. Chimeric peptides are 200 × rarer than swinger peptides (3/100,000 versus 6/1000). Among 186 peptides with > 8 residues in both their regular and swinger parts, regular parts of eleven chimeric peptides correspond to six of the thirteen recognized mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. Present results strengthen hypotheses that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.
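    As a concrete illustration of the exchange rules above, the following Python sketch (illustrative only, not the authors' code; the RNA fragment is hypothetical) applies the symmetric A ↔ C rule. Symmetric rules are bijective involutions, so applying the rule twice restores the original sequence:

```python
# Hypothetical sketch of one symmetric "swinger" exchange rule (A <-> C).
# Symmetric rules are involutions: applying them twice restores the RNA.
SYMMETRIC_AC = str.maketrans("AC", "CA")

def swinger_transform(rna: str) -> str:
    """Exchange every A with C (and vice versa); U and G are untouched."""
    return rna.translate(SYMMETRIC_AC)

regular_part = "AUGGCC"                      # regularly transcribed stretch
swinger_part = swinger_transform("AUGGCC")   # same stretch under A <-> C: "CUGGAA"
chimeric_rna = regular_part + swinger_part   # chimera: part regular, part swinger
```

A chimeric RNA of the kind detected here concatenates a regularly transcribed stretch with a swinger-transcribed one, as in the last line.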

  20. Stochastic cellular automata model of cell migration, proliferation and differentiation: validation with in vitro cultures of muscle satellite cells.

    Science.gov (United States)

    Garijo, N; Manzano, R; Osta, R; Perez, M A

    2012-12-07

    Cell migration and proliferation have been modelled in the literature as processes similar to diffusion. However, using diffusion models to simulate the proliferation and migration of cells tends to create a homogeneous distribution in the cell density that does not correlate with empirical observations. In fact, the mechanism of cell dispersal is not diffusion. Cells disperse by crawling or proliferation, or are transported in a moving fluid. The use of cellular automata, particle models or cell-based models can overcome this limitation. This paper presents a stochastic cellular automata model to simulate the proliferation, migration and differentiation of cells. These processes are considered as completely stochastic as well as discrete. The model developed was applied to predict the behaviour of in vitro cell cultures performed with adult muscle satellite cells. Moreover, a non-homogeneous distribution of cells has been observed inside the culture well and, using the above-mentioned stochastic cellular automata model, we have been able to predict this heterogeneous cell distribution and compute accurate quantitative results. Differentiation was also incorporated into the computational simulation. The results predicted the myotube formation that typically occurs with adult muscle satellite cells. In conclusion, we have shown how a stochastic cellular automata model can be implemented and is capable of reproducing the in vitro behaviour of adult muscle satellite cells. Copyright © 2012 Elsevier Ltd. All rights reserved.
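    A minimal sketch of this kind of stochastic, discrete lattice model follows. The probabilities, neighbourhood and grid size are illustrative assumptions, not the calibrated values of the paper:

```python
import random

def step(grid, p_div=0.2, p_move=0.5, rng=random):
    """One stochastic, discrete update of a lattice of cells (1 = occupied).
    Each cell picks a random empty von Neumann neighbour and either divides
    into it (probability p_div) or migrates to it (probability p_move)."""
    n = len(grid)
    cells = [(i, j) for i in range(n) for j in range(n) if grid[i][j]]
    rng.shuffle(cells)
    for i, j in cells:
        empty = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= i + di < n and 0 <= j + dj < n and not grid[i + di][j + dj]]
        if not empty:
            continue                              # contact inhibition: no free site
        ni, nj = rng.choice(empty)
        r = rng.random()
        if r < p_div:
            grid[ni][nj] = 1                      # proliferation: daughter cell
        elif r < p_div + p_move:
            grid[ni][nj], grid[i][j] = 1, 0       # migration
    return grid

rng = random.Random(42)
grid = [[0] * 9 for _ in range(9)]
grid[4][4] = 1                                    # seed one satellite cell
for _ in range(20):
    step(grid, rng=rng)
count = sum(map(sum, grid))                       # colony grows stochastically
```

Because division adds cells only to empty neighbouring sites while migration conserves them, repeated runs produce the kind of heterogeneous, clustered distributions that diffusion models cannot.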

  1. Fast and compact regular expression matching

    DEFF Research Database (Denmark)

    Bille, Philip; Farach-Colton, Martin

    2008-01-01

    We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how...... to improve the space and/or remove a dependency on the alphabet size for each problem using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way....

  2. Accumulation, subcellular distribution and toxicity of inorganic mercury and methylmercury in marine phytoplankton

    Energy Technology Data Exchange (ETDEWEB)

    Wu Yun [Division of Life Science, Hong Kong University of Science and Technology (HKUST), Clear Water Bay, Kowloon (Hong Kong); Wang Wenxiong, E-mail: wwang@ust.hk [Division of Life Science, Hong Kong University of Science and Technology (HKUST), Clear Water Bay, Kowloon (Hong Kong)

    2011-10-15

    We examined the accumulation, subcellular distribution, and toxicity of Hg(II) and MeHg in three marine phytoplankton (the diatom Thalassiosira pseudonana, the green alga Chlorella autotrophica, and the flagellate Isochrysis galbana). For MeHg, the inter-species toxic difference could be best interpreted by the total cellular or intracellular accumulation. For Hg(II), both I. galbana and T. pseudonana exhibited similar sensitivity, but they each accumulated a different level of Hg(II). A higher percentage of Hg(II) was bound to the cellular debris fraction in T. pseudonana than in I. galbana, implying that the cellular debris may play an important role in Hg(II) detoxification. Furthermore, heat-stable proteins were a major binding pool for MeHg, while the cellular debris was an important binding pool for Hg(II). Elucidating the different subcellular fates of Hg(II) and MeHg may help us understand their toxicity in marine phytoplankton at the bottom of aquatic food chains. - Highlights: > The inter-species toxic difference of methylmercury in marine phytoplankton can be explained by its total cellular or intracellular accumulation. > The inter-species toxic difference of inorganic mercury in marine phytoplankton can be explained by its subcellular distribution. > Heat-stable protein was a major binding pool for MeHg, while the cellular debris was an important binding pool for Hg(II). - The inter-species difference in methylmercury and inorganic mercury toxicity in phytoplankton can be explained by cellular accumulation and subcellular distribution.

  3. Accumulation, subcellular distribution and toxicity of inorganic mercury and methylmercury in marine phytoplankton

    International Nuclear Information System (INIS)

    Wu Yun; Wang Wenxiong

    2011-01-01

    We examined the accumulation, subcellular distribution, and toxicity of Hg(II) and MeHg in three marine phytoplankton (the diatom Thalassiosira pseudonana, the green alga Chlorella autotrophica, and the flagellate Isochrysis galbana). For MeHg, the inter-species toxic difference could be best interpreted by the total cellular or intracellular accumulation. For Hg(II), both I. galbana and T. pseudonana exhibited similar sensitivity, but they each accumulated a different level of Hg(II). A higher percentage of Hg(II) was bound to the cellular debris fraction in T. pseudonana than in I. galbana, implying that the cellular debris may play an important role in Hg(II) detoxification. Furthermore, heat-stable proteins were a major binding pool for MeHg, while the cellular debris was an important binding pool for Hg(II). Elucidating the different subcellular fates of Hg(II) and MeHg may help us understand their toxicity in marine phytoplankton at the bottom of aquatic food chains. - Highlights: → The inter-species toxic difference of methylmercury in marine phytoplankton can be explained by its total cellular or intracellular accumulation. → The inter-species toxic difference of inorganic mercury in marine phytoplankton can be explained by its subcellular distribution. → Heat-stable protein was a major binding pool for MeHg, while the cellular debris was an important binding pool for Hg(II). - The inter-species difference in methylmercury and inorganic mercury toxicity in phytoplankton can be explained by cellular accumulation and subcellular distribution.

  4. Structure and Electromagnetic Properties of Cellular Glassy Carbon Monoliths with Controlled Cell Size

    Directory of Open Access Journals (Sweden)

    Andrzej Szczurek

    2018-05-01

    Full Text Available Electromagnetic shielding is a topic of high importance for which lightweight materials are highly sought. Porous carbon materials can meet this goal, but their structure needs to be controlled as much as possible. In this work, cellular carbon monoliths of well-defined porosity and cell size were prepared by a template method, using sacrificial paraffin spheres as the porogen and resorcinol-formaldehyde (RF resin as the carbon precursor. Physicochemical studies were carried out for investigating the conversion of RF resin into carbon, and the final cellular monoliths were investigated in terms of elemental composition, total porosity, surface area, micropore volumes, and micro/macropore size distributions. Electrical and electromagnetic (EM properties were investigated in the static regime and in the Ka-band, respectively. Due to the phenolic nature of the resin, the resultant carbon was glasslike, and the special preparation protocol that was used led to cellular materials whose cell size increased with density. The materials were shown to be relevant for EM shielding, and the relationships between those properties and the density/cell size of those cellular monoliths were elucidated.

  5. TRANSIENT LUNAR PHENOMENA: REGULARITY AND REALITY

    International Nuclear Information System (INIS)

    Crotts, Arlin P. S.

    2009-01-01

    Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables which should produce differing results if determining factors involve humans, and not reflecting phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ∼50% of reports originate from near Aristarchus, ∼16% from Plato, ∼6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a 'feature' as defined). TLP count consistency for these features indicates that ∼80% of these may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi. These reports begin almost exclusively after 1955, when TLPs became widely known and many more (and inexperienced) observers searched for TLPs. In a companion paper, we compare the spatial distribution of robust TLP sites to transient outgassing (seen by Apollo and Lunar Prospector instruments). To a high confidence, robust TLP sites and those of lunar outgassing correlate strongly, further arguing for the reality of TLPs.

  6. Obtaining sparse distributions in 2D inverse problems

    OpenAIRE

    Reci, A; Sederman, Andrew John; Gladden, Lynn Faith

    2017-01-01

    The mathematics of inverse problems has relevance across numerous estimation problems in science and engineering. L1 regularization has attracted recent attention in reconstructing the system properties in the case of sparse inverse problems; i.e., when the true property sought is not adequately described by a continuous distribution, in particular in Compressed Sensing image reconstruction. In this work, we focus on the application of L1 regularization to a class of inverse problems; relaxat...
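    For such sparse problems, L1 regularization is commonly solved by iterative soft-thresholding (ISTA). The sketch below is a generic toy with an identity forward operator and made-up data, not the relaxation-kernel inversion of the paper; it shows the sparsity-promoting effect, with small coefficients driven exactly to zero:

```python
def soft_threshold(v, t):
    """Proximal operator of the L1 norm: shrink each entry toward zero by t."""
    return [(abs(x) - t) * (1.0 if x > 0 else -1.0) if abs(x) > t else 0.0
            for x in v]

def ista(A, b, lam, step=0.5, iters=500):
    """Minimise 0.5*||A x - b||^2 + lam*||x||_1 by iterative soft-thresholding."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # gradient A^T r
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x

# With A = I the minimiser is the soft-thresholded data: the small middle
# entry vanishes, which is exactly the sparsity L1 regularization promotes.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
x_hat = ista(I3, [3.0, 0.05, -2.0], lam=0.1)
```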

  7. Dimensional regularization and analytical continuation at finite temperature

    International Nuclear Information System (INIS)

    Chen Xiangjun; Liu Lianshou

    1998-01-01

    The relationship between dimensional regularization and analytical continuation of infrared divergent integrals at finite temperature is discussed and a method of regularization of infrared divergent integrals and infrared divergent sums is given

  8. The Neurona at Home project: Simulating a large-scale cellular automata brain in a distributed computing environment

    Science.gov (United States)

    Acedo, L.; Villanueva-Oller, J.; Moraño, J. A.; Villanueva, R.-J.

    2013-01-01

    The Berkeley Open Infrastructure for Network Computing (BOINC) has become the standard open source solution for grid computing in the Internet. Volunteers use their computers to complete a small part of the task assigned by a dedicated server. We have developed a BOINC project called Neurona@Home whose objective is to simulate a cellular automata random network with, at least, one million neurons. We consider a cellular automata version of the integrate-and-fire model in which excitatory and inhibitory nodes can activate or deactivate neighbor nodes according to a set of probabilistic rules. Our aim is to determine the phase diagram of the model and its behaviour and to compare it with the electroencephalographic signals measured in real brains.
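    A toy serial version of such a probabilistic excitatory/inhibitory cellular automaton is sketched below. The rule set, probabilities and network are illustrative assumptions; the actual project runs on millions of neurons distributed via BOINC:

```python
import random

def simulate(adj, kind, steps, p_act=0.9, rng=None):
    """Probabilistic integrate-and-fire-style CA on a network.
    adj[i]  -- neighbours of node i;  kind[i] -- +1 excitatory, -1 inhibitory.
    An active excitatory node activates each inactive neighbour with
    probability p_act; an active inhibitory node deactivates each active
    neighbour with the same probability.  Updates are synchronous."""
    rng = rng or random.Random(0)
    state = [0] * len(adj)
    state[0] = 1                                  # seed one firing neuron
    for _ in range(steps):
        new = state[:]
        for i, active in enumerate(state):
            if not active:
                continue
            for j in adj[i]:
                if kind[i] > 0 and not state[j] and rng.random() < p_act:
                    new[j] = 1
                elif kind[i] < 0 and state[j] and rng.random() < p_act:
                    new[j] = 0
        state = new
    return state

# Purely excitatory 5-node ring with p_act = 1: activity spreads to everyone.
ring = [[1, 4], [0, 2], [1, 3], [2, 4], [3, 0]]
final = simulate(ring, [1] * 5, steps=3, p_act=1.0)
```

Mixing in inhibitory nodes (kind = -1) and sweeping p_act is the kind of parameter exploration a phase-diagram study performs.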

  9. Regular and conformal regular cores for static and rotating solutions

    Energy Technology Data Exchange (ETDEWEB)

    Azreg-Aïnou, Mustapha

    2014-03-07

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  10. Regular and conformal regular cores for static and rotating solutions

    International Nuclear Information System (INIS)

    Azreg-Aïnou, Mustapha

    2014-01-01

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  11. Low-rank matrix approximation with manifold regularization.

    Science.gov (United States)

    Zhang, Zhenyue; Zhao, Keke

    2013-07-01

    This paper proposes a new model of low-rank matrix factorization that incorporates manifold regularization to the matrix factorization. Superior to the graph-regularized nonnegative matrix factorization, this new regularization model has globally optimal and closed-form solutions. A direct algorithm (for data with small number of points) and an alternate iterative algorithm with inexact inner iteration (for large scale data) are proposed to solve the new model. A convergence analysis establishes the global convergence of the iterative algorithm. The efficiency and precision of the algorithm are demonstrated numerically through applications to six real-world datasets on clustering and classification. Performance comparison with existing algorithms shows the effectiveness of the proposed method for low-rank factorization in general.

  12. Regularity criteria for incompressible magnetohydrodynamics equations in three dimensions

    International Nuclear Information System (INIS)

    Lin, Hongxia; Du, Lili

    2013-01-01

    In this paper, we give some new global regularity criteria for three-dimensional incompressible magnetohydrodynamics (MHD) equations. More precisely, we provide some sufficient conditions in terms of the derivatives of the velocity or pressure, for the global regularity of strong solutions to 3D incompressible MHD equations in the whole space, as well as for periodic boundary conditions. Moreover, the regularity criterion involving three of the nine components of the velocity gradient tensor is also obtained. The main results generalize the recent work by Cao and Wu (2010 Two regularity criteria for the 3D MHD equations J. Diff. Eqns 248 2263–74) and the analysis in part is based on the works by Cao C and Titi E (2008 Regularity criteria for the three-dimensional Navier–Stokes equations Indiana Univ. Math. J. 57 2643–61; 2011 Gobal regularity criterion for the 3D Navier–Stokes equations involving one entry of the velocity gradient tensor Arch. Rational Mech. Anal. 202 919–32) for 3D incompressible Navier–Stokes equations. (paper)

  13. Cellular Particle Dynamics simulation of biomechanical relaxation processes of multi-cellular systems

    Science.gov (United States)

    McCune, Matthew; Kosztin, Ioan

    2013-03-01

    Cellular Particle Dynamics (CPD) is a theoretical-computational-experimental framework for describing and predicting the time evolution of biomechanical relaxation processes of multi-cellular systems, such as fusion, sorting and compression. In CPD, cells are modeled as an ensemble of cellular particles (CPs) that interact via short range contact interactions, characterized by an attractive (adhesive interaction) and a repulsive (excluded volume interaction) component. The time evolution of the spatial conformation of the multicellular system is determined by following the trajectories of all CPs through numerical integration of their equations of motion. Here we present CPD simulation results for the fusion of both spherical and cylindrical multi-cellular aggregates. First, we calibrate the relevant CPD model parameters for a given cell type by comparing the CPD simulation results for the fusion of two spherical aggregates to the corresponding experimental results. Next, CPD simulations are used to predict the time evolution of the fusion of cylindrical aggregates. The latter is relevant for the formation of tubular multi-cellular structures (i.e., primitive blood vessels) created by the novel bioprinting technology. Work supported by NSF [PHY-0957914]. Computer time provided by the University of Missouri Bioinformatics Consortium.

  14. Device-to-Device Underlay Cellular Networks with Uncertain Channel State Information

    KAUST Repository

    Memmi, Amen

    2016-01-06

    Device-to-Device (D2D) communications underlying the cellular infrastructure is a technology that has recently been proposed as a promising solution to enhance cellular network capabilities: It improves spectrum utilization, overall throughput and energy efficiency while enabling new peer-to-peer and location-based applications and services. However, interference is the major challenge since the same resources are shared by both systems. Therefore, interference management techniques are required to keep the interference under control. In this work, in order to mitigate interference, we consider centralized and distributed power control algorithms in a one-cell random network model. Unlike previous works, we assume that the channel state information (CSI) may be imperfect and include estimation errors. We evaluate how this uncertainty impacts performance.
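    For context, the classical distributed power-control iteration of Foschini and Miljanic, which assumes perfect CSI, is a useful baseline (a generic sketch, not the algorithms analyzed in this work): each link rescales its power by the ratio of its SINR target to its current SINR.

```python
def distributed_power_control(G, noise, target, iters=200):
    """Foschini-Miljanic iteration: link i updates
    p_i <- target * (noise_i + interference_i) / G[i][i].
    Converges to the minimal feasible powers when the target is feasible."""
    n = len(G)
    p = [1.0] * n
    for _ in range(iters):
        p = [target * (noise[i] + sum(G[i][j] * p[j] for j in range(n) if j != i))
             / G[i][i] for i in range(n)]
    return p

def sinr(G, noise, p, i):
    """Signal-to-interference-plus-noise ratio of link i."""
    interference = sum(G[i][j] * p[j] for j in range(len(p)) if j != i)
    return G[i][i] * p[i] / (noise[i] + interference)

# Two mutually interfering links (e.g. one cellular link and one D2D pair).
G = [[1.0, 0.1], [0.1, 1.0]]
p = distributed_power_control(G, noise=[0.1, 0.1], target=1.0)
```

With imperfect CSI, the gains G used in the update would themselves carry estimation errors, which is precisely the uncertainty whose impact this work evaluates.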

  15. Regular-fat dairy and human health

    DEFF Research Database (Denmark)

    Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas

    2016-01-01

    In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular fat dairy products and human health. In an effort to......, cheese and yogurt, can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted....

  16. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.

    2017-01-01

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded

  17. Topology optimization of adaptive fluid-actuated cellular structures with arbitrary polygonal motor cells

    International Nuclear Information System (INIS)

    Lv, Jun; Tang, Liang; Li, Wenbo; Liu, Lei; Zhang, Hongwu

    2016-01-01

    This paper focuses on a fast and efficient design method for plant-bioinspired fluidic cellular materials and structures composed of polygonal motor cells. We developed a novel structural optimization method with arbitrary polygonal coarse-grid elements based on multiscale finite element frameworks. The fluidic cellular structures are meshed with irregular polygonal coarse-grid elements according to the natural size and shape of the embedded motor cells. The multiscale base functions of solid displacement and hydraulic pressure are then constructed to bring the small-scale information of the irregular motor cells to the large-scale simulations on the polygonal coarse-grid elements. On this basis, a new topology optimization method based on the resulting polygonal coarse-grid elements is proposed to determine the optimal distribution or number of motor cells in the smart cellular structures. Three types of optimization problems are solved according to the usages of the fluidic cellular structures. First, the proposed optimization method is used to minimize the system compliance of load-bearing fluidic cellular structures. Second, the method is extended to design biomimetic compliant actuators of the fluidic cellular materials, since non-uniform volume expansion of fluid in the cells can induce elastic action. Third, the optimization problem focuses on weight minimization of the cellular structure under constraints on the compliance of the whole system. Several representative examples are investigated to validate the effectiveness of the proposed polygon-based topology optimization method for the smart materials. (paper)

  18. Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis

    Science.gov (United States)

    Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.

    2007-01-01

    Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…

  19. Applications of Gene Editing Technologies to Cellular Therapies.

    Science.gov (United States)

    Rein, Lindsay A M; Yang, Haeyoon; Chao, Nelson J

    2018-03-27

    Hematologic malignancies are characterized by genetic heterogeneity, making classic gene therapy with a goal of correcting 1 genetic defect ineffective in many of these diseases. Despite initial tribulations, gene therapy, as a field, has grown by leaps and bounds with the recent development of gene editing techniques including zinc finger nucleases, transcription activator-like effector nucleases, and clustered regularly interspaced short palindromic repeat (CRISPR) sequences and CRISPR-associated protein-9 (Cas9) nuclease or CRISPR/Cas9. These novel technologies have been applied to efficiently and specifically modify genetic information in target and effector cells. In particular, CRISPR/Cas9 technology has been applied to various hematologic malignancies and has also been used to modify and improve chimeric antigen receptor-modified T cells for the purpose of providing effective cellular therapies. Although gene editing is in its infancy in malignant hematologic diseases, there is much room for growth and application in the future. Copyright © 2018 The American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.

  20. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).

  1. Dynamic behavior of cellular materials and cellular structures: Experiments and modeling

    Science.gov (United States)

    Gao, Ziyang

    Cellular solids, including cellular materials and cellular structures (CMS), have attracted great interest because of their low densities and novel physical, mechanical, thermal, electrical and acoustic properties. They offer potential for lightweight structures, energy absorption, thermal management, etc. The study of cellular solids has therefore become one of the most active research fields. From an energy absorption point of view, plastically deformed structures can be divided into two types (called type I and type II), and the basic cells of CMS may take the configurations of either type. Accordingly, the two are discussed separately in this thesis. First, a modified 1-D model is proposed and numerically solved for a typical type II structure. Good agreement is achieved with previous experimental data, and the model is then used to simulate the dynamic behavior of a type II chain. Under different load speeds, interesting collapse modes are observed, and the parameters which govern the cell's post-collapse behavior are identified through a comprehensive non-dimensional analysis of general cellular chains. Second, MHS specimens are chosen as an example of type I foam materials because of the good uniformity of their cell geometry. An extensive experimental study was carried out, with particular attention to their response to dynamic loading. Great enhancement of the stress-strain curve was observed in dynamic cases, and the energy absorption capacity was found to be several times higher than that of commercial metal foams. Based on the experimental study, finite element simulations and theoretical modeling are also conducted, achieving good agreement and demonstrating the validity of those models.
It is believed that the experimental, numerical and analytical results obtained in the present study will certainly deepen the understanding of the unsolved fundamental issues on the mechanical behavior of

  2. Energy functions for regularization algorithms

    Science.gov (United States)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties such as invariance under Euclidean transformations or invariance under reparameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
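    The most common such energy is the discretized second-derivative ("thin rod") stabilizer, sketched below (uniform sampling assumed; a generic illustration, not the stabilizers proposed in the paper). It vanishes on straight lines but not on circles, illustrating why a stabilizer of this form does not treat circles as maximally smooth:

```python
from math import cos, sin, pi

def smoothness_energy(points):
    """Discrete quadratic stabilizer: sum of squared second differences
    (a finite-difference approximation of the bending energy)."""
    e = 0.0
    for i in range(1, len(points) - 1):
        for d in range(len(points[0])):
            e += (points[i - 1][d] - 2 * points[i][d] + points[i + 1][d]) ** 2
    return e

line = [(t, 2 * t) for t in range(6)]                    # straight line: energy 0
circle = [(cos(2 * pi * k / 12), sin(2 * pi * k / 12))   # sampled unit circle
          for k in range(13)]                            # nonzero energy
```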

  3. Three regularities of recognition memory: the role of bias.

    Science.gov (United States)

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

    A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
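    Under the standard equal-variance Gaussian model, an unbiased likelihood-ratio criterion (respond "old" when LR > 1) yields the Mirror Effect directly: increasing d' raises the hit rate and lowers the false-alarm rate symmetrically. A small sketch using the textbook signal-detection formulas (not the paper's own analysis code):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def rates(d_prime):
    """Hit and false-alarm rates for an unbiased likelihood-ratio criterion.
    New items ~ N(0, 1), old items ~ N(d', 1); LR > 1 iff strength > d'/2."""
    c = d_prime / 2.0
    hit = 1.0 - phi(c - d_prime)   # P(respond "old" | old item)
    fa = 1.0 - phi(c)              # P(respond "old" | new item)
    return hit, fa

weak_hit, weak_fa = rates(1.0)
strong_hit, strong_fa = rates(2.0)   # stronger memory: hits up, false alarms down
```

Bias corresponds to placing the criterion at LR != 1, which shifts hits and false alarms asymmetrically; that is how bias can obscure the Mirror Effect while leaving the underlying likelihood-ratio structure intact.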

  4. Statistical mechanics of cellular automata

    International Nuclear Information System (INIS)

    Wolfram, S.

    1983-01-01

    Cellular automata are used as simple mathematical models to investigate self-organization in statistical mechanics. A detailed analysis is given of "elementary" cellular automata consisting of a sequence of sites with values 0 or 1 on a line, with each site evolving deterministically in discrete time steps according to definite rules involving the values of its nearest neighbors. With simple initial configurations, the cellular automata either tend to homogeneous states, or generate self-similar patterns with fractal dimensions ≈1.59 or ≈1.69. With "random" initial configurations, the irreversible character of the cellular automaton evolution leads to several self-organization phenomena. Statistical properties of the structures generated are found to lie in two universality classes, independent of the details of the initial state or the cellular automaton rules. More complicated cellular automata are briefly considered, and connections with dynamical systems theory and the formal theory of computation are discussed
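    The "definite rules" can be encoded as an 8-bit Wolfram rule number. The sketch below performs one synchronous update and uses rule 90, whose evolution from a single seed generates the self-similar pattern of fractal dimension log 3 / log 2 ≈ 1.59 mentioned above:

```python
def eca_step(cells, rule):
    """One synchronous update of a 1-D elementary cellular automaton
    (periodic boundary).  'rule' is the Wolfram rule number, 0-255; bit k
    of the rule gives the new value for neighbourhood code k = 4L + 2C + R."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Rule 90 (new value = left XOR right) grown from a single seed cell.
row = [0, 0, 0, 1, 0, 0, 0]
next_row = eca_step(row, 90)
```

Iterating `eca_step` and stacking the rows reproduces the Sierpinski-like triangle characteristic of rule 90.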

  5. Method of transferring regular shaped vessel into cell

    International Nuclear Information System (INIS)

    Murai, Tsunehiko.

    1997-01-01

    The present invention concerns a method of transferring regular shaped vessels from a non-contaminated area to a contaminated cell. A passage hole allowing the regular shaped vessels to pass in the longitudinal direction is formed in a partitioning wall at the bottom of the contaminated cell. A plurality of regular shaped vessels are stacked in multiple stages in a vertical direction from the non-contaminated area below the passage hole, and are urged through it and transferred successively into the contaminated cell. As a result, since the regular shaped vessels substantially close the passage hole during transfer, radiation rays or contaminated materials are prevented from discharging from the contaminated cell to the non-contaminated area. Since there is no requirement to open/close an isolation door frequently, the workability upon transfer is improved remarkably. In addition, since a sealing member for sealing the gap between the regular shaped vessels passing through the passage hole and the partitioning wall at the bottom is disposed at the passage hole, the contaminated materials in the contaminated cell are prevented from discharging through the gap to the non-contaminated area. (N.H.)

  6. [Distribution and spatial ordering of biopolymer molecules in resting bacterial spores].

    Science.gov (United States)

    Duda, V I; Korolev, Iu N; El'-Registan, G I; Duzha, M V; Telegin, N L

    1978-01-01

    The presence, distribution and spatial arrangement of biopolymers in situ were studied both in the total intact spore and in individual cellular layers using the spectroscopic technique of attenuated total reflection (ATR-IR) in the IR region. In contrast to vegetative cells, intact spores were characterized by an isotropic distribution of protein components. This feature can be regarded as an index of the cryptobiotic state of spores. However, the distribution of protein components among individual layers of a spore was anisotropic. Bonds characterized by the amide I and amide II bands were most often ordered in a layer which comprised cellular structures from the exosporium to the inner spore membrane.

  7. The Influence of Gaussian Signaling Approximation on Error Performance in Cellular Networks

    KAUST Repository

    Afify, Laila H.; Elsawy, Hesham; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2015-01-01

    Stochastic geometry analysis for cellular networks is mostly limited to outage probability and ergodic rate, which abstracts away many important wireless communication aspects. Recently, a novel technique based on the Equivalent-in-Distribution (EiD) approach was proposed to extend the analysis to capture these metrics and analyze bit error probability (BEP) and symbol error probability (SEP). However, the EiD approach considerably increases the complexity of the analysis. In this paper, we propose an approximate yet accurate framework that is also able to capture fine wireless communication details similar to the EiD approach, but with simpler analysis. The proposed methodology is verified against the exact EiD analysis in both downlink and uplink cellular network scenarios.

  8. The Influence of Gaussian Signaling Approximation on Error Performance in Cellular Networks

    KAUST Repository

    Afify, Laila H.

    2015-08-18

    Stochastic geometry analysis for cellular networks is mostly limited to outage probability and ergodic rate, which abstracts away many important wireless communication aspects. Recently, a novel technique based on the Equivalent-in-Distribution (EiD) approach was proposed to extend the analysis to capture these metrics and analyze bit error probability (BEP) and symbol error probability (SEP). However, the EiD approach considerably increases the complexity of the analysis. In this paper, we propose an approximate yet accurate framework that is also able to capture fine wireless communication details similar to the EiD approach, but with simpler analysis. The proposed methodology is verified against the exact EiD analysis in both downlink and uplink cellular network scenarios.

  9. Automatic Constraint Detection for 2D Layout Regularization.

    Science.gov (United States)

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
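    The detect-then-enforce idea can be given a rough one-dimensional flavor: edge coordinates that fall within a tolerance of each other are assumed to be "aligned" and are snapped to a shared value. The function `snap_edges` and its tolerance are hypothetical illustrations, not the paper's quadratic-programming formulation:

```python
def snap_edges(values, tol):
    """Greedy 1-D clustering of bounding-box edge coordinates: edges closer
    than `tol` are treated as an intended alignment and snapped to their
    cluster mean. A crude stand-in for constraint detection + enforcement."""
    clusters = []
    for v in sorted(values):
        if clusters and v - clusters[-1][-1] <= tol:
            clusters[-1].append(v)       # extend the current alignment group
        else:
            clusters.append([v])         # start a new alignment group
    snapped = {}
    for c in clusters:
        mean = sum(c) / len(c)
        for v in c:
            snapped[v] = mean
    return [snapped[v] for v in values]

# Left edges of five layout elements: three near x=10, two near x=50.
left_edges = [10.0, 10.4, 9.8, 50.2, 50.0]
print(snap_edges(left_edges, tol=1.0))
```

    The full method differs in that constraints (alignment, size, distance) are detected jointly and then enforced by minimizing a quadratic displacement energy, rather than snapping each group independently.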

  10. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong

    2015-09-18

    In this paper, we address the problem of constraint detection for layout regularization. As a layout we consider a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user-created content, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm to automatically detect constraints. We evaluate the proposed framework on a variety of input layouts from different applications, demonstrating that our method has superior performance to the state of the art.

  11. Lavrentiev regularization method for nonlinear ill-posed problems

    International Nuclear Information System (INIS)

    Kinh, Nguyen Van

    2002-10-01

    In this paper we shall be concerned with the Lavrentiev regularization method to reconstruct solutions x_0 of nonlinear ill-posed problems F(x) = y_0, where instead of y_0 noisy data y_δ ∈ X with ||y_δ - y_0|| ≤ δ are given and F: X → X is an accretive nonlinear operator from a real reflexive Banach space X into itself. In this regularization method solutions x_α^δ are obtained by solving the singularly perturbed nonlinear operator equation F(x) + α(x - x*) = y_δ with some initial guess x*. Assuming certain conditions concerning the operator F and the smoothness of the element x* - x_0, we derive stability estimates which show that the accuracy of the regularized solutions is order optimal provided that the regularization parameter α has been chosen properly. (author)
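    A one-dimensional caricature of the regularized equation F(x) + α(x - x*) = y_δ can be solved by bisection when F is monotone. This toy scalar setting (F(x) = x³, arbitrary α and δ) only stands in for the paper's Banach-space framework:

```python
def lavrentiev_solve(F, y_delta, alpha, x_star, lo=-10.0, hi=10.0, tol=1e-10):
    """Solve the singularly perturbed equation F(x) + alpha*(x - x_star) = y_delta
    by bisection, assuming F is monotone increasing (the 1-D analogue of
    accretive) on [lo, hi]."""
    def G(x):
        return F(x) + alpha * (x - x_star) - y_delta
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if G(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

F = lambda x: x ** 3        # monotone model operator; exact solution x_0 = 1 for y_0 = 1
y_noisy = 1.0 + 0.01        # noisy data with delta = 0.01
x_reg = lavrentiev_solve(F, y_noisy, alpha=0.1, x_star=0.0)
print(x_reg)                # close to, but biased away from, x_0 = 1
```

    The regularization term α(x - x*) makes the perturbed equation well-behaved at the cost of a bias toward x*, which is why the choice of α must balance stability against accuracy.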

  12. Wireless Cellular Mobile Communications

    OpenAIRE

    Zalud, V.

    2002-01-01

    This article briefly reviews the history of wireless cellular mobile communications, examines the progress in current second generation (2G) cellular standards and discusses their migration to the third generation (3G). The European 2G cellular standard GSM and its evolution phases GPRS and EDGE are described in some detail. The third generation standard UMTS, building on the GSM/GPRS core network and equipped with a new advanced access network on the basis of code division multiple ac...

  13. Online Manifold Regularization by Dual Ascending Procedure

    Directory of Open Access Journals (Sweden)

    Boliang Sun

    2013-01-01

    Full Text Available We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is the key to transferring manifold regularization from the offline to the online setting. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves the way for the design and analysis of online manifold regularization algorithms.

  14. MSAT and cellular hybrid networking

    Science.gov (United States)

    Baranowsky, Patrick W., II

    Westinghouse Electric Corporation is developing both the Communications Ground Segment and the Series 1000 Mobile Phone for American Mobile Satellite Corporation's (AMSC's) Mobile Satellite (MSAT) system. The success of the voice services portion of this system depends, to some extent, upon the interoperability of the cellular network and the satellite communication circuit-switched communication channels. This paper describes the set of user-selectable cellular interoperable modes (cellular first/satellite second, etc.) provided by the Mobile Phone and how they are implemented in the ground segment. Topics including roaming registration and cellular-to-satellite 'seamless' call handoff are discussed, along with the relevant Interim Standard IS-41 Revision B Cellular Radiotelecommunications Intersystem Operations and IOS-553 Mobile Station - Land Station Compatibility Specification.

  15. Top-down cellular pyramids

    Energy Technology Data Exchange (ETDEWEB)

    Wu, A Y; Rosenfeld, A

    1983-10-01

    A cellular pyramid is an exponentially tapering stack of arrays of processors (cells), where each cell is connected to its neighbors (siblings) on its own level, to a parent on the level above, and to its children on the level below. It is shown that in some situations, if information flows top-down only, from parents to children, then a cellular pyramid may be no faster than a one-level cellular array; but it may be possible to use simpler cells in the pyramid case. 23 references.

  16. Cellular decomposition in vikalloys

    International Nuclear Information System (INIS)

    Belyatskaya, I.S.; Vintajkin, E.Z.; Georgieva, I.Ya.; Golikov, V.A.; Udovenko, V.A.

    1981-01-01

    Austenite decomposition in Fe-Co-V and Fe-Co-V-Ni alloys at 475-600 °C is investigated. The cellular decomposition in ternary alloys results in the formation of bcc (ordered) and fcc structures, and in quaternary alloys, bcc (ordered) and 12R structures. The cellular 12R structure results from the emergence of stacking faults in the fcc lattice with irregular spacing in four layers. The cellular decomposition results in a high-dispersion structure and magnetic properties approaching the level of well-known vikalloys.

  17. Point process models for localization and interdependence of punctate cellular structures.

    Science.gov (United States)

    Li, Ying; Majarian, Timothy D; Naik, Armaghan W; Johnson, Gregory R; Murphy, Robert F

    2016-07-01

    Accurate representations of cellular organization for multiple eukaryotic cell types are required for creating predictive models of dynamic cellular function. To this end, we have previously developed the CellOrganizer platform, an open source system for generative modeling of cellular components from microscopy images. CellOrganizer models capture the inherent heterogeneity in the spatial distribution, size, and quantity of different components among a cell population. Furthermore, CellOrganizer can generate quantitatively realistic synthetic images that reflect the underlying cell population. A current focus of the project is to model the complex, interdependent nature of organelle localization. We built upon previous work on developing multiple non-parametric models of organelles or structures that show punctate patterns. The previous models described the relationships between the subcellular localization of puncta and the positions of cell and nuclear membranes and microtubules. We extend these models to consider the relationship to the endoplasmic reticulum (ER), and to consider the relationship between the positions of different puncta of the same type. Our results do not suggest that the punctate patterns we examined are dependent on ER position or inter- and intra-class proximity. With these results, we built classifiers to update previous assignments of proteins to one of 11 patterns in three distinct cell lines. Our generative models demonstrate the ability to construct statistically accurate representations of puncta localization from simple cellular markers in distinct cell types, capturing the complex phenomena of cellular structure interaction with little human input. This protocol represents a novel approach to vesicular protein annotation, a field that is often neglected in high-throughput microscopy. These results suggest that spatial point process models provide useful insight with respect to the spatial dependence between cellular structures.

  18. Energy Sharing Framework for Microgrid-Powered Cellular Base Stations

    KAUST Repository

    Farooq, Muhammad Junaid

    2017-02-07

    Cellular base stations (BSs) are increasingly becoming equipped with renewable energy generators to reduce operational expenditures and carbon footprint of wireless communications. Moreover, advancements in the traditional electricity grid allow two-way power flow and metering that enable the integration of distributed renewable energy generators at BS sites into a microgrid. In this paper, we develop an optimized energy management framework for microgrid-connected cellular BSs that are equipped with renewable energy generators and finite battery storage to minimize energy cost. The BSs share excess renewable energy with others to reduce the dependency on the conventional electricity grid. Three cases are investigated where the renewable energy generation is unknown, perfectly known, and partially known ahead of time. For the partially known case where only the statistics of renewable energy generation are available, stochastic programming is used to achieve a conservative solution. Results show the time varying energy management behaviour of the BSs and the effect of energy sharing between them.
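    The benefit of pooling renewable surplus across base stations can be illustrated with a toy single-slot model. The assumptions here (no batteries, lossless transfer, perfectly known generation, and the numbers themselves) are simplifications for illustration, not the paper's stochastic-programming framework:

```python
def grid_purchase(loads, renewables, sharing=True):
    """Energy (in arbitrary units) a set of base stations must buy from the
    grid in one time slot. With sharing, the pooled renewable surplus offsets
    other BSs' deficits; without it, each BS covers its own deficit alone."""
    deficits = [max(l - r, 0.0) for l, r in zip(loads, renewables)]
    if not sharing:
        return sum(deficits)
    surplus = sum(max(r - l, 0.0) for l, r in zip(loads, renewables))
    return max(sum(deficits) - surplus, 0.0)

loads      = [5.0, 3.0, 4.0]   # per-BS demand this slot
renewables = [7.0, 1.0, 3.0]   # per-BS renewable generation this slot
print(grid_purchase(loads, renewables, sharing=False))  # deficits 2 + 1 = 3 units
print(grid_purchase(loads, renewables, sharing=True))   # surplus 2 covers part: 1 unit
```

    The paper's three information cases (unknown, perfectly known, partially known generation) change how such a dispatch decision is optimized over time, but the underlying accounting is the same.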

  19. [Cellular subcutaneous tissue. Anatomic observations].

    Science.gov (United States)

    Marquart-Elbaz, C; Varnaison, E; Sick, H; Grosshans, E; Cribier, B

    2001-11-01

    We showed in a companion paper that the definition of the French "subcutaneous cellular tissue" varied considerably from the 18th to the end of the 20th century and has not yet reached a consensus. To address the anatomic reality of this "subcutaneous cellular tissue", we investigated the anatomic structures underlying the fat tissue in normal human skin. Sixty specimens were excised from the surface to the deep structures (bone, muscle, cartilage) on different body sites of 3 cadavers from the Institut d'Anatomie Normale de Strasbourg. Samples were paraffin-embedded, stained and analysed with a binocular microscope taking ×1 photographs. Specimens were also excised and fixed after subcutaneous injection of India ink, after mechanical tissue splitting and after performing artificial skin folds. The aspects of the deep parts of the skin varied greatly according to their anatomic localisation. Below the adipose tissue, we often found a lamellar fibrous layer which extended from the interlobular septa and contained horizontally distributed fat cells. No specific tissue below the hypodermis was observed. Artificial skin folds concerned either exclusively the dermis, when they were superficial, or included the hypodermis, but no specific structure was apparent in the center of the fold. India ink diffused to the adipose tissue, mainly along the septa, but did not localise in a specific subcutaneous compartment. This study shows that the histologic aspects of the deep part of the skin depend mainly on the anatomic localisation. Skin is composed of epidermis, dermis and hypodermis, and thus the hypodermis cannot be considered as being "subcutaneous". A fibrous lamellar structure in continuity with the interlobular septa, difficult to individualise, is often found under the fat lobules. This structure is a cleavage line, as is always the case with loose connective tissues, but belongs to the hypodermis (i.e. fat tissue). No specific tissue nor any virtual space was

  20. Physical model of dimensional regularization

    Energy Technology Data Exchange (ETDEWEB)

    Schonfeld, Jonathan F.

    2016-12-15

    We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)

  1. Cellular Automata on Graphs: Topological Properties of ER Graphs Evolved towards Low-Entropy Dynamics

    Directory of Open Access Journals (Sweden)

    Marc-Thorsten Hütt

    2012-06-01

    Full Text Available Cellular automata (CA) are a remarkably efficient tool for exploring general properties of complex systems and spatiotemporal patterns arising from local rules. Totalistic cellular automata, where the update rules depend only on the density of neighboring states, are at the same time a versatile tool for exploring dynamical processes on graphs. Here we briefly review our previous results on cellular automata on graphs, emphasizing some systematic relationships between network architecture and dynamics identified in this way. We then extend the investigation towards graphs obtained in a simulated-evolution procedure, starting from Erdős–Rényi (ER) graphs and selecting for low entropies of the CA dynamics. Our key result is a strong association of low Shannon entropies with a broadening of the graph's degree distribution.
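    A minimal sketch of a binary totalistic CA on an ER graph, together with the Shannon entropy of the resulting node-state distribution. The specific update rule, threshold, and graph parameters below are illustrative choices, not the rule set studied in the paper:

```python
import random
from collections import Counter
from math import log2

def er_graph(n, p, rng):
    """Adjacency lists of an Erdos-Renyi G(n, p) random graph."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def totalistic_step(states, adj, threshold=0.5):
    """Binary totalistic update: a node becomes 1 iff the density of 1s
    among its neighbors exceeds `threshold`; isolated nodes keep their state."""
    new = []
    for i, nbrs in enumerate(adj):
        if not nbrs:
            new.append(states[i])
            continue
        density = sum(states[j] for j in nbrs) / len(nbrs)
        new.append(1 if density > threshold else 0)
    return new

def shannon_entropy(states):
    """Entropy (bits) of the distribution of node states."""
    n = len(states)
    return -sum(c / n * log2(c / n) for c in Counter(states).values())

rng = random.Random(1)
adj = er_graph(60, 0.1, rng)
states = [rng.randint(0, 1) for _ in range(60)]
for _ in range(10):
    states = totalistic_step(states, adj)
print(shannon_entropy(states))
```

    In a simulated-evolution setting one would mutate the graph's edges and keep mutations that lower this entropy, then inspect how the degree distribution changes.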

  2. Information-theoretic semi-supervised metric learning via entropy regularization.

    Science.gov (United States)

    Niu, Gang; Dai, Bo; Yamada, Makoto; Sugiyama, Masashi

    2014-08-01

    We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH could be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even under noisy environments.

  3. Fluctuations of quantum fields via zeta function regularization

    International Nuclear Information System (INIS)

    Cognola, Guido; Zerbini, Sergio; Elizalde, Emilio

    2002-01-01

    Explicit expressions for the expectation values and the variances of some observables, which are bilinear quantities in the quantum fields on a D-dimensional manifold, are derived making use of zeta function regularization. It is found that the variance, related to the second functional variation of the effective action, requires a further regularization and that the relative regularized variance turns out to be 2/N, where N is the number of the fields, thus being independent of the dimension D. Some illustrating examples are worked through. The issue of the stress tensor is also briefly addressed

  4. X-ray computed tomography using curvelet sparse regularization.

    Science.gov (United States)

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
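    The generic sparse-regularization iteration behind such methods can be illustrated with plain ISTA (soft-thresholded gradient descent) on a tiny dense system. The paper instead uses a curvelet frame with an alternating-direction (ADMM) solver, so this is only the underlying soft-thresholding idea, with made-up numbers:

```python
def soft(v, t):
    """Soft-thresholding: the proximal map of the l1 norm."""
    return (v - t) if v > t else (v + t) if v < -t else 0.0

def ista(A, b, lam, step, iters=1000):
    """Minimal ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 on dense lists.
    `step` must be small enough for the gradient step to be stable."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]   # residual Ax - b
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]          # gradient A^T r
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

A = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5]]
b = [1.0, 0.0]
x_hat = ista(A, b, lam=0.1, step=0.5)
print(x_hat)   # a sparse solution: most coefficients are driven exactly to zero
```

    Replacing the identity basis here with a curvelet frame is what lets the regularizer favor the highly directional features the abstract highlights.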

  5. Distributivity of the algebra of regular open subsets of βR \ R

    Czech Academy of Sciences Publication Activity Database

    Balcar, Bohuslav; Hrušák, M.

    2005-01-01

    Roč. 149, č. 1 (2005), s. 1-7 ISSN 0166-8641 R&D Projects: GA ČR(CZ) GA201/03/0933; GA ČR(CZ) GA201/02/0857 Institutional research plan: CEZ:AV0Z10190503 Keywords : distributivity of Boolean algebras * cardinal invariants of the continuum * Čech-Stone compactification Subject RIV: BA - General Mathematics Impact factor: 0.297, year: 2005

  6. Study of regularities of distributing powdered dietetic additives in coarse dispersed foodstuffs

    Directory of Open Access Journals (Sweden)

    M. Pogozhikh

    2017-12-01

    Full Text Available An important intervention in the composition of food products is the enrichment of food with micronutrients. In this regard, the authors investigated how an additive with the corresponding trace element is distributed in a food product, in this case minced meat, in order to meet human needs for microelements. Micronutrient deficiencies have a significant impact on the nutritional status and health of the population in both developed and developing countries. These deficiencies cause delays in the growth of children, various diseases, mortality, brain damage, and reduced cognitive capacity in people of all ages. The global scale of micronutrient deficiencies in dietary intakes, in particular the lack of trace elements, has led to the development of powdered dietary supplements containing essential elements that enrich coarse-type food products to increase their nutritional value. A dietary supplement should provide the daily requirement of trace elements in the human body; therefore, it should be added to the product in a normalized amount and evenly distributed in the product. Nuclear magnetic resonance (NMR) and electron paramagnetic resonance (EPR) analyses were performed to determine the distribution of the additive in food products. The analysis was carried out in two stages: study of molecule mobility by measuring the spin-spin relaxation time (T2) and the spin-lattice relaxation time (T1) on a pulsed NMR spectrometer, and establishment of a connection between the exponent of the amplitude of the sample A0 and its mass. Based on the data obtained as a result of the measurement, a curve is constructed for the dependence of the amplitude of the echo signal on the value (the time interval between the probing pulses). The spin label used in this work is one of the first variants of a paramagnetic probe: the easily accessible transition metal ion Mn2+. According to the constructed graphs and tomograms from the

  7. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    Science.gov (United States)

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  8. Quasi-periodicity in deep redshift surveys

    International Nuclear Information System (INIS)

    Weygaert, R. van de

    1991-01-01

    The recent result by Broadhurst et al. (1990, Nature 343, 726), showing a striking, nearly periodic galaxy redshift distribution in a narrow pencil-beam survey, is explained within the Voronoi cellular model of clustering of galaxies. Galaxies, whose luminosities are selected from a Schechter luminosity function, are placed randomly within the walls of this cellular model. Narrow and deep, magnitude-limited, pencil-beam surveys through these structures are simulated. Some 15 per cent of these beams show the observed regular pattern, with a spacing between the peaks of the order of 105-150 h^-1 Mpc, but most pencil-beams show peaks in the redshift distribution without periodicity, so we may conclude that, even within a cellular universe, periodicity is not a common phenomenon. (author)
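    The pencil-beam geometry can be caricatured in one dimension: galaxies scatter about the boundaries ("walls") between randomly placed cell nuclei along the line of sight, producing redshift peaks with a characteristic spacing. All parameter values below are illustrative, not calibrated to the Voronoi model of the paper:

```python
import random

def pencil_beam_peaks(n_cells=20, length=3000.0, sigma=5.0, per_wall=30, seed=7):
    """1-D caricature of the cellular model: cell 'nuclei' are placed at random
    along the line of sight; galaxies scatter (Gaussian, width `sigma`) around
    the midpoints between adjacent nuclei, which play the role of walls.
    Distances are in Mpc/h. Returns wall positions and sorted galaxy depths."""
    rng = random.Random(seed)
    nuclei = sorted(rng.uniform(0, length) for _ in range(n_cells))
    walls = [(a + b) / 2 for a, b in zip(nuclei, nuclei[1:])]   # cell boundaries
    galaxies = [rng.gauss(w, sigma) for w in walls for _ in range(per_wall)]
    return walls, sorted(galaxies)

walls, galaxies = pencil_beam_peaks()
spacings = [b - a for a, b in zip(walls, walls[1:])]
print(sum(spacings) / len(spacings))   # mean peak spacing, roughly length / n_cells
```

    Because the nuclei are random rather than lattice-like, the wall spacings fluctuate; only occasionally does a beam look "periodic", which is the point made by the abstract's 15 per cent figure.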

  9. Regularity and chaos in cavity QED

    International Nuclear Information System (INIS)

    Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G

    2017-01-01

    The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified by calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (PR) of coherent states on the eigenenergy basis plays a role equivalent to the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones. (paper)

  10. Regular Exercise Enhances the Immune Response Against Microbial Antigens Through Up-Regulation of Toll-like Receptor Signaling Pathways

    Directory of Open Access Journals (Sweden)

    Qishi Zheng

    2015-09-01

    Full Text Available Background/Aims: Regular physical exercise can enhance resistance to many microbial infections. However, little is known about the mechanism underlying the changes in the immune system induced by regular exercise. Methods: We recruited members of a university badminton club as the regular exercise (RE) group and healthy sedentary students as the sedentary control (SC) group. We investigated the distribution of peripheral blood mononuclear cell (PBMC) subsets and functions in the two groups. Results: There were no significant differences in plasma cytokine levels between the RE and SC groups in the true resting state. However, enhanced levels of IFN-γ, TNF-α, IL-6, IFN-α and IL-12 were secreted by PBMCs in the RE group following microbial antigen stimulation, when compared to the SC group. In contrast, the levels of TNF-α and IL-6 secreted by PBMCs in the RE group were suppressed compared with those in the SC group following non-microbial antigen stimulation (concanavalin A or α-galactosylceramide). Furthermore, PBMC expression of TLR2, TLR7 and MyD88 was significantly increased in the RE group in response to microbial antigen stimulation. Conclusion: Regular exercise enhances immune cell activation in response to pathogenic stimulation, leading to enhanced cytokine production mediated via the TLR signaling pathways.

  11. Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears

    Science.gov (United States)

    Chen, Sau-Chin; Hu, Jon-Fan

    2015-01-01

    Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested to be part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…

  12. Cellular automata analysis and applications

    CERN Document Server

    Hadeler, Karl-Peter

    2017-01-01

    This book focuses on a coherent representation of the main approaches to analyzing the dynamics of cellular automata. Cellular automata are an indispensable tool in mathematical modeling. In contrast to classical modeling approaches such as partial differential equations, cellular automata are straightforward to simulate but hard to analyze. In this book we present a review of approaches and theories that allow the reader to understand the behavior of cellular automata beyond simulations. The first part consists of an introduction to cellular automata on Cayley graphs and their characterization via the fundamental Curtis-Hedlund-Lyndon theorems in the context of different topological concepts (Cantor, Besicovitch and Weyl topology). The second part focuses on classification results: What classification follows from topological concepts (Hurley classification), Lyapunov stability (Gilman classification), and the theory of formal languages and grammars (Kůrka classification). These classifications suggest to cluster cel...

  13. Regularized lattice Boltzmann model for immiscible two-phase flows with power-law rheology

    Science.gov (United States)

    Ba, Yan; Wang, Ningning; Liu, Haihu; Li, Qiang; He, Guoqiang

    2018-03-01

    In this work, a regularized lattice Boltzmann color-gradient model is developed for the simulation of immiscible two-phase flows with power-law rheology. This model is as simple as the Bhatnagar-Gross-Krook (BGK) color-gradient model except that an additional regularization step is introduced prior to the collision step. In the regularization step, the pseudo-inverse method is adopted as an alternative solution for the nonequilibrium part of the total distribution function, and it can be easily extended to other discrete velocity models no matter whether a forcing term is considered or not. The obtained expressions for the nonequilibrium part are merely related to macroscopic variables and velocity gradients that can be evaluated locally. Several numerical examples, including the single-phase and two-phase layered power-law fluid flows between two parallel plates, and the droplet deformation and breakup in a simple shear flow, are conducted to test the capability and accuracy of the proposed color-gradient model. Results show that the present model is more stable and accurate than the BGK color-gradient model for power-law fluids with a wide range of power-law indices. Compared to its multiple-relaxation-time counterpart, the present model can increase the computing efficiency by around 15%, while keeping the same accuracy and stability. Also, the present model is found to be capable of reasonably predicting the critical capillary number of droplet breakup.
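In lattice Boltzmann schemes of this family, the power-law rheology typically enters through a locally varying relaxation time computed from the shear rate. A minimal sketch under standard lattice-unit assumptions (c_s² = 1/3, Δt = 1, so ν = (τ − 1/2)/3); the consistency constant K and index n are illustrative parameters, and this is not the paper's color-gradient regularization step itself:

```python
import numpy as np

def powerlaw_tau(shear_rate, K=0.01, n=0.5):
    """Local BGK relaxation time for a power-law fluid.

    Apparent kinematic viscosity: nu = K * |shear rate|**(n - 1);
    in lattice units (dt = 1, c_s^2 = 1/3) this maps to tau = 3*nu + 0.5.
    n < 1 is shear-thinning, n > 1 shear-thickening, n = 1 Newtonian.
    """
    nu = K * np.abs(shear_rate) ** (n - 1.0)
    return 3.0 * nu + 0.5

# A Newtonian fluid (n = 1) gives a constant tau regardless of shear rate.
taus = powerlaw_tau(np.array([0.1, 1.0, 10.0]), K=0.01, n=1.0)
```

For n far from 1 the local τ can approach the stability limit τ → 1/2, which is exactly where the extra regularization step of the model above pays off compared with plain BGK.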

  14. Influence of the radio-tracer used in diagnostic nuclear medicine upon the dose at the nucleus of cellular localisation

    International Nuclear Information System (INIS)

    Gardin, I.; Faraggi, M.; Stievenart, J.L; Le Guludec, D.; Bok, B.

    1997-01-01

    In classical dosimetry one supposes a uniform distribution of the radiopharmaceutical at the source-organ level as well as a homogeneous distribution of the absorbed dose. These hypotheses are not always verified in biology, so the influence of the tracer localisation on the dose delivered to the cellular nucleus has been studied. The average dose delivered by the electron emission of different radioisotopes used in diagnosis has been calculated by taking into account the radioactivity localized in the target cell (Dself) and in the neighbouring cells (Dcross). Nuclear, cytoplasmic and membranous localizations of the tracer were simulated for different cellular sizes. In the particular case of 99mTc and cells with a nuclear radius of about 4 μm and a cellular radius of about 8 μm, Dcross is independent of the intra-cellular localisation of the tracer. On the contrary, for a nuclear localisation Dself is 52 and 157 times larger than for the cytoplasmic and membranous localisations, respectively. The dose at the cellular nucleus due to the electron emission of 99mTc is under-estimated by a factor of 2.6 by classical dosimetry when the radioactivity is nuclear. On the contrary, the classical model over-estimates by a factor of 1.2 the dose at the nucleus for cytoplasmic and membranous localizations. This study shows that the dose delivered at the cellular nucleus by the electron emissions of 99mTc depends on the localisation of the tracer. The proposed modelling allows a better evaluation of the radiobiological hazards related to the administration of radiopharmaceuticals in diagnostic nuclear medicine.

  15. Aspects of Students' Reasoning about Variation in Empirical Sampling Distributions

    Science.gov (United States)

    Noll, Jennifer; Shaughnessy, J. Michael

    2012-01-01

    Sampling tasks and sampling distributions provide a fertile realm for investigating students' conceptions of variability. A project-designed teaching episode on samples and sampling distributions was team-taught in 6 research classrooms (2 middle school and 4 high school) by the investigators and regular classroom mathematics teachers. Data…

  16. Size Distribution Imaging by Non-Uniform Oscillating-Gradient Spin Echo (NOGSE) MRI.

    Directory of Open Access Journals (Sweden)

    Noam Shemesh

    Full Text Available Objects making up complex porous systems in Nature usually span a range of sizes. These size distributions play fundamental roles in defining the physicochemical, biophysical and physiological properties of a wide variety of systems - ranging from advanced catalytic materials to Central Nervous System diseases. Accurate and noninvasive measurements of size distributions in opaque, three-dimensional objects have thus remained long-standing and important challenges. Herein we describe how a recently introduced diffusion-based magnetic resonance methodology, Non-Uniform Oscillating-Gradient Spin Echo (NOGSE), can determine such distributions noninvasively. The method relies on its ability to probe confining lengths with a (length)⁶ parametric sensitivity, in a constant-time, constant-number-of-gradients fashion; combined, these attributes provide sufficient sensitivity for characterizing the underlying distributions in μm-scaled cellular systems. Theoretical derivations and simulations are presented to verify NOGSE's ability to faithfully reconstruct size distributions through suitable modeling of their distribution parameters. Experiments in yeast cell suspensions - where the ground truth can be determined from ancillary microscopy - corroborate these trends experimentally. Finally, by appending an imaging acquisition to the NOGSE protocol, novel MRI maps of cellular size distributions were collected from a mouse brain. The ensuing micro-architectural contrasts successfully delineated distinctive hallmark anatomical sub-structures, in both white matter and gray matter tissues, in a non-invasive manner. Such findings highlight NOGSE's potential for characterizing aberrations in cellular size distributions upon disease, or during normal processes such as development.

  17. Matrix regularization of embedded 4-manifolds

    International Nuclear Information System (INIS)

    Trzetrzelewski, Maciej

    2012-01-01

    We consider products of two 2-manifolds such as S²×S², embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)⊗SU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N²×N² matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S³ also possible).

  18. Optimal Tikhonov Regularization in Finite-Frequency Tomography

    Science.gov (United States)

    Fang, Y.; Yao, Z.; Zhou, Y.

    2017-12-01

    The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed and regularizations such as damping and smoothing are often applied to analyze the trade-off between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional trade-off analysis using surface wave dispersion measurements from global as well as regional studies.
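The SVD framework mentioned in this record makes Tikhonov damping explicit: each singular component of the solution is scaled by a filter factor s_i²/(s_i² + λ²), and the same factors build the resolution matrix. A minimal sketch (the matrix and data below are synthetic placeholders, not a tomographic sensitivity matrix):

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Tikhonov-regularized least squares via SVD filter factors.

    Minimizes ||A x - b||^2 + lam^2 ||x||^2. Each SVD component is damped
    by f_i = s_i^2 / (s_i^2 + lam^2); the resolution matrix is then
    R = V diag(f) V^T at no extra cost.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)          # filter factors in (0, 1)
    x = Vt.T @ (f / s * (U.T @ b))      # damped back-projection
    return x, f

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x, f = tikhonov_svd(A, b, lam=0.5)
```

Sweeping λ and re-using the cached SVD is what makes an empirical-Bayes or L-curve search over regularization strengths cheap once the decomposition is in hand.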

  19. Some problems concerning the regularities in the development of the latitudinal distribution of solar magnetic fields

    International Nuclear Information System (INIS)

    Bumba, V.; Hejna, L.

    1988-01-01

    From the comparison of several modes of time development of the latitudinal distribution of solar magnetic fields, obtained by different authors using different basic observational material and different methods, the following results were obtained: At high solar latitudes (|φ| ≳ 40°) all distributions agree irrespective of the method of construction. In zones of activity around the solar equator, there is a qualitatively good but quantitatively poor agreement of the integrated, directly observed fields (from Mt. Wilson Observatory) and of the highly integrated fields derived from Hα synoptic charts. The mode of field distribution at high latitudes, more uniform and unipolar, is probably different from the field distribution at low latitudes where the more concentrated leading polarity occupies practically the same area as the less concentrated following polarity fields, if they are highly integrated. The large difference between Makarov's distribution and other modes of distribution seems to be natural if we take the method of construction into account, and very probably represents its close relationship with the smaller magnetic field elements connected with newer activity, while the other types of distribution demonstrate larger-scale, redistributed, older fields. The areas covered by the positive and negative polarities on the whole Sun during the investigated one and a half solar cycles (No 20 and 21) are practically equal. (author). 5 figs., 10 refs

  20. Cellular membrane trafficking of mesoporous silica nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Fang, I-Ju [Iowa State Univ., Ames, IA (United States)

    2012-01-01

    the specific organelle that mesoporous silica nanoparticles could approach via the identification of proteins harvested from the exocytosis process. Based on the study of the endo- and exocytosis behavior of mesoporous silica nanoparticle materials, we can design smarter drug delivery vehicles for cancer therapy that can be effectively controlled. The destination, uptake efficiency and cellular distribution of mesoporous silica nanoparticle materials can be programmed. As a result, the release mechanism and release rate of drug delivery systems can be well controlled. Deeper investigation of the endo- and exocytosis of mesoporous silica nanoparticle materials promotes the development of drug delivery applications.

  1. Fractional Regularization Term for Variational Image Registration

    Directory of Open Access Journals (Sweden)

    Rafael Verdú-Monedero

    2009-01-01

    Full Text Available Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, being applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a real gradual transition from a diffusion registration to a curvature registration which is best suited to some applications and it is not possible in the spatial domain. Results with 3D actual images show the validity of this approach.
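The frequency-domain formulation is what makes fractional orders cheap: a fractional derivative becomes an ordinary Fourier multiplier (iω)^α, so non-integer orders cost no more than integer ones. A minimal periodic 1-D sketch of that multiplier idea (grid size and test signal are illustrative; the paper's registration functional and Euler-Lagrange machinery are not reproduced here):

```python
import numpy as np

def fractional_derivative(u, alpha, L=2 * np.pi):
    """Fourier fractional derivative of a periodic 1-D signal.

    In the frequency domain the operator is just multiplication by
    (i * omega)**alpha, so alpha can vary continuously between, e.g.,
    a diffusion-like order and a curvature-like order.
    """
    n = len(u)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    mult = (1j * omega) ** alpha
    return np.real(np.fft.ifft(mult * np.fft.fft(u)))

# Sanity check: alpha = 1 reproduces the classical derivative, d/dx sin = cos.
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
du = fractional_derivative(np.sin(x), alpha=1.0)
```

Because the whole operator lives in Fourier space, extending it to 2-D or 3-D images is just a matter of using the multidimensional FFT, matching the dimensionality claim in the abstract.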

  2. Reducing errors in the GRACE gravity solutions using regularization

    Science.gov (United States)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem onto a problem about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of their degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4

  3. Low-Rank Matrix Factorization With Adaptive Graph Regularizer.

    Science.gov (United States)

    Lu, Gui-Fu; Wang, Yong; Zou, Jian

    2016-05-01

    In this paper, we present a novel low-rank matrix factorization algorithm with adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix factorization with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek the graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on some data sets demonstrate that the proposed algorithm outperforms the state-of-the-art low-rank matrix factorization methods.
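A hedged sketch of the kind of objective such graph-regularized factorizations minimize. This is the fixed-graph variant (the baseline the abstract contrasts against), not LMFAGR's adaptive graph update; the matrices and λ below are illustrative:

```python
import numpy as np

def graph_reg_objective(X, U, V, W, lam=1.0):
    """Objective of a graph-regularized low-rank factorization.

    ||X - U V^T||_F^2 + lam * tr(V^T L V), where L = D - W is the graph
    Laplacian of a fixed affinity matrix W over the samples. The trace
    term equals 0.5 * sum_ij W_ij * ||v_i - v_j||^2, so neighbors in the
    graph are pushed toward nearby low-dimensional representations.
    """
    L = np.diag(W.sum(axis=1)) - W
    fit = np.linalg.norm(X - U @ V.T, "fro") ** 2
    smooth = np.trace(V.T @ L @ V)
    return fit + lam * smooth

# Identical rows of V incur zero manifold penalty; only the fit term remains.
X = np.ones((2, 2))
U = np.ones((2, 1))
V = np.ones((2, 1))
W = np.array([[0.0, 1.0], [1.0, 0.0]])
```

LMFAGR's contribution, per the abstract, is to treat W itself as a variable and alternate between updating the factors and updating the graph, rather than fixing W in advance as here.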

  4. MIMO Communication for Cellular Networks

    CERN Document Server

    Huang, Howard; Venkatesan, Sivarama

    2012-01-01

    As the theoretical foundations of multiple-antenna techniques evolve and as these multiple-input multiple-output (MIMO) techniques become essential for providing high data rates in wireless systems, there is a growing need to understand the performance limits of MIMO in practical networks. To address this need, MIMO Communication for Cellular Networks presents a systematic description of MIMO technology classes and a framework for MIMO system design that takes into account the essential physical-layer features of practical cellular networks. In contrast to works that focus on the theoretical performance of abstract MIMO channels, MIMO Communication for Cellular Networks emphasizes the practical performance of realistic MIMO systems. A unified set of system simulation results highlights relative performance gains of different MIMO techniques and provides insights into how best to use multiple antennas in cellular networks under various conditions. MIMO Communication for Cellular Networks describes single-user,...

  5. Online Manifold Regularization by Dual Ascending Procedure

    OpenAIRE

    Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui

    2013-01-01

    We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transfer manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purpose, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approache...

  6. Degree-regular triangulations of torus and Klein bottle

    Indian Academy of Sciences (India)

    Proceedings – Mathematical Sciences, Volume 115, Issue 3. A triangulation of a connected closed surface is called degree-regular if each of its vertices has the same degree. ... In [5], Datta and Nilakantan have classified all the degree-regular triangulations of closed surfaces on at most 11 vertices.

  7. The relationship between lifestyle regularity and subjective sleep quality

    Science.gov (United States)

    Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.

    2003-01-01

    In previous work we have developed a diary instrument - the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity - and a questionnaire instrument - the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed. There was a significant negative correlation (rho = -0.4): subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, where the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.

  8. Cellular communications a comprehensive and practical guide

    CERN Document Server

    Tripathi, Nishith

    2014-01-01

    Even as newer cellular technologies and standards emerge, many of the fundamental principles and the components of the cellular network remain the same. Presenting a simple yet comprehensive view of cellular communications technologies, Cellular Communications provides an end-to-end perspective of cellular operations, ranging from physical layer details to call set-up and from the radio network to the core network. This self-contained source for practitioners and students represents a comprehensive survey of the fundamentals of cellular communications and the landscape of commercially deployed

  9. Borderline personality disorder and regularly drinking alcohol before sex.

    Science.gov (United States)

    Thompson, Ronald G; Eaton, Nicholas R; Hu, Mei-Chen; Hasin, Deborah S

    2017-07-01

    Drinking alcohol before sex increases the likelihood of engaging in unprotected intercourse, having multiple sexual partners and becoming infected with sexually transmitted infections. Borderline personality disorder (BPD), a complex psychiatric disorder characterised by pervasive instability in emotional regulation, self-image, interpersonal relationships and impulse control, is associated with substance use disorders and sexual risk behaviours. However, no study has examined the relationship between BPD and drinking alcohol before sex in the USA. This study examined the association between BPD and regularly drinking before sex in a nationally representative adult sample. Participants were 17 491 sexually active drinkers from Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions. Logistic regression models estimated effects of BPD diagnosis, specific borderline diagnostic criteria and BPD criterion count on the likelihood of regularly (mostly or always) drinking alcohol before sex, adjusted for controls. Borderline personality disorder diagnosis doubled the odds of regularly drinking before sex [adjusted odds ratio (AOR) = 2.26; confidence interval (CI) = 1.63, 3.14]. Of nine diagnostic criteria, impulsivity in areas that are self-damaging remained a significant predictor of regularly drinking before sex (AOR = 1.82; CI = 1.42, 2.35). The odds of regularly drinking before sex increased by 20% for each endorsed criterion (AOR = 1.20; CI = 1.14, 1.27). DISCUSSION AND CONCLUSIONS: This is the first study to examine the relationship between BPD and regularly drinking alcohol before sex in the USA. Substance misuse treatment should assess regularly drinking before sex, particularly among patients with BPD, and BPD treatment should assess risk at the intersection of impulsivity, sexual behaviour and substance use. [Thompson Jr RG, Eaton NR, Hu M-C, Hasin DS Borderline personality disorder and regularly drinking alcohol

  10. Magnetohydrodynamics cellular automata

    International Nuclear Information System (INIS)

    Hatori, Tadatsugu.

    1990-02-01

    There has been a renewal of interest in cellular automata, partly because they give an architecture for a special purpose computer with parallel processing optimized to solve a particular problem. The lattice gas cellular automata are briefly surveyed, which are recently developed to solve partial differential equations such as hydrodynamics or magnetohydrodynamics. A new model is given in the present paper to implement the magnetic Lorentz force in a more deterministic and local procedure than the previous one. (author)

  11. Magnetohydrodynamic cellular automata

    Energy Technology Data Exchange (ETDEWEB)

    Hatori, Tadatsugu [National Inst. for Fusion Science, Nagoya (Japan)

    1990-03-01

    There has been a renewal of interest in cellular automata, partly because they give an architecture for a special purpose computer with parallel processing optimized to solve a particular problem. The lattice gas cellular automata are briefly surveyed, which are recently developed to solve partial differential equations such as hydrodynamics or magnetohydrodynamics. A new model is given in the present paper to implement the magnetic Lorentz force in a more deterministic and local procedure than the previous one. (author).

  12. Magnetohydrodynamic cellular automata

    International Nuclear Information System (INIS)

    Hatori, Tadatsugu

    1990-01-01

    There has been a renewal of interest in cellular automata, partly because they give an architecture for a special purpose computer with parallel processing optimized to solve a particular problem. The lattice gas cellular automata are briefly surveyed, which are recently developed to solve partial differential equations such as hydrodynamics or magnetohydrodynamics. A new model is given in the present paper to implement the magnetic Lorentz force in a more deterministic and local procedure than the previous one. (author)

  13. Modeling cellular systems

    CERN Document Server

    Matthäus, Franziska; Pahle, Jürgen

    2017-01-01

    This contributed volume comprises research articles and reviews on topics connected to the mathematical modeling of cellular systems. These contributions cover signaling pathways, stochastic effects, cell motility and mechanics, pattern formation processes, as well as multi-scale approaches. All authors attended the workshop on "Modeling Cellular Systems" which took place in Heidelberg in October 2014. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  14. Generalized Bregman distances and convergence rates for non-convex regularization methods

    International Nuclear Information System (INIS)

    Grasmair, Markus

    2010-01-01

    We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^(1/p) holds, if the regularization term has a slightly faster growth at zero than |t|^p
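For reference, the classical Bregman distance that the paper generalizes is, for a differentiable convex functional f,

```latex
D_f(x, y) \;=\; f(x) - f(y) - \langle \nabla f(y),\, x - y \rangle .
```

Taking f(x) = ½‖x‖² recovers D_f(x, y) = ½‖x − y‖², the squared Euclidean distance; per the abstract, the generalization uses concepts from abstract convexity so that non-convex regularization terms fit a similar template.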

  15. The cellular distribution of extracellular superoxide dismutase in macrophages is altered by cellular activation but unaffected by the natural occurring R213G substitution

    DEFF Research Database (Denmark)

    Gottfredsen, Randi Heidemann; Goldstrohm, David; Hartney, John

    2014-01-01

    and associated with the cell surface via the extracellular matrix (ECM)-binding region. Upon cellular activation induced by lipopolysaccharide, EC-SOD is relocated and detected both in the cell culture medium and in lipid raft structures. Although the secreted material presented a significantly reduced ligand-binding capacity, this could not be correlated with proteolytic removal of the ECM-binding region, because the integrity of the material recovered from the medium was comparable to that of the cell surface-associated protein. The naturally occurring R213G amino acid substitution located in the ECM-binding region...

  16. Breast ultrasound tomography with total-variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Lianjie [Los Alamos National Laboratory; Li, Cuiping [KARMANOS CANCER INSTIT.; Duric, Neb [KARMANOS CANCER INSTIT

    2009-01-01

    Breast ultrasound tomography is a rapidly developing imaging modality that has the potential to impact breast cancer screening and diagnosis. A new ultrasound breast imaging device (CURE) with a ring array of transducers has been designed and built at Karmanos Cancer Institute, which acquires both reflection and transmission ultrasound signals. To extract the sound-speed information from the breast data acquired by CURE, we have developed an iterative sound-speed image reconstruction algorithm for breast ultrasound transmission tomography based on total-variation (TV) minimization. We investigate applicability of the TV tomography algorithm using in vivo ultrasound breast data from 61 patients, and compare the results with those obtained using the Tikhonov regularization method. We demonstrate that, compared to the Tikhonov regularization scheme, the TV regularization method significantly improves image quality, resulting in sound-speed tomography images with sharp (preserved) edges of abnormalities and few artifacts.
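A minimal 1-D sketch of why TV regularization preserves the sharp edges mentioned above: gradient descent on a smoothed TV functional keeps a jump while flattening noise. The smoothing parameter, step size, and test signal are illustrative; the paper's tomographic TV solver is far more elaborate:

```python
import numpy as np

def tv_denoise_1d(y, lam=0.2, step=0.02, iters=2000, eps=1e-3):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i sqrt((x_{i+1}-x_i)^2 + eps).

    The (smoothed) total-variation penalty flattens small oscillations
    while leaving large jumps, i.e. edges, almost untouched, which is why
    TV beats quadratic Tikhonov smoothing on piecewise-constant targets.
    """
    x = y.astype(float).copy()
    for _ in range(iters):
        dx = np.diff(x)
        g = dx / np.sqrt(dx**2 + eps)   # derivative of the smoothed |.|
        grad_tv = np.zeros_like(x)
        grad_tv[1:] += g                # +phi'(d_{j-1}) contribution
        grad_tv[:-1] -= g               # -phi'(d_j) contribution
        x -= step * ((x - y) + lam * grad_tv)
    return x

# Noisy step signal: TV denoising should suppress the noise but keep the edge.
rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(25), np.ones(25)])
noisy = clean + 0.1 * rng.standard_normal(50)
denoised = tv_denoise_1d(noisy)
```

A quadratic penalty on np.diff(x) would instead smear the jump across many samples, which in sound-speed imaging translates into blurred abnormality boundaries.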

  17. Manufacture of Regularly Shaped Sol-Gel Pellets

    Science.gov (United States)

    Leventis, Nicholas; Johnston, James C.; Kinder, James D.

    2006-01-01

    An extrusion batch process for manufacturing regularly shaped sol-gel pellets has been devised as an improved alternative to a spray process that yields irregularly shaped pellets. The aspect ratio of regularly shaped pellets can be controlled more easily, while regularly shaped pellets pack more efficiently. In the extrusion process, a wet gel is pushed out of a mold and chopped repetitively into short, cylindrical pieces as it emerges from the mold. The pieces are collected and can be either (1) dried at ambient pressure to xerogel, (2) solvent exchanged and dried under ambient pressure to ambigels, or (3) supercritically dried to aerogel. Advantageously, the extruded pellets can be dropped directly in a cross-linking bath, where they develop a conformal polymer coating around the skeletal framework of the wet gel via reaction with the cross linker. These pellets can be dried to mechanically robust X-Aerogel.

  18. Regularization and Complexity Control in Feed-forward Networks

    OpenAIRE

    Bishop, C. M.

    1995-01-01

    In this paper we consider four alternative approaches to complexity control in feed-forward networks based respectively on architecture selection, regularization, early stopping, and training with noise. We show that there are close similarities between these approaches and we argue that, for most practical applications, the technique of regularization should be the method of choice.

  19. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    Science.gov (United States)

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms that are only applicable to isotropic networks, and therefore has strong adaptability to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating hop counts to physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
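Regularized extreme learning in its simplest form is ridge regression on a fixed random hidden layer; a toy sketch of that core idea (the 1-D sine target stands in for the hop-count-to-distance model of the paper, and all sizes and parameters are illustrative):

```python
import numpy as np

def train_elm(X, Y, hidden=50, lam=1e-3, seed=0):
    """Regularized extreme learning machine (ridge on random features).

    Input weights W and biases b are random and fixed; only the output
    weights beta are learned, via the closed-form ridge solution
    beta = (H^T H + lam I)^(-1) H^T Y with hidden layer H = tanh(X W + b).
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(hidden), H.T @ Y)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) from 200 noisy-free 1-D samples.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (200, 1))
Y = np.sin(X[:, 0])
W, b, beta = train_elm(X, Y)
```

The λ of the ridge term is the "regularized" part of regularized extreme learning: it keeps the closed-form solve well conditioned even when the random hidden layer is nearly collinear.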

  20. Cellular Angiofibroma of the Nasopharynx.

    Science.gov (United States)

    Erdur, Zülküf Burak; Yener, Haydar Murat; Yilmaz, Mehmet; Karaaltin, Ayşegül Batioğlu; Inan, Hakki Caner; Alaskarov, Elvin; Gozen, Emine Deniz

    2017-11-01

    Angiofibroma is a common tumor of the nasopharyngeal region, but the cellular type is extremely rare in the head and neck. A 13-year-old boy presented with frequent epistaxis and nasal obstruction persisting for 6 months. Based on the clinical symptoms and imaging studies, juvenile angiofibroma was suspected. Following angiographic embolization, total excision of the lesion by a midfacial degloving approach was performed. Histological examination revealed that the tumor consisted of staghorn blood vessels and irregular fibrous stroma. Stellate fibroblasts with small pyknotic to large vesicular nuclei were seen in a highly cellular stroma. These findings identified a cellular angiofibroma mimicking juvenile angiofibroma. This article reports a very rare case of cellular angiofibroma of the nasopharynx.

  1. Manifold regularization for sparse unmixing of hyperspectral images.

    Science.gov (United States)

    Liu, Junmin; Zhang, Chunxia; Zhang, Jiangshe; Li, Huirong; Gao, Yuelin

    2016-01-01

    Recently, sparse unmixing has been successfully applied to spectral mixture analysis of remotely sensed hyperspectral images. Based on the assumption that the observed image signatures can be expressed as linear combinations of a number of pure spectral signatures known in advance, unmixing each mixed pixel in the scene amounts to finding an optimal subset of signatures in a very large spectral library, which is cast into the framework of sparse regression. However, traditional sparse regression models, such as collaborative sparse regression, ignore the intrinsic geometric structure of the hyperspectral data. In this paper, we propose a novel model, called manifold regularized collaborative sparse regression, by introducing a manifold regularization term into the collaborative sparse regression model. The manifold regularization utilizes a graph Laplacian to incorporate the locally geometrical structure of the hyperspectral data. An algorithm based on the alternating direction method of multipliers has been developed for the manifold regularized collaborative sparse regression model. Experimental results on both simulated and real hyperspectral data sets demonstrate the effectiveness of our proposed model.
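
    The manifold term can be illustrated with a toy Laplacian-regularized least-squares problem. The chain-graph Laplacian, problem sizes, and closed-form solve below are assumptions chosen for brevity; the paper's model additionally enforces collaborative sparsity and is solved with ADMM rather than in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "abundance" estimation: y = A x + noise, with x smooth on a chain graph.
n, m = 30, 60                                   # library size, observations
A = rng.standard_normal((m, n))
x_true = np.sin(np.linspace(0, np.pi, n))       # smooth along the graph
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Graph Laplacian of a chain graph: L = D - W encodes local structure.
W = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(W.sum(axis=1)) - W

# Manifold-regularized least squares (Tikhonov form):
#   min_x ||A x - y||^2 + lam * x^T L x
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * L, A.T @ y)
```

    The quadratic penalty x^T L x sums squared differences across graph edges, so solutions that vary smoothly over the data manifold are favored.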

  2. Regularization dependence on phase diagram in Nambu–Jona-Lasinio model

    International Nuclear Information System (INIS)

    Kohyama, H.; Kimura, D.; Inagaki, T.

    2015-01-01

    We study the regularization dependence of meson properties and the phase diagram of quark matter using the two-flavor Nambu–Jona-Lasinio model. The model also depends on its parameters within each regularization, so we explicitly give the model parameters for several sets of input observables and then investigate their effect on the phase diagram. We find that the location, and even the existence, of the critical end point depends strongly on the regularization method and the model parameters. We therefore conclude that the regularization and parameters must be chosen carefully when one investigates the QCD critical end point in effective model studies

  3. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  4. Top-down attention affects sequential regularity representation in the human visual system.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-08-01

    Recent neuroscience studies using visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the visual sensory system, have shown that although sequential regularities embedded in successive visual stimuli can be automatically represented in the visual sensory system, the existence of a sequential regularity does not by itself guarantee that it will be automatically represented. In the present study, we investigated the effects of top-down attention on sequential regularity representation in the visual sensory system. Our results showed that a sequential regularity (SSSSD) embedded in a modified oddball sequence, in which infrequent deviant (D) and frequent standard (S) stimuli differing in luminance were presented regularly (SSSSDSSSSDSSSSD...), was represented in the visual sensory system only when participants attended to the sequential regularity in luminance, but not when they ignored the stimuli or simply attended to the dimension of luminance per se. This suggests that top-down attention affects sequential regularity representation in the visual sensory system and that top-down attention is a prerequisite for particular sequential regularities to be represented. Copyright 2010 Elsevier B.V. All rights reserved.

  5. Bioprinting-Based High-Throughput Fabrication of Three-Dimensional MCF-7 Human Breast Cancer Cellular Spheroids

    Directory of Open Access Journals (Sweden)

    Kai Ling

    2015-06-01

    Full Text Available Cellular spheroids serving as three-dimensional (3D) in vitro tissue models have attracted increasing interest for pathological study and drug-screening applications. Various methods, including microwells in particular, have been developed for engineering cellular spheroids. However, these methods usually suffer from either destructive molding operations or cell loss and non-uniform cell distribution among the wells due to two-step molding and cell seeding. We have developed a facile method that utilizes cell-embedded hydrogel arrays as templates for concave-well fabrication and in situ MCF-7 cellular spheroid formation on a chip. A custom-built bioprinting system was applied for the fabrication of sacrificial gelatin arrays and, subsequently, concave wells in a high-throughput, flexible, and controlled manner. The ability to achieve in situ cell seeding for cellular spheroid construction was demonstrated, with the advantages of uniform cell seeding and the potential for programmed fabrication of tissue models on chips. The developed method holds great potential for applications in tissue engineering, regenerative medicine, and drug screening.

  6. Regularized Discriminant Analysis: A Large Dimensional Study

    KAUST Repository

    Yang, Xiaoke

    2018-04-28

    In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis are assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase proportionally. This double asymptotic regime allows for the application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that depends only on the data statistical parameters and dimensions. This result not only reveals mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on synthetic data show good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the underlying statistics are unknown. We benchmark the performance of our proposed consistent estimator against classical estimators on synthetic data. The observations demonstrate that the general estimator outperforms the others in terms of mean squared error (MSE).
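
    A minimal sketch of the RDA idea follows: pool the class covariances and shrink toward the identity (Friedman-style regularization), then classify by regularized Mahalanobis distance. The two-class setup, data, and mixing parameter are illustrative assumptions, not the thesis's general RDA family or its asymptotic analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two Gaussian classes with different means.
n, p = 200, 5
X0 = rng.standard_normal((n, p)) + 1.0
X1 = rng.standard_normal((n, p)) - 1.0

def rda_fit(X0, X1, gamma=0.5):
    """Pool class covariances and shrink toward the identity."""
    mu0, mu1 = X0.mean(0), X1.mean(0)
    S = 0.5 * (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False))
    S_reg = (1 - gamma) * S + gamma * np.eye(p)   # shrinkage regularization
    return mu0, mu1, np.linalg.inv(S_reg)

def rda_predict(x, mu0, mu1, P):
    """Assign to the class with the smaller Mahalanobis distance."""
    d0 = (x - mu0) @ P @ (x - mu0)
    d1 = (x - mu1) @ P @ (x - mu1)
    return 0 if d0 < d1 else 1

mu0, mu1, P = rda_fit(X0, X1)
acc = np.mean([rda_predict(x, mu0, mu1, P) == 0 for x in X0] +
              [rda_predict(x, mu0, mu1, P) == 1 for x in X1])
```

    Varying gamma interpolates between a sample-covariance classifier and a nearest-mean classifier, which is the kind of regularization trade-off the thesis optimizes analytically.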

  7. 47 CFR 22.923 - Cellular system configuration.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Cellular system configuration. 22.923 Section... MOBILE SERVICES Cellular Radiotelephone Service § 22.923 Cellular system configuration. Mobile stations... directly or through cellular repeaters. Auxiliary test stations may communicate with base or mobile...

  8. Adaptive Regularization of Neural Networks Using Conjugate Gradient

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Andersen et al. (1997) and Larsen et al. (1996, 1997) suggested a regularization scheme which iteratively adapts regularization parameters by minimizing validation error using simple gradient descent. In this contribution we present an improved algorithm based on the conjugate gradient technique. Numerical experiments with feedforward neural networks successfully demonstrate improved generalization ability and lower computational cost.
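
    The underlying idea of adapting the regularization parameter to the validation error can be sketched with ordinary ridge regression. The coarse grid search below stands in for the paper's conjugate-gradient updates, and the data and split are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Split synthetic data into training and validation sets.
X = rng.standard_normal((120, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.5 * rng.standard_normal(120)
Xtr, ytr, Xva, yva = X[:80], y[:80], X[80:], y[80:]

def val_error(lam):
    """Validation MSE of the ridge solution for regularization parameter lam."""
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(10), Xtr.T @ ytr)
    return np.mean((Xva @ w - yva) ** 2)

# Adapt lam by minimizing validation error over a log grid (the paper
# instead follows the validation-error gradient with conjugate-gradient steps).
grid = 10.0 ** np.linspace(-4, 3, 30)
lam_best = min(grid, key=val_error)
```

    The gradient-based schemes cited in the abstract do the same minimization, but differentiably and per-parameter rather than by exhaustive search.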

  9. 20 CFR 226.33 - Spouse regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Spouse regular annuity rate. 226.33 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.33 Spouse regular annuity rate. The final tier I and tier II rates, from §§ 226.30 and 226.32, are...

  10. Female non-regular workers in Japan: their current status and health.

    Science.gov (United States)

    Inoue, Mariko; Nishikitani, Mariko; Tsurugano, Shinobu

    2016-12-07

    The participation of women in the Japanese labor force is characterized by its M-shaped curve, which reflects decreased employment rates during child-rearing years. Although this M-shaped curve is now improving, the majority of women in employment are likely to fall into the category of non-regular workers. Based on a review of previous Japanese studies of the health of non-regular workers, we found that non-regular female workers experienced greater psychological distress, poorer self-rated health, a higher smoking rate, and less access to preventive medicine than regular workers did. However, despite the large number of non-regular workers, research on their health remains limited. In contrast, several studies in Japan concluded that regular workers also had worse health conditions due to the additional responsibilities and longer work hours associated with the job, housekeeping, and child rearing. The health of non-regular workers may be threatened by the effects of precarious employment status, lower income, a weaker safety net, outdated social norms regarding non-regular workers, and difficulty in achieving a work-life balance. A sector-wide social approach that considers the life course is needed to protect the health and well-being of female workers; promotion of an occupational health program alone is insufficient.

  11. Female non-regular workers in Japan: their current status and health

    Science.gov (United States)

    INOUE, Mariko; NISHIKITANI, Mariko; TSURUGANO, Shinobu

    2016-01-01

    The participation of women in the Japanese labor force is characterized by its M-shaped curve, which reflects decreased employment rates during child-rearing years. Although this M-shaped curve is now improving, the majority of women in employment are likely to fall into the category of non-regular workers. Based on a review of previous Japanese studies of the health of non-regular workers, we found that non-regular female workers experienced greater psychological distress, poorer self-rated health, a higher smoking rate, and less access to preventive medicine than regular workers did. However, despite the large number of non-regular workers, research on their health remains limited. In contrast, several studies in Japan concluded that regular workers also had worse health conditions due to the additional responsibilities and longer work hours associated with the job, housekeeping, and child rearing. The health of non-regular workers may be threatened by the effects of precarious employment status, lower income, a weaker safety net, outdated social norms regarding non-regular workers, and difficulty in achieving a work-life balance. A sector-wide social approach that considers the life course is needed to protect the health and well-being of female workers; promotion of an occupational health program alone is insufficient. PMID:27818453

  12. PET regularization by envelope guided conjugate gradients

    International Nuclear Information System (INIS)

    Kaufman, L.; Neumaier, A.

    1996-01-01

    The authors propose a new way to iteratively solve large-scale ill-posed problems, and in particular the image reconstruction problem in positron emission tomography, by exploiting the relation between Tikhonov regularization and multiobjective optimization to iteratively obtain approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows the regularization parameter to be adjusted adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations
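
    A rough sketch of the L-curve construction for Tikhonov regularization follows: trace residual norm against solution norm over a range of regularization parameters and pick the "corner" by a simple (unnormalized) discrete-curvature heuristic. The test problem and corner criterion are illustrative assumptions, not the authors' envelope-guided preconditioned CG scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

# Mildly ill-conditioned linear system A x = b with noise.
n = 40
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -6, n)            # rapidly decaying singular values
A = (U * s) @ U.T
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

# Trace the Tikhonov L-curve: log residual norm vs. log solution norm.
lams = 10.0 ** np.linspace(-10, 0, 50)
residuals, norms = [], []
for lam in lams:
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    residuals.append(np.log(np.linalg.norm(A @ x - b)))
    norms.append(np.log(np.linalg.norm(x)))

# Pick the "corner": point of largest unnormalized curvature along the curve.
r, m = np.array(residuals), np.array(norms)
curv = np.abs(np.gradient(r) * np.gradient(np.gradient(m))
              - np.gradient(m) * np.gradient(np.gradient(r)))
lam_corner = lams[np.argmax(curv)]
```

    The corner balances data fidelity against solution size; the paper's contribution is obtaining such corner estimates cheaply inside the iteration instead of by this brute-force sweep.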

  13. Recursive definition of global cellular-automata mappings

    International Nuclear Information System (INIS)

    Feldberg, R.; Knudsen, C.; Rasmussen, S.

    1994-01-01

    A method for a recursive definition of global cellular-automata mappings is presented. The method is based on a graphical representation of global cellular-automata mappings. For a given cellular-automaton rule the recursive algorithm defines the change of the global cellular-automaton mapping as the number of lattice sites is incremented. A proof of lattice size invariance of global cellular-automata mappings is derived from an approximation to the exact recursive definition. The recursive definitions are applied to calculate the fractal dimension of the set of reachable states and of the set of fixed points of cellular automata on an infinite lattice
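
    For a concrete notion of a global cellular-automaton mapping, the sketch below enumerates the global map of an elementary CA on a periodic lattice and counts its fixed points as the lattice size grows. The rule choice (rule 90) and the brute-force enumeration are illustrative assumptions; the paper builds this map recursively rather than by enumeration.

```python
def global_map(rule, n):
    """Global mapping of an elementary CA on a periodic lattice of n sites:
    returns next_state[s] for every one of the 2**n lattice configurations."""
    table = [(rule >> i) & 1 for i in range(8)]   # local rule lookup
    out = []
    for s in range(2 ** n):
        cells = [(s >> i) & 1 for i in range(n)]
        nxt = [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1)
                     | cells[(i + 1) % n]] for i in range(n)]
        out.append(sum(b << i for i, b in enumerate(nxt)))
    return out

# Fixed points of rule 90 on lattices of increasing size.
fixed = {n: sum(1 for s, t in enumerate(global_map(90, n)) if s == t)
         for n in range(3, 9)}
```

    Counting fixed points and reachable states of such global maps for increasing n is exactly the kind of quantity whose lattice-size scaling the recursive definition makes tractable.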

  14. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.; Franek, M.; Schonlieb, C.-B.

    2012-01-01

    for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations

  15. Regularization of Nonmonotone Variational Inequalities

    International Nuclear Information System (INIS)

    Konnov, Igor V.; Ali, M.S.S.; Mazurkevich, E.O.

    2006-01-01

    In this paper we extend the Tikhonov-Browder regularization scheme from monotone to a rather general class of nonmonotone multivalued variational inequalities. We show that their convergence conditions hold for some classes of perfectly and nonperfectly competitive economic equilibrium problems

  16. Interval matrices: Regularity generates singularity

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří; Shary, S.P.

    2018-01-01

    Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords: interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016

  17. Regular perturbations in a vector space with indefinite metric

    International Nuclear Information System (INIS)

    Chiang, C.C.

    1975-08-01

    The Klein space is discussed in connection with practical applications. Some lemmas are presented which are to be used for the discussion of regular self-adjoint operators. The criteria for the regularity of perturbed operators are given. (U.S.)

  18. Regular Generalized Star Star closed sets in Bitopological Spaces

    OpenAIRE

    K. Kannan; D. Narasimhan; K. Chandrasekhara Rao; R. Ravikumar

    2011-01-01

    The aim of this paper is to introduce the concepts of τ1τ2-regular generalized star star closed sets and τ1τ2-regular generalized star star open sets and to study their basic properties in bitopological spaces.

  19. Cellular senescence and organismal aging.

    Science.gov (United States)

    Jeyapalan, Jessie C; Sedivy, John M

    2008-01-01

    Cellular senescence, first observed and defined using in vitro cell culture studies, is an irreversible cell cycle arrest which can be triggered by a variety of factors. Emerging evidence suggests that cellular senescence acts as an in vivo tumor suppression mechanism by limiting aberrant proliferation. It has also been postulated that cellular senescence can occur independently of cancer and contribute to the physiological processes of normal organismal aging. Recent data have demonstrated the in vivo accumulation of senescent cells with advancing age. Some characteristics of senescent cells, such as the ability to modify their extracellular environment, could play a role in aging and age-related pathology. In this review, we examine current evidence that links cellular senescence and organismal aging.

  20. Clustering, randomness, and regularity in cloud fields. 4: Stratocumulus cloud fields

    Science.gov (United States)

    Lee, J.; Chou, J.; Weger, R. C.; Welch, R. M.

    1994-01-01

    To complete the analysis of the spatial distribution of boundary layer cloudiness, the present study focuses on nine stratocumulus Landsat scenes. The results indicate many similarities between stratocumulus and cumulus spatial distributions. Most notably, at full spatial resolution all scenes exhibit a decidedly clustered distribution. The strength of the clustering signal decreases with increasing cloud size; the clusters themselves consist of a few clouds (less than 10), occupy a small percentage of the cloud field area (less than 5%), contain between 20% and 60% of the cloud field population, and are randomly located within the scene. In contrast, stratocumulus in almost every respect are more strongly clustered than are cumulus cloud fields. For instance, stratocumulus clusters contain more clouds per cluster, occupy a larger percentage of the total area, and have a larger percentage of clouds participating in clusters than the corresponding cumulus examples. To investigate clustering at intermediate spatial scales, the local dimensionality statistic is introduced. Results obtained from this statistic provide the first direct evidence for regularity among large (more than 900 m in diameter) clouds in stratocumulus and cumulus cloud fields, in support of the inhibition hypothesis of Ramirez and Bras (1990). Also, the size compensated point-to-cloud cumulative distribution function statistic is found to be necessary to obtain a consistent description of stratocumulus cloud distributions. A hypothesis regarding the underlying physical mechanisms responsible for cloud clustering is presented. It is suggested that cloud clusters often arise from 4 to 10 triggering events localized within regions less than 2 km in diameter and randomly distributed within the cloud field. As the size of the cloud surpasses the scale of the triggering region, the clustering signal weakens and the larger cloud locations become more random.
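
    The distinction the study draws between clustered, random, and regular point patterns can be quantified with a nearest-neighbour statistic. The Clark-Evans index below is a minimal sketch on synthetic point sets, not one of the statistics used in the paper; the cluster geometry and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def clark_evans(points, area):
    """Clark-Evans nearest-neighbour index: R < 1 clustered, R ~ 1 random,
    R > 1 regular (inhibited) spatial distribution."""
    pts = np.asarray(points)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    mean_nn = d.min(axis=1).mean()
    expected = 0.5 / np.sqrt(len(pts) / area)   # expectation under CSR
    return mean_nn / expected

# Random (Poisson-like) pattern vs. a clustered pattern on the unit square.
random_pts = rng.uniform(0, 1, size=(400, 2))
centers = rng.uniform(0, 1, size=(20, 2))
clustered = (centers[rng.integers(0, 20, 400)]
             + 0.01 * rng.standard_normal((400, 2)))

R_random = clark_evans(random_pts, 1.0)
R_clustered = clark_evans(clustered, 1.0)
```

    A cloud field whose large clouds show inhibition, as reported in the abstract, would drive such an index above 1 at those scales.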

  1. Clustering, randomness, and regularity in cloud fields. 4. Stratocumulus cloud fields

    Science.gov (United States)

    Lee, J.; Chou, J.; Weger, R. C.; Welch, R. M.

    1994-07-01

    To complete the analysis of the spatial distribution of boundary layer cloudiness, the present study focuses on nine stratocumulus Landsat scenes. The results indicate many similarities between stratocumulus and cumulus spatial distributions. Most notably, at full spatial resolution all scenes exhibit a decidedly clustered distribution. The strength of the clustering signal decreases with increasing cloud size; the clusters themselves consist of a few clouds (less than 10), occupy a small percentage of the cloud field area (less than 5%), contain between 20% and 60% of the cloud field population, and are randomly located within the scene. In contrast, stratocumulus in almost every respect are more strongly clustered than are cumulus cloud fields. For instance, stratocumulus clusters contain more clouds per cluster, occupy a larger percentage of the total area, and have a larger percentage of clouds participating in clusters than the corresponding cumulus examples. To investigate clustering at intermediate spatial scales, the local dimensionality statistic is introduced. Results obtained from this statistic provide the first direct evidence for regularity among large (>900 m in diameter) clouds in stratocumulus and cumulus cloud fields, in support of the inhibition hypothesis of Ramirez and Bras (1990). Also, the size compensated point-to-cloud cumulative distribution function statistic is found to be necessary to obtain a consistent description of stratocumulus cloud distributions. A hypothesis regarding the underlying physical mechanisms responsible for cloud clustering is presented. It is suggested that cloud clusters often arise from 4 to 10 triggering events localized within regions less than 2 km in diameter and randomly distributed within the cloud field. As the size of the cloud surpasses the scale of the triggering region, the clustering signal weakens and the larger cloud locations become more random.

  2. Solution path for manifold regularized semisupervised classification.

    Science.gov (United States)

    Wang, Gang; Wang, Fei; Chen, Tao; Yeung, Dit-Yan; Lochovsky, Frederick H

    2012-04-01

    Traditional learning algorithms use only labeled data for training. However, labeled examples are often difficult or time-consuming to obtain, since they require substantial human labeling effort. On the other hand, unlabeled data are often relatively easy to collect. Semisupervised learning addresses this problem by using large quantities of unlabeled data together with labeled data to build better learning algorithms. In this paper, we use the manifold regularization approach to formulate the semisupervised learning problem, establishing a regularization framework which balances a tradeoff between loss and penalty. We investigate different implementations of the loss function and identify the methods which have the least computational expense. The regularization hyperparameter, which determines the balance between loss and penalty, is crucial to model selection. Accordingly, we derive an algorithm that can fit the entire path of solutions for every value of the hyperparameter. Its computational complexity after preprocessing is quadratic only in the number of labeled examples rather than the total number of labeled and unlabeled examples.
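
    The idea of computing a whole path of regularized solutions can be illustrated with ridge regression, where a single SVD yields the solution for every hyperparameter value. This is a cheap stand-in for intuition only, not the paper's exact piecewise solution-path algorithm for manifold-regularized classification; the data are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

X = rng.standard_normal((100, 8))
y = X @ rng.standard_normal(8) + 0.1 * rng.standard_normal(100)

# One SVD gives the ridge solution for EVERY value of the hyperparameter,
# so the whole regularization path is cheap to trace.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Uty = U.T @ y

def ridge_path(lams):
    # w(lam) = V diag(s / (s^2 + lam)) U^T y
    return [(Vt.T * (s / (s ** 2 + lam))) @ Uty for lam in lams]

path = ridge_path(10.0 ** np.linspace(-4, 4, 50))
```

    Tracing the path makes hyperparameter selection a one-pass computation instead of one full refit per candidate value, which is the motivation shared with the paper's method.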

  3. (2+1)-dimensional regular black holes with nonlinear electrodynamics sources

    Directory of Open Access Journals (Sweden)

    Yun He

    2017-11-01

    Full Text Available On the basis of two requirements, the avoidance of the curvature singularity and the Maxwell theory as the weak-field limit of the nonlinear electrodynamics, we find two conditions restricting the metric function of a (2+1)-dimensional regular black hole in general relativity coupled with nonlinear electrodynamics sources. Using these two conditions, we obtain a general approach to constructing (2+1)-dimensional regular black holes. In this manner, we construct four (2+1)-dimensional regular black holes as examples. We also study the thermodynamic properties of the regular black holes and verify the first law of black hole thermodynamics.

  4. Zeno's paradox in quantum cellular automata

    Energy Technology Data Exchange (ETDEWEB)

    Groessing, G [Atominst. der Oesterreichischen Universitaeten, Vienna (Austria); Zeilinger, A [Inst. fuer Experimentalphysik, Univ. Innsbruck (Austria)

    1991-07-01

    The effect of Zeno's paradox in quantum theory is demonstrated with the aid of quantum mechanical cellular automata. It is shown that the degree of non-unitarity of the cellular automaton evolution and the frequency of consecutive measurements of cellular automaton states are operationally indistinguishable. (orig.).

  5. Zeno's paradox in quantum cellular automata

    International Nuclear Information System (INIS)

    Groessing, G.; Zeilinger, A.

    1991-01-01

    The effect of Zeno's paradox in quantum theory is demonstrated with the aid of quantum mechanical cellular automata. It is shown that the degree of non-unitarity of the cellular automaton evolution and the frequency of consecutive measurements of cellular automaton states are operationally indistinguishable. (orig.)

  6. A Review of Anti- Podal Vivaldi Antenna Operating in Cellular Mobile Communications

    Directory of Open Access Journals (Sweden)

    Asim Alkhaibari

    2017-12-01

    Full Text Available The antenna proposed is a new geometric structure of Ultra-Wideband (UWB) Anti-Podal Vivaldi Antenna (AVA). It offers remarkably attractive performance over the bands of cellular networks. However, its benefits are not limited to particular applications: radar imaging, mine detection, biomedical uses such as heating and treatment of brain cancer tumors, and wireless communication are the main applications suited to it. The focus of this paper is therefore on cellular communication network systems. In addition, several characteristics of the Vivaldi antenna are provided, such as the gain, return loss, Voltage Standing Wave Ratio (VSWR), current distribution, and E-fields. Finally, the results illustrate the capability and feasibility of the designed antenna.
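
    Two of the quoted figures of merit are directly related: VSWR follows from return loss through the reflection-coefficient magnitude. A small sketch of the standard conversion (the numeric example is generic, not a value from this antenna):

```python
import math

def vswr_from_return_loss(rl_db):
    """VSWR from return loss (dB): |Gamma| = 10**(-RL/20),
    VSWR = (1 + |Gamma|) / (1 - |Gamma|)."""
    gamma = 10 ** (-rl_db / 20)
    return (1 + gamma) / (1 - gamma)

# A return loss of 10 dB (a common matching threshold) gives VSWR of about 1.92.
print(round(vswr_from_return_loss(10.0), 2))
```

    Better matching (larger return loss) drives the VSWR toward 1, which is why the two quantities are usually reported together.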

  7. Modeling and Optimization of Inventory-Distribution Routing Problem for Agriculture Products Supply Chain

    Directory of Open Access Journals (Sweden)

    Li Liao

    2013-01-01

    Full Text Available Mathematical models of the inventory-distribution routing problem for a two-echelon agricultural products distribution network are established, based on two management modes (franchise chain and regular chain), a one-to-many structure, interval periodic ordering, inventory-dependent demand, deterioration treatment costs of agricultural products, vehicle start-up costs, and so forth. Then, a heuristic adaptive genetic algorithm is presented for the franchise-chain model. For the regular-chain model, a two-layer genetic algorithm based on oddment modification is proposed, in which the upper layer determines the distribution period and quantity and the lower layer seeks the optimal order cycle, quantity, distribution routes, and the rational oddment-modification number for the distributor. Simulation experiments demonstrate the validity of the algorithms, and the two management modes are compared.

  8. Sensing Phosphatidylserine in Cellular Membranes

    Directory of Open Access Journals (Sweden)

    Jason G. Kay

    2011-01-01

    Full Text Available Phosphatidylserine, a phospholipid with a negatively charged head-group, is an important constituent of eukaryotic cellular membranes. On the plasma membrane, rather than being evenly distributed, phosphatidylserine is found preferentially in the inner leaflet. Disruption of this asymmetry, leading to the appearance of phosphatidylserine on the surface of the cell, is known to play a central role in both apoptosis and blood clotting. Despite its importance, comparatively little is known about phosphatidylserine in cells: its precise subcellular localization, transmembrane topology and intracellular dynamics are poorly characterized. The recent development of new, genetically-encoded probes able to detect phosphatidylserine within live cells, however, is leading to a more in-depth understanding of the biology of this phospholipid. This review aims to give an overview of the current methods for phosphatidylserine detection within cells, and some of the recent realizations derived from their use.

  9. Simulation of earthquakes with cellular automata

    Directory of Open Access Journals (Sweden)

    P. G. Akishin

    1998-01-01

    Full Text Available The relation between cellular automata (CA) models of earthquakes and the Burridge–Knopoff (BK) model is studied. It is shown that the CA proposed by P. Bak and C. Tang, although they have rather realistic power spectra, do not correspond to the BK model. We present a modification of the CA which establishes the correspondence with the BK model. An analytical method of studying the evolution of BK-like CA is proposed. By this method a functional quadratic in stress release, which can be regarded as an analog of the event energy, is constructed. The distribution of seismic events with respect to this "energy" shows rather realistic behavior, even in two dimensions. Special attention is paid to two-dimensional automata; physical restrictions on compression and shear stiffnesses are imposed.
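
    A BK-like earthquake cellular automaton can be sketched with an Olami-Feder-Christensen-style stress-redistribution rule: cells accumulate stress, and a cell exceeding threshold topples, passing a fraction of its stress to its neighbours, possibly cascading into an avalanche ("event"). The lattice size, dissipation parameter, and driving scheme below are illustrative assumptions, not the authors' modified CA.

```python
import numpy as np

rng = np.random.default_rng(7)

L, alpha, thresh = 20, 0.2, 1.0                 # lattice, coupling, threshold
stress = rng.uniform(0, thresh, size=(L, L))

def avalanche(stress):
    """Drive the lattice to the next event and return the avalanche size."""
    stress += thresh - stress.max()             # uniform drive to threshold
    size = 0
    while True:
        over = np.argwhere(stress >= thresh)
        if len(over) == 0:
            return size
        for i, j in over:
            s = stress[i, j]
            stress[i, j] = 0.0                  # stress release
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:
                    stress[ni, nj] += alpha * s # redistribute to neighbours
sizes = [avalanche(stress) for _ in range(200)]
```

    The resulting event-size distribution is the kind of quantity compared against real seismicity; with alpha < 0.25 the redistribution is dissipative, so every avalanche terminates.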

  10. Regularity of the Maxwell equations in heterogeneous media and Lipschitz domains

    KAUST Repository

    Bonito, Andrea

    2013-12-01

    This note establishes regularity estimates for the solution of the Maxwell equations in Lipschitz domains with non-smooth coefficients and minimal regularity assumptions. The argumentation relies on elliptic regularity estimates for the Poisson problem with non-smooth coefficients. © 2013 Elsevier Ltd.

  11. Validation of self-reported cellular phone use

    DEFF Research Database (Denmark)

    Samkange-Zeeb, Florence; Berg, Gabriele; Blettner, Maria

    2004-01-01

    BACKGROUND: In recent years, concern has been raised over possible adverse health effects of cellular telephone use. In epidemiological studies of cancer risk associated with the use of cellular telephones, the validity of self-reported cellular phone use has been problematic. Up to now there is very little information published on this subject. METHODS: We conducted a study to validate the questionnaire used in an ongoing international case-control study on cellular phone use, the "Interphone study". Self-reported cellular phone use from 68 of 104 participants who took part in our study was compared with information derived from the network providers over a period of 3 months (taken as the gold standard). RESULTS: Using Spearman's rank correlation, the correlation between self-reported phone use and information from the network providers for cellular phone use in terms of the number of calls...

  12. Regularized forecasting of chaotic dynamical systems

    International Nuclear Information System (INIS)

    Bollt, Erik M.

    2017-01-01

    While local models of dynamical systems have been highly successful in using extensive data sets observing even a chaotic dynamical system to produce useful forecasts, a typical problem arises, as follows. With the k-nearest-neighbors (kNN) method, local observations recur in a chaotic system, and this allows local models to be built by regression to low-dimensional polynomial approximations of the underlying system, estimating a Taylor series. This has been a popular approach, particularly in the context of scalar data observations represented by time-delay embedding methods. However, such local models can allow spatial discontinuities of forecasts when considered globally, meaning jumps in predictions, because the collected near neighbors vary from point to point. The source of these discontinuities is that the set of near neighbors varies discontinuously with respect to the position of the sample point, and so therefore does the model built from the near neighbors. It is possible to utilize local information inferred from near neighbors as usual while at the same time imposing a degree of regularity on a global scale. We present here a new global perspective extending the general local modeling concept. In doing so, we show how this perspective allows us to impose presumed prior regularity on the model by invoking Tikhonov regularization theory, since this classic perspective on optimization in ill-posed problems naturally balances fitting an objective with some prior assumed form of the result, such as continuity or derivative regularity. This all reduces to matrix manipulations, which we demonstrate on a simple data set, with the implication that it may find much broader context.
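
    A minimal sketch of the local-modeling setup the paper starts from: a k-nearest-neighbour affine predictor with a Tikhonov (ridge) term, applied to the logistic map. The map, neighbourhood size, and ridge constant are illustrative assumptions; the paper's contribution is the global regularization across sample points, which this purely local sketch does not include.

```python
import numpy as np

# Chaotic series to forecast: the logistic map x_{t+1} = r x_t (1 - x_t).
r, N = 4.0, 2000
x = np.empty(N)
x[0] = 0.3
for t in range(N - 1):
    x[t + 1] = r * x[t] * (1 - x[t])

train_in, train_out = x[:1500], x[1:1501]

def knn_ridge_forecast(q, k=20, lam=1e-6):
    """Fit an affine local model on the k nearest (here 1-D) delay vectors,
    with a small Tikhonov term for numerical regularity."""
    idx = np.argsort(np.abs(train_in - q))[:k]
    A = np.column_stack([train_in[idx], np.ones(k)])
    b = train_out[idx]
    coef = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ b)
    return coef[0] * q + coef[1]

pred = np.array([knn_ridge_forecast(q) for q in x[1500:-1]])
err = np.mean(np.abs(pred - x[1501:]))
```

    Because the neighbour set changes abruptly as q moves, adjacent forecasts can jump, which is exactly the discontinuity problem the paper's global Tikhonov perspective addresses.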

  13. Forcing absoluteness and regularity properties

    NARCIS (Netherlands)

    Ikegami, D.

    2010-01-01

    For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.

  14. Regularities of the extraction of rare earth elements with triisoamyl phosphate

    International Nuclear Information System (INIS)

    Danilov, N.A.; Korpusov, G.V.; Utkina, O.V.; Pogorel'skaya, S.A.

    1988-01-01

    A study was made of practically important regularities of rare earth element (REE) extraction by triisoamyl phosphate (TiAP): isotherms of REE extraction, the effect of REE and salting-out agent concentrations in the aqueous phase on REE distribution and separation coefficients, the effect of HNO3 concentration, and others. The data obtained show that TiAP is a typical representative of neutral organophosphorus compounds, and its extraction properties are close to those of TBP. No third phase forms during REE nitrate extraction by TiAP solutions in saturated hydrocarbons at any concentration. High selectivity is not observed during separation of cerium-subgroup REEs by TiAP. TiAP losses are lower than those of TBP due to TiAP's lower solubility in water.
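For readers unfamiliar with the quantities in this abstract, the distribution coefficient and separation coefficient reported in extraction studies are simple ratios. The sketch below shows how they are computed; the concentrations are made-up illustrative numbers, not data from the TiAP study.

```python
# Illustrative only: distribution ratio D and separation factor beta
# as used in solvent-extraction studies. All numbers are hypothetical.
def distribution_ratio(c_org, c_aq):
    """D = metal concentration in the organic phase / aqueous phase."""
    return c_org / c_aq

def separation_factor(D1, D2):
    """beta = D1 / D2 for two co-extracted elements (conventionally D1 >= D2)."""
    return D1 / D2

D_ce = distribution_ratio(0.80, 0.20)   # e.g. cerium, hypothetical
D_la = distribution_ratio(0.50, 0.50)   # e.g. lanthanum, hypothetical
print(separation_factor(D_ce, D_la))    # beta = 4.0
```

A separation factor near 1 means low selectivity, which is the situation the abstract reports for cerium-subgroup REEs with TiAP.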

  15. Arithmetic properties of $\\ell$-regular overpartition pairs

    OpenAIRE

    NAIKA, MEGADAHALLI SIDDA MAHADEVA; SHIVASHANKAR, CHANDRAPPA

    2017-01-01

    In this paper, we investigate the arithmetic properties of $\\ell$-regular overpartition pairs. Let $\\overline{B}_{\\ell}(n)$ denote the number of $\\ell$-regular overpartition pairs of $n$. We prove a number of Ramanujan-like congruences and infinite families of congruences modulo 3, 8, 16, 36, 48, 96 for $\\overline{B}_3(n)$ and modulo 3, 16, 64, 96 for $\\overline{B}_4(n)$. For example, we find that for all nonnegative integers $\\alpha$ and $n$, $\\overline{B}_{3}(3^{\\alpha}(3n+2))\\equiv ...

  16. 47 CFR 90.672 - Unacceptable interference to non-cellular 800 MHz licensees from 800 MHz cellular systems or part...

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Unacceptable interference to non-cellular 800 MHz licensees from 800 MHz cellular systems or part 22 Cellular Radiotelephone systems, and within the... Procedures and Process-Unacceptable Interference § 90.672 Unacceptable interference to non-cellular 800 MHz...

  17. Chaos regularization of quantum tunneling rates

    International Nuclear Information System (INIS)

    Pecora, Louis M.; Wu Dongho; Lee, Hoshik; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward

    2011-01-01

    Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics, the tunneling rates fluctuate greatly with the eigenenergies of the states, sometimes by over two orders of magnitude. In contrast, shapes that lead to completely chaotic trajectories yield tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.

  18. Regularization Tools Version 3.0 for Matlab 5.2

    DEFF Research Database (Denmark)

    Hansen, Per Christian

    1999-01-01

    This communication describes Version 3.0 of Regularization Tools, a Matlab package for analysis and solution of discrete ill-posed problems.
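Regularization Tools itself is a Matlab package; as a rough Python analogue of its core operation, the sketch below applies standard-form Tikhonov regularization to a small discrete ill-posed problem. The Hilbert-type test matrix, the noise level, and the choice of regularization parameter are all illustrative assumptions, not taken from the package.

```python
# Standard-form Tikhonov regularization for a discrete ill-posed problem:
#   min ||A x - b||^2 + lam^2 ||x||^2
import numpy as np

def tikhonov(A, b, lam):
    """Solve the Tikhonov problem via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Severely ill-conditioned Hilbert matrix and a slightly noisy right-hand side.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-6 * rng.standard_normal(n)

x_naive = np.linalg.solve(A, b)        # noise is hugely amplified
x_reg = tikhonov(A, b, lam=1e-4)       # stays near x_true
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```

The regularized solution trades a small bias for damping of the noise components along the matrix's tiny singular values, which is exactly the trade-off such packages are built to analyze.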

  19. Spiking Regularity and Coherence in Complex Hodgkin–Huxley Neuron Networks

    International Nuclear Information System (INIS)

    Zhi-Qiang, Sun; Ping, Xie; Wei, Li; Peng-Ye, Wang

    2010-01-01

    We study the effects of the strength of coupling between neurons on the spiking regularity and coherence in a complex network of randomly connected Hodgkin–Huxley neurons driven by colored noise. It is found that for a given topology realization and colored noise correlation time, there exists an optimal strength of coupling at which the spiking regularity of the network reaches its best level. Moreover, when the temporal regularity reaches its best level, the spatial coherence of the system has already increased to a relatively high level. In addition, for a given number of neurons and noise correlation time, the values of average regularity and spatial coherence at the optimal coupling strength are nearly independent of the topology realization. Furthermore, there exists an optimal value of the colored noise correlation time at which the spiking regularity reaches its best level. These results may be helpful for understanding real neuronal networks. (cross-disciplinary physics and related areas of science and technology)
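Spiking regularity of the kind studied above is commonly quantified by the coefficient of variation (CV) of interspike intervals: CV tends to 0 for a perfectly periodic spike train and is near 1 for Poisson-like firing. The sketch below computes CV on synthetic spike times; it is not a Hodgkin–Huxley simulation, and the spike trains are made-up illustrations.

```python
# Coefficient of variation (CV) of interspike intervals (ISIs),
# a standard measure of spiking regularity.
import numpy as np

def cv_isi(spike_times):
    """CV = std(ISI) / mean(ISI); lower means more regular firing."""
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

rng = np.random.default_rng(1)
regular = np.arange(0.0, 10.0, 0.1)                  # perfectly periodic train
jittered = regular + 0.01 * rng.standard_normal(len(regular))
poisson = np.cumsum(rng.exponential(0.1, size=100))  # Poisson-like train

print(cv_isi(regular), cv_isi(jittered), cv_isi(poisson))
```

Sweeping a coupling strength and plotting CV against it would reveal the optimum described in the abstract as the minimum of the CV curve.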

  20. Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications

    Science.gov (United States)

    Chaki, Sagar; Gurfinkel, Arie

    2010-01-01

    We develop a learning-based automated Assume-Guarantee (AG) reasoning framework for verifying omega-regular properties of concurrent systems. We study the applicability of the non-circular (AG-NC) and circular (AG-C) AG proof rules in the context of systems with infinite behaviors. In particular, we show that AG-NC is incomplete when assumptions are restricted to strictly infinite behaviors, while AG-C remains complete. We present a general formalization, called LAG, of the learning-based automated AG paradigm. We show how existing approaches for automated AG reasoning are special instances of LAG. We develop two learning algorithms for a class of systems, called infinite regular systems, that combine finite and infinite behaviors. We show that for infinite regular systems, both AG-NC and AG-C are sound and complete. Finally, we show how to instantiate LAG to do automated AG reasoning for infinite regular, and omega-regular, systems using both AG-NC and AG-C as proof rules.
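The non-circular rule AG-NC mentioned above can be illustrated with a toy finite-trace model (this is not the paper's LAG framework, and the trace sets are invented): if A || M1 satisfies P and M2 satisfies A, conclude that M1 || M2 satisfies P. Modeling components as sets of traces over a shared alphabet makes parallel composition set intersection and "satisfies" set inclusion.

```python
# Toy illustration of the non-circular assume-guarantee rule AG-NC:
#   premise 1: A || M1 |= P
#   premise 2: M2 |= A
#   conclusion: M1 || M2 |= P
# Components are finite sets of traces over a shared alphabet, so
# composition is intersection and satisfaction is inclusion.
def compose(m1, m2):
    return m1 & m2

def satisfies(m, prop):
    return m <= prop

M1 = {"ab", "ba", "aa"}
M2 = {"ab"}
P = {"ab", "ba"}          # property: the allowed traces
A = {"ab", "ba"}          # assumption: an abstraction of M2

premise1 = satisfies(compose(A, M1), P)   # A || M1 |= P
premise2 = satisfies(M2, A)               # M2 |= A
conclusion = satisfies(compose(M1, M2), P)
print(premise1 and premise2, conclusion)
```

The point of the rule is that A can be much smaller than M2 while still discharging the property, so the (possibly expensive) composition M1 || M2 never has to be checked directly; learning algorithms such as those in the paper search for a suitable A automatically.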