DEFF Research Database (Denmark)
Ebersbach, Gitte; Ringgaard, Simon; Møller-Jensen, Jakob
2006-01-01
with each other in a bacterial two-hybrid assay but do not interact with FtsZ, eight other essential cell division proteins or MreB actin. Based on these observations, we propose a simple model for how oscillating ParA filaments can mediate regular cellular distribution of plasmids. The model functions...
Regularization and error assignment to unfolded distributions
Zech, Gunter
2011-01-01
The commonly used approach to present unfolded data only in graphical form with the diagonal error depending on the regularization strength is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.
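The strategy above (fix the regularization strength by a goodness-of-fit criterion, then publish the unfolded result) can be sketched on a toy unfolding problem. Everything below is an illustrative assumption: the spectrum, the Gaussian response matrix, and the `chi2 <= n` acceptance cut, which is a crude stand-in for the paper's p-value criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy truth spectrum and detector response (all numbers are illustrative)
n = 20
x = np.linspace(0.0, 1.0, n)
truth = 100.0 * np.exp(-4.0 * x) + 20.0

# Gaussian smearing response matrix R (columns normalized)
idx = np.arange(n)
R = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 1.5) ** 2)
R /= R.sum(axis=0, keepdims=True)

observed = rng.poisson(R @ truth).astype(float)
sigma = np.sqrt(np.maximum(observed, 1.0))

# Curvature (second-difference) penalty matrix for Tikhonov regularization
C = np.zeros((n - 2, n))
for i in range(n - 2):
    C[i, i], C[i, i + 1], C[i, i + 2] = 1.0, -2.0, 1.0

def unfold(tau):
    """Minimize ||(R u - d) / sigma||^2 + tau * ||C u||^2 (linear least squares)."""
    A = R / sigma[:, None]
    return np.linalg.solve(A.T @ A + tau * C.T @ C, A.T @ (observed / sigma))

def chi2(u):
    return float(np.sum(((R @ u - observed) / sigma) ** 2))

# Scan regularization strengths and keep the strongest one whose fit is
# still acceptable; chi2 <= n stands in for the p-value cut of the paper.
taus = np.logspace(-4, 2, 25)
acceptable = [t for t in taus if chi2(unfold(t)) <= n]
tau_star = max(acceptable)
u_star = unfold(tau_star)
```

A full treatment would convert the chi-squared statistic to an actual p-value via the chi-squared survival function and would, as the paper proposes, also publish the unregularized result alongside the regularized one.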
Cellular Automata Simulation for Wealth Distribution
Lo, Shih-Ching
2009-08-01
The wealth distribution of a country is a complicated system. A model based on Epstein & Axtell's "Sugarscape" model is presented in NetLogo. The model considers income, age, working opportunity and salary as control variables. Other variables should still be considered when an artificial society is established. In this study, a more elaborate cellular automata model for wealth distribution is proposed. The effects of social welfare, tax, economic investment and inheritance are considered and simulated. The cellular automata simulation of wealth distribution provides deeper insight into the financial policy of the government.
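A minimal sketch of such a wealth-distribution simulation, with income, flat-tax and welfare-transfer steps and a Gini coefficient as the summary statistic. All parameter names and rates are hypothetical illustrations, not values from the paper's NetLogo model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions, not the paper's calibrated values
N_AGENTS, STEPS = 500, 200
TAX_RATE, WELFARE_SHARE = 0.10, 0.5

wealth = np.full(N_AGENTS, 10.0)
salary = rng.uniform(0.5, 2.0, N_AGENTS)      # heterogeneous earning ability
employed = rng.random(N_AGENTS) < 0.8         # working opportunity

def gini(w):
    """Gini coefficient of a non-negative wealth vector."""
    w = np.sort(w)
    n = len(w)
    return float(((2 * np.arange(1, n + 1) - n - 1) @ w) / (n * w.sum()))

for _ in range(STEPS):
    wealth += np.where(employed, salary, 0.0)          # income
    tax = TAX_RATE * wealth                            # flat tax
    wealth -= tax
    wealth += WELFARE_SHARE * tax.sum() / N_AGENTS     # equal welfare transfer

g = gini(wealth)
```

Varying `TAX_RATE` and `WELFARE_SHARE` and watching the Gini coefficient is the kind of policy experiment the abstract describes, here without the spatial grid interactions of a true cellular automaton.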
Regularized multivariate regression models with skew-t error distributions
Chen, Lianfu; Pourahmadi, Mohsen; Maadooliat, Mehdi
2014-01-01
We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously.
Reduction of Nambu-Poisson Manifolds by Regular Distributions
Das, Apurba
2018-03-01
The version of Marsden-Ratiu reduction theorem for Nambu-Poisson manifolds by a regular distribution has been studied by Ibáñez et al. In this paper we show that the reduction is always ensured unless the distribution is zero. Next we extend the more general Falceto-Zambon Poisson reduction theorem for Nambu-Poisson manifolds. Finally, we define gauge transformations of Nambu-Poisson structures and show that these transformations commute with the reduction procedure.
Regularized κ-distributions with non-diverging moments
Scherer, K.; Fichtner, H.; Lazar, M.
2017-12-01
For various plasma applications the so-called (non-relativistic) κ-distribution is widely used to reproduce and interpret the suprathermal particle populations exhibiting a power-law distribution in velocity or energy. Despite its reputation, the standard κ-distribution as a concept is still disputable, mainly due to the velocity moments M l, which make a macroscopic characterization possible but whose existence is restricted to low orders l < 2κ - 1. Moreover, the definition of the κ-distribution itself is conditioned by the existence of the moment of order l = 2 (i.e., kinetic temperature), satisfied only for κ > 3/2. In order to resolve these critical limitations we introduce the regularized κ-distribution with non-diverging moments. A general analytical expression is provided for the evaluation of all velocity moments, enabling a significant step towards a macroscopic (fluid-like) description of space plasmas and, in general, any system of κ-distributed particles.
Regular distributive efficiency and the distributive liberal social contract.
Jean Mercier Ythier
2009-01-01
We consider abstract social systems of private property, made of n individuals endowed with non-paternalistic interdependent preferences, who interact through exchanges on competitive markets and Pareto-efficient lump-sum transfers. The transfers follow from a distributive liberal social contract, defined as a redistribution of initial endowments such that the resulting market equilibrium allocation is both Pareto-efficient relative to individual interdependent preferences and unanimously we...
Compression behavior of cellular metals with inhomogeneous mass distribution
International Nuclear Information System (INIS)
Foroughi, B.
2001-05-01
Mechanical behavior of two types of closed-cell metals (ALULIGHT and ALPORAS) is investigated experimentally and numerically. Compressive tests performed on prismatic specimens indicate that inhomogeneities in the mass density distribution are a key factor in the deformation behavior of cellular metals. The three-dimensional cellular structure of the investigated specimens is recorded using X-ray medical computed tomography (CT). A special procedure called the density mapping method has been used to transfer the recorded CT data into a continuum by averaging over a certain domain (averaging domain). This continuum model is implemented using finite elements to study the effect of variations in local mass densities. The finite element model uses a simple regular discretization of a specimen's volume with elements of constant edge length. Mechanical properties derived from compression tests of ALPORAS samples are assigned to the corresponding mesoscopic density value of each element. The effect of the averaging domain size is studied to obtain a suitable dimension which fulfils the homogenization requirements and allows the evaluation of inhomogeneities in the specimens. The formation and propagation of deformation band(s) and the stress-strain responses of the tested cellular metals are modeled with respect to their mass distribution. It is shown that the inhomogeneous density distribution leads to plastic strain localization and causes a monotonic increase of the stress in the plateau regime, although no hardening response was assumed for the homogeneous material in this regime. The simulated plastic strain localization and the calculated stress-strain responses are compared with the experimental results. The stiffness values of experiment and simulation agree very well for both cellular materials, as do the plateau strengths, although the latter differ in some cases of ALULIGHT samples, where the hardening response can be predicted at least qualitatively.
Regularized multivariate regression models with skew-t error distributions
Chen, Lianfu
2014-06-01
We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.
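The L1-penalization at the core of such procedures reduces, coordinate by coordinate, to soft-thresholding. The sketch below shows only that building block, on an ordinary normal-error regression with a single response; the paper's full method wraps this in an ECM loop for the skew-t model and adds a second penalty on the inverse scale matrix, neither of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def soft_threshold(z, g):
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for 0.5/n * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ b
    for _ in range(iters):
        for j in range(p):
            r += X[:, j] * b[j]                  # remove j-th contribution
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]                  # add updated contribution back
    return b

# Toy data with a sparse true coefficient vector
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.normal(size=n)

b_hat = lasso_cd(X, y, lam=0.1)
```

With a moderate penalty the true-zero coefficients are driven exactly to zero while the large ones survive with a small shrinkage bias, which is the sparsity mechanism the abstract refers to.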
Spine labeling in MRI via regularized distribution matching.
Hojjat, Seyed-Parsa; Ayed, Ismail; Garvin, Gregory J; Punithakumar, Kumaradevan
2017-11-01
This study investigates an efficient (nearly real-time) two-stage spine labeling algorithm that removes the need for external training while being applicable to different types of MRI data and acquisition protocols. Based solely on the image being labeled (i.e., we do not use training data), the first stage aims at detecting potential vertebra candidates following the optimization of a functional containing two terms: (i) a distribution-matching term that encodes contextual information about the vertebrae via a density model learned from a very simple user input, which amounts to a point (mouse click) on a predefined vertebra; and (ii) a regularization constraint, which penalizes isolated candidates in the solution. The second stage removes false positives and identifies all vertebrae and discs by optimizing a geometric constraint, which embeds generic anatomical information on the interconnections between neighboring structures. Based on generic knowledge, our geometric constraint does not require external training. We performed quantitative evaluations of the algorithm over a data set of 90 mid-sagittal MRI images of the lumbar spine acquired from 45 different subjects. To assess the flexibility of the algorithm, we used both T1- and T2-weighted images for each subject. A total of 990 structures were automatically detected/labeled and compared to ground-truth annotations by an expert. On the T2-weighted data, we obtained an accuracy of 91.6% for the vertebrae and 89.2% for the discs. On the T1-weighted data, we obtained an accuracy of 90.7% for the vertebrae and 88.1% for the discs. Our algorithm removes the need for external training while being applicable to different types of MRI data and acquisition protocols. Based on the current testing data, a subject-specific density model and generic anatomical information, our method can achieve competitive performance when applied to T1- and T2-weighted MRI images.
International Nuclear Information System (INIS)
Olson, Gordon L.
2008-01-01
In binary stochastic media in two and three dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution.
Energy Technology Data Exchange (ETDEWEB)
Olson, Gordon L. [Computer and Computational Sciences Division (CCS-2), Los Alamos National Laboratory, 5 Foxglove Circle, Madison, WI 53717 (United States)], E-mail: olson99@tds.net
2008-11-15
In binary stochastic media in two and three dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution.
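The chord-length measurement in the dilute random 2D case can be illustrated numerically: place non-overlapping disks at an area fraction below the 10% threshold quoted above, cast rays, and collect the background gaps between consecutive disks. Disk radius, packing fraction and ray count are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Randomly place impenetrable (non-overlapping) disks at ~8% area fraction
R_DISK, L_BOX = 0.02, 1.0
n_disks = int(0.08 * L_BOX ** 2 / (np.pi * R_DISK ** 2))
centers = []
while len(centers) < n_disks:
    c = rng.uniform(R_DISK, L_BOX - R_DISK, 2)
    if all((c[0] - d[0]) ** 2 + (c[1] - d[1]) ** 2 >= (2 * R_DISK) ** 2
           for d in centers):
        centers.append(c)
centers = np.array(centers)

# Cast horizontal rays and collect background chords (gaps between disks)
chords = []
for y0 in rng.uniform(0.0, L_BOX, 400):
    dy = centers[:, 1] - y0
    hit = np.abs(dy) < R_DISK
    if hit.sum() < 2:
        continue
    half = np.sqrt(R_DISK ** 2 - dy[hit] ** 2)   # half chord through each disk
    order = np.argsort(centers[hit, 0])
    left = (centers[hit, 0] - half)[order]
    right = (centers[hit, 0] + half)[order]
    chords.extend(g for g in (left[1:] - right[:-1]) if g > 0)

chords = np.array(chords)
mean_chord = float(chords.mean())
```

In this dilute random configuration the gap lengths are close to exponentially distributed (standard deviation of the same order as the mean); reproducing the power-law tails reported above would require the regular lattice arrangements studied in the paper.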
International Nuclear Information System (INIS)
Motozawa, Masaaki; Ito, Takahiro; Iwamoto, Kaoru; Kawashima, Hideki; Ando, Hirotomo; Senda, Tetsuya; Tsuji, Yoshiyuki; Kawaguchi, Yasuo
2013-01-01
Highlights: • Flow over regularly distributed triangular ribs was investigated. • Simultaneous measurement of flow resistance and velocity profile was performed. • Flow resistance was measured directly and the velocity profile was measured by LDV. • Flow resistance was estimated from the information of the velocity field. • The estimated flow resistance is in good agreement with the measured flow resistance. -- Abstract: The relationship between the flow resistance of a turbulent flow over triangular ribs regularly distributed on a wall surface and the velocity distribution around the ribs was investigated experimentally. A concentric cylinder device composed of an inner test cylinder and an outer cylinder was employed to measure, simultaneously, the flow resistance using the torque of the shaft of the inner cylinder and the velocity distribution of the flow around a rib by laser Doppler velocimetry (LDV). We prepared four inner test cylinders having 4, 8, 12 and 16 triangular ribs on the surface with the same interval between them. Each rib had an isosceles right-triangle V-shape and a height of 2 mm. To investigate the relationship between flow resistance and velocity distribution, we estimated the frictional drag and pressure drag acting on the surface of the ribs separately using the velocity distribution. Therefore, we could also estimate the total flow resistance from the velocity distribution. As a result of the experiment, the flow resistance and the reattachment point downstream of the rib were shown to depend on the distance between ribs. Moreover, the flow resistance estimated from the velocity distribution was in good agreement with the flow resistance measured using the torque of the inner cylinder.
Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei
2018-05-01
A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
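The Tikhonov step with a second-order differential regularization matrix can be sketched as follows. The kernel `K` here is a smooth synthetic stand-in, not a real light-scattering (Mie) kernel, and the size grid, wavelength grid and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

m, n = 30, 40                       # wavelengths x size bins (assumed)
d = np.linspace(0.1, 10.0, n)       # particle diameters (um), assumed grid
wl = np.linspace(0.4, 1.0, m)       # incident wavelengths (um), assumed

# Synthetic smooth kernel standing in for a real scattering kernel
K = np.exp(-np.abs(d[None, :] - 5.0 * wl[:, None]))

f_true = np.exp(-0.5 * ((d - 3.0) / 0.8) ** 2)   # assumed Gaussian PSD
g = K @ f_true + 0.01 * rng.normal(size=m)       # noisy measurement

# Second-order differential (curvature-penalizing) regularization matrix
D2 = np.zeros((n - 2, n))
for i in range(n - 2):
    D2[i, i:i + 3] = [1.0, -2.0, 1.0]

def tikhonov(K, g, D, lam):
    """Solve min ||K f - g||^2 + lam * ||D f||^2 in closed form."""
    return np.linalg.solve(K.T @ K + lam * D.T @ D, K.T @ g)

f_hat = tikhonov(K, g, D2, lam=1e-3)
residual = float(np.linalg.norm(K @ f_hat - g))
```

Swapping `D2` for the identity matrix reproduces standard (zeroth-order) Tikhonov regularization, which is the comparison the abstract draws: the curvature penalty favors smooth size distributions, which suits monomodal PSDs better than the unit matrix does.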
Distributed Velocity-Dependent Protocol for Multihop Cellular Sensor Networks
Directory of Open Access Journals (Sweden)
Deepthi Chander
2009-01-01
Full Text Available Cell phones embedded with sensors form a Cellular Sensor Network, which can be used to localize a moving event. The inherent mobility of the application and of the cell phone users warrants distributed, structure-free data aggregation and on-the-fly routing. We propose a Distributed Velocity-Dependent (DVD) protocol to localize a moving event using a Multihop Cellular Sensor Network (MCSN). DVD is based on a novel form of connectivity determined by the waiting time of nodes for a Random Waypoint (RWP) distribution of cell phone users. This paper analyzes the time-stationary and spatial distribution of the proposed waiting time to explain the superior event localization and delay performance of DVD over the existing Randomized Waiting (RW) protocol. A sensitivity analysis is also performed to compare the performance of DVD with RW and the existing Centralized approach.
Distributed Velocity-Dependent Protocol for Multihop Cellular Sensor Networks
Directory of Open Access Journals (Sweden)
Jagyasi Bhushan
2009-01-01
Full Text Available Cell phones embedded with sensors form a Cellular Sensor Network, which can be used to localize a moving event. The inherent mobility of the application and of the cell phone users warrants distributed, structure-free data aggregation and on-the-fly routing. We propose a Distributed Velocity-Dependent (DVD) protocol to localize a moving event using a Multihop Cellular Sensor Network (MCSN). DVD is based on a novel form of connectivity determined by the waiting time of nodes for a Random Waypoint (RWP) distribution of cell phone users. This paper analyzes the time-stationary and spatial distribution of the proposed waiting time to explain the superior event localization and delay performance of DVD over the existing Randomized Waiting (RW) protocol. A sensitivity analysis is also performed to compare the performance of DVD with RW and the existing Centralized approach.
Regularities of the vertical distribution of uranium-molybdenum mineralization
International Nuclear Information System (INIS)
Konstantinov, V.M.; Kazantsev, V.V.; Protasov, V.N.
1980-01-01
The geological structure of one of the ore fields of the uranium-molybdenum formation, pertaining to the northern framing of a large volcano-tectonic depression, is studied. The main uranium deposits are related to necks formed by neck facies of brown liparites. Three zones are singled out within the limits of the ore field. In the upper one there are small ore bodies with a low uranium content, represented by phenolite-chlorite, pitchblende 3-coffinite 3-jordizite and calcinite-sulphide associations; in the middle one - the main ore bodies, formed by pitchblende 1-chlorite, molybdenite 2 (jordizite)-pitchblende 2-hydromica and coffinite 2-pyrite associations; in the lower one - thin veinlets formed by coffinite-molybdenite 1-chlorite, brannerite-pyrite and pitchblende 1-chlorite associations. Dimensions of the ore deposits depend on the neck sizes: in small necks the middle zone and, rarely, the lower one are of industrial interest; in the large ones - the upper, middle and, probably, lower ones. The regularities found can be extended to other deposits of the uranium-molybdenum formation [ru
Sub-cellular distribution and translocation of TRP channels.
Toro, Carlos A; Arias, Luis A; Brauchi, Sebastian
2011-01-01
Cellular electrical activity is the result of highly complex processes that involve the activation of ion channel proteins. Ion channels form pores in cell membranes that rapidly transit between conductive and non-conductive states, allowing different ions to flow down their electrochemical gradients across cell membranes. In the case of neuronal cells, ion channel activity orchestrates action potentials traveling through axons, enabling electrical communication between cells in distant parts of the body. Somatic sensation - our ability to feel touch, temperature and noxious stimuli - requires ion channels able to sense and respond to our peripheral environment. Sensory integration involves the summing of various environmental cues and their conversion into electrical signals. Members of the Transient Receptor Potential (TRP) family of ion channels have emerged as important mediators of both cellular sensing and sensory integration. The regulation of the spatial and temporal distribution of membrane receptors is recognized as an important mechanism for controlling the magnitude of the cellular response and the time scale on which cellular signaling occurs. Several studies have shown that this mechanism is also used by TRP channels to modulate the cellular response and ultimately fulfill their physiological function as sensors. However, the inner workings of this mode of control for TRP channels remain poorly understood. The question of whether TRPs intrinsically regulate their own vesicular trafficking, or whether the dynamic regulation of TRP channel residence on the cell surface is caused by extrinsic changes in the rates of vesicle insertion or retrieval, remains open. This review will examine the evidence that sub-cellular redistribution of TRP channels plays an important role in regulating their activity and explore the mechanisms that control the trafficking of vesicles containing TRP channels.
Directory of Open Access Journals (Sweden)
Yangzexi Liu
2017-01-01
Full Text Available The technology of autonomous vehicles is expected to revolutionize the operation of road transport systems. The penetration rate of autonomous vehicles will be low at the early stage of their deployment, and it is a challenge to explore the effects of autonomous vehicles and their penetration rate on heterogeneous traffic flow dynamics. This paper aims to investigate this issue. An improved cellular automaton was employed as the modeling platform for our study. In particular, two sets of lane-changing rules were designed to address mild and aggressive lane-changing behavior. With extensive simulation studies, we obtained some promising results. First, the introduction of autonomous vehicles to road traffic could considerably improve traffic flow, particularly the road capacity and free-flow speed, and the level of improvement increases with the penetration rate. Second, the lane-changing frequency between neighboring lanes evolves with traffic density along a fundamental-diagram-like curve. Third, the impacts of autonomous vehicles on the collective traffic flow characteristics are mainly related to their smart maneuvers in lane changing and car following, and the car-following impact appears to be the more pronounced of the two.
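The single-lane core of such a traffic cellular automaton can be sketched with a Nagel-Schreckenberg-type update in which autonomous vehicles simply skip the random-slowdown step, a crude stand-in for smart car following; the paper's two-lane model and its lane-changing rules are not reproduced here, and all parameters are assumed values.

```python
import numpy as np

rng = np.random.default_rng(5)

L, N, VMAX, P_SLOW, STEPS = 200, 40, 5, 0.3, 300   # assumed parameters

def run(av_fraction):
    """Mean flow (vehicle-cells traversed per cell per step) on a ring road."""
    pos = np.sort(rng.choice(L, N, replace=False))
    vel = np.zeros(N, dtype=int)
    is_av = rng.random(N) < av_fraction
    flow = 0
    for _ in range(STEPS):
        gaps = (np.roll(pos, -1) - pos - 1) % L      # empty cells ahead
        vel = np.minimum(vel + 1, VMAX)              # accelerate
        vel = np.minimum(vel, gaps)                  # keep a safe distance
        dawdle = (rng.random(N) < P_SLOW) & ~is_av   # only humans dawdle
        vel = np.maximum(vel - dawdle, 0)
        pos = (pos + vel) % L
        order = np.argsort(pos)                      # restore ring ordering
        pos, vel, is_av = pos[order], vel[order], is_av[order]
        flow += vel.sum()
    return flow / (STEPS * L)

flow_human = run(0.0)   # all human drivers
flow_av = run(1.0)      # all autonomous vehicles
```

In this toy setup the all-AV population sustains a higher mean flow than the all-human one at the same density, consistent with the qualitative finding above that the car-following behavior drives most of the improvement.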
Stress Distribution in Graded Cellular Materials Under Dynamic Compression
Directory of Open Access Journals (Sweden)
Peng Wang
Full Text Available Dynamic compression behaviors of density-homogeneous and density-graded irregular honeycombs are investigated using cell-based finite element models under a constant-velocity impact scenario. A method based on the cross-sectional engineering stress is developed to obtain the one-dimensional stress distribution along the loading direction in a cellular specimen. The cross-sectional engineering stress is contributed by two parts: the node-transitive stress and the contact-induced stress, which are caused by the nodal force and the contact of cell walls, respectively. It is found that the contact-induced stress is dominant for the significantly enhanced stress behind the shock front. The stress enhancement and the compaction wave propagation can be observed through the stress distributions in honeycombs under high-velocity compression. The single and double compaction wave modes are observed directly from the stress distributions. Theoretical analysis of the compaction wave propagation in the density-graded honeycombs, based on the R-PH (rigid-plastic hardening) idealization, is carried out and verified by the numerical simulations. It is found that the stress distribution in cellular materials and the compaction wave propagation characteristics under dynamic compression can be approximately predicted by the R-PH shock model.
The Influence of nonuniform activity distribution on cellular dosimetry
International Nuclear Information System (INIS)
Naling, Song; Yuan, Tian; Liangan, Zhang; Guangfu, Dai
2008-01-01
The S value is an important parameter in the determination of absorbed dose in nuclear medicine and radiobiology. The distribution of radioactivity has a significant influence on the S value, especially in microdosimetry. In the present work, a semi-Monte Carlo model is developed to calculate the microdosimetric cellular S value for different micro-distributions of radioactivity, i.e. uniform, linear increase, linear decrease, exponential increase, exponential decrease and centroid distributions. Emission of alpha particles is simulated by the Monte Carlo model, and the energy imparted to the target volume is calculated by the analytical Continuous Slowing Down Approximation (CSDA) model and spline interpolation of the range-energy relationship. We calculate tables of S values for 213Po and 210Po with various dimensions and, most importantly, with various possible micro-distributions of radioactivity, such as linear increase, linear decrease, exponential increase and exponential decrease. We then compare the cell-to-cell S values of the uniform distribution with Hamacher's results to test the feasibility of our model. S values of some nonuniform micro-distributions are compared to the corresponding data of the uniform distribution, and the possible sources of the differences are theoretically analyzed. (author)
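The geometric part of such a calculation can be sketched with a much cruder model than the paper's: sample decay sites according to a chosen micro-distribution, emit a straight alpha track of fixed range, and score the track length inside the nucleus at a constant stopping power. The range, stopping power and cell dimensions below are assumed round numbers, and constant-LET scoring is a deliberate simplification of the CSDA treatment.

```python
import numpy as np

rng = np.random.default_rng(7)

R_CELL, R_NUC = 10.0, 5.0     # cell and nucleus radii (um), as in the abstract
R_ALPHA, LET = 60.0, 0.1      # assumed alpha range (um) and constant LET (MeV/um)

def sample_points(n, r_max):
    """Uniform random points inside a sphere of radius r_max."""
    r = rng.uniform(0.0, r_max ** 3, n) ** (1.0 / 3.0)
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return r[:, None] * v

def track_length_in_nucleus(origins):
    """Length of each straight alpha track lying inside the nucleus sphere."""
    d = rng.normal(size=origins.shape)
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    # Solve |o + t*d| = R_NUC, i.e. t^2 + 2(o.d)t + |o|^2 - R_NUC^2 = 0
    b = np.sum(origins * d, axis=1)
    c = np.sum(origins ** 2, axis=1) - R_NUC ** 2
    disc = b ** 2 - c
    sq = np.sqrt(np.maximum(disc, 0.0))
    t1 = np.where(disc > 0, -b - sq, 0.0)
    t2 = np.where(disc > 0, -b + sq, 0.0)
    return np.clip(t2, 0.0, R_ALPHA) - np.clip(t1, 0.0, R_ALPHA)

N = 20000
# Mean energy imparted to the nucleus per decay (MeV), two source distributions
e_uniform = LET * track_length_in_nucleus(sample_points(N, R_CELL)).mean()
e_nucleus = LET * track_length_in_nucleus(sample_points(N, R_NUC)).mean()
```

Even this crude sketch reproduces the qualitative ordering reported above: confining the activity to the nucleus imparts more energy to the nucleus per decay than a uniform whole-cell distribution.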
Regularization and asymptotic expansion of certain distributions defined by divergent series
Directory of Open Access Journals (Sweden)
Ricardo Estrada
1995-01-01
Full Text Available The regularization of the distribution ∑_{n=−∞}^{∞} δ(x − p^n), which gives a regularized value to the divergent series ∑_{n=−∞}^{∞} φ(p^n), is obtained in several spaces of test functions. The asymptotic expansion as ε → 0+ of series of the type ∑_{n=0}^{∞} φ(ε p^n) is also obtained.
Cellular- and micro-dosimetry of heterogeneously distributed tritium.
Chao, Tsi-Chian; Wang, Chun-Ching; Li, Junli; Li, Chunyan; Tung, Chuan-Jong
2012-01-01
The assessment of radiotoxicity for heterogeneously distributed tritium should be based on the subcellular dose and relative biological effectiveness (RBE) for the cell nucleus. In the present work, geometry-dependent absorbed dose and RBE were calculated using Monte Carlo codes for tritium in the whole cell, cell surface, cytoplasm, or cell nucleus. The PENELOPE (PENetration and Energy LOss of Positrons and Electrons) code was used to calculate the geometry-dependent absorbed dose, lineal energy, and electron fluence spectrum. RBE for intestinal crypt regeneration was calculated using a lineal-energy-dependent biological weighting function. RBE for the induction of DNA double strand breaks was estimated using a nucleotide-level map for clustered DNA lesions of the Monte Carlo damage simulation (MCDS) code. For a typical cell of 10 μm radius and 5 μm nuclear radius, tritium in the cell nucleus resulted in a much higher RBE-weighted absorbed dose than tritium distributed uniformly. Conversely, tritium distributed on the cell surface led to a trivial RBE-weighted absorbed dose due to the irradiation geometry and the strong attenuation of beta particles in the cytoplasm. For tritium uniformly distributed in the cell, the RBE-weighted absorbed dose was larger compared to tritium uniformly distributed in the tissue. Cellular- and micro-dosimetry models were thus developed for the assessment of heterogeneously distributed tritium.
DEFF Research Database (Denmark)
Mikosch, Thomas Valentin; Rackauskas, Alfredas
2010-01-01
In this paper, we deal with the asymptotic distribution of the maximum increment of a random walk with a regularly varying jump size distribution. This problem is motivated by a long-standing problem on change point detection for epidemic alternatives. It turns out that the limit distribution of the maximum increment of the random walk is one of the classical extreme value distributions, the Fréchet distribution. We prove the results in the general framework of point processes and for jump sizes taking values in a separable Banach space...
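A finite-size simulation of this kind of result can be sketched with Pareto jumps (tail index α = 2). The normalization n^{1/α}, the centering by the empirical mean and the single comparison point are illustrative choices; convergence to the Fréchet law is slow at these sample sizes, so the code only checks a crude one-sided bound rather than close agreement.

```python
import numpy as np

rng = np.random.default_rng(6)

ALPHA, N, REPS = 2.0, 2000, 1000

def max_increment(n):
    """Largest increment max over i < j of (S_j - S_i) of a centered walk."""
    jumps = rng.pareto(ALPHA, n) + 1.0        # P(X > x) = x**-ALPHA for x >= 1
    s = np.cumsum(jumps - jumps.mean())       # remove empirical drift
    # For each j, subtract the running minimum of S_i over i < j (incl. S_0 = 0)
    run_min = np.minimum.accumulate(np.r_[0.0, s])[:-1]
    return float(np.max(s - run_min))

a_n = N ** (1.0 / ALPHA)                      # regular-variation scaling
samples = np.array([max_increment(N) for _ in range(REPS)]) / a_n

# The max increment dominates the largest single jump, whose normalized
# maximum has limiting CDF exp(-x**-ALPHA); so the empirical CDF at x = 1
# should sit at or below roughly exp(-1) ~ 0.37.
empirical_cdf_1 = float(np.mean(samples <= 1.0))
```

The heavy upper tail of `samples` (a few values far above the bulk) is the Fréchet signature: the maximum increment is governed by the single largest jump of the walk.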
Cellular and subcellular distribution of BSH in human glioblastoma multiforme
International Nuclear Information System (INIS)
Neumann, M.; Gabel, D.
2000-01-01
The cellular and subcellular distribution of mercaptoundecahydrododecaborate (BSH) in seven glioblastoma multiforme tissue sections of six patients having received BSH prior to surgery was investigated by light, fluorescence and electron microscopy. Using specific antibodies against BSH, its localization in tissue sections was found predominantly (approx. 90%) in the cytoplasm of GFAP-positive cells of all but one patient. The latter was significantly younger (33 years, in contrast to 46-71 (mean 60) years). In none of the tissue sections could BSH be found to a significant amount in the cell nuclei. In contrast, electron microscopy studies show BSH associated both with the cell membrane and with the chromatin in the nucleus. (author)
International Nuclear Information System (INIS)
Kang Zili.
1989-01-01
Based on a summary of Guangxi's geotectonic features and evolutionary regularities, this paper discusses the occurrence features, formation conditions and time-space distribution regularities of various U-rich strata during the development of the geosyncline, platform and diwa stages. Especially during the diwa stage, all those U-rich strata might be reworked to a certain degree, resulting in the mobilization of uranium and its enrichment to form polygenetic composite uranium ore deposits with stratabound features. This study will be helpful for prospecting in the region.
Quasi-regular impurity distribution driven by charge-density wave
International Nuclear Information System (INIS)
Baldea, I.; Badescu, M.
1991-09-01
The displacive motion of an impurity distribution immersed in a one-dimensional system has recently been studied in detail as one kind of quasi-regularity driven by the charge-density wave (CDW). As a further investigation of this problem, we develop here a microscopic model for a different kind of quasi-regular impurity distribution driven by the CDW, consisting of a modulation in the probability of occupied sites. The dependence of the relevant CDW quantities on impurity concentration and temperature is obtained. Data reported for the quasi-1D materials NbSe3 and Ta2NiSe7 (particularly, thermal hysteresis effects at the CDW transition) are interpreted in the framework of the present model. Possible similarities to other physical systems are also suggested. (author). 38 refs, 7 figs
Some regularity of the grain size distribution in nuclear fuel with controllable structure
International Nuclear Information System (INIS)
Loktev, Igor
2008-01-01
It is known that the fission gas release from ceramic nuclear fuel depends on the average grain size. To increase the grain size, additives which activate the sintering of pellets are used. However, the grain size distribution also influences fission gas release: fuels with different structures but the same average grain size show different fission gas release. Other structure elements which influence the operational behavior of fuel are pores and inclusions. Earlier, in Kyoto, questions of the grain size distribution for fuel with a 'natural' structure were discussed. Some regularities of the grain size distribution of fuel with a controllable structure and a high average grain size are considered in this report. The influence of inclusions and pores on the error of the automated determination of structure parameters is shown. A criterion describing the behavior of fuel with a specific grain size distribution is offered.
Research on stress distribution regularity of cement sheaths of radial well based on ABAQUS
Shi, Jihui; Cheng, Yuanfang; Li, Xiaolong; Xiao, Wen; Li, Menglai
2017-12-01
To ensure a desirable outcome of hydraulic fracturing based on ultra-short radius radial systems (URRS), it is required to investigate the stress distribution regularity and stability of the cement sheath. On the basis of the theoretical model of the cement sheath stress distribution, a reservoir mechanical model was built using the finite element software ABAQUS, according to the physical properties of an oil reservoir of the Shengli oilfield. The stress distribution of the casing-cement-sheath-formation system under practical conditions was simulated, and analyses were conducted from multiple points of view. Results show that the stress on the internal interface of the cement sheath exceeds that on the external interface and fluctuates with higher amplitude, which means that the internal interface is the most failure-prone. The unevenness of the cement sheath stress distribution grows with increasing horizontal principal stress ratio, and so does the variation magnitude. This indicates that higher horizontal principal stress ratios are unfavourable for the structural stability of the cement sheath. Both the wellbore quantity of the URRS and the physical properties of the material can affect the cement sheath stress distribution. It is suggested to optimize the quantity of radial wellbores and to use cement with a lower elastic modulus and higher Poisson's ratio. Finally, the impact level of the above factors was analysed with the help of grey correlation analysis.
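The grey correlation (grey relational) analysis used to rank the influence of the factors can be sketched as follows. The reference sequence and factor values are made-up illustrative numbers, and per-sequence range normalization is one of several common GRA conventions, not necessarily the paper's.

```python
import numpy as np

# Made-up illustrative data: a response sequence and three candidate factors
reference = np.array([0.42, 0.55, 0.61, 0.70, 0.78])   # e.g. peak sheath stress
factors = np.array([
    [1.2, 1.5, 1.7, 2.0, 2.3],     # e.g. horizontal principal stress ratio
    [4.0, 6.0, 8.0, 10.0, 12.0],   # e.g. radial wellbore quantity
    [30., 25., 22., 18., 15.],     # e.g. cement elastic modulus (GPa)
])

def grey_relational_grades(ref, seqs, rho=0.5):
    """Grey relational grade of each sequence against the reference."""
    def norm(x):  # range-normalize each sequence to [0, 1]
        return (x - x.min()) / (x.max() - x.min())
    r = norm(ref)
    grades = []
    for s in seqs:
        d = np.abs(r - norm(s))                           # absolute differences
        coef = (d.min() + rho * d.max()) / (d + rho * d.max())
        grades.append(coef.mean())                        # relational grade
    return np.array(grades)

grades = grey_relational_grades(reference, factors)
ranking = np.argsort(grades)[::-1]   # most influential factor first
```

The factor with the highest grade tracks the reference sequence most closely after normalization; `rho` is the conventional distinguishing coefficient, usually set to 0.5.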
International Nuclear Information System (INIS)
Soussaline, F.; Bidaut, L.; Raynaud, C.; Le Coq, G.
1983-06-01
An analytical solution to the SPECT reconstruction problem, in which the actual attenuation effect can be included, was developed using a regularizing iterative method (RIM). The potential of this approach in quantitative brain studies when using a tracer for cerebrovascular disorders is now under evaluation. Mathematical simulations of a distributed activity in the brain surrounded by the skull, and physical phantom studies, were performed using a rotating-camera-based SPECT system, allowing calibration of the system and evaluation of the adapted method. In the simulation studies, the contrast obtained along a profile was less than 5%, the standard deviation 8% and the quantitative accuracy 13%, for a uniform emission distribution of mean = 100 per pixel and two attenuation coefficients of μ = 0.115 cm⁻¹ and 0.5 cm⁻¹. Clinical data obtained after injection of ¹²³I (AMPI), without and with cerebrovascular diseases or lesion defects, were reconstructed using the RIM. Contour-finding techniques were used for delineation of the brain and the skull, and measured attenuation coefficients were assumed within these two regions. Using volumes of interest selected on homogeneous regions of a hemisphere and reported symmetrically, the statistical uncertainty for 300 K events in the tomogram was found to be 12%, and the index of symmetry was 4% for a normal distribution. These results suggest that quantitative SPECT reconstruction of brain distributions is feasible and that, combined with an adapted tracer and an adequate model, physiopathological parameters could be extracted.
Ahn, Sungsook; Seo, Eunseok; Kim, Ki Hean; Lee, Sang Joon
2015-06-01
Nanoparticles have been developed in broad biomedical research in terms of effective cellular interactions to treat and visualize diseased cells. Considering the charge and polar functional groups of proteins that are embedded in cellular membranes, charged nanoparticles have been strategically developed to enhance electrostatic cellular interactions. In this study, we show that cellular uptake efficiency, pathway, and spatial distribution of gold nanoparticles in a cell are significantly modulated based on the surface condition of gold nanoparticles and human cancer cells that were tuned by controlling the pH of the medium and by introducing an electron beam. Cellular uptake efficiency is increased when electrostatic attraction is induced between the cells and the gold nanoparticles. Cell surface modification changes the cellular uptake pathways of the gold nanoparticles and concentrates the gold nanoparticles at the membrane region. Surface modification of the gold nanoparticles also contributes to deep penetration and homogeneous spatial distributions in a cell.
Directory of Open Access Journals (Sweden)
M. P. Sulzer
2004-01-01
We report the observation and analysis of ionization flashes associated with the decay of meteoroids (so-called head echoes) detected by the Arecibo 430 MHz radar during regular ionospheric observations in the spring and autumn equinoxes. These two periods allow pointing well above and nearly into the ecliptic plane at dawn, when the event rate maximizes. The observation of many thousands of events allows a statistical interpretation of the results, which shows a strong tendency for the observed meteoroids to come from the apex, as has been previously reported (Chau and Woodman, 2004). The velocity distributions agree with Janches et al. (2003a) where they are directly comparable, but the azimuth scan used in these observations allows a new perspective. We have constructed a simple statistical model that takes meteor velocities as input and gives radar line-of-sight velocities as output; the intent is to explain the fastest part of the velocity distribution. The speeds interpreted from the measurements are distributed fairly narrowly about nearly 60 km s⁻¹, double the speed of the Earth in its orbit, which is consistent with the interpretation that many of the meteoroids seen by the Arecibo radar are moving in orbits about the Sun with parameters similar to those of the Earth, but in the retrograde direction. However, it is the directional information obtained from the beam-swinging radar experiment, together with the speed, that provides the evidence for this interpretation. Some aspects of the measured velocity distributions suggest that this is not a complete description even of the fast part of the distribution, and it certainly says nothing about the slow part first described in Janches et al. (2003a). Furthermore, we cannot conclude anything about the entire dust population, since there are probably selection effects that restrict the observations to a subset of the population.
DEFF Research Database (Denmark)
Mikosch, Thomas Valentin; Moser, Martin
2013-01-01
We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable.
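The Fréchet limit can be illustrated numerically. In this sketch the jumps are Pareto-tailed and shifted to give the walk negative drift (our assumption, so that the maximum increment is dominated by the largest single jump); the Kadane-style scan and all parameters are illustrative, not from the paper:

```python
import math, random

def max_increment(jumps):
    """Maximum increment max over i<j of (S_j - S_i) of the walk S_k,
    computed with a one-pass Kadane-style scan (an implementation choice)."""
    best = run = 0.0
    for x in jumps:
        run = max(run + x, 0.0)
        best = max(best, run)
    return best

def pareto(alpha, rng):
    # P(X > x) = x^(-alpha) for x >= 1: a regularly varying tail.
    return (1.0 - rng.random()) ** (-1.0 / alpha)

rng = random.Random(0)
alpha, n, trials = 1.5, 2000, 300
# Subtract more than the mean so the walk has negative drift; then the
# maximum increment is governed by the heaviest jump.
shift = alpha / (alpha - 1.0) + 4.0
samples = []
for _ in range(trials):
    jumps = [pareto(alpha, rng) - shift for _ in range(n)]
    samples.append(max_increment(jumps) / n ** (1.0 / alpha))

# The empirical CDF of the normalized maximum increment should be close
# to the Fréchet law exp(-x^(-alpha)).
x = 1.0
ecdf = sum(s <= x for s in samples) / trials
frechet = math.exp(-x ** (-alpha))
```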
On the Meta Distribution of Coverage Probability in Uplink Cellular Networks
Elsawy, Hesham; Alouini, Mohamed-Slim
2017-01-01
This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP) …
Fu, Qiang; Su, Zhixin; Cheng, Yuqiang; Wang, Zhaofei; Li, Shiyu; Wang, Heng'an; Sun, Jianhe; Yan, Yaxian
In order to investigate the diverse characteristics of clustered regularly interspaced short palindromic repeat (CRISPR) arrays and the distribution of virulence factor genes in avian Escherichia coli, 80 E. coli isolates obtained from chickens, comprising avian pathogenic E. coli (APEC) and avian fecal commensal E. coli (AFEC), were identified. Using multiplex polymerase chain reaction (PCR), five genes were subjected to phylogenetic typing, and the strains were examined for CRISPR arrays to study their genetic relatedness. The strains were further analyzed for CRISPR loci and virulence factor genes to determine a possible association between their CRISPR elements and their potential virulence. The strains were divided into five phylogenetic groups: A, B1, B2, D and E. It was confirmed that two types of CRISPR arrays, CRISPR1 and CRISPR2, which contain up to 246 distinct spacers, were amplified in most of the strains. Further classification of the isolates was achieved by sorting them into nine CRISPR clusters based on their spacer profiles, which indicates a candidate typing method for E. coli. Several significant differences in invasion-associated gene distribution were found between the APEC and AFEC isolates. Our results identified the distribution of 11 virulence genes and the CRISPR diversity in the 80 strains. It was demonstrated that, with the exception of iucD and aslA, there was no sharp demarcation in gene distribution between the pathogenic (APEC) and commensal (AFEC) strains, while the total number of identified CRISPR spacers may have a positive correlation with the potential pathogenicity of the E. coli isolates. Copyright © 2016. Published by Elsevier Masson SAS.
Variations and Regularities in the Hemispheric Distributions in Sunspot Groups of Various Classes
Gao, Peng-Xin
2018-05-01
The present study investigates the variations and regularities in the distributions in sunspot groups (SGs) of various classes in the northern and southern hemispheres from Solar Cycles (SCs) 12 to 23. Here, we use the separation scheme that was introduced by Gao, Li, and Li ( Solar Phys. 292, 124, 2017), which is based on A/U ( A is the corrected area of the SG, and U is the corrected umbral area of the SG), in order to separate SGs into simple SGs (A/U ≤ 4.5) and complex SGs (A/U > 6.2). The time series of Greenwich photoheliographic results from 1875 to 1976 (corresponding to complete SCs 12 - 20) and Debrecen photoheliographic data during the period 1974 - 2015 (corresponding to complete SCs 21 - 23) are used to show the distributions of simple and complex SGs in the northern and southern hemispheres. The main results we obtain are reported as follows: i) the larger of the maximum annual simple SG numbers in the two hemispheres and the larger of the maximum annual complex SG numbers in the two hemispheres occur in different hemispheres during SCs 12, 14, 18, and 19; ii) the relative changing trends of two curves - cumulative SG numbers in the northern and southern hemispheres - for simple SGs are different from those for complex SGs during SCs 12, 14, 18, and 21; and iii) there are discrepancies between the dominant hemispheres of simple and complex SGs for SCs 12, 14, 18, and 21.
Distribution of cellular HSV-1 receptor expression in human brain.
Lathe, Richard; Haas, Juergen G
2017-06-01
Herpes simplex virus type 1 (HSV-1) is a neurotropic virus linked to a range of acute and chronic neurological disorders affecting distinct regions of the brain. Unusually, HSV-1 entry into cells requires the interaction of viral proteins glycoprotein D (gD) and glycoprotein B (gB) with distinct cellular receptor proteins. Several different gD and gB receptors have been identified, including TNFRSF14/HVEM and PVRL1/nectin 1 as gD receptors and PILRA, MAG, and MYH9 as gB receptors. We investigated the expression of these receptor molecules in different areas of the adult and developing human brain using online transcriptome databases. Whereas all HSV-1 receptors showed distinct expression patterns in different brain areas, the Allen Brain Atlas (ABA) reported increased expression of both gD and gB receptors in the hippocampus. Specifically, for PVRL1, TNFRFS14, and MYH9, the differential z scores for hippocampal expression, a measure of relative levels of increased expression, rose to 2.9, 2.9, and 2.5, respectively, comparable to the z score for the archetypical hippocampus-enriched mineralocorticoid receptor (NR3C2, z = 3.1). These data were confirmed at the Human Brain Transcriptome (HBT) database, but HBT data indicate that MAG expression is also enriched in the hippocampus. The HBT database allowed the developmental pattern of expression to be investigated; we report that all HSV-1 receptors markedly increase in expression levels between gestation and the postnatal/adult periods. These results suggest that differential receptor expression levels of several HSV-1 gD and gB receptors in the adult hippocampus are likely to underlie the susceptibility of this brain region to HSV-1 infection.
Energy Distribution of a Regular Black Hole Solution in Einstein-Nonlinear Electrodynamics
Directory of Open Access Journals (Sweden)
I. Radinschi
2015-01-01
A study about the energy momentum of a new four-dimensional spherically symmetric, static and charged, regular black hole solution developed in the context of general relativity coupled to nonlinear electrodynamics is presented. Asymptotically, this new black hole solution behaves as the Reissner-Nordström solution only for the particular value μ=4, where μ is a positive integer parameter appearing in the mass function of the solution. The calculations are performed by use of the Einstein, Landau-Lifshitz, Weinberg, and Møller energy momentum complexes. In all the aforementioned prescriptions, the expressions for the energy of the gravitating system considered depend on the mass M of the black hole, its charge q, a positive integer α, and the radial coordinate r. In all these pseudotensorial prescriptions, the momenta are found to vanish, while the Landau-Lifshitz and Weinberg prescriptions give the same result for the energy distribution. In addition, the limiting behavior of the energy for the cases r→∞, r→0, and q=0 is studied. The special case μ=4 and α=3 is also examined. We conclude that the Einstein and Møller energy momentum complexes can be considered as the most reliable tools for the study of the energy momentum localization of a gravitating system.
Some regularities in the distribution of kenophytes in the Polish Carpathians and their foreland
Directory of Open Access Journals (Sweden)
Zając Maria
2015-03-01
The Polish Carpathians and their northern foreland are a rewarding object for kenophyte distribution research. The study, using the cartogram method, showed that the number of kenophyte species decreases with increasing altitude. Only a few kenophytes were found in the lower forest zone. This regularity also concerns species that reach higher altitudes in the mountains of their native lands. A number of species migrated into the Carpathians along rivers and streams. River valleys generate many open habitats, which are easily colonized by kenophytes owing to the lack of competition. In the Carpathians, towns used to be founded in the mountain valleys, and this was also a factor favouring kenophyte propagation. The arrangement of mountain ranges in the Polish Carpathians and their foreland hindered the migration of some species and made it possible to identify probable migration routes into the study area. Tracing these migration routes was possible only for those species that have not yet occupied the whole available area. Additionally, the study indicated the most dangerous invasive species in the Polish Carpathians and their foreland.
Cellular complexity in subcortical white matter: a distributed control circuit?
Colombo, Jorge A
2018-03-01
The subcortical white matter (SWM) has traditionally been considered a site of passive, neutral information transfer through cerebral cortex association and projection fibers. Yet the presence of subcortical neuronal and glial "interstitial" cells expressing immunolabelled neurotransmitters/neuromodulators and synaptic vesicular proteins, recent immunohistochemical and electrophysiological observations on the rat visual cortex, and the interactive regulation of myelinating processes support the possibility that the SWM nests subcortical, regionally variable, distributed neuronal-glial circuits that could influence information transfer. Their hypothetical involvement in regulating the timing and signal-transfer probability of the SWM axonal components ought to be considered and experimentally analysed. Thus, the "interstitial" neuronal cells, associated with local glial cells and traditionally considered vestigial and functionally inert under normal conditions, may well turn out to be critical in regulating information transfer in the SWM.
Reali, Florencia; Griffiths, Thomas L.
2009-01-01
The regularization of linguistic structures by learners has played a key role in arguments for strong innate constraints on language acquisition, and has important implications for language evolution. However, relating the inductive biases of learners to regularization behavior in laboratory tasks can be challenging without a formal model. In this…
International Nuclear Information System (INIS)
Li Zuoan; Li Kelin
2009-01-01
In this paper, we investigate a class of impulsive fuzzy cellular neural networks with distributed delays and reaction-diffusion terms. By employing the delay differential inequality with impulsive initial conditions and M-matrix theory, we find some sufficient conditions ensuring the existence, uniqueness and global exponential stability of equilibrium point for impulsive fuzzy cellular neural networks with distributed delays and reaction-diffusion terms. In particular, the estimate of the exponential converging index is also provided, which depends on the system parameters. An example is given to show the effectiveness of the results obtained here.
International Nuclear Information System (INIS)
Wang Yixuan; Xiong Wanmin; Zhou Qiyuan; Xiao Bing; Yu Yuehua
2006-01-01
In this Letter, cellular neural networks with continuously distributed delays and impulses are considered. Sufficient conditions for the existence and global exponential stability of a unique equilibrium point are established by using the fixed point theorem and differential inequality techniques. The results of this Letter are new and complement previously known results.
Secure Real-Time Monitoring and Management of Smart Distribution Grid using Shared Cellular Networks
DEFF Research Database (Denmark)
Nielsen, Jimmy Jessen; Ganem, Hervé; Jorguseski, Ljupco
2017-01-01
… capabilities. Thanks to the advanced measurement devices, management framework, and secure communication infrastructure developed in the FP7 SUNSEED project, the Distribution System Operator (DSO) now has full observability of the energy flows at the medium/low voltage grid. Furthermore, the prosumers are able …, where the smart grid ICT solutions are provided through shared cellular LTE networks.
On the distribution and mean of received power in stochastic cellular network
Cao, Fengming; Ganesh, Ayalvadi; Armour, Simon; Sooriyabandara, Mahesh
2016-01-01
This paper exploits the distribution and mean of received power in a cellular network with stochastic network modelling to study the difference between two cell-association criteria: strongest-received-power-based association and closest-distance-based association. We derive analytical expressions for the distribution and mean of the nth strongest received power and of the received power from the nth nearest base station, and the derivations have been c…
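The gap between the two association criteria can be probed with a small simulation. Everything here (base-station density, disk radius, path-loss exponent, Rayleigh fading, and the bounded path loss near the origin) is an illustrative assumption, not the paper's model:

```python
import math, random

def poisson_knuth(mean, rng):
    # Knuth's product-of-uniforms Poisson sampler (fine for small means).
    L, k, p = math.exp(-mean), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def one_realization(lam, radius, eta, rng):
    """Drop base stations as a PPP on a disk around a user at the origin;
    return (received powers, distances) under Rayleigh fading."""
    n = poisson_knuth(lam * math.pi * radius ** 2, rng)
    powers, dists = [], []
    for _ in range(n):
        r = radius * math.sqrt(rng.random())      # uniform over the disk area
        h = -math.log(1.0 - rng.random())         # Exp(1) fading power
        powers.append(h * max(r, 0.1) ** (-eta))  # bounded path loss near r=0
        dists.append(r)
    return powers, dists

rng = random.Random(1)
trials, strongest_strictly_wins = 300, 0
for _ in range(trials):
    powers, dists = one_realization(lam=0.05, radius=10.0, eta=4.0, rng=rng)
    if not powers:
        continue
    p_strongest = max(powers)
    p_nearest = powers[dists.index(min(dists))]
    # Strongest-power association can never receive less than nearest-BS
    # association, and fading makes it strictly better in many realizations.
    assert p_strongest >= p_nearest
    if p_strongest > p_nearest:
        strongest_strictly_wins += 1
```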
Global exponential stability of mixed discrete and distributively delayed cellular neural network
International Nuclear Information System (INIS)
Yao Hong-Xing; Zhou Jia-Yan
2011-01-01
This paper concerns the analysis of the global exponential stability of a class of recurrent neural networks with mixed discrete and distributed delays. It first proves the existence and uniqueness of the equilibrium point; then, by employing the Lyapunov-Krasovskii functional and the Young inequality, it gives a sufficient condition for the global exponential stability of cellular neural networks with mixed discrete and distributed delays. An example is provided to illustrate the applicability of the result. (general)
Directory of Open Access Journals (Sweden)
Tamrazyan Ashot Georgievich
2012-10-01
Accurate and adequate description of external influences and of the bearing capacity of the structural material requires the employment of probability theory methods. In this regard, a characteristic that describes the probability of failure-free operation is required. Reliability here means that the maximum stress caused by the action of the load will not exceed the bearing capacity. In this paper, the author presents a solution to the problem of structural calculation, namely, identification of the reliability of pre-set design parameters, in particular cross-sectional dimensions. If the load distribution pattern is available, employing the regularities of the distribution functions makes it possible to find the pattern of distribution of maximum stresses over the structure. Similarly, we can proceed to the design of structures of pre-set rigidity, reliability and stability in the case of a regular load distribution. We consider a design element (a monolithic concrete slab) whose maximum stress S depends linearly on the load q. Within a pre-set period of time, exceedances are assumed to follow the Poisson law. The analysis demonstrates that the variability of the bearing capacity produces a stronger effect on the relative sizes of the cross sections of a slab than the variability of loads. It is therefore particularly important to reduce the coefficient of variation of the load capacity. One of the methods contemplates truncation of the bearing-capacity distribution by pre-culling the construction material.
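The claim that capacity variability matters more than load variability can be checked with a Monte Carlo sketch. The lognormal load/capacity models and all numbers below are our assumptions, not the paper's; with lognormal models, variability lowers the capacity's median, which is what drives the asymmetry:

```python
import math, random

def failure_probability(load_mean, load_cv, cap_mean, cap_cv, n, rng):
    """Monte Carlo estimate of P(stress > capacity) for lognormal load
    and capacity; cv is the coefficient of variation."""
    def lognormal(mean, cv):
        # Parametrize a lognormal by its mean and coefficient of variation.
        sigma2 = math.log(1.0 + cv * cv)
        mu = math.log(mean) - 0.5 * sigma2
        return rng.lognormvariate(mu, math.sqrt(sigma2))

    fails = sum(lognormal(load_mean, load_cv) > lognormal(cap_mean, cap_cv)
                for _ in range(n))
    return fails / n

rng = random.Random(42)
# Hypothetical slab: same mean load (100) and mean capacity (200), but the
# large coefficient of variation sits on the load in one case and on the
# bearing capacity in the other.
p_var_load = failure_probability(100, 0.30, 200, 0.10, 20000, rng)
p_var_cap  = failure_probability(100, 0.10, 200, 0.30, 20000, rng)
```

Under these assumptions `p_var_cap` comes out larger than `p_var_load`, echoing the abstract's conclusion that reducing the coefficient of variation of the load capacity is the more effective lever.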
Prot, Olivier; SantolíK, OndřEj; Trotignon, Jean-Gabriel; Deferaudy, Hervé
2006-06-01
An entropy regularization algorithm (ERA) has been developed to compute the wave-energy density from electromagnetic field measurements. It is based on the wave distribution function (WDF) concept. To assess its suitability and efficiency, the algorithm is applied to experimental data that have already been analyzed using other inversion techniques. The FREJA satellite data used consist of six spectral matrices corresponding to six time-frequency points of an ELF hiss-event spectrogram. The WDF analysis is performed on these six points and the results are compared with those obtained previously. A statistical stability analysis confirms the stability of the solutions. The WDF computation is fast and requires no prespecified parameters. The regularization parameter has been chosen in accordance with Morozov's discrepancy principle. The generalized cross-validation and L-curve criteria are then tentatively used to provide a fully data-driven method; however, these criteria fail to determine a suitable value of the regularization parameter. Although the entropy regularization leads to solutions that agree fairly well with those already published, some differences are observed, and these are discussed in detail. The main advantage of the ERA is that it returns the WDF exhibiting the largest entropy and avoids the use of a priori models, which sometimes seem more accurate but without any justification.
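Morozov's discrepancy principle can be illustrated on a toy Tikhonov deconvolution problem. This is a generic sketch, not the WDF/ERA inversion itself; the Gaussian blur operator, noise level, λ grid, and the 1.1 safety factor on the discrepancy are all our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
idx = np.arange(n)
# Smooth (hence ill-conditioned) Gaussian blur as the forward operator.
G = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / 8.0)
G /= G.sum(axis=1, keepdims=True)
x_true = np.sin(np.linspace(0.0, np.pi, n))
noise = 0.01 * rng.standard_normal(n)
y = G @ x_true + noise
delta = np.linalg.norm(noise)          # noise level, assumed known

def tikhonov(G, y, lam):
    """Tikhonov-regularized solution of G x = y."""
    return np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ y)

# Morozov's discrepancy principle: scan lambda from large to small and
# keep the first (largest) value whose residual drops to the noise level.
lam_chosen, x_reg = None, None
for lam in np.logspace(0, -10, 60):
    x = tikhonov(G, y, lam)
    if np.linalg.norm(G @ x - y) <= 1.1 * delta:
        lam_chosen, x_reg = lam, x
        break
```

Choosing λ this way stops the fit exactly where the residual reaches the noise floor, avoiding the noise amplification that an essentially unregularized solve would produce.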
QoE-Driven D2D Media Services Distribution Scheme in Cellular Networks
Directory of Open Access Journals (Sweden)
Mingkai Chen
2017-01-01
Device-to-device (D2D) communication has been widely studied to improve network performance and is considered a potential technological component of next-generation communication. Considering diverse user demands, Quality of Experience (QoE) is recognized as a new measure of user satisfaction with media service transmission in wireless communication. We aim at promoting the user's Mean Opinion Score (MOS) to quantify and analyze QoE in dynamic cellular networks. In this paper, we explore heterogeneous media service distribution in D2D communications underlaying cellular networks to improve total user QoE. We propose a novel media service scheme based on different QoE models that jointly solves the massive media content dissemination issue in cellular networks. Moreover, we investigate the so-called Media Service Adaptive Update Scheme (MSAUS) framework to maximize user QoE satisfaction, and we derive popularity and priority functions for the QoE expressions of different media services. We then design a Media Service Resource Allocation (MSRA) algorithm that schedules limited cellular network resources based on the popularity function, optimizing total user QoE satisfaction and avoiding D2D interference. Numerical simulation results indicate that the proposed scheme is more effective for cellular network content delivery, which makes it suitable for various kinds of media service propagation.
DEFF Research Database (Denmark)
Missov, Trifon I.; Schöley, Jonas
… to this criterion, admissible distributions are, for example, the gamma, the beta, the truncated normal, the log-logistic and the Weibull, while distributions like the log-normal and the inverse Gaussian do not satisfy this condition. In this article we show that models with admissible frailty distributions and a Gompertz baseline provide a better fit to adult human mortality data than the corresponding models with non-admissible frailty distributions. We implement estimation procedures for mixture models with a Gompertz baseline and frailty that follows a gamma, truncated normal, log-normal, or inverse Gaussian …
The cellular distribution of histone H5 in embryonic and adult tissues of Xenopus laevis and chicken
Moorman, A. F.; de Boer, P. A.; Lamers, W. H.; Charles, R.
1986-01-01
The cellular distribution of histone H5 in embryonic and adult tissues of Xenopus laevis and chicken has been established with monoclonal antibodies to histone H5. Both in Xenopus and in chicken, the protein presumably has a more widespread cellular distribution than hitherto expected but is absent …
International Nuclear Information System (INIS)
Park, Ju H.
2007-01-01
This paper considers the robust stability analysis of cellular neural networks with discrete and distributed delays. Based on the Lyapunov stability theory and linear matrix inequality (LMI) technique, a novel stability criterion guaranteeing the global robust convergence of the equilibrium point is derived. The criterion can be solved easily by various convex optimization algorithms. An example is given to illustrate the usefulness of our results
International Nuclear Information System (INIS)
Bumba, V.; Hejna, L.
1988-01-01
From the comparison of several modes of time development of the latitudinal distribution of solar magnetic fields, obtained by different authors using different basic observational material and different methods, the following results were obtained: At high solar latitudes (|φ| ≳ 40°) all distributions agree irrespective of the method of construction. In zones of activity around the solar equator, there is a qualitatively good but quantitatively poor agreement between the integrated, directly observed fields (from Mt. Wilson Observatory) and the highly integrated fields derived from Hα synoptic charts. The mode of field distribution at high latitudes, more uniform and unipolar, is probably different from the field distribution at low latitudes, where the more concentrated leading-polarity fields occupy practically the same area as the less concentrated following-polarity fields, if they are highly integrated. The large difference between Makarov's distribution and the other modes of distribution seems natural if we take the method of construction into account, and very probably represents its close relationship with the smaller magnetic field elements connected with newer activity, while the other types of distribution demonstrate larger-scale, redistributed, older fields. The areas covered by the positive and negative polarities on the whole Sun during the investigated one and a half solar cycles (Nos. 20 and 21) are practically equal. (author). 5 figs., 10 refs
Neti, Prasad V.S.V.; Howell, Roger W.
2010-01-01
Recently, the distribution of radioactivity among a population of cells labeled with ²¹⁰Po was shown to be well described by a log-normal (LN) distribution function (J Nucl Med. 2006;47:1049-1058) with the aid of autoradiography. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports a detailed statistical analysis of these earlier data. Methods: The measured distributions of α-particle tracks per cell were subjected to statistical tests with Poisson, LN, and Poisson-lognormal (P-LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL of ²¹⁰Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log-normal. Conclusion: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:18483086
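The interplay of Poisson counting statistics with an underlying log-normal activity distribution can be sketched as follows; the parameters are illustrative, not fitted to the cited data:

```python
import math, random

def poisson_lognormal_counts(mu, sigma, n, rng):
    """Tracks per cell: each cell's activity A is lognormal, and the
    observed autoradiographic track count is Poisson with mean A."""
    counts, activities = [], []
    for _ in range(n):
        a = rng.lognormvariate(mu, sigma)
        activities.append(a)
        # Knuth Poisson sampler (adequate for the modest means used here).
        L, k, p = math.exp(-a), 0, 1.0
        while p > L:
            k += 1
            p *= rng.random()
        counts.append(k - 1)
    return counts, activities

rng = random.Random(7)
counts, acts = poisson_lognormal_counts(mu=1.5, sigma=0.5, n=20000, rng=rng)
mean_c = sum(counts) / len(counts)
mean_a = sum(acts) / len(acts)
var_c = sum((c - mean_c) ** 2 for c in counts) / len(counts)
var_a = sum((a - mean_a) ** 2 for a in acts) / len(acts)
# Poisson counting noise preserves the mean but inflates the spread:
# Var(N) = E[A] + Var(A), which is the distortion the abstract warns about.
```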
Impelluso, Thomas J
2003-06-01
An algorithm for bone remodeling is presented that allows for both a redistribution of density and a continuous change of principal material directions for the orthotropic material properties of bone. It employs a modal analysis to add density for growth and a local effective-strain-based analysis to redistribute density. General redistribution functions are presented. The model utilizes theories of cellular solids to relate density and strength. The code predicts the same general density distributions and local orthotropy as observed in reality.
Distributivity of the algebra of regular open subsets of βR \ R
Czech Academy of Sciences Publication Activity Database
Balcar, Bohuslav; Hrušák, M.
2005-01-01
Vol. 149, No. 1 (2005), pp. 1-7. ISSN 0166-8641. R&D Projects: GA ČR(CZ) GA201/03/0933; GA ČR(CZ) GA201/02/0857. Institutional research plan: CEZ:AV0Z10190503. Keywords: distributivity of Boolean algebras * cardinal invariants of the continuum * Čech-Stone compactification. Subject RIV: BA - General Mathematics. Impact factor: 0.297, year: 2005
General regularities of ⁹⁰Sr distribution in the soil-plant system under natural conditions
International Nuclear Information System (INIS)
Gudeliene, I.; Marchiulioniene, D.; Petroshius, R.
2006-01-01
⁹⁰Sr distribution in the system 'soil - underground part of plant - aboveground part of plant' was investigated. It was determined that the ⁹⁰Sr activity concentration in the underground and aboveground parts of plants and in mosses did not depend on its activity concentration in the soil. There was a direct dependence of the ⁹⁰Sr activity concentration in the aboveground parts on that in the underground parts of plants. The ⁹⁰Sr transfer factor from soil to the underground parts of plants and to mosses was directly dependent on the radionuclide's activity concentration in them. (authors)
Cellular Neural Network-Based Methods for Distributed Network Intrusion Detection
Directory of Open Access Journals (Sweden)
Kang Xie
2015-01-01
To address the problems of current distributed-architecture intrusion detection systems (DIDS), a new online distributed intrusion detection model based on cellular neural networks (CNN) was proposed, in which a discrete-time CNN (DTCNN) is used as the weak classifier in each local node and a state-controlled CNN (SCCNN) is used as the global detection method. We further propose a new method for designing the template parameters of the SCCNN by solving a linear matrix inequality. Experimental results based on the KDD CUP 99 dataset show its feasibility and effectiveness. Emerging evidence indicates that this new approach is amenable to parallelism and analog very-large-scale integration (VLSI) implementation, which allows distributed intrusion detection to be performed more effectively.
International Nuclear Information System (INIS)
Smith, G.T.; Hubner, K.F.; Goodman, M.M.; Stubbs, J.B.
1992-01-01
Positron emission tomography (PET) has been used to measure tissue radiotracer concentration in vivo. The radiochemical distribution can be determined with compartmental model analysis. A two-compartment model describes the kinetics of N-13 ammonia (¹³NH₃) in the myocardium. The model consists of a vascular space, Q₁, and a space for ¹³NH₃ bound within the tissue, Q₂. The differential equations for the model can be written: Ẋ(t) = AX(t) + BU(t), Y(t) = CX(t) + DU(t) (1), where X(t) is the column vector [Q₁(t); Q₂(t)], U(t) is the arterial input activity measured from the left-ventricular blood pool, and Y(t) is the measured tissue activity using PET. The matrices A, B, C, and D depend on physiological parameters describing the kinetics of ¹³NH₃ in the myocardium. The estimated parameter matrices in Equation 1 have been validated in dog experiments by measuring myocardial perfusion with dynamic PET scanning and intravenous injection of ¹³NH₃. Tracer concentrations for each compartment can be calculated by direct integration of Equation 1. If the cellular-level distribution of each compartment is known, the concentration of tracer within the intracellular and extracellular space can be determined. Applications of this type of modeling include parameter estimation for the measurement of physiological processes, organ-level dosimetry, and determination of cellular radiotracer distribution.
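The direct integration of the state-space model described above can be sketched with a minimal forward-Euler integrator. The rate constants and the mono-exponential arterial input curve are hypothetical choices for illustration, not values from the study:

```python
import math

def simulate_lti(A, B, u, x0, t_end, dt):
    """Forward-Euler integration of X'(t) = A X(t) + B u(t)."""
    x = list(x0)
    n = len(x)
    for s in range(int(t_end / dt)):
        ut = u(s * dt)
        dx = [sum(A[i][j] * x[j] for j in range(n)) + B[i] * ut
              for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
    return x

# Hypothetical rate constants (illustrative, not fitted values):
# k1 = uptake from the vascular space Q1 into the bound tissue space Q2,
# k2 = release back into the vascular space.
k1, k2 = 0.6, 0.1
A = [[-k1, k2], [k1, -k2]]
B = [1.0, 0.0]                      # arterial input enters the vascular space
u = lambda t: math.exp(-t)          # toy arterial input curve with unit area
q1, q2 = simulate_lti(A, B, u, x0=[0.0, 0.0], t_end=30.0, dt=0.01)
y = q1 + q2                         # PET measures total tissue activity
```

Because the columns of A sum to zero, tracer is only exchanged between compartments, so the total activity `y` approaches the integrated input; at late times the bound fraction settles at k1/(k1+k2) of the total.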
International Nuclear Information System (INIS)
Bracken, W.M.; Sharma, R.P.; Bourcier, D.R.
1984-01-01
A bovine kidney cell culture system was used to assess the relationship of mercuric chloride (HgCl₂) uptake and subcellular distribution to cytotoxicity. Twenty-four-hour incubations with 0.05-50 μM HgCl₂ elicited a concentration-related cytotoxicity. Cellular accumulation of ²⁰³Hg was also concentration-related, with 1.0 nmol/10⁶ cells at the IC50. Measurement of Hg uptake over the 24-h exposure period revealed a multiphasic process. Peak accumulation was attained by 1 h and was followed by extrusion and plateauing of intracellular Hg levels. Least-squares regression analysis of the cytotoxicity and cellular uptake data indicated a potential relationship between Hg uptake and cytotoxicity. However, the subcellular distribution of Hg was not concentration-related. Mitochondrial and soluble protein fractions accounted for greater than 65% of the cell-associated Hg at all concentrations. The remaining Hg was distributed between the microsomal (6-10%) and nuclear and cell debris (11-22%) fractions at all concentrations tested. Less than 20% of the total cell-associated Hg was bound to metallothionein-like protein. 31 references, 4 figures, 3 tables
López, Leonardo; Burguerner, Germán; Giovanini, Leonardo
2014-04-12
The spread of an infectious disease is determined by biological and social factors. Models based on cellular automata are adequate to describe such natural systems, consisting of a massive collection of simple interacting objects. They characterize the time evolution of the global system as the emergent behaviour resulting from the interaction of the objects, whose behaviour is defined through a set of simple rules that encode the individual behaviour and the transmission dynamics. An epidemic is characterized through an individual-based model built upon cellular automata. In the proposed model, each individual of the population is represented by a cell of the automaton. This way of modeling an epidemic allows one to define the characteristics of each individual, establish different scenarios, and implement control strategies. A cellular automata model to study the time evolution of heterogeneous populations through the various stages of disease is proposed, allowing the inclusion of individual heterogeneity, geographical characteristics, and the social factors that determine the dynamics of the disease. Different assumptions made to build the classical model were evaluated, leading to the following results: i) for a low contact rate (as in a quarantine process or in low-density population areas) the number of infective individuals is lower than in areas where the contact rate is higher, and ii) different initial spatial distributions of infected individuals yield different epidemic dynamics, owing to their influence on the transition rate and the reproductive ratio of the disease. The contact rate and the spatial distributions play a central role in the spread of a disease. For low-density populations the spread is very low, and the number of infected individuals is lower than in highly populated areas. The spatial distribution of the population and of the disease focus, as well as the geographical characteristics of the area, play a central role in the dynamics of the epidemic.
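A minimal sketch of an individual-based epidemic automaton in this spirit (the SIR states, 4-cell neighbourhood, grid size, and rates below are illustrative choices, not the paper's parameters) shows the role of the contact rate:

```python
import random

def run_sir_ca(size=40, p_infect=0.3, t_inf=5, steps=60, seed=1):
    """SIR cellular automaton on a toroidal grid: each cell is an individual;
    infection spreads to the 4-neighbourhood with per-contact probability
    p_infect (a stand-in for the contact rate); infected cells recover
    after t_inf steps. Returns the number of ever-infected individuals."""
    rng = random.Random(seed)
    S, I, R = 0, 1, 2
    state = [[S] * size for _ in range(size)]
    timer = [[0] * size for _ in range(size)]
    state[size // 2][size // 2] = I          # single initial disease focus
    for _ in range(steps):
        new = [row[:] for row in state]      # synchronous update
        for x in range(size):
            for y in range(size):
                if state[x][y] == S:
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = (x + dx) % size, (y + dy) % size
                        if state[nx][ny] == I and rng.random() < p_infect:
                            new[x][y] = I
                            break
                elif state[x][y] == I:
                    timer[x][y] += 1
                    if timer[x][y] >= t_inf:
                        new[x][y] = R
        state = new
    return sum(c != S for row in state for c in row)

high = run_sir_ca(p_infect=0.5, seed=2)   # high contact rate
low = run_sir_ca(p_infect=0.05, seed=2)   # quarantine-like contact rate
```

Consistent with result i) above, the low-contact run produces far fewer infections than the high-contact run.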
Study of regularities of distributing powdered dietetic additives in coarse dispersed foodstuffs
Directory of Open Access Journals (Sweden)
M. Pogozhikh
2017-12-01
An important intervention in the composition of food products is the enrichment of food with micronutrients. In this regard, the authors investigated how an additive with the corresponding trace element is distributed in a food product, in this case minced meat, in order to meet human needs for microelements. Micronutrient deficiencies have a significant impact on the nutritional status and health of the population in both developed and developing countries. These deficiencies cause delayed growth in children, various diseases, mortality, brain damage, and reduced cognitive capacity in people of all ages. The global scale of micronutrient deficiencies in dietary intakes, in particular the lack of trace elements, has led to the development of powdered dietary supplements containing essential elements that enrich coarse-type food products to increase their nutritional value. The dietary supplement should provide the daily requirement of trace elements in the human body; therefore, it should be added to the product in a normalized amount and evenly distributed in the product. Nuclear magnetic resonance (NMR) and electron paramagnetic resonance (EPR) analyses were performed to determine the distribution of the additive in food products. The analysis was carried out in two stages: study of molecular mobility by measuring the spin-spin relaxation time (T₂) and the spin-lattice relaxation time (T₁) on a pulsed NMR spectrometer, and establishment of a connection between the exponent of the amplitude of the sample, A₀, and its mass. Based on the data obtained as a result of the measurement, a curve is constructed for the dependence of the amplitude of the echo signal on the time interval between the probing pulses. The spin label used in this work is one of the first variants of a paramagnetic probe, the easily accessible transition-metal ion Mn²⁺. According to the constructed graphs and tomograms from the
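The spin-spin relaxation measurement described above amounts to fitting a mono-exponential echo-amplitude decay, A(t) = A₀ exp(-t/T₂). A minimal sketch on synthetic data (the values of A₀ and T₂ below are hypothetical, not the paper's measurements):

```python
import math

def fit_t2(times, amps):
    """Estimate A0 and T2 by linear least squares on ln A(t) = ln A0 - t/T2."""
    n = len(times)
    ys = [math.log(a) for a in amps]
    mx, my = sum(times) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in times)
    sxy = sum((x - mx) * (y - my) for x, y in zip(times, ys))
    slope = sxy / sxx
    return math.exp(my - slope * mx), -1.0 / slope  # (A0, T2)

# synthetic echo-amplitude decay with A0 = 1000 and T2 = 50 ms
ts = [5.0 * i for i in range(1, 20)]             # echo times, ms
a0, t2 = fit_t2(ts, [1000.0 * math.exp(-t / 50.0) for t in ts])
```

On noiseless data the log-linear fit recovers A₀ and T₂ exactly; with real spectrometer noise a weighted or nonlinear fit is preferable.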
International Nuclear Information System (INIS)
Zhang Weiqian; Huang Kaiping; Cheng Guangqing
2012-01-01
In the 504 hydrothermal-type mineral deposit, the elements Mo, Hg, Ni, Re, Te, and Se (Mo and Hg form industrial orebodies; Ni, Re, Te, and Se are scarce elements) reach the requirements for integrated industrial utilization; the scarce elements are widely distributed in the acid orebody (upper ore zone) and the alkaline orebody (lower ore zone). Based on the analysis of composite samples of uranium ore, processed by computer using linear regression and R-factor analysis, the relationships between uranium and the other elements are revealed: there is no correlation among U, Hg, and Mo, whereas Ni, Re, Te, and Se are closely correlated. This correlation provides a basis for deep exploration in the deposit and its surroundings. (authors)
On the Meta Distribution of Coverage Probability in Uplink Cellular Networks
Elsawy, Hesham
2017-04-07
This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP) for modeling the spatial locations of base stations (BSs), we obtain the percentiles of users that achieve a target uplink CP over an arbitrary, but fixed, realization of the PPP. To this end, the effects of the user activity factor (p) and the path-loss compensation factor on uplink performance are analyzed. The results show that decreasing p and/or increasing the path-loss compensation factor reduces the CP variation around the spatially averaged value.
Directory of Open Access Journals (Sweden)
V. G. Margaryan
2017-12-01
The regularities of the spatio-temporal distribution of the radiation balance of the underlying surface under the conditions of the mountainous territory of the Republic of Armenia are discussed and analyzed.
International Nuclear Information System (INIS)
Szoke, Istvan; Balashazy, Imre; Farkas, Arpad; Hofmann, Werner
2007-01-01
The human tracheobronchial system has a very complex structure comprising cylindrical airway ducts connected by airway bifurcation units. The deposition of inhaled aerosols within the airways exhibits a very inhomogeneous pattern. The formation of deposition hot spots near the carinal ridge has been confirmed by experimental and computational fluid and particle dynamics (CFPD) methods. In spite of these observations, current radon lung dosimetry models use infinitely long cylinders as models of the airway system and assume uniform deposition of the inhaled radon progeny along the airway walls. The aim of this study is to investigate the effect of airway geometry and non-uniform activity distributions within bronchial bifurcations on cellular dose distributions. To answer these questions, the nuclear doses of the bronchial epithelium were calculated in three different irradiation situations. (1) First, CFPD methods were applied to calculate the distribution of the deposited alpha-emitting nuclides in a numerically constructed idealized airway bifurcation. (2) Second, the deposited radionuclides were randomly distributed along the surface of the above-mentioned geometry. (3) Finally, calculations were made in cylindrical geometries corresponding to the parent and daughter branches of the bifurcation geometry, assuming a random nuclide activity distribution. In all three models, the same ²¹⁸Po and ²¹⁴Po surface activities per tissue volume were assumed. Two conclusions can be drawn from this analysis: (i) average nuclear doses are very similar in all three cases (minor differences can be attributed to differences in the linear energy transfer (LET) spectra) and (ii) dose distributions are significantly different in all three cases, with the highest doses at the carinal ridge in case 3. (authors)
Analysis of the effect on microdose of nonuniform ¹⁰B distribution in cells
International Nuclear Information System (INIS)
Xie Qin; Geng Changran; Tang Xiaobin; Chen Da
2012-01-01
Boron neutron capture therapy (BNCT) is one of the effective ways to treat malignant melanoma and head-and-neck cancer. The nonuniform intercellular distribution of ¹⁰B in tumor cells affects estimates of the inactivation dose. The α-Li Version 1.0 code was developed, based on the Monte Carlo method, to calculate the S values of a cell induced by the α and ⁷Li particles that are the products of the ¹⁰B(n,α)⁷Li reaction. The calculation covered two cell sizes, eight α-particle energies, and three source distributions. Differences between the results of this code and the analytical algorithm of the MIRD committee were within 1%. On this basis, a total of 3420 cases were calculated and analyzed with different combinations of nucleus radius, cell radius, and source launch position. The cellular S values of ¹⁰B(n,α)⁷Li calculated in this paper can thus be used to compute doses with excellent precision under nonuniform ¹⁰B compound distribution at the intercellular scale. (authors)
Directory of Open Access Journals (Sweden)
Merlin Nanayakkara
Celiac disease (CD) is a frequent inflammatory intestinal disease, with a genetic background, caused by gliadin-containing food. Undigested gliadin peptides P31-43 and P57-68 induce innate and adaptive T cell-mediated immune responses, respectively. Alterations in cell shape and the actin cytoskeleton are present in celiac enterocytes, and gliadin peptides induce actin rearrangements in both the CD mucosa and cell lines. Cell shape is maintained by the actin cytoskeleton and focal adhesions, sites of membrane attachment to the extracellular matrix. The locus of the human Lipoma Preferred Partner (LPP) gene was identified as strongly associated with CD using genome-wide association studies (GWAS). The LPP protein plays an important role in focal adhesion architecture and acts as a transcription factor in the nucleus. In this study, we examined the hypothesis that a constitutive alteration of the cell shape and the cytoskeleton, involving LPP, occurs in a cell compartment far from the main inflammation site, in CD fibroblasts from skin explants. We analyzed the cell shape, actin organization, focal adhesion number, focal adhesion proteins, LPP sub-cellular distribution, and adhesion to fibronectin of fibroblasts obtained from CD patients on a Gluten-Free Diet (GFD) and controls, without and with treatment with A-gliadin peptide P31-43. We observed a "CD cellular phenotype" in these fibroblasts, characterized by an altered cell shape and actin organization, an increased number of focal adhesions, and an altered intracellular LPP protein distribution. The treatment of control fibroblasts with gliadin peptide P31-43 mimics the CD cellular phenotype with respect to cell shape, adhesion capacity, focal adhesion number, and LPP sub-cellular distribution, suggesting a close association between these alterations and CD pathogenesis.
CAC DPLB MCN: A Distributed Load Balancing Scheme in Multimedia Mobile Cellular Networks
Directory of Open Access Journals (Sweden)
Sharma Abhijit
2016-11-01
The problem of non-uniform traffic demand in different cells of a cellular network may lead to a gross imbalance in system performance; users in hot cells may suffer from low throughput. In this paper, a simple and effective load-balancing scheme, CAC_DPLB_MCN, is proposed that can significantly reduce overall call blocking. The model deals with multimedia traffic as well as time-varying geographical traffic distribution. The proposed scheme uses the concept of cell tiering, thereby creating a fractional frequency-reuse environment. A message-exchange-based distributed scheme, rather than a centralized one, is used, which allows the proposed scheme to be implemented in environments with multiple hot cells. Furthermore, the concept of dynamic pricing is used to serve the best interests of both users and service providers. The performance of the proposed scheme is compared with two other existing schemes in terms of call-blocking probability and bandwidth utilization. Simulation results show that the proposed scheme can reduce call blocking significantly in highly congested cells with the highest bandwidth utilization. The use of dynamic pricing also makes the scheme useful for increasing the revenue of service providers, in contrast with the compared schemes.
Pereira, Marcelo Alves; Martinez, Alexandre Souto
2009-01-01
The Prisoner's Dilemma (PD) game is used in several fields due to the emergence of cooperation among selfish players. Here, we have considered a one-dimensional lattice, where each cell represents a player that can cooperate or defect. This one-dimensional geometry allows us to retrieve the results obtained for regular lattices and to keep track of the system's spatio-temporal evolution. Players play PD with their neighbors and update their state using the Pavlovian Evolutionary Strategy. If t...
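A minimal sketch of such a one-dimensional PD lattice (an aspiration-based "win-stay, lose-shift" rule is our reading of the Pavlovian strategy; the paper's exact update rule and payoff values may differ):

```python
import random

# Payoffs for (my move, neighbour's move), satisfying T > R > P > S
R_PAY, S_PAY, T_PAY, P_PAY = 3, 0, 5, 1

def step(states, aspiration=2.0):
    """One synchronous Pavlovian update on a ring of players: a player
    whose mean payoff against its two neighbours falls below its
    aspiration level flips its action ('win-stay, lose-shift')."""
    n = len(states)
    out = []
    for i, me in enumerate(states):
        total = 0
        for j in ((i - 1) % n, (i + 1) % n):
            other = states[j]
            if me and other:
                total += R_PAY      # mutual cooperation
            elif me:
                total += S_PAY      # I cooperate, neighbour defects
            elif other:
                total += T_PAY      # I defect, neighbour cooperates
            else:
                total += P_PAY      # mutual defection
        out.append(me if total / 2.0 >= aspiration else not me)
    return out

rng = random.Random(0)
cells = [rng.random() < 0.5 for _ in range(101)]   # True = cooperate
for _ in range(200):
    cells = step(cells)
frac_coop = sum(cells) / len(cells)
```

With these payoffs, a uniform population of cooperators is a fixed point, while a uniform population of defectors immediately flips to cooperation, which is the mechanism behind Pavlovian cooperation on lattices.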
Skeletal muscle cellularity and glycogen distribution in the hypermuscular Compact mice
Directory of Open Access Journals (Sweden)
T. Kocsis
2014-07-01
The TGF-beta member myostatin acts as a negative regulator of skeletal muscle mass. Compact mice were selected for high protein content and hypermuscularity, and carry a naturally occurring 12-bp deletion in the propeptide region of the myostatin precursor. We aimed to investigate the cellular characteristics and the glycogen distribution of the Compact tibialis anterior (TA) muscle by quantitative histochemistry and spectrophotometry. We found that the deficiency in myostatin resulted in significantly increased weight of the investigated hindlimb muscles compared to wild type. Although the average glycogen content of the individual fibers remained unchanged, the total amount of glycogen in the Compact TA muscle increased two-fold, which can be explained by the presence of more fibers in Compact compared to wild-type muscle. Moreover, the ratio of the most glycolytic IIB fibers, whose glycogen content was the highest among the fast fibers, significantly increased in the Compact TA muscle. In summary, myostatin deficiency elevated the amount of glycogen in the TA muscle but did not increase the glycogen content of the individual fibers, despite the marked glycolytic shift observed in Compact mice.
International Nuclear Information System (INIS)
Hofmann, W.; Crawford-Brown, D.J.
1990-01-01
Randomly oriented sections of rat tissue have been digitised to provide the contours of tissue-air interfaces and the locations of individual cell nuclei in the alveolated region of the lung. Sources of alpha particles with varying irradiation geometries and densities are simulated to compute the resulting random pattern of cellular irradiation, i.e. spatial coordinates, frequency, track length, and energy of traversals by the emitted alpha particles. Probabilities per unit track length, derived from experimental data on in vitro cellular inactivation and transformation, are then applied to the results of the alpha exposure simulations to yield an estimate of the number of both dead and viable transformed cells and their spatial distributions. If lung cancer risk is linearly related to the number of transformed cells, the carcinogenic risk for hot particles is always smaller than that for a uniform nuclide distribution of the same activity. (author)
Xia, Younan; Huo, Da
2018-04-10
A quantitative understanding of the sub-cellular distributions of nanoparticles taken up by cells is important to the development of nanomedicine. With Au nanospheres as a model system, here we demonstrate, for the first time, how to quantify the numbers of nanoparticles bound to the plasma membrane, accumulated in the cytosol, and entrapped in lysosomes, respectively, through stepwise, site-selective etching. Our results indicate that the chance for nanoparticles to escape from lysosomes is insensitive to the presence of targeting ligand, although ligand-receptor binding has been documented as a critical factor in triggering internalization. Furthermore, the presence of serum proteins is shown to facilitate the binding of nanoparticles to plasma membranes lacking the specific receptor. Collectively, these findings confirm the potential of stepwise etching in quantitatively analyzing the sub-cellular distributions of nanoparticles taken up by cells, in an effort to optimize the therapeutic effect. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
International Nuclear Information System (INIS)
Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.
2014-01-01
A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online
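The inversion described above can be sketched as a non-negative fit of a superposition of Langevin magnetization curves over a grid of dipole moments. The sketch below (hypothetical reduced units and a synthetic bimodal sample; simple multiplicative updates as a stand-in for the program's non-negative least squares solver) illustrates the idea:

```python
import math

def langevin(x):
    # Langevin function L(x) = coth(x) - 1/x, with its small-x series limit
    return 1.0 / math.tanh(x) - 1.0 / x if x > 1e-8 else x / 3.0

def nnls_mult(K, b, n_iter=4000):
    """Non-negative least squares for ||K w - b||^2 via multiplicative
    updates (a simple stand-in for an active-set NNLS solver)."""
    rows, cols = len(K), len(K[0])
    w = [1.0] * cols
    ktb = [sum(K[i][j] * b[i] for i in range(rows)) for j in range(cols)]
    for _ in range(n_iter):
        kw = [sum(K[i][j] * w[j] for j in range(cols)) for i in range(rows)]
        for j in range(cols):
            den = sum(K[i][j] * kw[i] for i in range(rows)) + 1e-30
            w[j] *= ktb[j] / den          # preserves non-negativity
    return w

# dipole-moment grid and applied-field points in reduced units (hypothetical)
moments = [0.5 * (j + 1) for j in range(20)]            # 0.5 .. 10.0
fields = [0.05 * (i + 1) for i in range(40)]            # 0.05 .. 2.0
K = [[langevin(m * h) for m in moments] for h in fields]

# synthetic bimodal ferrofluid: 70% weight at moment 2.0, 30% at moment 6.0
true_w = {3: 0.7, 11: 0.3}
b = [sum(K[i][j] * v for j, v in true_w.items()) for i in range(len(fields))]
w = nnls_mult(K, b)
resid = sum((b[i] - sum(K[i][j] * w[j] for j in range(len(moments)))) ** 2
            for i in range(len(fields)))
```

The non-negativity constraint is what allows a multimodal distribution to be recovered without assuming a parametric shape, though the smooth Langevin kernel makes the problem ill-conditioned, so the recovered weights are a regularized, broadened version of the true distribution.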
DEFF Research Database (Denmark)
Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.
1994-01-01
Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient desce...
QoE-Driven D2D Media Services Distribution Scheme in Cellular Networks
Chen, Mingkai; Wang, Lei; Chen, Jianxin; Wei, Xin
2017-01-01
Device-to-device (D2D) communication has been widely studied to improve network performance and is considered a potential technological component for next-generation communication. Considering diverse user demands, Quality of Experience (QoE) is recognized as a new measure of user satisfaction for media service transmissions in wireless communication. Furthermore, we aim at promoting the user's Mean Opinion Score (MOS) value to quantify and analyze user QoE in the dynamic cellular netw...
Directory of Open Access Journals (Sweden)
Natassya M Noor
Ubiquitin, an 8.5 kDa protein associated with the proteasome degradation pathway, has recently been identified as differentially expressed in the segment of cord caudal to the site of injury in the developing spinal cord. Here we describe ubiquitin expression and cellular distribution in the spinal cord up to postnatal day P35 in control opossums (Monodelphis domestica) and in response to complete spinal transection (T10) at P7, when axonal growth through the site of injury occurs, and at P28, when this is no longer possible. Cords were collected 1 or 7 days after injury, with age-matched controls, and segments rostral to the lesion were studied. Following spinal injury, ubiquitin levels (western blotting) appeared reduced compared to controls, especially one day after injury at P28. In contrast, after injury mRNA expression (qRT-PCR) was slightly increased at P7 but decreased at P28. Changes in the isoelectric point of separated ubiquitin indicated possible post-translational modifications. The cellular distribution demonstrated a developmental shift between the earliest (P8) and latest (P35) ages examined, from predominantly cytoplasmic immunoreactivity to nuclear expression; the staining level and the shift to nuclear staining were more pronounced following injury, except 7 days after transection at P28. After injury at P7, immunostaining increased in neurons, and additionally in oligodendrocytes at P28. Mass spectrometry showed two ubiquitin bands; the heavier was identified as a fusion product, likely to be a ubiquitin precursor. The apparent changes in ubiquitin expression and cellular distribution in development and in response to spinal injury suggest an intricate regulatory system that modulates these responses and, when better understood, may provide potential therapeutic targets.
Study of the cellular uptake and distribution of ⁵⁷cobalt bleomycin in Ehrlich ascites tumor cells
International Nuclear Information System (INIS)
Metelmann, H.R.
1980-01-01
We investigated the dependence of the cellular uptake of ⁵⁷cobalt-bleomycin on exposure time and dose. In addition, we observed the influences of the incubation temperature, the growth phase of the tumor cells, and the composition of the suspension medium. In supplementary experiments we investigated the binding of the labelled cytostatic agent to erythrocytes, its adsorption to disrupted Ehrlich ascites tumor cells, and the outflow of ⁵⁷cobalt-bleomycin from pre-loaded intact Ehrlich ascites tumor cells. The ⁵⁷cobalt-bleomycin uptake of intact Ehrlich ascites tumor cells follows characteristic kinetics. Moreover, erythrocytes and injured Ehrlich ascites tumor cells show a qualitatively similar curve of ⁵⁷cobalt-bleomycin binding, which can clearly be distinguished from the kinetics found with intact Ehrlich ascites tumor cells. The uptake of this cytostatic agent depends on an unequivocal time-dose-temperature relationship. The transport mechanism of ⁵⁷cobalt-bleomycin uptake was considered to be endocytosis. An endocytosis-stimulating inducer could not be detected. However, we obtained indications that the cell-bound cytostatic agent is taken up into two compartments: on the cellular surface and in the interior of the cell. (orig./MG)
Stochastic fluctuations and distributed control of gene expression impact cellular memory.
Directory of Open Access Journals (Sweden)
Guillaume Corre
Despite the stochastic noise that characterizes all cellular processes, cells are able to maintain and transmit to their daughter cells a stable level of gene expression. To better understand this phenomenon, we investigated the temporal dynamics of gene-expression variation using a double reporter gene model. We compared cell clones with transgenes coding for highly stable mRNAs and fluorescent proteins with clones expressing destabilized mRNAs and proteins. Both types of clones displayed strong heterogeneity of reporter gene expression levels. However, cells expressing stable gene products produced daughter cells with similar levels of reporter proteins, while in cell clones with short mRNA and protein half-lives the epigenetic memory of the gene expression level was completely suppressed. Computer simulations also confirmed the role of mRNA and protein stability in the conservation of constant gene expression levels over several cell generations. These data indicate that the conservation of a stable phenotype in a cellular lineage may largely depend on the slow turnover of mRNAs and proteins.
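The role of turnover in expression memory can be illustrated with a toy noisy birth-death simulation (all rates, noise levels, and cycle lengths below are hypothetical, not the authors' model): slow protein turnover preserves the expression level across "generations", while fast turnover erases it.

```python
import math
import random

def lineage_corr(k_deg, gens=2000, steps=50, dt=0.1, seed=3):
    """Correlation of protein level at successive 'divisions' for the toy
    dynamics x' = k_prod - k_deg * x + noise, with the mean level fixed
    at 10 so only the turnover rate k_deg differs between cases."""
    rng = random.Random(seed)
    k_prod = 10.0 * k_deg
    x = 10.0
    pairs = []
    for _ in range(gens):
        x0 = x
        for _ in range(steps):     # one cell cycle = steps * dt time units
            x = max(0.0, x + dt * (k_prod - k_deg * x) + rng.gauss(0.0, 0.5))
        pairs.append((x0, x))      # level at start vs end of the cycle
    n = float(len(pairs))
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    cov = sum((a - mx) * (b - my) for a, b in pairs) / n
    vx = sum((a - mx) ** 2 for a, _ in pairs) / n
    vy = sum((b - my) ** 2 for _, b in pairs) / n
    return cov / math.sqrt(vx * vy)

memory_stable = lineage_corr(k_deg=0.05)   # slow turnover: memory persists
memory_fast = lineage_corr(k_deg=1.0)      # fast turnover: memory erased
```

In this Ornstein-Uhlenbeck-like picture the generation-to-generation correlation decays roughly as exp(-k_deg × cycle length), which captures the abstract's observation that short half-lives suppress epigenetic memory of the expression level.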
Directory of Open Access Journals (Sweden)
Francis Boabang
2017-01-01
Device-to-device (D2D) communication underlaying the cellular network is considered a key integration feature in future cellular networks. However, without properly designed interference management, the interference from D2D transmission tends to degrade the performance of cellular users and D2D pairs. In this work, we propose a network-assisted distributed interference mitigation scheme to address this issue. Specifically, the base station (BS) acts as a control agent that coordinates the cross-tier interference from D2D transmission through a taxation scheme. The co-tier interference is controlled by a noncooperative game amongst the D2D pairs. In general, the outcome of a noncooperative game is inefficient due to the selfishness of each player. In our game formulation, the reference user who is the victim of co-tier interference is factored into the payoff function of each player to obtain a fair and efficient outcome. The existence and uniqueness of the Nash Equilibrium (NE) and the convergence of the proposed algorithm are characterized using variational inequality theory. Finally, we provide simulation results to evaluate the efficiency of the proposed algorithm.
de Jong, Peter W; Hemerik, Lia; Gort, Gerrit; van Alphen, Jacques J M
2011-01-01
Females of the larval parasitoid of Drosophila, Asobara citri, from sub-Saharan Africa, defend patches with hosts by fighting and chasing conspecific females upon encounter. Females of the closely related, palearctic species Asobara tabida do not defend patches and often search simultaneously in the same patch. The effect of patch defence by A. citri females on their distribution in a multi-patch environment was investigated, and their distributions were compared with those of A. tabida. For both species 20 females were released from two release-points in replicate experiments. Females of A. citri quickly reached a regular distribution across 16 patches, with a small variance/mean ratio per patch. Conversely, A. tabida females initially showed a clumped distribution, and after gradual dispersion, a more Poisson-like distribution across patches resulted (variance/mean ratio was closer to 1 and higher than for A. citri). The dispersion of A. tabida was most probably an effect of exploitation: these parasitoids increasingly made shorter visits to already exploited patches. We briefly discuss hypotheses on the adaptive significance of patch defence behaviour or its absence in the light of differences in the natural history of both parasitoid species, notably the spatial distribution of their hosts.
Adabi, Sepideh; Adabi, Sahar; Rezaee, Ali
According to the traditional definition of Wireless Sensor Networks (WSNs), static sensors have limited the feasibility of WSNs in some kinds of applications, so mobility was introduced into WSNs. Mobile nodes in a WSN come equipped with a battery, and from the point of deployment this battery reserve becomes a valuable resource, since it cannot be replenished. Hence, maximizing the network lifetime by minimizing energy consumption is an important challenge in mobile WSNs. Energy conservation can be accomplished by different approaches. In this paper, we present an energy conservation solution based on cellular automata. The main objective of this solution is to dynamically adjust the transmission range and switch between the operational states of the sensor nodes.
Ji, Mei; Li, Qiang; Ji, Hua; Lou, Hongxiang
2014-01-01
This study aims to investigate the change trend of resveratrol contents in different tissues of Vitis amurensis Rupr. during the different seasons in a year. A rapid and sensitive method using high-performance liquid chromatography coupled with diode array detection and tandem mass spectrometry was developed. Resveratrol is mainly distributed in the rhizomes and roots of grape plants. It is also found in leaves and vines, but to a lesser extent. Resveratrol contents are augmented gradually in rhizomes and roots from January to September, and then decrease until January of the following year. During grape ripening, grape skins are also an available source of resveratrol. In conclusion, V. amurensis is a rich source of resveratrol. The distribution of resveratrol in V. amurensis reported in this study can contribute to the future application of resveratrol. Copyright © 2013 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Evgeny Bychkov
G protein-coupled receptor kinases (GRKs) and arrestins mediate desensitization of G protein-coupled receptors (GPCRs). Arrestins also mediate G protein-independent signaling via GPCRs. Since GRKs and arrestins demonstrate no strict receptor specificity, their functions in the brain may depend on their cellular complement, expression level, and subcellular targeting. However, the cellular expression and subcellular distribution of GRKs and arrestins in the brain is largely unknown. We show that GRK isoforms GRK2 and GRK5 are similarly expressed in direct and indirect pathway neurons in the rat striatum. Arrestin-2 and arrestin-3 are also expressed in neurons of both pathways. Cholinergic interneurons are enriched in GRK2, arrestin-3, and GRK5. Parvalbumin-positive interneurons express more GRK2 and less arrestin-2 than medium spiny neurons. The GRK5 subcellular distribution in human striatal neurons is altered by its phosphorylation: the unphosphorylated enzyme preferentially localizes to synaptic membranes, whereas phosphorylated GRK5 is found in plasma membrane and cytosolic fractions. Both GRK isoforms are abundant in the nucleus of human striatal neurons, whereas the proportion of both arrestins in the nucleus was equally low. However, the overall higher expression of arrestin-2 yields a nuclear concentration high enough to mediate nuclear functions. These data suggest cell type- and subcellular compartment-dependent differences in GRK/arrestin-mediated desensitization and signaling.
Swanson, C.; Jandovitz, P.; Cohen, S. A.
2018-02-01
We measured electron energy distribution functions (EEDFs) from below 200 eV to over 8 keV, spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem-mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek silicon drift detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. The algorithm is found to outperform current leading x-ray inversion algorithms when the error due to counting statistics is high.
Secure Real-Time Monitoring and Management of Smart Distribution Grid Using Shared Cellular Networks
Nielsen, J.J.; Ganem, H.; Jorguseski, L.; Alic, K.; Smolnikar, M.; Zhu, Z.; Pratas, N.K.; Golinski, M.; Zhang, H.; Kuhar, U.; Fan, Z.; Svigelj, A.
2017-01-01
Electricity production and distribution is facing two major changes. First, production is shifting from classical energy sources such as coal and nuclear power toward renewable resources such as solar and wind. Second, consumption in the low voltage grid is expected to grow significantly due to the
Energy Technology Data Exchange (ETDEWEB)
Zhang, Jie [MOE Key Laboratory of Environment Remediation and Ecosystem Health, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou 310058 (China); Tian, Shengke [MOE Key Laboratory of Environment Remediation and Ecosystem Health, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou 310058 (China); University of Florida, Institute of Food and Agricultural Science, Indian River Research and Education Center, Fort Pierce, FL 34945 (United States); Lu, Lingli; Shohag, M.J.I.; Liao, Haibing [MOE Key Laboratory of Environment Remediation and Ecosystem Health, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou 310058 (China); Yang, Xiaoe, E-mail: xyang@zju.edu.cn [MOE Key Laboratory of Environment Remediation and Ecosystem Health, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou 310058 (China)
2011-12-15
Highlights: ► Elsholtzia splendens showed good lead tolerance and accumulation. ► Pb was mostly restricted to the vascular bundles and epidermis tissues. ► Pb and Ca shared the most similar distribution patterns in E. splendens. - Abstract: Hydroponic experiments were conducted to investigate the tolerance and spatial distribution of lead (Pb) in Elsholtzia splendens, a copper (Cu) accumulator plant, using synchrotron-based micro-X-ray fluorescence (micro-XRF). According to chlorophyll concentration and chlorophyll fluorescence parameters, E. splendens displayed a degree of tolerance at the 100 μM Pb treatment. Lead concentrations in roots, stems and leaves of E. splendens reached 45,183.6, 1657.6, and 380.9 mg kg⁻¹, respectively. Pb accumulated mostly in the roots, although substantial amounts were also transported into stems and leaves. Micro-XRF analysis of stem and leaf cross sections revealed that Pb was mostly restricted to the vascular bundles and epidermis tissues of both stem and leaf. The correlations between the distributions of K, Ca, Zn and Pb were analyzed. There were significant positive correlations (P < 0.01) between Pb and the Ca, K and Zn distributions in both stem and leaf. Among the three elements, however, Ca shared the most similar distribution pattern and the highest correlation coefficients with Pb in both stem and leaf cross sections. This suggests that Ca may play an important role in Pb accumulation in the stem and leaf of E. splendens.
Haase, A.; Tentschert, J.; Jungnickel, H.; Graf, P.; Mantion, A.; Draude, F.; Plendl, J.; Goetz, M. E.; Galla, S.; Mašić, A.; Thuenemann, A. F.; Taubert, A.; Arlinghaus, H. F.; Luch, A.
2011-07-01
Silver nanoparticles (SNP) are among the most commercialized nanoparticles worldwide. They can be found in many diverse products, mostly because of their antibacterial properties. Despite their widespread use, only limited data on possible adverse health effects exist. It is difficult to compare biological data from different studies because of the great variety in particle sizes, coatings and shapes. Here, we applied a novel synthesis approach to obtain SNP that are covalently stabilized by a small peptide, enabling tight control of both size and shape. We applied these SNP in two sizes, 20 and 40 nm (Ag20Pep and Ag40Pep), and analyzed the responses of THP-1-derived human macrophages. Similar gold nanoparticles with the same coating (Au20Pep) were used for comparison and found to be non-toxic. We assessed the cytotoxicity of the particles and confirmed their cellular uptake via transmission electron microscopy and confocal Raman microscopy. Importantly, the majority of the SNP could be detected as individual particles spread throughout the cells. Furthermore, we studied several types of oxidative stress-related responses, such as induction of heme oxygenase I and formation of protein carbonyls. In summary, our data demonstrate that even low doses of SNP exert adverse effects in human macrophages.
International Nuclear Information System (INIS)
Haase, A; Tentschert, J; Jungnickel, H; Goetz, M E; Luch, A; Graf, P; Mantion, A; Thuenemann, A F; Draude, F; Galla, S; Arlinghaus, H F; Plendl, J; Masic, A; Taubert, A
2011-01-01
Silver nanoparticles (SNP) are among the most commercialized nanoparticles worldwide. They can be found in many diverse products, mostly because of their antibacterial properties. Despite their widespread use, only limited data on possible adverse health effects exist. It is difficult to compare biological data from different studies because of the great variety in particle sizes, coatings and shapes. Here, we applied a novel synthesis approach to obtain SNP that are covalently stabilized by a small peptide, enabling tight control of both size and shape. We applied these SNP in two sizes, 20 and 40 nm (Ag20Pep and Ag40Pep), and analyzed the responses of THP-1-derived human macrophages. Similar gold nanoparticles with the same coating (Au20Pep) were used for comparison and found to be non-toxic. We assessed the cytotoxicity of the particles and confirmed their cellular uptake via transmission electron microscopy and confocal Raman microscopy. Importantly, the majority of the SNP could be detected as individual particles spread throughout the cells. Furthermore, we studied several types of oxidative stress-related responses, such as induction of heme oxygenase I and formation of protein carbonyls. In summary, our data demonstrate that even low doses of SNP exert adverse effects in human macrophages.
Couvreur, P
2001-07-01
Drug vectorization has undergone considerable development over the last few years. This review focuses on the intravenous route of administration. Colloid formulations allow a modulation of drug tissue distribution. Using liposomes and nanoparticles with unmodified surfaces, drugs can be targeted to macrophages of the reticuloendothelial system. When the liposomes or nanoparticles are covered with hydrophilic or flexible polymers, the vascular phase can be favored in order, for example, to facilitate selective extravasation at a tumor site. Therapeutic applications of these systems are presented. The development of "intelligent" vectors capable of modulating the intracellular distribution of active compounds is an equally interesting approach, for example pH-sensitive liposomes or nanoparticles decorated with folic acid capable of targeting the intracellular cytoplasm.
International Nuclear Information System (INIS)
Deng Guilong; Chen Chunying; Zhang Peiqun; Zhao Jiujiang; Chai Zhifang
2005-01-01
The distribution patterns of 17 elements in the subcellular fractions (nuclei, mitochondria, lysosomes, microsomes and cytosol) of human hepatocellular carcinoma (HCC) and normal liver samples were investigated using molecular activation analysis (MAA) and differential centrifugation. Significant differences were checked by Student's t-test. These elements exhibit inhomogeneous distributions in each subcellular fraction. Some elements show no significant difference between hepatocellular carcinoma and normal liver samples. However, the concentrations of Br, Ca, Cd and Cs are significantly higher in each fraction of hepatocellular carcinoma than in normal liver. The content of Fe in the microsomes of HCC is significantly lower, almost half that of normal liver samples, but higher in the other subcellular fractions than in those of normal tissues. The rare earth elements La and Ce show distribution patterns similar to that of Fe. The concentrations of Sb and Zn in the nuclei of HCC are markedly lower (both P<0.05). The contents of K and Na are higher in the cytosol of HCC (P<0.05). The distributions of Ba and Rb show no significant difference between the two groups. The relationships of Fe, Cd and K with HCC are also discussed. The levels of some elements in the subcellular fractions of the tumor were quite different from those of normal liver, suggesting that trace elements might play important roles in the occurrence and development of hepatocellular carcinoma. (authors)
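The group comparisons above rest on the t-test. As a minimal sketch of that step (the concentration values below are invented for illustration, not the paper's data), a Welch t statistic for two independent samples with unequal variances can be computed as:

```python
import math

def welch_t(a, b):
    # Welch's t statistic for two independent samples (unequal variances),
    # e.g. comparing an element's concentration in an HCC fraction (a)
    # against the matching normal-liver fraction (b). Hypothetical data only.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
```

The statistic is then compared against the t distribution (with Welch-Satterthwaite degrees of freedom) to obtain the P values the abstract quotes.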
Energy Technology Data Exchange (ETDEWEB)
Guilong, Deng [Chinese Academy of Sciences, Beijing (China). Inst. of High Energy Physics, Key Laboratory of Nuclear Analytical Techniques; Department of General Surgery, the Second Affiliated Hospital, School of Medicine, Zhejiang Univ., Hangzhou (China); Chunying, Chen; Peiqun, Zhang; Jiujiang, Zhao; Zhifang, Chai [Chinese Academy of Sciences, Beijing (China). Inst. of High Energy Physics, Key Laboratory of Nuclear Analytical Techniques; Yingbin, Liu; Jianwei, Wang; Bin, Xu; Shuyou, Peng [Department of General Surgery, the Second Affiliated Hospital, School of Medicine, Zhejiang Univ., Hangzhou (China)
2005-07-15
The distribution patterns of 17 elements in the subcellular fractions (nuclei, mitochondria, lysosomes, microsomes and cytosol) of human hepatocellular carcinoma (HCC) and normal liver samples were investigated using molecular activation analysis (MAA) and differential centrifugation. Significant differences were checked by Student's t-test. These elements exhibit inhomogeneous distributions in each subcellular fraction. Some elements show no significant difference between hepatocellular carcinoma and normal liver samples. However, the concentrations of Br, Ca, Cd and Cs are significantly higher in each fraction of hepatocellular carcinoma than in normal liver. The content of Fe in the microsomes of HCC is significantly lower, almost half that of normal liver samples, but higher in the other subcellular fractions than in those of normal tissues. The rare earth elements La and Ce show distribution patterns similar to that of Fe. The concentrations of Sb and Zn in the nuclei of HCC are markedly lower (both P<0.05). The contents of K and Na are higher in the cytosol of HCC (P<0.05). The distributions of Ba and Rb show no significant difference between the two groups. The relationships of Fe, Cd and K with HCC are also discussed. The levels of some elements in the subcellular fractions of the tumor were quite different from those of normal liver, suggesting that trace elements might play important roles in the occurrence and development of hepatocellular carcinoma. (authors)
Directory of Open Access Journals (Sweden)
Marcus V Buri
Full Text Available Many reports have shown that antimicrobial peptides exhibit anticancer abilities. Gomesin (Gm) exhibits potent cytotoxic activity against cancer cells through membrane pore formation induced after well-orchestrated intracellular mechanisms. In this report, Cys residues were replaced by Ser or Thr, and D-amino acids were introduced into the Gm structure, to investigate how the molecule's resistance to degradation relates to its cytotoxicity. [Thr(2,6,11,15)]-Gm and [Ser(2,6,11,15)]-Gm exhibit low cytotoxicity and low resistance to degradation, and after 24 h are present in a localized area near the membrane. Conversely, the use of D-amino acids in the analogue [D-Thr(2,6,11,15)]-D-Gm confers resistance to degradation, increases its potency, and keeps the peptide spread throughout the cytosol, similarly to Gm. Replacement of Cys by Thr and of Gln by L- or D-Pro ([D-Thr(2,6,11,15), Pro(9)]-D-Gm and [Thr(2,6,11,15), D-Pro(9)]-Gm), which induced a similar β-hairpin conformation, also increased resistance to degradation and cytotoxicity, but after 24 h these peptides were not spread throughout the cytosol and exhibited lower cytotoxicity than Gm. Additionally, chloroquine, a lysosomal enzyme inhibitor, potentiated the effect of the peptides. Furthermore, the binding and internalization of the peptides were determined, but no direct correlation among these factors was observed. However, cholesterol ablation, which increases the fluidity of the cellular membrane, also increased the cytotoxicity and internalization of the peptides. β-hairpin spatial conformation, intracellular localization/target, and the ability to enter the cell are important properties of gomesin cytotoxicity.
Buri, Marcus V; Domingues, Tatiana M; Paredes-Gamero, Edgar J; Casaes-Rodrigues, Rafael L; Rodrigues, Elaine Guadelupe; Miranda, Antonio
2013-01-01
Many reports have shown that antimicrobial peptides exhibit anticancer abilities. Gomesin (Gm) exhibits potent cytotoxic activity against cancer cells through membrane pore formation induced after well-orchestrated intracellular mechanisms. In this report, Cys residues were replaced by Ser or Thr, and D-amino acids were introduced into the Gm structure, to investigate how the molecule's resistance to degradation relates to its cytotoxicity. [Thr(2,6,11,15)]-Gm and [Ser(2,6,11,15)]-Gm exhibit low cytotoxicity and low resistance to degradation, and after 24 h are present in a localized area near the membrane. Conversely, the use of D-amino acids in the analogue [D-Thr(2,6,11,15)]-D-Gm confers resistance to degradation, increases its potency, and keeps the peptide spread throughout the cytosol, similarly to Gm. Replacement of Cys by Thr and of Gln by L- or D-Pro ([D-Thr(2,6,11,15), Pro(9)]-D-Gm and [Thr(2,6,11,15), D-Pro(9)]-Gm), which induced a similar β-hairpin conformation, also increased resistance to degradation and cytotoxicity, but after 24 h these peptides were not spread throughout the cytosol and exhibited lower cytotoxicity than Gm. Additionally, chloroquine, a lysosomal enzyme inhibitor, potentiated the effect of the peptides. Furthermore, the binding and internalization of the peptides were determined, but no direct correlation among these factors was observed. However, cholesterol ablation, which increases the fluidity of the cellular membrane, also increased the cytotoxicity and internalization of the peptides. β-hairpin spatial conformation, intracellular localization/target, and the ability to enter the cell are important properties of gomesin cytotoxicity.
Buri, Marcus V.; Domingues, Tatiana M.; Paredes-Gamero, Edgar J.; Casaes-Rodrigues, Rafael L.; Rodrigues, Elaine Guadelupe; Miranda, Antonio
2013-01-01
Many reports have shown that antimicrobial peptides exhibit anticancer abilities. Gomesin (Gm) exhibits potent cytotoxic activity against cancer cells through membrane pore formation induced after well-orchestrated intracellular mechanisms. In this report, Cys residues were replaced by Ser or Thr, and D-amino acids were introduced into the Gm structure, to investigate how the molecule's resistance to degradation relates to its cytotoxicity. [Thr2,6,11,15]-Gm and [Ser2,6,11,15]-Gm exhibit low cytotoxicity and low resistance to degradation, and after 24 h are present in a localized area near the membrane. Conversely, the use of D-amino acids in the analogue [D-Thr2,6,11,15]-D-Gm confers resistance to degradation, increases its potency, and keeps the peptide spread throughout the cytosol, similarly to Gm. Replacement of Cys by Thr and of Gln by L- or D-Pro ([D-Thr2,6,11,15, Pro9]-D-Gm and [Thr2,6,11,15, D-Pro9]-Gm), which induced a similar β-hairpin conformation, also increased resistance to degradation and cytotoxicity, but after 24 h these peptides were not spread throughout the cytosol and exhibited lower cytotoxicity than Gm. Additionally, chloroquine, a lysosomal enzyme inhibitor, potentiated the effect of the peptides. Furthermore, the binding and internalization of the peptides were determined, but no direct correlation among these factors was observed. However, cholesterol ablation, which increases the fluidity of the cellular membrane, also increased the cytotoxicity and internalization of the peptides. β-hairpin spatial conformation, intracellular localization/target, and the ability to enter the cell are important properties of gomesin cytotoxicity. PMID:24312251
Directory of Open Access Journals (Sweden)
Roberts Mike D
2010-11-01
Full Text Available Abstract Background This study investigated the impact of different macronutrient distributions and varying caloric intakes, along with regular exercise, on metabolic and physiological changes related to weight loss. Methods One hundred forty-one sedentary, obese women (38.7 ± 8.0 yrs, 163.3 ± 6.9 cm, 93.2 ± 16.5 kg, 35.0 ± 6.2 kg·m⁻², 44.8 ± 4.2% fat) were randomized to a no diet + no exercise control group (CON), a no diet + exercise control (ND), or one of four diet + exercise groups (high-energy diet [HED]; very low carbohydrate, high protein diet [VLCHP]; low carbohydrate, moderate protein diet [LCMP]; and high carbohydrate, low protein diet [HCLP]), in addition to beginning a supervised resistance training program 3×·week⁻¹. After 0, 1, 10 and 14 weeks, all participants completed testing sessions which included anthropometric, body composition, energy expenditure, fasting blood sample, aerobic and muscular fitness assessments. Data were analyzed using repeated measures ANOVA with an alpha of 0.05, with LSD post-hoc analysis when appropriate. Results All dieting groups exhibited adequate compliance with their prescribed diet regimen, as energy and macronutrient amounts and distributions were close to prescribed amounts. The groups that followed a diet and exercise program showed significantly greater anthropometric (waist circumference and body mass) and body composition (DXA fat mass and % fat) changes. Caloric restriction initially reduced energy expenditure, which successfully returned to baseline values after 10 weeks of dieting and exercising. Significant fitness improvements (aerobic capacity and maximal strength) occurred in all exercising groups. No significant changes occurred in lipid panel constituents, but serum insulin and HOMA-IR values decreased in the VLCHP group. Significant reductions in serum leptin occurred in all caloric restriction + exercise groups after 14 weeks, which were unchanged in other non
International Nuclear Information System (INIS)
Zotina, T.A.; Kalacheva, G.S.; Bolsunovsky, A.Ya.
2011-01-01
Accumulation of americium (241Am) and plutonium (238,242Pu) and their distribution in cell compartments and biochemical components of the biomass of the freshwater aquatic plants Elodea canadensis, Ceratophyllum demersum and Myriophyllum spicatum and the aquatic moss Fontinalis antipyretica have been investigated in laboratory experiments. Americium and plutonium taken up from water by Elodea canadensis apical shoots were mainly absorbed by structural components of plant cells (90% for 241Am; 89% for 238Pu and 82-87% for 242Pu). About 10-18% of the isotope activity was recorded in the cytosol fraction. The major share (76-92%) of americium was bound to cell wall cellulose-like polysaccharides of Elodea canadensis, Myriophyllum spicatum, Ceratophyllum demersum and Fontinalis antipyretica; 8-24% of americium activity was registered in the fraction of proteins and carbohydrates, and only a minor share (<1%) in the lipid fraction. The distribution of plutonium in the biomass fractions of Elodea was similar to that of americium. Hence, americium and plutonium had the highest affinity for the cellulose-like polysaccharides of the cell walls of freshwater submerged macrophytes. (author)
Zargarian, A; Esfahanian, M; Kadkhodapour, J; Ziaei-Rad, S
2014-09-01
The effect of solid distribution between the edges and vertices of a three-dimensional cellular solid with an open-cell structure was investigated both numerically and experimentally. Finite element analysis (FEA) with continuum elements and appropriate periodic boundary conditions was employed to calculate the elastic properties of cellular solids using the tetrakaidecahedral (Kelvin) unit cell. Relative densities between 0.01 and 0.1 and various values of solid fraction were considered. In order to validate the numerical model, three scaffolds with a relative density of 0.08 but different amounts of solid in the vertices were fabricated via a 3-D printing technique. Good agreement was observed between the numerical simulation and experimental results. Results of the numerical simulation showed that, at low relative densities, the elastic properties depend on the solid fraction in the vertices. By fitting a curve to the data obtained from the numerical simulation and considering the relative density and solid fraction in vertices, empirical relations were derived for Young's modulus and Poisson's ratio. Copyright © 2014 Elsevier Ltd. All rights reserved.
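The abstract does not give the fitted relations themselves. Purely to illustrate the curve-fitting step, a power law of the form E/E_s = C·ρⁿ (a common form for open-cell foams; the data points and coefficients below are synthetic, not the paper's results) can be recovered by linear regression in log-log space:

```python
import numpy as np

# Hypothetical simulation results: relative density vs normalized Young's modulus,
# generated here from an assumed power law E/Es = 0.7 * rho^1.8.
rho = np.array([0.01, 0.02, 0.04, 0.06, 0.08, 0.10])
E = 0.7 * rho ** 1.8

# Fit E/Es = C * rho^n: take logs so the model becomes linear,
# log E = log C + n * log rho, and use a degree-1 polynomial fit.
n, logC = np.polyfit(np.log(rho), np.log(E), 1)
C = np.exp(logC)
```

In a real study one would fit a separate (C, n) pair for each solid fraction in the vertices, which is how an empirical relation in both relative density and solid fraction could be assembled.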
International Nuclear Information System (INIS)
Konno, S.; Wu, J.M.; Chiao, J.W.
1986-01-01
Addition of nicotine causes a dose- and time-dependent inhibition of cell growth in human promyelocytic HL-60 leukemia cells, with 4 mM nicotine resulting in a 50% inhibition of cellular proliferation after 48-50 h. Accompanying the anticellular effect of nicotine is a significant change in the cell cycle distribution of HL-60 cells. For example, treatment with 4 mM nicotine for 20 h causes an increase in the proportion of G1-phase cells (from 49% to 57%) and a significant decrease in the proportion of S-phase cells (from 41% to 32%). These results suggest that nicotine causes partial cell arrest in the G1 phase, which may in part account for its effects on cell growth. To determine whether nicotine changes the cellular uptake/transport of macromolecular precursors, HL-60 cells were treated with 216 mM nicotine for 30 h, at the end of which time cells were labelled with [3H]thymidine, [3H]uridine, [14C]lysine and [35S]methionine, and the trichloroacetic acid-soluble and -insoluble radioactivities from each of the labelling conditions were determined. These studies show that nicotine mainly affects the "de novo synthesis" of proteins. (author)
Patterns of Cellular Distribution with the Sentinel Node Positive for Breast Cancer
International Nuclear Information System (INIS)
Tsiapali, E.; Schmidt, M.M.; Dizon, D.; Steinhoff, M.; Gass, J.
2011-01-01
Background. Sentinel node biopsy (SNB) represents the standard of care in breast cancer axillary evaluation. Our study aims to characterize the patterns of malignant cell distribution within the sentinel nodes (SN). Methods. In a retrospective IRB-approved study, we examined the anatomic location of the nodal area with the highest radioactive signal or most intense blue staining (hot spot) and its distance from the metastatic foci. Results. 58 patients underwent SNB between January 2006 and February 2007. 12 patients with 19 positive SN were suitable for analysis. 4 (21%) metastases were located in the nodal hilum and 15 (79%) in the cortex. 6 (31%) metastases were found adjacent to the hot spot, and 9 (47%) within 4 mm of the hot spot. Conclusions. In our pilot series, SN metastases were within 4 mm of the hot spot in 78% of the cases. Pathologic analysis focused on that area may contribute to more accurate identification of nodal metastases.
Nakamura, Tatsufumi; Matsuyama, Naoki; Kirino, Masato; Kasai, Masanori; Kiyohara, Sadao; Ikenaga, Takanori
2017-01-01
The gustatory system of the sea catfish Plotosus japonicus, like that of other catfishes, is highly developed. To clarify the details of the morphology of the peripheral gustatory system of Plotosus, we used whole-mount immunohistochemistry to investigate the distribution and innervation of the taste buds within multiple organs including the barbels, oropharyngeal cavity, fins (pectoral, dorsal, and caudal), and trunk. Labeled taste buds could be observed in all the organs examined. The density of the taste buds was higher along the leading edges of the barbels and fins; this likely increases the chance of detecting food. In all the fins, the taste buds were distributed in linear arrays parallel to the fin rays. Labeling of nerve fibers by anti-acetylated tubulin antibody showed that the taste buds within each sensory field are innervated in different ways. In the barbels, large nerve bundles run along the length of the organ, with fascicles branching off to innervate polygonally organized groups of taste buds. In the fins, nerve bundles run along the axis of fin rays to innervate taste buds lying in a line. In each case, small fascicles of fibers branch from large bundles and terminate within the basal portions of the taste buds. Serotonin immunohistochemistry demonstrated that most of the taste buds in all the organs examined contained disk-shaped serotonin-immunopositive cells in their basal region. This indicates a similar organization of the taste buds, in terms of the existence of serotonin-immunopositive basal cells, across the different sensory fields in this species. © 2017 S. Karger AG, Basel.
Kaur, Imit; Terrazas, Moises; Kosak, Ken M.; Kern, Steven E.; Boucher, Kenneth M.; Shami, Paul J.
2014-01-01
Objective Nitric oxide (NO) possesses anti-tumor activity. It induces differentiation and apoptosis in acute myeloid leukemia (AML) cells. The NO prodrug O2-(2,4-dinitrophenyl)1-[(4-ethoxycarbonyl)piperazin-1-yl]diazen-1-ium-1,2-diolate, or JS-K, has potent antileukemic activity. JS-K is also active in vitro and in vivo against multiple myeloma, prostate cancer, non-small cell lung cancer, glioma and liver cancer. Using the Pluronic® P123 polymer, we have developed a micelle formulation for JS-K in order to increase its solubility and stability. The goal of the current study was to investigate the cellular distribution of JS-K in AML cells. Methods We investigated the intracellular distribution of JS-K (free drug) and JS-K formulated in P123 micelles (P123/JS-K) using HL-60 AML cells. We also studied the S-glutathionylating effects of JS-K on proteins in the cytoplasmic and nuclear cellular fractions. Key findings Both free JS-K and P123/JS-K accumulate primarily in the nucleus. Both free JS-K and P123/JS-K induced S-glutathionylation of nuclear proteins, although the effect produced was more pronounced with P123/JS-K. Minimal S-glutathionylation of cytoplasmic proteins was observed. Conclusions We conclude that a micelle formulation of JS-K increases its accumulation in the nucleus. Post-translational protein modification through S-glutathionylation may contribute to JS-K’s anti-leukemic properties. PMID:23927471
Crisanto-Neto, J. C.; da Luz, M. G. E.; Raposo, E. P.; Viswanathan, G. M.
2016-09-01
In practice, the Lévy α-stable distribution is usually expressed in terms of the Fourier integral of its characteristic function. Indeed, known closed-form expressions are relatively scarce given the huge parameter space: 0 < α ≤ 2 (Lévy index), −1 ≤ β ≤ 1 (skewness), σ > 0 (scale), and −∞ < μ < ∞ (shift). Hence, systematic efforts have been made towards the development of proper methods for analytically solving the mentioned integral. As a further contribution in this direction, here we propose a new way to tackle the problem. We consider an approach in which one first solves the Fourier integral through a formal (thus not necessarily convergent) series representation. Then, one uses (if necessary) a pertinent sum-regularization procedure on the resulting divergent series, so as to obtain an exact formula for the distribution, which is amenable to direct numerical calculations. As a concrete study, we address the centered, symmetric, unshifted and unscaled distribution (β = 0, μ = 0, σ = 1), with α = α_M = 2/M, M = 1, 2, 3, …. Conceivably, the present protocol could be applied to other sets of parameter values.
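For orientation, the Fourier integral in question can also be evaluated by brute-force quadrature (this is the baseline the series-regularization method improves on, not the paper's method). The sketch below computes the symmetric, centered, unit-scale density p(x) = (1/π)∫₀^∞ e^(−t^α) cos(tx) dt and reproduces the known Cauchy (α = 1) and Gaussian (α = 2) limits:

```python
import numpy as np

def stable_pdf(x, alpha, tmax=60.0, n=200_000):
    # Symmetric centered alpha-stable density (beta = 0, mu = 0, sigma = 1):
    #   p(x) = (1/pi) * Integral_0^inf exp(-t**alpha) * cos(t*x) dt,
    # truncated at tmax and evaluated with the composite trapezoidal rule.
    t = np.linspace(0.0, tmax, n)
    f = np.exp(-t ** alpha) * np.cos(t * x)
    dt = t[1] - t[0]
    return dt * (f.sum() - 0.5 * (f[0] + f[-1])) / np.pi

# alpha = 1 gives the Cauchy density 1/(pi*(1 + x^2));
# alpha = 2 gives a Gaussian with variance 2, i.e. exp(-x^2/4)/(2*sqrt(pi)).
```

For small α the integrand decays slowly and oscillates, which is exactly why quadrature becomes unreliable and closed-form or regularized-series representations are valuable.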
Laminar and Cellular Distribution of Monoamine Receptors in Rat Medial Prefrontal Cortex
Directory of Open Access Journals (Sweden)
Noemí Santana
2017-09-01
Full Text Available The prefrontal cortex (PFC) is deeply involved in higher brain functions, many of which are altered in psychiatric conditions. The PFC exerts a top-down control of most cortical and subcortical areas through descending pathways and is densely innervated by axons emerging from the brainstem monoamine cell groups, namely the dorsal and median raphe nuclei (DR and MnR, respectively), the ventral tegmental area, and the locus coeruleus (LC). In turn, the activity of these cell groups is tightly controlled by afferent pathways arising from layer V PFC pyramidal neurons. The reciprocal connectivity between the PFC and monoamine cell groups is of interest for studying the pathophysiology and treatment of severe psychiatric disorders, such as major depression and schizophrenia, inasmuch as antidepressant and antipsychotic drugs target monoamine receptors/transporters expressed in these areas. Here we review previous reports examining the presence of monoamine receptors in pyramidal and GABAergic neurons of the PFC using double in situ hybridization. Additionally, we present new data on the quantitative layer distribution (layers I, II-III, V, and VI) of monoamine receptor-expressing cells in the cingulate (Cg), prelimbic (PrL) and infralimbic (IL) subfields of the medial PFC (mPFC). The receptors examined include serotonin 5-HT1A, 5-HT2A, 5-HT2C, and 5-HT3 receptors, dopamine D1 and D2 receptors, and α1A-, α1B-, and α1D-adrenoceptors. With the exception of 5-HT3 receptors, selectively expressed by layers I-III GABA interneurons, the rest of the monoamine receptors are widely expressed by pyramidal and GABAergic neurons in intermediate and deep layers of the mPFC (5-HT2C receptors are also expressed in layer I). This complex distribution suggests that monoamines may modulate the communications between the PFC and cortical/subcortical areas through the activation of receptors expressed by neurons in intermediate (e.g., 5-HT1A, 5-HT2A, α1D-adrenoceptors, dopamine D1 receptors) and deep
General inverse problems for regular variation
DEFF Research Database (Denmark)
Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan
2014-01-01
Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...
DEFF Research Database (Denmark)
Gottfredsen, Randi Heidemann; Goldstrohm, David; Hartney, John
2014-01-01
and associated with the cell surface via the extracellular matrix (ECM)-binding region. Upon cellular activation induced by lipopolysaccharide, EC-SOD is relocated and detected both in the cell culture medium and in lipid raft structures. Although the secreted material presented a significantly reduced ligand......-binding capacity, this could not be correlated to proteolytic removal of the ECM-binding region, because the integrity of the material recovered from the medium was comparable to that of the cell surface-associated protein. The naturally occurring R213G amino acid substitution located in the ECM-binding region...
Directory of Open Access Journals (Sweden)
Timofei K. Zlobin
2012-01-01
Full Text Available The catastrophic Simushir earthquake occurred on 15 November 2006 in the Kuril-Okhotsk region in the Middle Kuril Islands, a transition zone between the Eurasian continent and the Pacific Ocean. It was followed by numerous strong earthquakes. It is established that the catastrophic earthquake was prepared on a site characterized by increased relative effective pressures, located at the border of the low-pressure area (Figure 1). Based on data from GlobalCMT (Harvard), earthquake focal mechanisms were reconstructed, and tectonic stresses, the seismotectonic setting and the earthquake distribution pattern were studied to analyze the field of stresses in the region before the Simushir earthquake (Figures 2 and 3; Table 1). Five areas of various types of movement were determined. Three of them stretch along the Kuril Islands. It is established that seismodislocations in earthquake focal areas are regularly distributed. In each of the determined areas, displacements of a specific type (shear or reverse shear) are concentrated and give evidence of the alternation of zones characterized by horizontal stretching and compression. The presence of the horizontal stretching and compression zones can be explained by a model of subduction (Figure 4). Detailed studies of the state of stresses of the Kuril region confirm such zones (Figure 5). The established specific features of tectonic stresses before the catastrophic Simushir earthquake of 15 November 2006 contribute to studies of earthquake forecasting problems. The state of stresses and the geodynamic conditions suggesting the occurrence of new earthquakes can be assessed from data on the distribution of horizontal compression, stretching and shear areas of the Earth's crust and the upper mantle in the Kuril region.
UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA
Directory of Open Access Journals (Sweden)
IONIŢĂ Elena
2015-06-01
Full Text Available This paper presents the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. Platonic and Archimedean polyhedra are modelled and unfolded using the 3dsMAX program. This paper is intended as an example of descriptive geometry applications.
International Nuclear Information System (INIS)
Hildebrand, C.E.; Walters, R.A.
1977-01-01
Progress is reported on the following research projects: chromatin structure; the use of circular synthetic polydeoxynucleotides as substrates for the study of DNA repair enzymes; human cellular kinetic response following exposure to DNA-interactive compounds; histone phosphorylation and chromatin structure in cell proliferation; photoaddition products induced in chromatin by UV light; pollutants and genetic information transfer; altered RNA metabolism as a function of cadmium accumulation and intracellular distribution in cultured cells; and thymidylate chromophore destruction by water free radicals.
Zheng, Nan; Lian, Bin; Du, Wenwen; Xu, Guobing; Ji, Jiafu
2018-01-01
Paclitaxel-loaded polymeric micelles (PTX-PM) are commonly used as tumor-targeted nanocarriers and display outstanding antitumor features in the clinic, but their accumulation and distribution in vitro have been little investigated, probably because of the complexity of the micellar system and the low concentration of drug at the cellular and subcellular levels. In this study, we developed an improved extraction method, a combination of mechanical disruption and liquid-liquid extraction (LLE), to extract the total PTX from micelles in the cell lysate and subcellular compartments. An ultra-performance liquid chromatography tandem mass spectrometry (UPLC-MS/MS) method was optimized to detect the low concentrations of PTX at the cellular and subcellular levels simultaneously, using docetaxel as internal standard (IS). The method was proved to release PTX totally from the micelles (≥95.93%) with a consistent and reproducible extraction recovery (≥75.04%). Good linearity was obtained at concentrations ranging from 0.2 to 20 ng/mL. The relative error (RE%) for accuracy varied from 0.68 to 7.56%, and the intra- and inter-precision (relative standard deviation, RSD%) was less than 8.64% and 13.14%, respectively. This method was fully validated and successfully applied to the cellular uptake and distribution study of PTX-loaded PLGA-PEG micelles in human breast cancer cells (MCF-7). Copyright © 2017 Elsevier B.V. All rights reserved.
Coordinate-invariant regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-01-01
A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc
Adaptive regularization of noisy linear inverse problems
DEFF Research Database (Denmark)
Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue
2006-01-01
In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and prior distributions. We present three examples: two simulations, and an application in fMRI neuroimaging.
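The stated relation can be written compactly; the following is a notational sketch only (the symbols are illustrative and not taken from the paper):

```latex
% Let r(\theta) denote the regularization function (the log-prior up to
% constants) and \alpha the regularization strength. The criterion that
% the expectation of r coincides under posterior and prior reads:
\mathbb{E}_{p(\theta \mid D, \alpha)}\!\left[\, r(\theta) \,\right]
  \;=\;
\mathbb{E}_{p(\theta \mid \alpha)}\!\left[\, r(\theta) \,\right] .
% The optimal hyper-parameter \alpha is the one at which this equality holds.
```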
Directory of Open Access Journals (Sweden)
Duan YR
2012-07-01
Full Text Available Peihao Yin,1,* Yan Wang,1,* YanYan Qiu,1 LiLi Hou,1 Xuan Liu,1 Jianmin Qin,1 Yourong Duan,2 Peifeng Liu,2 Ming Qiu,3 Qi Li11Department of Clinical Oncology, Putuo Hospital and Interventional Cancer Institute of Integrative Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China; 2Shanghai Cancer Institute, Jiaotong University, Shanghai, China; 3Department of General Surgery, Changzheng Hospital, Second Military Medical University, Shanghai, China *These authors contributed equally to this workBackground: Recent studies have shown that bufalin has a good antitumor effect but has high toxicity, poor water solubility, a short half-life, a narrow therapeutic window, and a toxic dose that is close to the therapeutic dose, which all limit its clinical application. This study aimed to determine the targeting efficacy of nanoparticles (NPs made of methoxy polyethylene glycol (mPEG, polylactic-co-glycolic acid (PLGA, poly-L-lysine (PLL, and cyclic arginine-glycine-aspartic acid (cRGD loaded with bufalin, ie, bufalin-loaded mPEG-PLGA-PLL-cRGD nanoparticles (BNPs, in SW620 colon cancer-bearing mice.Methods: BNPs showed uniform size. The size, shape, zeta potential, drug loading, encapsulation efficiency, and release of these nanoparticles were studied in vitro. The tumor targeting, cellular uptake, and growth-inhibitory effect of BNPs in vivo were tested.Results: BNPs were of uniform size with an average particle size of 164 ± 84 nm and zeta potential of 2.77 mV. The encapsulation efficiency was 81.7% ± 0.89%, and the drug load was 3.92% ± 0.16%. The results of in vitro cytotoxicity studies showed that although the blank NPs were nontoxic, they enhanced the cytotoxicity of bufalin in BNPs. Drug release experiments showed that the release of the drug was prolonged and sustained. The results of confocal laser scanning microscopy indicated that BNPs could effectively bind to human umbilical vein endothelial cells. In the SW620
Energy Technology Data Exchange (ETDEWEB)
Ben Abdeljelil, Nawel; Rochette, Pierre-Alexandre; Pearson, Angela, E-mail: angela.pearson@iaf.inrs.ca
2013-09-15
Mutations in UL24 of herpes simplex virus type 1 can lead to a syncytial phenotype. We hypothesized that UL24 affects the sub-cellular distribution of viral glycoproteins involved in fusion. In non-immortalized human foreskin fibroblasts (HFFs) we detected viral glycoproteins B (gB), gD, gH and gL present in extended blotches throughout the cytoplasm with limited nuclear membrane staining; however, in HFFs infected with a UL24-deficient virus (UL24X), staining for the viral glycoproteins appeared as long, thin streaks running across the cell. Interestingly, there was a decrease in co-localized staining of gB and gD with F-actin at late times in UL24X-infected HFFs. Treatment with chemical agents that perturbed the actin cytoskeleton hindered the formation of UL24X-induced syncytia in these cells. These data support a model whereby the UL24 syncytial phenotype results from a mislocalization of viral glycoproteins late in infection. - Highlights: • UL24 affects the sub-cellular distribution of viral glycoproteins required for fusion. • Sub-cellular distribution of viral glycoproteins varies in cell-type dependent manner. • Drugs targeting actin microfilaments affect formation of UL24-related syncytia in HFFs.
DEFF Research Database (Denmark)
Andersen, U O; Bøg-Hansen, T C; Kirkeby, S
1996-01-01
A histochemical avidin-biotin technique with three different alpha 1-acid glycoprotein glycoforms showed pronounced alterations in the cellular localization of two alpha 1-acid glycoprotein lectin-like receptors during cell differentiation in the developing rat testis. The binding of alpha 1-acid...
Grcev, L.; Deursen, van A.P.J.; Waes, van J.B.M.
2003-01-01
Cellular phone base stations are often placed in the poles of power transmission lines. We consider the case when such base stations are powered from the low-voltage network. Of special concern is the current that might be led through the cable metallic shields to other customers' premises in case
Acedo, L.; Villanueva-Oller, J.; Moraño, J. A.; Villanueva, R.-J.
2013-01-01
The Berkeley Open Infrastructure for Network Computing (BOINC) has become the standard open source solution for grid computing in the Internet. Volunteers use their computers to complete an small part of the task assigned by a dedicated server. We have developed a BOINC project called Neurona@Home whose objective is to simulate a cellular automata random network with, at least, one million neurons. We consider a cellular automata version of the integrate-and-fire model in which excitatory and inhibitory nodes can activate or deactivate neighbor nodes according to a set of probabilistic rules. Our aim is to determine the phase diagram of the model and its behaviour and to compare it with the electroencephalographic signals measured in real brains.
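The probabilistic excitatory/inhibitory rule described above can be sketched in a few lines. The network construction, probabilities and update rule below are illustrative assumptions, not the Neurona@Home rule set:

```python
import random

def simulate(n=200, k=4, p_inhib=0.2, p_fire=0.6, steps=50, seed=1):
    """Toy probabilistic cellular automaton on a random network.

    Each node is excitatory or inhibitory. Active excitatory neighbours
    push a node towards firing; active inhibitory neighbours suppress it.
    Returns the fraction of active nodes at each step.
    """
    rng = random.Random(seed)
    # Each node picks k random neighbours (directed, for simplicity).
    neigh = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    inhib = [rng.random() < p_inhib for _ in range(n)]
    state = [rng.random() < 0.1 for _ in range(n)]  # ~10% initially active
    activity = []
    for _ in range(steps):
        nxt = []
        for i in range(n):
            drive = sum(-1 if inhib[j] else 1 for j in neigh[i] if state[j])
            if drive > 0:
                nxt.append(rng.random() < p_fire)  # net excitation: may fire
            elif drive < 0:
                nxt.append(False)                  # net inhibition: silenced
            else:
                nxt.append(state[i])               # balanced: unchanged
        state = nxt
        activity.append(sum(state) / n)
    return activity

act = simulate()
```

Varying `p_inhib` and `p_fire` is the kind of parameter sweep needed to map out a phase diagram for such a model.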
Sanchez-Moreno, M; Ortega, J E; Valero, A
1989-12-01
High levels of malate dehydrogenase were found in Trichuris ovis. Two molecular forms of the enzyme, of different cellular location and electrophoretic pattern, were isolated and purified. The activity of soluble malate dehydrogenase was greater than that of mitochondrial malate dehydrogenase. Both forms also displayed different electrophoretic profiles in comparison with purified extracts from goat (Capra hircus) liver. Substrate concentration directly affected enzyme activity. Host and parasite malate dehydrogenase activity were both inhibited by a series of benzimidazoles and pyrimidine-derived compounds, some of which markedly reduced parasite enzyme activity, but not host enzyme activity. Percentage inhibition by some pyrimidine derivatives was greater than that produced by benzimidazoles.
Fluid queues and regular variation
Boxma, O.J.
1996-01-01
This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail gives rise to an even
Fluid queues and regular variation
O.J. Boxma (Onno)
1996-01-01
textabstractThis paper considers a fluid queueing system, fed by $N$ independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index $\zeta$. We show that its fat tail
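For readers unfamiliar with the terminology, a notational sketch of regular variation (the standard definition, not quoted from the paper):

```latex
% A distribution tail \bar{F}(t) = P(A > t) is regularly varying of
% index \zeta if it factors as a power law times a slowly varying L:
\bar{F}(t) \;=\; t^{-\zeta}\, L(t), \qquad
\lim_{t \to \infty} \frac{L(xt)}{L(t)} \;=\; 1 \quad \text{for all } x > 0 .
% Such "fat" tails decay so slowly that moments of order \ge \zeta
% of the activity period are infinite.
```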
Maekawa, Masashi; Yang, Yanbo; Fairn, Gregory D
2016-03-08
Cholesterol is an essential structural component of cellular membranes in eukaryotes. Cholesterol in the exofacial leaflet of the plasma membrane is thought to form membrane nanodomains with sphingolipids and specific proteins. Additionally, cholesterol is found in the intracellular membranes of endosomes and has crucial functions in membrane trafficking. Furthermore, cellular cholesterol homeostasis and regulation of de novo synthesis rely on transport via both vesicular and non-vesicular pathways. Thus, the ability to visualize and detect intracellular cholesterol, especially in the plasma membrane, is critical to understanding the complex biology associated with cholesterol and the nanodomains. Perfringolysin O (PFO), a theta toxin, is one of the toxins secreted by the anaerobic bacterium Clostridium perfringens, and it forms pores in the plasma membrane that cause cell lysis. It is well understood that PFO recognizes and binds to cholesterol in the exofacial leaflet of the plasma membrane, and domain 4 of PFO (D4) is sufficient for the binding of cholesterol. Recent studies have taken advantage of this high-affinity cholesterol-binding domain to create a variety of cholesterol biosensors by using a non-toxic PFO or the D4 in isolation. This review highlights the characteristics and usefulness of, and the principal findings related to, these PFO-derived cholesterol biosensors.
Zhang, Zhongkai; Zheng, Kuanyu; Dong, Jiahong; Fang, Qi; Hong, Jian; Wang, Xifeng
2016-01-19
Tomato spotted wilt virus (TSWV) and Tomato zonate spot virus (TZSV), the two dominant species of thrip-transmitted tospoviruses, cause significant losses in crop yield in Yunnan and its neighboring provinces in China. TSWV and TZSV belong to different serogroups of tospoviruses but induce similar symptoms in the same host plant species, which makes diagnosis difficult. We used different electron microscopy preparation methods to investigate the clustering and cellular distribution of TSWV and TZSV in host plant species. Negative staining of samples infected with TSWV and TZSV revealed that particles usually clustered in vesicles, as single particles (SP), double particles clustering (DPC) or triple particles clustering (TPC). In immunogold-labeled negative staining against proteins of TZSV, the antibodies against the Gn protein stained more strongly than those against the N protein. Ultrathin section and high-pressure freeze (HPF) electron microscopy preparations revealed that TSWV particles were distributed in the cisternae of the endoplasmic reticulum (ER), filamentous inclusions (FI) and Golgi bodies in the mesophyll cells. The TSWV particles clustered as multiple particles clustering (MPC) and were distributed in globular viroplasm or cisternae of the ER in the top leaf cells. TZSV particles were distributed more abundantly in the swollen membrane of the ER in the mesophyll cells than in the phloem parenchyma cells and were not observed in the top leaf cells. However, TZSV virions were mainly present as single particles in the cytoplasm, with few clustering as MPC. In this study, we identified that TSWV and TZSV particles had distinct cellular distribution patterns in the cytoplasm of different tissues and host plants. This is the first report of the specific clustering characteristics of tospovirus particles as well as the cellular distribution of TSWV particles in the FI and globular viroplasm, whereas TZSV particles occur inside the membrane of the ER. These results indicated that
van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime
2016-01-01
This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,
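A direct way to internalize the definition is to test it computationally. The following is a minimal pure-Python sketch (the function names are mine, not from the survey) that checks whether a small graph is distance-regular by verifying that the intersection numbers depend only on the distance between a pair of vertices:

```python
from collections import deque
from itertools import product

def distances_from(adj, s):
    """BFS distances from s in an unweighted graph given as adjacency lists."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def is_distance_regular(adj):
    """For every pair (u, v) at distance i, count b_i = neighbours of v at
    distance i+1 from u and c_i = neighbours at distance i-1 from u; the
    graph is distance-regular iff (b_i, c_i) depends only on i."""
    n = len(adj)
    dist = [distances_from(adj, s) for s in range(n)]
    params = {}
    for u, v in product(range(n), repeat=2):
        i = dist[u][v]
        b = sum(1 for w in adj[v] if dist[u][w] == i + 1)
        c = sum(1 for w in adj[v] if dist[u][w] == i - 1)
        if params.setdefault(i, (b, c)) != (b, c):
            return False
    return True

cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}  # C5 is distance-regular
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}              # P4 is not even regular
```

Any cycle graph passes this check, while a path fails it immediately because its endpoint degrees differ.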
Nijholt, Antinus
1980-01-01
Culik II and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular
Regular Expression Pocket Reference
Stubblebine, Tony
2007-01-01
This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp
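As a flavor of what such a reference covers, here is a small illustrative Python example (the text and patterns below are mine, not taken from the book) showing a capturing group and a lookahead assertion:

```python
import re

# Pull version numbers like "5.8" or "1.9" out of prose.
text = "covers APIs for Perl 5.8 and Ruby (including some 1.9 features)"
versions = re.findall(r"\b(\d+\.\d+)\b", text)
# versions -> ['5.8', '1.9']

# A lookahead matches a capitalized word only when a version follows it,
# without consuming the version itself.
langs = re.findall(r"\b([A-Z]\w+)(?=\s+\d+\.\d+)", text)
# langs -> ['Perl']
```

The same two constructs exist, with minor syntactic differences, in all of the engines the book covers (PCRE, Java, .NET, JavaScript, and so on).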
Li, Jing; Xiang, Cong-Ying; Yang, Jian; Chen, Jian-Ping; Zhang, Heng-Mu
2015-09-11
Small heat shock proteins (sHSPs) perform a fundamental role in protecting cells against a wide array of stresses but their biological function during viral infection remains unknown. Rice stripe virus (RSV) causes a severe disease of rice in Eastern Asia. OsHSP20 and its homologue (NbHSP20) were used as baits in yeast two-hybrid (YTH) assays to screen an RSV cDNA library and were found to interact with the viral RNA-dependent RNA polymerase (RdRp) of RSV. Interactions were confirmed by pull-down and BiFC assays. Further analysis showed that the N-terminus (residues 1-296) of the RdRp was crucial for the interaction between the HSP20s and viral RdRp and responsible for the alteration of the sub-cellular localization and distribution pattern of HSP20s in protoplasts of rice and epidermal cells of Nicotiana benthamiana. This is the first report that a plant virus or a viral protein alters the expression pattern or sub-cellular distribution of sHSPs.
Cilliers, Cornelius; Guo, Hans; Liao, Jianshan; Christodolu, Nikolas; Thurber, Greg M.
2016-01-01
Antibody drug conjugates exhibit complex pharmacokinetics due to their combination of macromolecular and small molecule properties. These issues range from systemic concerns, such as deconjugation of the small molecule drug during the long antibody circulation time or rapid clearance from non-specific interactions, to local tumor tissue heterogeneity, cell bystander effects, and endosomal escape. Mathematical models can be used to study the impact of these processes on overall distribution in an efficient manner, and several types of models have been used to analyze varying aspects of antibody distribution including physiologically based pharmacokinetic (PBPK) models and tissue-level simulations. However, these processes are quantitative in nature and cannot be handled qualitatively in isolation. For example, free antibody from deconjugation of the small molecule will impact the distribution of conjugated antibodies within the tumor. To incorporate these effects into a unified framework, we have coupled the systemic and organ-level distribution of a PBPK model with the tissue-level detail of a distributed parameter tumor model. We used this mathematical model to analyze new experimental results on the distribution of the clinical antibody drug conjugate Kadcyla in HER2 positive mouse xenografts. This model is able to capture the impact of the drug antibody ratio (DAR) on tumor penetration, the net result of drug deconjugation, and the effect of using unconjugated antibody to drive ADC penetration deeper into the tumor tissue. This modeling approach will provide quantitative and mechanistic support to experimental studies trying to parse the impact of multiple mechanisms of action for these complex drugs. PMID:27287046
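The idea of coupling systemic circulation to tissue-level uptake can be illustrated with a deliberately minimal two-compartment sketch. The rate constants below are arbitrary illustrative values, not the paper's PBPK parameters or Kadcyla data:

```python
def simulate_pk(dose=1.0, k_el=0.1, k_pt=0.05, k_tp=0.03, dt=0.1, t_end=100.0):
    """Two-compartment kinetic toy model, solved by forward Euler:
    plasma <-> tumour tissue exchange, first-order elimination from plasma.
    Returns time points and the amounts in each compartment."""
    plasma, tumour = dose, 0.0
    ts, ps, us = [], [], []
    t = 0.0
    while t <= t_end:
        ts.append(t)
        ps.append(plasma)
        us.append(tumour)
        dp = -k_el * plasma - k_pt * plasma + k_tp * tumour  # plasma balance
        du = k_pt * plasma - k_tp * tumour                   # tumour balance
        plasma += dp * dt
        tumour += du * dt
        t += dt
    return ts, ps, us

ts, ps, us = simulate_pk()
```

A full PBPK model replaces the single tumour compartment with organ compartments plus a distributed-parameter tissue model, but the bookkeeping (coupled mass balances) is the same.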
International Nuclear Information System (INIS)
Kraemer, Lisa D.; Campbell, Peter G.C.; Hare, Landis
2006-01-01
perch with high hepatic metal concentrations. - In fish from metal-contaminated sites, seasonal variations in hepatic Cd and Cu concentrations were greater than in fish from reference sites, and homeostatic control of sub-cellular metal distribution was compromised
Moore, R.; McClelen, C. E.
1985-01-01
In calyptrogen cells of Zea mays, proplastids are distributed randomly throughout the cell, and the endoplasmic reticulum (ER) is distributed parallel to the cell walls. The differentiation of calyptrogen cells into columella statocytes is characterized by the following sequential events: (1) formation of ER complexes at the distal and proximal ends of the cell, (2) differentiation of proplastids into amyloplasts, (3) sedimentation of amyloplasts onto the distal ER complex, (4) breakdown of the distal ER complex and sedimentation of amyloplasts to the bottom of the cell, and (5) formation of sheets of ER parallel to the longitudinal cell walls. Columella statocytes located in the centre of the cap each possess 4530 ± 780 μm² of ER surface area, an increase of 670 per cent over that of calyptrogen cells. The differentiation of peripheral cells correlates positively with (1) the ER becoming arranged in concentric sheets, (2) amyloplasts and ER becoming randomly distributed, and (3) a 280 per cent increase in ER surface area over that of columella statocytes. These results are discussed relative to graviperception and mucilage secretion, which are functions of columella and peripheral cells, respectively.
Energy Technology Data Exchange (ETDEWEB)
Li, Zhu [Key Laboratory of Soil Environment and Pollution Remediation, Institute of Soil Science, Chinese Academy of Sciences, Nanjing 210008 (China); University of the Chinese Academy of Sciences, Beijing 100049 (China); Wu, Longhua, E-mail: lhwu@issas.ac.cn [Key Laboratory of Soil Environment and Pollution Remediation, Institute of Soil Science, Chinese Academy of Sciences, Nanjing 210008 (China); Hu, Pengjie [Key Laboratory of Soil Environment and Pollution Remediation, Institute of Soil Science, Chinese Academy of Sciences, Nanjing 210008 (China); Luo, Yongming [Key Laboratory of Soil Environment and Pollution Remediation, Institute of Soil Science, Chinese Academy of Sciences, Nanjing 210008 (China); Yantai Institute of Coastal Zone Research, Yantai 264003 (China); Christie, Peter [Agri-Environment Branch, Agri-Food and Biosciences Institute, Newforge Lane, Belfast BT9 5PX (United Kingdom)
2013-10-15
Highlights: • Low Cu has no significant effect on Sedum plumbizincicola plant growth and Cd and Zn uptake. • The plant held Cu in inactive areas and insoluble forms as detoxification mechanisms. • The influences of Cu on Zn and Cd uptake and translocation were different. • Cu accumulation in leaf veins may restrain Cd/Zn unloading to the leaves. -- Abstract: Non-accumulated metals in mixed metal contaminated soils may affect hyperaccumulator growth and metal accumulation and thus remediation efficiency. Two hydroponics experiments were conducted to investigate the effects of copper (Cu) on cadmium (Cd) and zinc (Zn) accumulation by the Cd/Zn hyperaccumulator Sedum plumbizincicola, Cu toxicity and plant detoxification using chemical sequential extraction of metals, sub-cellular separation, micro synchrotron-radiation-based X-ray fluorescence, and transmission electron microscopy. Compared with the control (0.31 μM Cu), 5–50 μM Cu had no significant effect on Cd/Zn accumulation, but Cu at 200 μM induced root cell plasmolysis and disordered chloroplast structure. The plants held Cu in the roots and cell walls and complexed Cu in insoluble forms as their main detoxification mechanisms. Exposure to 200 μM Cu for 4 days inhibited plant Cd uptake and translocation but did not affect Zn concentrations in roots and stems. Moreover, unloading of Cd and Zn from stem to leaf was restrained compared to control plants, perhaps due to Cu accumulation in leaf veins. Copper may thus interfere with root Cd uptake and restrain Cd/Zn unloading to the leaves. Further investigation of how Cu affects plant metal uptake may help elucidate the Cd/Zn hyper-accumulating mechanisms of S. plumbizincicola.
International Nuclear Information System (INIS)
Li, Zhu; Wu, Longhua; Hu, Pengjie; Luo, Yongming; Christie, Peter
2013-01-01
Highlights: • Low Cu has no significant effect on Sedum plumbizincicola plant growth and Cd and Zn uptake. • The plant held Cu in inactive areas and insoluble forms as detoxification mechanisms. • The influences of Cu on Zn and Cd uptake and translocation were different. • Cu accumulation in leaf veins may restrain Cd/Zn unloading to the leaves. -- Abstract: Non-accumulated metals in mixed metal contaminated soils may affect hyperaccumulator growth and metal accumulation and thus remediation efficiency. Two hydroponics experiments were conducted to investigate the effects of copper (Cu) on cadmium (Cd) and zinc (Zn) accumulation by the Cd/Zn hyperaccumulator Sedum plumbizincicola, Cu toxicity and plant detoxification using chemical sequential extraction of metals, sub-cellular separation, micro synchrotron-radiation-based X-ray fluorescence, and transmission electron microscopy. Compared with the control (0.31 μM Cu), 5–50 μM Cu had no significant effect on Cd/Zn accumulation, but Cu at 200 μM induced root cell plasmolysis and disordered chloroplast structure. The plants held Cu in the roots and cell walls and complexed Cu in insoluble forms as their main detoxification mechanisms. Exposure to 200 μM Cu for 4 days inhibited plant Cd uptake and translocation but did not affect Zn concentrations in roots and stems. Moreover, unloading of Cd and Zn from stem to leaf was restrained compared to control plants, perhaps due to Cu accumulation in leaf veins. Copper may thus interfere with root Cd uptake and restrain Cd/Zn unloading to the leaves. Further investigation of how Cu affects plant metal uptake may help elucidate the Cd/Zn hyper-accumulating mechanisms of S. plumbizincicola.
Regularization by External Variables
DEFF Research Database (Denmark)
Bossolini, Elena; Edwards, R.; Glendinning, P. A.
2016-01-01
Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well-known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization.
Goyvaerts, Jan
2009-01-01
This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a
Zhi, Huanhuan; Liu, Qiqi; Xu, Juan; Dong, Yu; Liu, Mengpei; Zong, Wei
2017-12-01
Ultrasound has been applied in fruit pre-washing processes. However, it is not sufficient to protect fruit from pathogenic infection throughout the entire storage period, and sometimes ultrasound causes tissue damage. The goal of this study was to investigate the effects of calcium chloride (CaCl2, 10 g L−1) and ultrasound (350 W at 40 kHz), separately and in combination, on jujube fruit quality, antioxidant status, tissue Ca2+ content and distribution, and cell wall metabolism at 20 °C for 6 days. All three treatments significantly maintained fruit firmness and peel color, reduced respiration rate, decay incidence, superoxide anion, hydrogen peroxide and malondialdehyde, and preserved higher enzymatic (superoxide dismutase, catalase and peroxidase) and non-enzymatic (ascorbic acid and glutathione) antioxidants compared with the control. Moreover, the combined treatment was more effective in increasing tissue Ca2+ content and distribution, inhibiting the generation of water-soluble and CDTA-soluble pectin fractions, delaying the solubilization of Na2CO3-soluble pectin and lowering the activities of cell wall-modifying enzymes (polygalacturonase and pectate lyase) during storage. These results demonstrated that the combination of CaCl2 and ultrasound has potential commercial application to extend the shelf life of jujube fruit by facilitating Ca2+ absorption and stabilizing the cell wall structure. © 2017 Society of Chemical Industry.
Pattanakuhar, Sintip; Pongchaidecha, Anchalee; Chattipakorn, Nipon; Chattipakorn, Siriporn C
Skeletal muscles play important roles in metabolism, energy expenditure, physical strength, and locomotive activity. Skeletal muscle fibre types in the body are heterogeneous: fibres can be classified as oxidative or glycolytic, with oxidative-type fibres being fatigue-resistant and using oxidative metabolism, while glycolytic-type fibres are fatigue-sensitive and prefer glycolytic metabolism. Several studies have demonstrated that an obese condition with abnormal metabolic parameters is negatively correlated with the distribution of oxidative-type skeletal muscle fibres, but positively associated with that of glycolytic-type muscle fibres. However, some studies demonstrated otherwise. In addition, several studies demonstrated that an exercise training programme caused the redistribution of oxidative-type skeletal muscle fibres in obesity, whereas some studies showed inconsistent findings. Therefore, the present review comprehensively summarizes and discusses these consistent and inconsistent findings from clinical studies regarding the association among the distribution of skeletal muscle fibre types, obese conditions, and exercise training programmes. Furthermore, the possible underlying mechanisms and the clinical application of the alterations in muscle fibre type following obesity are presented and discussed. Copyright © 2016 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
Gawryszewska, Iwona; Malinowska, Katarzyna; Kuch, Alicja; Chrobak-Chmiel, Dorota; Trokenheim, Lucja Laniewska-; Hryniewicz, Waleria; Sadowy, Ewa
2017-03-01
Enterococcus faecalis represents an important factor of hospital-associated infections (HAIs). The knowledge on its evolution from a commensal to an opportunistic pathogen is still limited; thus, we performed a study to characterise distribution of factors that may contribute to this adaptation. Using a collection obtained from various settings (hospitalised patients, community carriers, animals, fresh food, sewage, water), we investigated differences in antimicrobial susceptibility, distribution of antimicrobial resistance genes, virulence-associated determinants and phenotypes, and CRISPR loci in the context of the clonal relatedness of isolates. Bayesian Analysis of Population Structure revealed the presence of three major groups; two subgroups comprised almost exclusively HAI isolates, belonging to previously proposed enterococcal high-risk clonal complexes (HiRECCs) 6 and 28. Isolates of these two subgroups were significantly enriched in antimicrobial resistance genes, presumably produced a polysaccharide capsule and often carried the aggregation substance asa1; distribution of other virulence-associated genes, such as esp and cyl, formation of a biofilm and gelatinase production were more variable. Moreover, both subgroups showed a low prevalence of CRISPR-Cas 1 and 3 and presence of small CRISPR2 variants. Our study confirms the importance of HiRECCs in the population of E. faecalis and their confinement to the hospital settings. © FEMS 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Nishina-Uchida, Noriko; Fukuzawa, Ryuji; Ishii, Tomohiro; Anaka, Matthew R; Hasegawa, Tomonobu; Hasegawa, Yukihiro
2016-01-01
Individuals with a 46,XX/46,XY karyotype are categorized as ovotesticular disorder of sexual development (ODSD) and have gonads with either an ovary on one side and a testis on the other side or a mixed ovotestis. To examine the distribution of 46,XX and 46,XY cells in gonads of 3 patients with ODSD, FISH for X and Y chromosomes and immunohistochemistry for SOX9 and FOXL2 were carried out. FISH analysis showed that XX signals were present in Sertoli cells in the seminiferous tubules, while cells containing Y signals were seen in epithelia of ovarian follicles. The immunolabeling of SOX9 and FOXL2 in the seminiferous tubules and ovarian follicles was mutually exclusive, irrespective of the presence of reversed sex chromosomes. We therefore suggest that the fate of individual gonadal epithelial cells is determined not only by the sex chromosomes but also by local environmental factors. © 2016 S. Karger AG, Basel.
Regularities of Multifractal Measures
Indian Academy of Sciences (India)
First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately and recombine them without affecting density properties. Next, we ...
Stochastic analytic regularization
International Nuclear Information System (INIS)
Alfaro, J.
1984-07-01
Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)
Directory of Open Access Journals (Sweden)
Ding X
2012-02-01
homogeneous distribution in the cytoplasm; those with a lower hydrophilic-lipophilic balance value prefer to localize in the nucleus. Conclusion: This Pluronic-polyethyleneimine system may be worth exploring as a component of cationic copolymer-based DNA or small interfering RNA/microRNA delivery systems in the near future. Keywords: Pluronics, gene transfer, nonviral vectors, transfection efficiency, cellular uptake
International Nuclear Information System (INIS)
Pshenin, G.N.; Steklenkov, A.P.; Varushchenko, A.N.
1991-01-01
The analysis of the space-time distribution of ancient organogenous material is carried out through generalization of practically all currently available data on radiocarbon dating of Upper Pleistocene and Holocene sediments in Middle Asia. The investigations were performed to study the variability of humidification over a specific territory of Middle Asia within a determined period of time. Three rather clearly delimited vertical height intervals are determined from the results of isotope dating of wood, coal, peat and mollusc samples.
Kuznetsov, A. N.; Fedorov, Yu A.; Yaroslavtsev, V. M.
2018-01-01
The study of the vertical distribution of pollutants in seabed sediments is of high interest, as sediments conserve information on the chronology of pollution levels in the past. In the present paper, the results of a layer-by-layer study of Cs-137, Am-241 and Pb-210 specific activities, as well as concentrations of petroleum components, lead and mercury, in 48 sediment cores of the Sea of Azov, the Don River and the Kuban River are examined. In most sediment cores, two peaks of Cs-137 and Am-241 are detected. The upper of them was formed due to the Chernobyl accident of 1986, and the other is related to the global nuclear fallout of the 1960s. The specific activity of naturally occurring atmospheric Pb-210 decreases exponentially with sediment core depth; however, it is influenced by fluvial run-off, coastal erosion, and Ra-226 and Rn-222 decay. The data on the radionuclide distribution in the seabed sediments are used to date them. According to the results of dating, most of the petroleum components, lead and mercury are concentrated in the upper sediment layer formed in the last 50 to 70 years, i.e. in the period of the strongest anthropogenic pressure.
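As a hypothetical illustration of the dating principle (the numbers and the simple constant-initial-activity model below are ours, not from the study), the age of a layer follows from the exponential decay of unsupported Pb-210:

```python
import math

PB210_HALF_LIFE_YEARS = 22.3  # half-life of Pb-210

def layer_age_years(surface_activity, layer_activity):
    """Estimate the age of a sediment layer from the decay of
    unsupported Pb-210, assuming a constant initial activity at the
    sediment surface (a simple CIC-type model)."""
    decay_const = math.log(2) / PB210_HALF_LIFE_YEARS
    return math.log(surface_activity / layer_activity) / decay_const

# A layer holding half the surface activity is one half-life old.
print(round(layer_age_years(100.0, 50.0), 1))  # 22.3
```

Real cores require corrections for the supported (Ra-226-derived) component and for the fluvial and erosional inputs mentioned above.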
International Nuclear Information System (INIS)
Farkas, A.; Balashazy, I.; Szoeke, I.
2003-01-01
The general objective of our research is modelling the biophysical processes of the effects of inhaled radon progenies. This effort is related to the rejection or support of the linear no-threshold (LNT) dose-effect hypothesis, which seems to be one of the most challenging tasks of current radiation protection. Our approximation and results may also serve as a useful tool for lung cancer models. In this study, deposition patterns, activity distributions and alpha-hit probabilities of inhaled radon progenies in the large airways of the human tracheobronchial tree are computed. The airflow fields and related particle deposition patterns strongly depend on the shape of the airway geometry and the breathing pattern. Computed deposition patterns of attached and unattached radon progenies are strongly inhomogeneous, creating hot spots at the carinal regions and downstream of the inner sides of the daughter airways. The results suggest that in the vicinity of the carinal regions the multiple-hit probabilities are quite high even at low average doses and increase exponentially in the low-dose range. Thus, even so-called low doses may present high doses for large clusters of cells. The cell transformation probabilities are much higher in these regions, and this phenomenon cannot be modelled with average burdens. (authors)
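The jump from average dose to multiple-hit probability can be made concrete with a Poisson sketch (our illustrative assumption; the paper computes hits from simulated deposition patterns):

```python
import math

def multi_hit_prob(mean_hits):
    """Probability that a cell nucleus receives two or more alpha-particle
    traversals, assuming hit numbers are Poisson-distributed:
    P(N >= 2) = 1 - e^(-lambda) * (1 + lambda)."""
    return 1.0 - math.exp(-mean_hits) * (1.0 + mean_hits)

# At hot spots the local mean hit number can be many times the population
# average, so multiple hits occur even when the average dose is low.
for lam in (0.1, 1.0, 5.0):
    print(lam, round(multi_hit_prob(lam), 4))
```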
F.C. Gruau; J.T. Tromp (John)
1999-01-01
We consider the problem of establishing gravity in cellular automata. In particular, when cellular automata states can be partitioned into empty, particle, and wall types, with the latter enclosing rectangular areas, we desire rules that will make the particles fall down and pile up on
Coupling regularizes individual units in noisy populations
International Nuclear Information System (INIS)
Ly Cheng; Ermentrout, G. Bard
2010-01-01
The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula, assuming weak noise and coupling, for the variance of the period (i.e., of spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher-dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can thus have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and for the modulation of variability of individual oscillators.
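For the linear O-U case the effect can be verified in closed form: solving the Lyapunov equation for the stationary covariance in the symmetric and antisymmetric modes (our illustrative derivation and parameter choices, not taken from the paper) shows the variance of the quieter unit dropping below its uncoupled value even when its partner is noisier:

```python
def stationary_var_x(c, sigma1, sigma2):
    """Stationary variance of unit x in two diffusively coupled
    Ornstein-Uhlenbeck processes
        dx = (-x + c*(y - x)) dt + sigma1 dW1
        dy = (-y + c*(x - y)) dt + sigma2 dW2
    obtained by diagonalizing into modes u = (x+y)/sqrt(2), v = (x-y)/sqrt(2)
    (relaxation rates 1 and 1 + 2c) and recombining Var(x) = (P_uu + P_vv
    + 2*P_uv)/2."""
    q_sum = sigma1**2 + sigma2**2
    p_uu = q_sum / 4.0
    p_vv = q_sum / (4.0 * (1.0 + 2.0 * c))
    p_uv = (sigma1**2 - sigma2**2) / (4.0 * (1.0 + c))
    return (p_uu + p_vv + 2.0 * p_uv) / 2.0

# Even coupled to a noisier unit (sigma2 > sigma1), x's variance falls
# below the uncoupled value sigma1**2 / 2.
print(stationary_var_x(0.0, 1.0, 2.0))  # 0.5 (uncoupled)
print(stationary_var_x(1.0, 1.0, 2.0))  # < 0.5
```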
International Nuclear Information System (INIS)
Humm, J.L.; Chin, L.M.
1989-01-01
Radiation dose is a useful predictive parameter for describing radiation toxicity in conventional radiotherapy. Traditionally, in vitro radiation biology dose-effect relations are expressed in the form of cell survival curves, a semilog plot of cell survival versus dose. However, the characteristic linear or linear-quadratic survival curve shape, for high- and low-LET radiations respectively, is only strictly valid when the radiation dose is uniform across the entire target population. With an external beam of 60Co gamma rays or x-rays, a uniform field may be readily achievable. When radionuclides are incorporated into a cell milieu, several new problems emerge which can result in a departure from uniformity of energy deposition throughout the cell population. This nonuniformity can have very important consequences for the shape of the survival curve. Cases in which perturbations of source uniformity may arise include: 1. Elemental sources may equilibrate in the cell medium with partition coefficients between the extracellular, cytosolic, and nuclear compartments. The effect of preferential cell internalization or binding to the cell membrane of some radionuclides can increase or decrease the slope of the survival curve. 2. Radionuclides bound to antibodies, hormones, metabolite precursors, etc., may result in a source localization pattern characteristic of the carrier agent, i.e., the sources may bind to cell surface receptors or antigens, be internalized, bind to secreted antigen concentrated around a fraction of the cell population, or become directly incorporated into the cell DNA. We propose to relate the distribution of energy deposition in cell nuclei to biological correlates of cellular inactivation. The probability of each cell's survival is weighted by its individual radiation burden, and the summation of these probabilities for the cell population can be used to predict the number or fraction of cell survivors.
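The proposal in the last sentences can be sketched numerically; the linear survival model and the dose values below are illustrative assumptions, not data from the paper:

```python
import math

def surviving_fraction(doses, alpha=1.0):
    """Population surviving fraction when each cell i receives its own
    dose D_i, under a simple linear (high-LET-like) model S_i =
    exp(-alpha * D_i). The population survival is the average of the
    per-cell survival probabilities."""
    return sum(math.exp(-alpha * d) for d in doses) / len(doses)

uniform = [2.0, 2.0, 2.0, 2.0]     # every cell receives the mean dose
nonuniform = [0.0, 0.0, 0.0, 8.0]  # same mean dose, concentrated on one cell
print(round(surviving_fraction(uniform), 4))     # 0.1353
print(round(surviving_fraction(nonuniform), 4))  # 0.7501
```

With the same mean dose, concentrating the burden on a few cells raises population survival, which is why survival curves computed from the average dose alone can mislead.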
Sparse structure regularized ranking
Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin
2014-01-01
Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse
Regular expression containment
DEFF Research Database (Denmark)
Henglein, Fritz; Nielsen, Lasse
2011-01-01
We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: Containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We...
Supersymmetric dimensional regularization
International Nuclear Information System (INIS)
Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.
1980-01-01
There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) first do all algebra exactly as in D = 4; (2) then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules, needed for superconformal anomalies, are discussed. Problems associated with renormalizability and higher-order loops are also discussed.
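As a textbook illustration of step (2), not taken from the paper, a one-loop Euclidean momentum integral continued to complex D gives

```latex
\int \frac{d^D k}{(2\pi)^D}\,\frac{1}{(k^2+m^2)^2}
  = \frac{\Gamma\!\left(2-\frac{D}{2}\right)}{(4\pi)^{D/2}}\,(m^2)^{D/2-2}
  \;\xrightarrow{\;D = 4-\epsilon\;}\;
  \frac{1}{16\pi^2}\left(\frac{2}{\epsilon}-\gamma_E+\ln\frac{4\pi}{m^2}\right)
  + \mathcal{O}(\epsilon),
```

with the ultraviolet divergence isolated as the pole of the Gamma function at D = 4.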
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
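A minimal sketch of the criterion (plain gradient ascent on a linear predictor; the paper's actual optimizer alternates over an auxiliary variable, and the data below are invented):

```python
import math

def train_mcc(X, y, sigma=1.0, reg=0.1, lr=0.05, epochs=200):
    """Toy linear predictor trained under a regularized Maximum Correntropy
    Criterion: maximize sum_i exp(-(w.x_i - y_i)^2 / (2*sigma^2)) - reg*||w||^2
    by gradient ascent. The Gaussian kernel gives near-zero weight to samples
    with large errors, which is what makes the criterion robust to noisy
    labels."""
    dim = len(X[0])
    w = [0.0] * dim
    for _ in range(epochs):
        grad = [-2.0 * reg * wj for wj in w]  # gradient of -reg*||w||^2
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
            kernel = math.exp(-err**2 / (2.0 * sigma**2))
            for j in range(dim):
                grad[j] += kernel * (-err / sigma**2) * xi[j]
        w = [wj + lr * gj for wj, gj in zip(w, grad)]
    return w

# y = 2*x with one grossly mislabeled sample; the fit stays near slope 2
# because the outlier's kernel weight is essentially zero.
X = [[0.5], [1.0], [1.5], [2.0], [1.2]]
y = [1.0, 2.0, 3.0, 4.0, -50.0]
print(train_mcc(X, y))
```

A squared-error fit on the same data would be dragged far below slope 2 by the single outlier.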
Accretion onto some well-known regular black holes
International Nuclear Information System (INIS)
Jawad, Abdul; Shahzad, M.U.
2016-01-01
In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)
Chen, Mao Xiang; Gorman, Shelby A; Benson, Bill; Singh, Kuljit; Hieble, J Paul; Michel, Martin C; Tate, Simon N; Trezise, Derek J
2004-06-01
The SK/IK family of small and intermediate conductance calcium-activated potassium channels contains four members, SK1, SK2, SK3 and IK1, and is important for the regulation of a variety of neuronal and non-neuronal functions. In this study we have analysed the distribution of these channels in human tissues and their cellular localisation in samples of colon and corpus cavernosum. SK1 mRNA was detected almost exclusively in neuronal tissues. SK2 mRNA distribution was restricted but more widespread than SK1, and was detected in adrenal gland, brain, prostate, bladder, liver and heart. SK3 mRNA was detected in almost every tissue examined. It was highly expressed in brain and in smooth muscle-rich tissues including the clitoris and the corpus cavernosum, and expression in the corpus cavernosum was upregulated up to 5-fold in patients undergoing sex-change operations. IK1 mRNA was present in surface-rich, secretory and inflammatory cell-rich tissues, highest in the trachea, prostate, placenta and salivary glands. In detailed immunohistochemical studies of the colon and the corpus cavernosum, SK1-like immunoreactivity was observed in the enteric neurons. SK3-like immunoreactivity was observed strongly in smooth muscle and vascular endothelium. IK1-like immunoreactivity was mainly observed in inflammatory cells and enteric neurons of the colon, but absent in corpus cavernosum. These distinctive patterns of distribution suggest that these channels are likely to have different biological functions and could be specifically targeted for a number of human diseases, such as irritable bowel syndrome, hypertension and erectile dysfunction.
On the regularized fermionic projector of the vacuum
Finster, Felix
2008-03-01
We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.
Sabbioni, Enrico; Fortaner, Salvador; Farina, Massimo; Del Torchio, Riccardo; Petrarca, Claudia; Bernardini, Giovanni; Mariani-Costantini, Renato; Perconti, Silvia; Di Giampaolo, Luca; Gornati, Rosalba; Di Gioacchino, Mario
2014-02-01
The mechanistic understanding of nanotoxicity requires the physico-chemical characterisation of nanoparticles (NP) and their comparative investigation relative to the corresponding ions and microparticles (MP). Following this approach, the authors studied the dissolution, interaction with medium components, bioavailability in culture medium, uptake and intracellular distribution of radiolabelled Co forms (CoNP, CoMP and Co(2+)) in Balb/3T3 mouse fibroblasts. Co(2+) first saturates the binding sites of molecules in the extracellular milieu (e.g., albumin and histidine) and on the cell surface; only after saturation is Co(2+) actively taken up. CoNP, instead, are predicted to be internalised by endocytosis. Dissolution of Co particles allows the formation of Co compounds (CoNP-rel), whose mechanism of cellular internalisation is unknown. Co uptake (ranking CoMP > CoNP > Co(2+)) reached a maximum at 4 h. Once inside the cell, CoNP spread into the cytosol and organelles. Consequently, massive amounts of Co ions and CoNP-rel can reach subcellular compartments normally unexposed to Co(2+). This could explain the fact that the nuclear and mitochondrial Co concentrations were significantly higher than those obtained with Co(2+).
Rajapakse, Dinusha; Peterson, Katherine; Mishra, Sanghamitra; Wistow, Graeme
2017-12-15
Retinal pigment epithelium (RPE) has been implicated as a key source of cholesterol-rich deposits at Bruch's membrane (BrM) and in drusen in the aging human eye. We have shown that serum-deprivation of confluent RPE cells is associated with upregulation of cholesterol synthesis and accumulation of unesterified cholesterol (UC). Here we investigate the cellular processes involved in this response. We compared the distribution and localization of UC and esterified cholesterol (EC); the age-related macular degeneration (AMD) associated EFEMP1/Fibulin3 (Fib3); and levels of the acyl-CoA:cholesterol acyltransferases ACAT1 and ACAT2 and Apolipoprotein B (ApoB) in ARPE-19 cells cultured in serum-supplemented and serum-free media. The results were compared with distributions of these lipids and proteins in human donor eyes with AMD. Serum deprivation of ARPE-19 was associated with increased formation of FM dye-positive membrane vesicles, many of which co-labeled for UC. Additionally, UC colocalized with Fib3 in distinct granules. By day 5, serum-deprived cells grown on transwells secreted Fib3 basally into the matrix. While mRNA and protein levels of ACAT1 were constant over several days of serum-deprivation, ACAT2 levels increased significantly, suggesting increased formation of EC. The lower levels of intracellular EC observed under serum-deprivation were associated with increased formation and secretion of ApoB. The responses to serum-deprivation in RPE-derived cells: accumulation and secretion of lipids, lipoproteins, and Fib3 are very similar to patterns seen in human donor eyes with AMD and suggest that this model mimics processes relevant to disease progression. Published by Elsevier Inc.
Manifold Regularized Reinforcement Learning.
Li, Hongliang; Liu, Derong; Wang, Ding
2018-04-01
This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.
Outer-totalistic cellular automata on graphs
International Nuclear Information System (INIS)
Marr, Carsten; Huett, Marc-Thorsten
2009-01-01
We present an intuitive formalism for implementing cellular automata on arbitrary topologies. By that means, we identify a symmetry operation in the class of elementary cellular automata. Moreover, we determine the subset of topologically sensitive elementary cellular automata and find that the overall number of complex patterns decreases under increasing neighborhood size in regular graphs. As exemplary applications, we apply the formalism to complex networks and compare the potential of scale-free graphs and metabolic networks to generate complex dynamics
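The formalism can be sketched in a few lines: an outer-totalistic rule sees only a node's own state and the sum over its neighbours, so the same update runs unchanged on any topology. The ring example and the additive parity rule below are our illustrative choices, not from the paper:

```python
def step_outer_totalistic(states, neighbors, rule):
    """One synchronous update of a binary outer-totalistic cellular automaton
    on an arbitrary graph: each node's next state depends only on its own
    state and the sum of its neighbors' states."""
    return {node: rule(state, sum(states[nb] for nb in neighbors[node]))
            for node, state in states.items()}

# Parity rule on a 6-node ring: next state is the neighbour sum mod 2
# (an additive outer-totalistic rule; it happens to ignore the own state).
neighbors = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
states = {i: 1 if i == 0 else 0 for i in range(6)}
parity = lambda own, nsum: nsum % 2
states = step_outer_totalistic(states, neighbors, parity)
print([states[i] for i in range(6)])  # [0, 1, 0, 0, 0, 1]
```

Swapping `neighbors` for the adjacency lists of a scale-free graph or a metabolic network runs the identical rule on that topology, which is the point of the formalism.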
Diverse Regular Employees and Non-regular Employment (Japanese)
MORISHIMA Motohiro
2011-01-01
Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...
Sparse structure regularized ranking
Wang, Jim Jing-Yan
2014-04-17
Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object can be represented as a sparse linear combination of all other objects, and the combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
'Regular' and 'emergency' repair
International Nuclear Information System (INIS)
Luchnik, N.V.
1975-01-01
Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)
Regularization of divergent integrals
Felder, Giovanni; Kazhdan, David
2016-01-01
We study the Hadamard finite part of divergent integrals of differential forms with singularities on submanifolds. We give formulae for the dependence of the finite part on the choice of regularization and express them in terms of a suitable local residue map. The cases where the submanifold is a complex hypersurface in a complex manifold and where it is a boundary component of a manifold with boundary, arising in string perturbation theory, are treated in more detail.
Regularizing portfolio optimization
International Nuclear Information System (INIS)
Still, Susanne; Kondor, Imre
2010-01-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
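A sketch of the regularized optimization with variance standing in for the expected-shortfall risk measure (our simplification for brevity; the covariance numbers are invented). With an L2 penalty the weights solve a linear system, (Cov + lam*I) w ∝ 1 under the budget constraint:

```python
def regularized_weights(cov, lam):
    """Minimum-variance portfolio weights under an L2 penalty:
    minimize w' Cov w + lam * ||w||^2 subject to sum(w) = 1.
    The solution is proportional to (Cov + lam*I)^{-1} 1; here we solve
    the small system by Gauss-Jordan elimination (no pivoting, which is
    fine for this toy positive-definite matrix)."""
    n = len(cov)
    a = [[cov[i][j] + (lam if i == j else 0.0) for j in range(n)] + [1.0]
         for i in range(n)]
    for i in range(n):
        piv = a[i][i]
        for j in range(i, n + 1):
            a[i][j] /= piv
        for k in range(n):
            if k != i:
                f = a[k][i]
                for j in range(i, n + 1):
                    a[k][j] -= f * a[i][j]
    w = [a[i][n] for i in range(n)]
    total = sum(w)
    return [wi / total for wi in w]

cov = [[0.04, 0.01], [0.01, 0.09]]
print(regularized_weights(cov, 0.0))  # unregularized minimum variance
print(regularized_weights(cov, 1.0))  # shrunk toward equal weights
```

Increasing `lam` pulls the weights toward the equal-weight portfolio, which is the "diversification pressure" described above.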
Regular Single Valued Neutrosophic Hypergraphs
Directory of Open Access Journals (Sweden)
Muhammad Aslam Malik
2016-12-01
In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.
The geometry of continuum regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-03-01
This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations
Multiple graph regularized protein domain ranking
Wang, Jim Jing-Yan
2012-11-19
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.
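The core idea, stripped of the weight learning, can be sketched as score propagation over a fixed convex combination of graphs (an illustrative simplification of MultiG-Rank, with made-up graphs; the actual algorithm also learns the graph weights by alternating minimization):

```python
def multi_graph_rank(query, adjacencies, weights, alpha=0.5, iters=100):
    """Ranking-score propagation regularized by a convex combination of
    several similarity graphs. Iterates the manifold-ranking update
    r <- (1 - alpha) * q + alpha * W_combined * r, with W_combined
    row-normalized so each step averages over graph neighbours."""
    n = len(query)
    w = [[sum(mu * g[i][j] for mu, g in zip(weights, adjacencies))
          for j in range(n)] for i in range(n)]
    for i in range(n):
        row = sum(w[i]) or 1.0  # guard against isolated nodes
        w[i] = [x / row for x in w[i]]
    r = list(query)
    for _ in range(iters):
        r = [(1 - alpha) * query[i]
             + alpha * sum(w[i][j] * r[j] for j in range(n))
             for i in range(n)]
    return r

# Two 4-node graphs; node 0 is the query, and nodes closer to it in the
# combined graph receive higher ranking scores.
g1 = [[0, 1, 1, 0], [1, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]]
g2 = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
scores = multi_graph_rank([1, 0, 0, 0], [g1, g2], [0.5, 0.5])
print(scores)
```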
Multiple graph regularized protein domain ranking
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-01-01
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. © 2012 Wang et al; licensee BioMed Central Ltd.
Multiple graph regularized protein domain ranking.
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-11-19
Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
Multiple graph regularized protein domain ranking
Directory of Open Access Journals (Sweden)
Wang Jim
2012-11-01
Full Text Available Abstract Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
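The alternating scheme these records describe (jointly learning ranking scores and graph weights) can be sketched in a few lines. The code below is an illustrative reconstruction, not the authors' implementation: the exact objective, the parameter names `alpha`/`beta`, and the clip-and-renormalize step for the simplex constraint are all assumptions.

```python
import numpy as np

def laplacian(W):
    """Unnormalized graph Laplacian from a symmetric affinity matrix W."""
    return np.diag(W.sum(axis=1)) - W

def multig_rank(laplacians, y, alpha=1.0, beta=1.0, iters=20):
    """Sketch of multiple-graph regularized ranking (hypothetical API).

    Alternately minimizes  f'(sum_k mu_k L_k) f + alpha*||f - y||^2 + beta*||mu||^2
    over ranking scores f and simplex-constrained graph weights mu.
    """
    m, n = len(laplacians), y.shape[0]
    mu = np.full(m, 1.0 / m)            # uniform initial graph weights
    I = np.eye(n)
    f = y.astype(float)
    for _ in range(iters):
        L = sum(w * Lk for w, Lk in zip(mu, laplacians))
        # f-step: setting the gradient in f to zero gives (L + alpha*I) f = alpha*y
        f = np.linalg.solve(L + alpha * I, alpha * y)
        # mu-step: minimize sum_k mu_k r_k + beta*||mu||^2 with sum_k mu_k = 1;
        # the Lagrangian gives a closed form, then clip-and-renormalize (heuristic)
        r = np.array([f @ Lk @ f for Lk in laplacians])
        lam = -(r.sum() + 2.0 * beta) / m
        mu = np.clip(-(r + lam) / (2.0 * beta), 0.0, None)
        mu /= mu.sum()
    return f, mu
```

With a query indicator `y`, the returned `f` ranks all nodes against the query while `mu` exposes which of the candidate graphs the data favored.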
Annotation of Regular Polysemy
DEFF Research Database (Denmark)
Martinez Alonso, Hector
Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words...... and metonymic. We have conducted an analysis in English, Danish and Spanish. Later on, we have tried to replicate the human judgments by means of unsupervised and semi-supervised sense prediction. The automatic sense-prediction systems have been unable to find empiric evidence for the underspecified sense, even...
Regularity of Minimal Surfaces
Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht
2010-01-01
"Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t
Regularities of radiation heredity
International Nuclear Information System (INIS)
Skakov, M.K.; Melikhov, V.D.
2001-01-01
The regularities of radiation heredity in metals and alloys are analyzed. It is concluded that irradiation produces thermodynamically irreversible changes in the structure of materials. Possible mechanisms are proposed for the transmission of radiation effects through high-temperature transformations in the materials. The phenomenon of radiation heredity may be put to practical use to control the structure of liquid metal and, consequently, the structure of the ingot, via preliminary radiation treatment of the charge. Concentration microheterogeneities in the material defect structure induced by preliminary irradiation represent the genetic factor of radiation heredity [ru]
Using Regularization to Infer Cell Line Specificity in Logical Network Models of Signaling Pathways
Directory of Open Access Journals (Sweden)
Sébastien De Landtsheer
2018-05-01
Full Text Available Understanding the functional properties of cells of different origins is a long-standing challenge of personalized medicine. Especially in cancer, the high heterogeneity observed in patients slows down the development of effective cures. The molecular differences between cell types or between healthy and diseased cellular states are usually determined by the wiring of regulatory networks. Understanding these molecular and cellular differences at the systems level would improve patient stratification and facilitate the design of rational intervention strategies. Models of cellular regulatory networks frequently make weak assumptions about the distribution of model parameters across cell types or patients. These assumptions are usually expressed in the form of regularization of the objective function of the optimization problem. We propose a new method of regularization for network models of signaling pathways based on the local density of the inferred parameter values within the parameter space. Our method reduces the complexity of models by creating groups of cell line-specific parameters which can then be optimized together. We demonstrate the use of our method by recovering the correct topology and inferring accurate values of the parameters of a small synthetic model. To show the value of our method in a realistic setting, we re-analyze a recently published phosphoproteomic dataset from a panel of 14 colon cancer cell lines. We conclude that our method efficiently reduces model complexity and helps recover context-specific regulatory information.
Annotation of regular polysemy and underspecification
DEFF Research Database (Denmark)
Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria
2013-01-01
We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...
Predictability in cellular automata.
Agapie, Alexandru; Andreica, Anca; Chira, Camelia; Giuclea, Marius
2014-01-01
Modelled as finite homogeneous Markov chains, probabilistic cellular automata with local transition probabilities in (0, 1) always possess a stationary distribution. This result alone is not very helpful when it comes to predicting the final configuration; one also needs a formula connecting the probabilities in the stationary distribution to some intrinsic feature of the lattice configuration. Previous results on asynchronous cellular automata have shown that such a feature really exists: the number of zero-one borders within the automaton's binary configuration. An exponential formula in the number of zero-one borders has been proved for the 1-D, 2-D and 3-D asynchronous automata with neighborhoods of three, five and seven, respectively. We perform computer experiments on a synchronous cellular automaton to check whether the empirical distribution also obeys that theoretical formula. The numerical results indicate a perfect fit for neighborhoods three and five, which opens the way for a rigorous proof of the formula in this new, synchronous case.
Accreting fluids onto regular black holes via Hamiltonian approach
Energy Technology Data Exchange (ETDEWEB)
Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)
2017-08-15
We investigate the accretion of test fluids onto regular black holes such as Kehagias-Sfetsos black holes and regular black holes with a Dagum distribution function. We analyze the accretion process when different test fluids are falling onto these regular black holes. The accreting fluid is classified through the equation of state according to the features of the regular black holes. The behavior of the fluid flow and the existence of sonic points are checked for these regular black holes. It is noted that the three-velocity depends on the critical points and the equation of state parameter on the phase space. (orig.)
Effective field theory dimensional regularization
International Nuclear Information System (INIS)
Lehmann, Dirk; Prezeau, Gary
2002-01-01
A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs, and the generalization to higher loops is discussed.
Effective field theory dimensional regularization
Lehmann, Dirk; Prézeau, Gary
2002-01-01
A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs, and the generalization to higher loops is discussed.
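For comparison with the covariant scheme described here, the standard dimensionally regularized Euclidean master integral (a textbook result, not specific to this paper) reads:

$$\int \frac{d^d\ell}{(2\pi)^d}\,\frac{1}{(\ell^2+\Delta)^n} \;=\; \frac{1}{(4\pi)^{d/2}}\,\frac{\Gamma\!\left(n-\tfrac{d}{2}\right)}{\Gamma(n)}\,\Delta^{d/2-n},$$

where ultraviolet divergences appear as poles of the Gamma function as $d \to 4$; the scheme above is designed so that, unlike plain dimensional regularization, power divergences remain tied to the low-energy kinematic variables.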
2010-12-07
... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...
Extreme values, regular variation and point processes
Resnick, Sidney I
1987-01-01
Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic-process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample-path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...
Selection of regularization parameter for l1-regularized damage detection
Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing
2018-06-01
The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
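The discrepancy-principle strategy described above can be sketched numerically. This is a toy reconstruction, not the paper's code: the ISTA solver, the candidate grid, and the function names are assumptions, and a real damage-detection problem would use the structural sensitivity matrix in place of the random `A`.

```python
import numpy as np

def ista(A, b, lam, iters=500):
    """Minimal ISTA solver for  0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                # gradient of the data-fidelity term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

def select_lambda_discrepancy(A, b, sigma2, lambdas):
    """Pick the candidate lambda whose residual variance best matches the
    measurement-noise variance sigma2 (the discrepancy principle above)."""
    best, best_gap = None, np.inf
    for lam in lambdas:
        x = ista(A, b, lam)
        res_var = np.mean((A @ x - b) ** 2)
        gap = abs(res_var - sigma2)
        if gap < best_gap:
            best, best_gap = lam, gap
    return best
```

Scanning a grid like this also exposes the residual and solution norms for the first strategy mentioned in the abstract (keeping both small), since every candidate solution is available inside the loop.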
Ensemble manifold regularization.
Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng
2012-06-01
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and from overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so that it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at a root-n rate. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
Probabilistic cellular automata.
Agapie, Alexandru; Andreica, Anca; Giuclea, Marius
2014-09-01
Cellular automata are binary lattices used for modeling complex dynamical systems. The automaton evolves iteratively from one configuration to another, using some local transition rule based on the number of ones in the neighborhood of each cell. With respect to the number of cells allowed to change per iteration, we speak of either synchronous or asynchronous automata. If randomness is involved to some degree in the transition rule, we speak of probabilistic automata; otherwise they are called deterministic. Whichever type of cellular automaton we are dealing with, the main theoretical challenge remains the same: starting from an arbitrary initial configuration, predict (with highest accuracy) the end configuration. If the automaton is deterministic, the outcome simplifies to one of two configurations, all zeros or all ones. If the automaton is probabilistic, the whole process is modeled by a finite homogeneous Markov chain, and the outcome is the corresponding stationary distribution. Based on our previous results for the asynchronous case, which connect the probability of a configuration in the stationary distribution to its number of zero-one borders, the article offers both numerical and theoretical insight into the long-term behavior of synchronous cellular automata.
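The Markov-chain viewpoint above can be explored numerically. The sketch below is hypothetical: the majority-vote local rule and all parameter names are illustrative choices, not the rule studied in these papers; only the zero-one border count matches the quantity the abstracts refer to.

```python
import numpy as np

def zero_one_borders(config):
    """Number of zero-one borders in a binary configuration (circular lattice)."""
    c = np.asarray(config)
    return int(np.sum(c != np.roll(c, 1)))

def simulate_async_ca(n=6, p=0.7, steps=200_000, seed=0):
    """Asynchronous probabilistic CA with neighborhood three: each step, one
    random cell copies the majority of its neighborhood with probability p
    (a hypothetical local rule chosen only to illustrate the Markov chain).
    Returns empirical visit counts per configuration."""
    rng = np.random.default_rng(seed)
    c = rng.integers(0, 2, n)
    counts = {}
    for _ in range(steps):
        i = rng.integers(n)
        neigh = c[[i - 1, i, (i + 1) % n]]      # circular neighborhood of three
        maj = 1 if neigh.sum() >= 2 else 0
        c[i] = maj if rng.random() < p else 1 - maj
        counts[tuple(c)] = counts.get(tuple(c), 0) + 1
    return counts
```

Plotting the log of each configuration's empirical frequency against `zero_one_borders` of that configuration is one way to probe the exponential dependence these abstracts report.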
International Nuclear Information System (INIS)
Calvo, W.; Fliedner, T.M.; Herbst, E.; Huegl, E.; Bruch, C.
1976-01-01
Dogs were given transfusions of cryopreserved autologous mononuclear blood leukocytes after 1200 roentgens (R) (midline dose) of whole-body x-irradiation. Bone marrow repopulation was studied by means of histomorphological methods at days 9 and 10 after transfusion of an average of 3 x 10^9, 7 x 10^9, 13 x 10^9, and 31 x 10^9 cells. The return of marrow cellularity to normal values was related to the number of cells transfused. With low cell doses (3 x 10^9 and 7 x 10^9), the marrow regeneration at 10 days was focal. There were groups of cells (colonies) showing either erythropoiesis, myelopoiesis, or megakaryocytopoiesis in the osteal niches of the trabecular bones. Frequently such niches were seen showing complete cellular recovery next to niches with complete aplasia. With higher cell doses, all niches showed hemopoietic regeneration, and the cellularity approached normal values. No hemopoietic regeneration was observed in those skeletal parts that do not show hemopoiesis even under normal circumstances.
Adaptive Regularization of Neural Classifiers
DEFF Research Database (Denmark)
Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai
1997-01-01
We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...
The Role of Instabilities on the Mechanical Response of Cellular Solids and Structures
National Research Council Canada - National Science Library
Kyriakides, S
1997-01-01
.... The relatively regular and periodic microstructure of these two-dimensional materials makes them excellent models for studying the mechanisms that govern the compressive response of cellular materials...
2010-09-02
... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks
Directory of Open Access Journals (Sweden)
Magnólia Fernandes Florêncio de Araújo
2008-02-01
Full Text Available The temporal and spatial fluctuations of bacterioplankton in a fluvial-lagunar system of a tropical region (Pitimbu River and Jiqui Lake, RN) were studied during the dry and rainy periods. The bacterial abundance varied from 2.67 to 5.1 x 10^7 cells mL^-1 and did not show a typical temporal variation, presenting only small oscillations between the rainy and dry periods. The bacterial biomass varied from 123 µgC L^-1 to 269 µgC L^-1 across the sampling sites, and the average cellular volume varied from 0.12 to 0.54 µm^3, showing a predominance of rods. The temperature showed a positive correlation with the cellular volume of the rods (R=0.55; p=0.02) and vibrios (R=0.53; p=0.03). Significant spatial differences in biomass (Mann-Whitney: p=0.01) and cellular volume of the morphotypes (Mann-Whitney: p=0.003) were found between the sampling sites. The strong positive correlations of water temperature and oxygen with bacterioplankton suggest a probably high bacterial activity in this system.
Aronica, E.; Gorter, J. A.; Jansen, G. H.; van Veelen, C. W. M.; van Rijen, P. C.; Leenstra, S.; Ramkema, M.; Scheffer, G. L.; Scheper, R. J.; Troost, D.
2003-01-01
The cell-specific distribution of multidrug resistance extrusion pumps was studied in developmental glioneuronal lesions, including focal cortical dysplasia (15 cases) and ganglioglioma (15 cases) from patients with medically intractable epilepsy. Lesional, perilesional, as well as normal brain
Higher order total variation regularization for EIT reconstruction.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut
2018-01-08
Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, the notation GT in the legend stands for ground truth, TV stands for the total variation method, and TGV stands for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.
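As a minimal illustration of TV-type regularization (not the paper's FEM-based EIT reconstruction), the following sketch denoises a 1-D signal with a smoothed total-variation penalty; the smoothing parameter `eps`, the step size, and the function name are assumptions chosen for a stable toy demonstration.

```python
import numpy as np

def denoise_smoothed_tv(b, lam=0.2, eps=1e-2, iters=5000, step=0.02):
    """Gradient descent on  0.5*||x - b||^2 + lam * sum_i sqrt((Dx)_i^2 + eps),
    a smoothed 1-D total-variation model. TV-style penalties favor piecewise
    constant solutions, which is the origin of the staircase effect."""
    x = b.copy()
    for _ in range(iters):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)      # derivative of the smoothed |.|
        div = np.zeros_like(x)            # adjoint: grad_j = w[j-1] - w[j]
        div[:-1] -= w
        div[1:] += w
        x -= step * ((x - b) + lam * div)
    return x
```

Running this on a noisy step signal shows the characteristic behavior: the jump is preserved (slightly shrunk) while in-segment noise is flattened, the 1-D analogue of the staircase-prone reconstructions discussed above.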
Continuum-regularized quantum gravity
International Nuclear Information System (INIS)
Chan Huesum; Halpern, M.B.
1987-01-01
The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)
International Nuclear Information System (INIS)
Perceval, Olivier; Couillard, Yves; Pinel-Alloul, Bernadette; Giguere, Anik; Campbell, Peter G.C.
2004-01-01
The use of biomarkers to assess the impacts of contaminants on aquatic ecosystems has noticeably increased over the past few years. Few of these studies, however, have contributed to the prediction of ecologically significant effects (i.e., at the population or community levels). The present field study was designed to evaluate the potential of metallothionein (MT) and sub-cellular metal partitioning measurements for predicting toxic effects at higher levels of biological organization in freshwater bivalves (Pyganodon grandis) chronically exposed to Cd. For that purpose, we quantitatively sampled P. grandis populations in the littoral zone of nine lakes on the Precambrian Canadian Shield during two consecutive summers (1998 and 1999); lakes were characterized by contrasting Cd levels but similar trophic status. We tested relationships between the population status of P. grandis (i.e., growth parameters, density, biomass, secondary production, turnover ratio and cumulative fecundity) and (i) ambient Cd concentrations, (ii) sub-organismal responses (MT concentrations in the gill cytosol of individuals and Cd concentrations in three metal-ligand pools identified as M-HMW, the high molecular weight pool; M-MT, the metallothionein-like pool; and M-LMW, the low molecular weight pool) and (iii) ecological confounding factors (food resources, presence of host fishes for the obligatory parasitic larval stage of P. grandis). Our results show that littoral density, live weight, dry viscera biomass, production and cumulative fecundity decreased with increasing concentrations of the free-cadmium ion in the environment (Pearson's r ranging from -0.63 to -0.78). On the other hand, theoretical maximum shell lengths (L∞) in our populations were related to both the dissolved Ca concentration and food quality (sestonic C and N concentrations). Overall, Cd concentrations in the gill cytosolic HMW pool of the individual molluscs were the biomarker response that was most
Chen, Mao Xiang; Gorman, Shelby A.; Benson, Bill; Singh, Kuljit; Hieble, J. Paul; Michel, Martin C.; Tate, Simon N.; Trezise, Derek J.
2004-01-01
The SK/IK family of small and intermediate conductance calcium-activated potassium channels contains four members, SK1, SK2, SK3 and IK1, and is important for the regulation of a variety of neuronal and non-neuronal functions. In this study we have analysed the distribution of these channels in
DEFF Research Database (Denmark)
Lencastre Fernandes, Rita; Krühne, Ulrich; Nopens, Ingmar
2012-01-01
microbioreactor is simulated. A multiscale model consisting of the coupling of a population balance model, a kinetic model and a flow model was developed in order to predict simultaneously local concentrations of substrate (glucose), product (ethanol) and biomass, as well as the local cell size distributions....
John R. Jones
1985-01-01
Quaking aspen is the most widely distributed native North American tree species (Little 1971, Sargent 1890). It grows in a great diversity of regions, environments, and communities (Harshberger 1911). Only one deciduous tree species in the world, the closely related Eurasian aspen (Populus tremula), has a wider range (Weigle and Frothingham 1911)....
New regular black hole solutions
International Nuclear Information System (INIS)
Lemos, Jose P. S.; Zanchin, Vilson T.
2011-01-01
In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, the exterior region is Reissner-Nordstroem and there is a charged thin-layer in-between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.
Regular variation on measure chains
Czech Academy of Sciences Publication Activity Database
Řehák, Pavel; Vitovec, J.
2010-01-01
Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475
Manifold Regularized Correlation Object Tracking
Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling
2017-01-01
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...
On geodesics in low regularity
Sämann, Clemens; Steinbauer, Roland
2018-02-01
We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
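The core idea of condition-number regularization can be sketched as follows. This is an illustrative simplification, not the authors' exact maximum likelihood solution: it simply clips the sample eigenvalues so the estimate's condition number stays below a target `kappa_max` (a hypothetical parameter name).

```python
import numpy as np

def cond_regularized_cov(X, kappa_max=10.0):
    """Clip eigenvalues of the sample covariance so that the condition
    number of the estimate is at most kappa_max (illustrative sketch;
    the paper derives the optimal truncation from the likelihood)."""
    S = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(S)
    hi = w.max()
    lo = hi / kappa_max            # floor that enforces cond <= kappa_max
    w_reg = np.clip(w, lo, hi)     # clip both tails of the spectrum
    return V @ np.diag(w_reg) @ V.T

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))      # "large p small n": p=50 > n=20
Sigma = cond_regularized_cov(X, kappa_max=10.0)
w = np.linalg.eigvalsh(Sigma)
print(w.max() / w.min())           # <= 10 up to rounding
```

Note that the raw sample covariance here is rank-deficient (rank at most n-1 = 19), so it is not even invertible; the clipped estimate is positive definite by construction.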
Directory of Open Access Journals (Sweden)
Heather A. Bullen
2013-09-01
Dendrimers are highly customizable nanopolymers with qualities that make them ideal for drug delivery. The high binding affinity of biotin/avidin provides a useful approach to fluorescently label synthesized dendrimer-conjugates in cells and tissues. In addition, biotin may facilitate delivery of dendrimers through the blood-brain barrier (BBB) via carrier-mediated endocytosis. The purpose of this research was to: (1) measure toxicity using lactate dehydrogenase (LDH) assays of generation 4 (G4) biotinylated and non-biotinylated poly(amidoamine) (PAMAM) dendrimers in a co-culture model of the BBB, (2) determine the distribution of dendrimers in the rat brain, kidney, and liver following systemic administration of dendrimers, and (3) conduct atomic force microscopy (AFM) on rat brain sections following systemic administration of dendrimers. LDH measurements showed that biotinylated dendrimers were toxic to the cell co-culture after 48 h of treatment. Distribution studies showed evidence of biotinylated and non-biotinylated PAMAM dendrimers in brain. AFM studies showed evidence of dendrimers only in brain tissue of treated rats. These results indicate that biotinylation does not decrease the toxicity associated with PAMAM dendrimers and that biotinylated PAMAM dendrimers distribute in the brain. Furthermore, this article provides evidence of nanoparticles in brain tissue following systemic administration, supported by both fluorescence microscopy and AFM.
Geometric continuum regularization of quantum field theory
International Nuclear Information System (INIS)
Halpern, M.B.
1989-01-01
An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs
Zevenhuizen, L P; van Veldhuizen, A; Fokkens, R H
1990-04-01
Gel-filtration and thin-layer chromatography of low-molecular-weight carbohydrates from culture filtrates of Agrobacterium radiobacter, Isolate II, have shown that, next to the neutral beta-1,2-glucan fraction, a major acidic fraction was present, which was found to consist of glycerophosphorylated cyclic beta-1,2-glucans. Re-examination of cyclic beta-1,2-glucan preparations that had been obtained by extraction of Rhizobium cells with hot phenol-water also showed these acidic modified beta-1,2-glucans to be present. Cyclic beta-1,2-glucans from R. leguminosarum (9 strains) and R. phaseoli (1 strain) had ring-size distributions with degrees of polymerisation (DPs) of 19 and 20 as the major ring sizes, of which a minor part was glycerophosphorylated; beta-1,2-glucans of R. trifolii (3 strains) had ring sizes with DPs of 19-22 as prominent components, which were largely unsubstituted; and R. meliloti (7 strains) had beta-1,2-glucans with ring-size distributions extending to still higher DPs of 19-25, of which the major part appeared to be glycerophosphorylated.
Xu, Xiang; Lin, Feng
2017-01-01
This book introduces new techniques for cellular image feature extraction, pattern recognition and classification. The authors use the antinuclear antibodies (ANAs) in patient serum as the subjects and the Indirect Immunofluorescence (IIF) technique as the imaging protocol to illustrate the applications of the described methods. Throughout the book, the authors provide evaluations for the proposed methods on two publicly available human epithelial (HEp-2) cell datasets: ICPR2012 dataset from the ICPR'12 HEp-2 cell classification contest and ICIP2013 training dataset from the ICIP'13 Competition on cells classification by fluorescent image analysis. First, the reading of imaging results is significantly influenced by one’s qualification and reading systems, causing high intra- and inter-laboratory variance. The authors present a low-order LP21 fiber mode for optical single cell manipulation and imaging staining patterns of HEp-2 cells. A focused four-lobed mode distribution is stable and effective in optical...
Laplacian manifold regularization method for fluorescence molecular tomography
He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei
2017-04-01
Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods allow for exploiting the sparse nature of the target distribution. However, in addition to sparsity, spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient-projection-resolved Laplacian manifold regularization method for the joint model performs better than the comparative ℓ1 minimization algorithm in both spatial aggregation and location accuracy.
Metric regularity and subdifferential calculus
International Nuclear Information System (INIS)
Ioffe, A D
2000-01-01
The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces
Manifold Regularized Correlation Object Tracking.
Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling
2018-05-01
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.
Dimensional regularization in configuration space
International Nuclear Information System (INIS)
Bollini, C.G.; Giambiagi, J.J.
1995-09-01
Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum-space Green functions. For this transformation, the Bochner theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs
Regular algebra and finite machines
Conway, John Horton
2012-01-01
World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regulator algebras, context-free languages, commutative regular alg...
Matrix regularization of 4-manifolds
Trzetrzelewski, M.
2012-01-01
We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...
Saito, Akihiro; Saito, Misa; Ichikawa, Yusuke; Yoshiba, Masaaki; Tadano, Toshiaki; Miwa, Eitaro; Higuchi, Kyoko
2010-02-01
To evaluate Ni dynamics at the subcellular level, the distribution and speciation of Ni were determined in wild-type (WT) and Ni-tolerant (NIT) tobacco BY-2 cell lines. When exposed to low but toxic levels of Ni, NIT cells were found to contain 2.5-fold more Ni (14% of whole-cell Ni values) in their cell walls than WT cells (6% of whole-cell Ni values). In addition to higher levels of Ni in the apoplast, a higher proportion (94%) of symplastic Ni was localized in the vacuoles of NIT cells than in the vacuoles of WT cells (81%). The concentration of cytosolic Ni in the NIT cells was significantly lower (18 nmol g(-1) FW) than that in the WT cells (85 nmol g(-1) FW). In silico simulation showed that 95% of vacuolar Ni was in the form of Ni-citrate complexes, and that free Ni(2+) was virtually absent in the NIT cells. On the other hand, the amount of free metal ions was markedly increased in WT cells because free citrate was depleted by chelation of Ni. A protoplast viability assay using BCECF-AM further demonstrated that the main mechanism that confers strong Ni tolerance was present in the symplast as opposed to the cell wall.
Paiva, B; Pérez-Andrés, M; Vídriales, M-B; Almeida, J; de las Heras, N; Mateos, M-V; López-Corral, L; Gutiérrez, N C; Blanco, J; Oriol, A; Hernández, M T; de Arriba, F; de Coca, A G; Terol, M-J; de la Rubia, J; González, Y; Martín, A; Sureda, A; Schmidt-Hieber, M; Schmitz, A; Johnsen, H E; Lahuerta, J-J; Bladé, J; San-Miguel, J F; Orfao, A
2011-04-01
Disappearance of normal bone marrow (BM) plasma cells (PC) predicts malignant transformation of monoclonal gammopathy of undetermined significance (MGUS) and smoldering myeloma (SMM) into symptomatic multiple myeloma (MM). The homing, behavior and survival of normal PC, but also CD34(+) hematopoietic stem cells (HSC), B-cell precursors, and clonal PC largely depends on their interaction with stromal cell-derived factor-1 (SDF-1) expressing, potentially overlapping BM stromal cell niches. Here, we investigate the distribution, phenotypic characteristics and competitive migration capacity of these cell populations in patients with MGUS, SMM and MM vs healthy adults (HA) aged >60 years. Our results show that BM and peripheral blood (PB) clonal PC progressively increase from MGUS to MM, the latter showing a slightly more immature immunophenotype. Of note, such increased number of clonal PC is associated with progressive depletion of normal PC, B-cell precursors and CD34(+) HSC in the BM, also with a parallel increase in PB. In an ex vivo model, normal PC, B-cell precursors and CD34(+) HSC from MGUS and SMM, but not MM patients, were able to abrogate the migration of clonal PC into serial concentrations of SDF-1. Overall, our results show that progressive competition and replacement of normal BM cells by clonal PC is associated with more advanced disease in patients with MGUS, SMM and MM.
Likelihood ratio decisions in memory: three implied regularities.
Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T
2009-06-01
We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
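The first of the three regularities can be made concrete in the simplest case. In the equal-variance Gaussian model, deciding at the likelihood-ratio criterion LR = 1 places the boundary midway between the "new" and "old" distributions, which forces the hit rate and the correct-rejection rate to be equal and to move together with memory strength d (the mirror effect). A minimal sketch using standard signal detection formulas, not the authors' own models:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def rates(d):
    """Hit and correct-rejection rates in the equal-variance Gaussian
    model ("new" ~ N(0,1), "old" ~ N(d,1)) with the criterion placed
    where the likelihood ratio equals 1, i.e. at x = d/2."""
    c = d / 2
    hit = 1 - Phi(c - d)   # P(x > c | old item)
    cr = Phi(c)            # P(x < c | new item)
    return hit, cr

for d in (0.5, 1.0, 2.0):
    hit, cr = rates(d)
    print(d, round(hit, 3), round(cr, 3))  # hit == cr: the mirror effect
```

Both rates equal Phi(d/2), so stronger memory (larger d) raises hits and correct rejections symmetrically, which is the mirror pattern the abstract describes.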
Chauvet, Sylvain; Boonen, Marielle; Chevallet, Mireille; Jarvis, Louis; Abebe, Addis; Benharouga, Mohamed; Faller, Peter; Jadot, Michel; Bouron, Alexandre
2015-11-01
The Na(+)/K(+)-ATPase interacts with the non-selective cation channel TRPC6, but the functional consequences of this association are unknown. Experiments performed with HEK cells over-expressing TRPC6 channels showed that inhibiting the activity of the Na(+)/K(+)-ATPase with ouabain reduced the amount of TRPC6 protein and depressed Ca(2+) entry through TRPC6. This effect, not mimicked by membrane depolarization with KCl, was abolished by sucrose and bafilomycin-A, and was partially sensitive to the intracellular Ca(2+) chelator BAPTA/AM. Biotinylation and subcellular fractionation experiments showed that ouabain caused a multifaceted redistribution of TRPC6 to the plasma membrane and to an endo/lysosomal compartment where it was degraded. The amyloid beta peptide Aβ(1-40), another inhibitor of the Na(+)/K(+)-ATPase, but not the shorter peptide Aβ(1-16), reduced TRPC6 protein levels and depressed TRPC6-mediated responses. In cortical neurons from embryonic mice, ouabain, veratridine (an opener of voltage-gated Na(+) channels), and Aβ(1-40) reduced TRPC6-mediated Ca(2+) responses, whereas Aβ(1-16) was ineffective. Furthermore, when Aβ(1-40) was co-added together with zinc acetate it could no longer control TRPC6 activity. Altogether, this work shows the existence of a functional coupling between the Na(+)/K(+)-ATPase and TRPC6. It also suggests that the abundance, distribution and activity of TRPC6 can be regulated by cardiotonic steroids like ouabain and the naturally occurring peptide Aβ(1-40), which underlines the pathophysiological significance of these processes. Copyright © 2015 Elsevier B.V. All rights reserved.
Regularization of Nonmonotone Variational Inequalities
International Nuclear Information System (INIS)
Konnov, Igor V.; Ali, M.S.S.; Mazurkevich, E.O.
2006-01-01
In this paper we extend the Tikhonov-Browder regularization scheme from monotone to rather a general class of nonmonotone multivalued variational inequalities. We show that their convergence conditions hold for some classes of perfectly and nonperfectly competitive economic equilibrium problems
Lattice regularized chiral perturbation theory
International Nuclear Information System (INIS)
Borasoy, Bugra; Lewis, Randy; Ouimet, Pierre-Philippe A.
2004-01-01
Chiral perturbation theory can be defined and regularized on a spacetime lattice. A few motivations are discussed here, and an explicit lattice Lagrangian is reviewed. A particular aspect of the connection between lattice chiral perturbation theory and lattice QCD is explored through a study of the Wess-Zumino-Witten term
2011-01-20
... Meeting SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held at the offices of the Farm... meeting of the Board will be open to the
Forcing absoluteness and regularity properties
Ikegami, D.
2010-01-01
For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.
Globals of Completely Regular Monoids
Institute of Scientific and Technical Information of China (English)
Wu Qian-qian; Gan Ai-ping; Du Xian-kun
2015-01-01
An element of a semigroup S is called irreducible if it cannot be expressed as a product of two elements in S both distinct from itself. In this paper we show that the class C of all completely regular monoids with irreducible identity elements satisfies the strong isomorphism property and so it is globally determined.
Empirical laws, regularity and necessity
Koningsveld, H.
1973-01-01
In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyses found in contemporary literature dealing with the subject.
I am referring especially to two well-known views, viz. the regularity and
Interval matrices: Regularity generates singularity
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří; Shary, S.P.
2018-01-01
Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords : interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singilarizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016
Regularization in Matrix Relevance Learning
Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael
In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can
Convergence and fluctuations of Regularized Tyler estimators
Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim
2015-01-01
This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate; second, being derivatives of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem in the practical use of RTEs is the question of how to set the regularization parameter p. While a high value of p is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential for making appropriate choices of the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. The first asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results exist concerning the regime of n going to infinity with N fixed, even though the investigation of this assumption has usually predated the analysis of the more difficult case where N and n are both large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the regularization parameter.
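The shrinkage structure described above can be sketched with the standard regularized Tyler fixed-point iteration. This is an illustrative sketch (parameter names rho and n_iter are hypothetical, and normalization conventions vary across the literature), not the exact estimator analyzed in the article:

```python
import numpy as np

def regularized_tyler(X, rho=0.3, n_iter=50):
    """Fixed-point iteration for a regularized Tyler scatter estimator
    (sketch). X is an (n, N) data matrix; rho in (0, 1] shrinks the
    Tyler term toward the identity, guaranteeing good conditioning."""
    n, N = X.shape
    C = np.eye(N)
    for _ in range(n_iter):
        Cinv = np.linalg.inv(C)
        # quadratic forms x_i^T C^{-1} x_i for all samples at once
        q = np.einsum('ij,jk,ik->i', X, Cinv, X)
        # robust Tyler term (samples weighted by 1/q) plus shrinkage
        C = (1 - rho) * (N / n) * (X / q[:, None]).T @ X + rho * np.eye(N)
    return C
```

Because the Tyler term is positive semidefinite, every eigenvalue of the iterate is at least rho, which is precisely the "eigenvalues pushed away from zero" effect the abstract trades off against bias.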
Regular and conformal regular cores for static and rotating solutions
Energy Technology Data Exchange (ETDEWEB)
Azreg-Aïnou, Mustapha
2014-03-07
Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.
Multiview Hessian regularization for image annotation.
Liu, Weifeng; Tao, Dacheng
2013-07-01
The rapid development of computer hardware and Internet technology makes large-scale data-dependent models computationally tractable and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) has therefore received intensive attention in recent years and has been successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smoothes the conditional distribution for classification along the manifold encoded in the graph Laplacian; however, it has been observed that LR biases the classification function toward a constant function, which can result in poor generalization. In addition, LR was developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address the above two problems in LR-based image annotation. In particular, mHR optimally combines multiple HR terms, each of which is obtained from a particular view of the instances, and steers the classification function so that it varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR.
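The Laplacian regularization that mHR builds on has a simple closed form for least squares on a graph. The sketch below (toy affinity matrix and a hypothetical gamma value, not the paper's setup) shows the mechanism: the f^T L f penalty propagates the few labeled values smoothly to the unlabeled nodes:

```python
import numpy as np

def laplacian_rls(W, y, labeled, gamma=0.1):
    """Closed-form Laplacian-regularized least squares on a graph (sketch).
    Minimizes sum_{i in labeled} (f_i - y_i)^2 + gamma * f^T L f,
    where L is the graph Laplacian of the affinity matrix W."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W   # graph Laplacian
    J = np.zeros((n, n))             # diagonal selector of labeled nodes
    J[labeled, labeled] = 1.0
    y_full = np.zeros(n)
    y_full[labeled] = y
    return np.linalg.solve(J + gamma * L, y_full)

# two disconnected chains 0-1-2 and 3-4-5, labeled at the endpoints only
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (3, 4), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
f = laplacian_rls(W, np.array([1.0, -1.0]), [0, 5], gamma=0.1)
print(np.sign(f))  # unlabeled nodes inherit their component's label sign
```

The constant-function bias the abstract criticizes is visible here: within each component the smoothness term is happiest with a flat f, which is exactly what Hessian regularization is designed to relax.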
Energy functions for regularization algorithms
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must verify certain properties, such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that, to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
Physical model of dimensional regularization
Energy Technology Data Exchange (ETDEWEB)
Schonfeld, Jonathan F.
2016-12-15
We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize the mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced as much as possible by knowing its classification response. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between the classification responses and true class labels of the training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
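The quantity being maximized can be made concrete with an empirical estimate of the mutual information between discrete classifier responses and true labels. This sketch computes only that term from the joint frequency table; the paper's full objective additionally includes the classification loss and a complexity penalty, and estimates MI via entropies:

```python
import numpy as np

def empirical_mi(y_true, y_pred):
    """Empirical mutual information I(Y; Yhat) between true labels and
    discrete classification responses, from relative frequencies."""
    mi = 0.0
    for t in np.unique(y_true):
        for p in np.unique(y_pred):
            pj = np.mean((y_true == t) & (y_pred == p))  # joint
            if pj > 0:
                pt = np.mean(y_true == t)                # marginals
                pp = np.mean(y_pred == p)
                mi += pj * np.log(pj / (pt * pp))
    return mi

y = np.array([0, 0, 1, 1])
print(empirical_mi(y, y))             # perfect classifier: MI = H(Y) = log 2
print(empirical_mi(y, np.zeros(4)))   # uninformative classifier: MI = 0
```

Maximizing this term rewards responses that remove as much uncertainty about the true label as possible, which is exactly the intuition the abstract states.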
Regularized strings with extrinsic curvature
International Nuclear Information System (INIS)
Ambjoern, J.; Durhuus, B.
1987-07-01
We analyze models of discretized string theories, where the path integral over world-sheet variables is regularized by summing over triangulated surfaces. The inclusion of curvature in the action is a necessity for the scaling of the string tension. We discuss the physical properties of models with extrinsic curvature terms in the action and show that the string tension vanishes at the critical point, where the bare extrinsic curvature coupling tends to infinity. Similar results are derived for models with intrinsic curvature. (orig.)
Circuit complexity of regular languages
Czech Academy of Sciences Publication Activity Database
Koucký, Michal
2009-01-01
Roč. 45, č. 4 (2009), s. 865-879 ISSN 1432-4350 R&D Projects: GA ČR GP201/07/P276; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : regular languages * circuit complexity * upper and lower bounds Subject RIV: BA - General Mathematics Impact factor: 0.726, year: 2009
Regularized Statistical Analysis of Anatomy
DEFF Research Database (Denmark)
Sjöstrand, Karl
2007-01-01
This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus...... and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge...... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications....
Regularization methods in Banach spaces
Schuster, Thomas; Hofmann, Bernd; Kazimierski, Kamil S
2012-01-01
Regularization methods aimed at finding stable approximate solutions are a necessary tool to tackle inverse and ill-posed problems. Usually the mathematical model of an inverse problem consists of an operator equation of the first kind, and often the associated forward operator acts between Hilbert spaces. However, for numerous problems the reasons for using a Hilbert space setting seem to be based rather on conventions than on an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, sparsity constraints using general Lp-norms or the B
Academic Training Lecture - Regular Programme
PH Department
2011-01-01
Regular Lecture Programme 9 May 2011 ACT Lectures on Detectors - Inner Tracking Detectors by Pippa Wells (CERN) 10 May 2011 ACT Lectures on Detectors - Calorimeters (2/5) by Philippe Bloch (CERN) 11 May 2011 ACT Lectures on Detectors - Muon systems (3/5) by Kerstin Hoepfner (RWTH Aachen) 12 May 2011 ACT Lectures on Detectors - Particle Identification and Forward Detectors by Peter Krizan (University of Ljubljana and J. Stefan Institute, Ljubljana, Slovenia) 13 May 2011 ACT Lectures on Detectors - Trigger and Data Acquisition (5/5) by Dr. Brian Petersen (CERN) from 11:00 to 12:00 at CERN ( Bldg. 222-R-001 - Filtration Plant )
Supporting Regularized Logistic Regression Privately and Efficiently
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
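The statistical model being protected here is ordinary L2-regularized logistic regression, which can be sketched with plain gradient descent (the cryptographic and distributed-computing layers of the paper are omitted; the data is a toy example):

```python
import numpy as np

def fit_logistic_l2(X, y, lam=0.1, lr=0.1, steps=2000):
    """Gradient descent on the L2-regularized logistic loss:
    (1/n) * sum log(1 + exp(-X w)) terms plus (lam/2) * ||w||^2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted probabilities
        grad = X.T @ (p - y) / n + lam * w   # loss gradient + L2 penalty
        w -= lr * grad
    return w

# Toy linearly separable data; first column is a bias term
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = fit_logistic_l2(X, y)
preds = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(float)
```

In the multi-institution setting of the paper, the same objective is fit jointly without any site revealing its raw `X` and `y`.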
Wireless Cellular Mobile Communications
Zalud, V.
2002-01-01
This article briefly reviews the history of wireless cellular mobile communications, examines the progress in current second-generation (2G) cellular standards and discusses their migration to the third generation (3G). The European 2G cellular standard GSM and its evolution phases GPRS and EDGE are described in some detail. The third-generation standard UMTS, taking up the GSM/GPRS core network and equipped with a new advanced access network on the basis of code division multiple ac...
Biomechanics of cellular solids.
Gibson, Lorna J
2005-03-01
Materials with a cellular structure are widespread in nature and include wood, cork, plant parenchyma and trabecular bone. Natural cellular materials are often mechanically efficient: the honeycomb-like microstructure of wood, for instance, gives it an exceptionally high performance index for resisting bending and buckling. Here we review the mechanics of a wide range of natural cellular materials and examine their role in lightweight natural sandwich structures (e.g. iris leaves) and natural tubular structures (e.g. plant stems or animal quills). We also describe two examples of engineered biomaterials with a cellular structure, designed to replace or regenerate tissue in the body.
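The mechanical efficiency of cellular solids reviewed above is commonly quantified with the Gibson-Ashby scaling law for open-cell foams, E/E_s ≈ C(ρ/ρ_s)², relating relative stiffness to relative density. A one-line sketch (the prefactor C ≈ 1 is a typical empirical assumption, not a value from this abstract):

```python
def open_cell_relative_modulus(relative_density, C=1.0):
    """Gibson-Ashby scaling for open-cell foams: E/E_s ≈ C * (rho/rho_s)^2.
    The prefactor C (~1 for many foams) is an empirical assumption here."""
    return C * relative_density ** 2

# A foam at 10% relative density retains only about 1% of the solid's stiffness,
# which is why cellular microstructures excel at lightweight bending resistance.
rel_E = open_cell_relative_modulus(0.1)
```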
Structural characterization of the packings of granular regular polygons.
Wang, Chuncheng; Dong, Kejun; Yu, Aibing
2015-12-01
By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.
RES: Regularized Stochastic BFGS Algorithm
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computing inverses of objective function Hessians incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
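The curvature-approximation update at the core of BFGS can be illustrated on a deterministic quadratic; this sketch deliberately omits the stochastic gradients and eigenvalue regularization that distinguish RES:

```python
import numpy as np

def bfgs_quadratic(A, b, x0, iters=20):
    """Plain BFGS on f(x) = 0.5 x^T A x - b^T x, showing the inverse-Hessian
    approximation built from gradient differences (RES replaces these with
    stochastic gradients and adds a regularization of the eigenvalues)."""
    x = x0.astype(float)
    H = np.eye(len(x))                 # inverse-Hessian approximation
    g = A @ x - b                      # gradient of the quadratic
    for _ in range(iters):
        d = -H @ g                     # quasi-Newton descent direction
        alpha = -(g @ d) / (d @ A @ d)  # exact line search for a quadratic
        s = alpha * d
        x = x + s
        g_new = A @ x - b
        yv = g_new - g                 # gradient difference encodes curvature
        rho = 1.0 / (yv @ s)
        I = np.eye(len(x))
        H = ((I - rho * np.outer(s, yv)) @ H @ (I - rho * np.outer(yv, s))
             + rho * np.outer(s, s))   # standard BFGS inverse-Hessian update
        g = g_new
        if np.linalg.norm(g) < 1e-10:
            break
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = bfgs_quadratic(A, b, np.zeros(2))
```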
Regularized Label Relaxation Linear Regression.
Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu
2018-04-01
Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are, respectively, based on -norm and -norm loss functions are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.
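The "strict binary label matrix" baseline that this paper relaxes can be sketched as one-hot ridge regression (toy data; the label relaxation matrix and class compactness graph of the paper are not reproduced here):

```python
import numpy as np

def ridge_label_regression(X, y, lam=1e-2):
    """Baseline the paper starts from: regress a strict one-hot label matrix Y
    on features X with an L2 penalty (no label relaxation)."""
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)  # strict binary labels
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return W, classes

X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
y = np.array([0, 0, 1, 1])
W, classes = ridge_label_regression(X, y)
pred = classes[np.argmax(X @ W, axis=1)]  # predict by largest regression score
```

The paper's point is that forcing outputs to hit exact 0/1 targets leaves too little freedom; the relaxation replaces Y with a slack matrix that can enlarge class margins.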
Regularities of radiorace formation in yeasts
International Nuclear Information System (INIS)
Korogodin, V.I.; Bliznik, K.M.; Kapul'tsevich, Yu.G.; Petin, V.G.; Akademiya Meditsinskikh Nauk SSSR, Obninsk. Nauchno-Issledovatel'skij Inst. Meditsinskoj Radiologii)
1977-01-01
Two strains of diploid yeast, Saccharomyces ellipsoides Megri 139-B, isolated under natural conditions, and Saccharomyces cerevisiae 5a x 3Bα, heterozygous for genes ade 1 and ade 2, were exposed to gamma quanta of Co-60. The proportions of four cell types were determined: saltant cells forming colonies with changed morphology, nonviable cells, respiration mutants, and recombinants at genes ade 1 and ade 2. A certain regularity was revealed in the distribution of these four cell types among the colonies: the higher the content of cells of any one type, the higher the content of cells carrying other hereditary changes
Wave dynamics of regular and chaotic rays
International Nuclear Information System (INIS)
McDonald, S.W.
1983-09-01
In order to investigate general relationships between waves and rays in chaotic systems, I study the eigenfunctions and spectrum of a simple model, the two-dimensional Helmholtz equation in a stadium boundary, for which the rays are ergodic. Statistical measurements are performed so that the apparent randomness of the stadium modes can be quantitatively contrasted with the familiar regularities observed for the modes in a circular boundary (with integrable rays). The local spatial autocorrelation of the eigenfunctions is constructed in order to indirectly test theoretical predictions for the nature of the Wigner distribution corresponding to chaotic waves. A portion of the large-eigenvalue spectrum is computed and reported in an appendix; the probability distribution of successive level spacings is analyzed and compared with theoretical predictions. The two principal conclusions are: 1) waves associated with chaotic rays may exhibit randomly situated localized regions of high intensity; 2) the Wigner function for these waves may depart significantly from being uniformly distributed over the surface of constant frequency in the ray phase space
The regularized monotonicity method: detecting irregular indefinite inclusions
DEFF Research Database (Denmark)
Garde, Henrik; Staboulis, Stratos
2018-01-01
inclusions, where the conductivity distribution has both more and less conductive parts relative to the background conductivity; one such method is the monotonicity method of Harrach, Seo, and Ullrich. We formulate the method for irregular indefinite inclusions, meaning that we make no regularity assumptions...
On the number of spanning trees in random regular graphs
DEFF Research Database (Denmark)
Greenhill, Catherine; Kwan, Matthew; Wind, David Kofoed
2014-01-01
Let d >= 3 be a fixed integer. We give an asymptotic formula for the expected number of spanning trees in a uniformly random d-regular graph with n vertices. (The asymptotics are as n -> infinity, restricted to even n if d is odd.) We also obtain the asymptotic distribution of the number of spanning...
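For any concrete graph, the exact count behind this expectation follows from Kirchhoff's matrix-tree theorem: the number of spanning trees equals any cofactor of the graph Laplacian. A sketch on the 3-regular graph K4:

```python
import numpy as np
from itertools import combinations

def spanning_tree_count(n, edges):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees is the
    determinant of the Laplacian with one row and column deleted."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return round(np.linalg.det(L[1:, 1:]))

# K4 is 3-regular; Cayley's formula gives 4^(4-2) = 16 spanning trees
k4 = list(combinations(range(4), 2))
print(spanning_tree_count(4, k4))  # → 16
```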
Linearizable cellular automata
International Nuclear Information System (INIS)
Nobe, Atsushi; Yura, Fumitaka
2007-01-01
The initial value problem for a class of reversible elementary cellular automata with periodic boundaries is reduced to an initial-boundary value problem for a class of linear systems on the finite commutative ring Z_2. Moreover, a family of such linearizable cellular automata is given
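A familiar concrete instance of an elementary cellular automaton that is linear over Z_2 is rule 90, where each cell becomes the XOR of its two neighbours; this sketch verifies linearity under periodic boundaries (rule 90 is an illustrative stand-in, not necessarily in the reversible class treated by the paper):

```python
import numpy as np

def step_rule90(state):
    """One step of elementary CA rule 90 with periodic boundaries:
    each cell becomes the XOR of its two neighbours, a linear map over Z_2."""
    return np.roll(state, 1) ^ np.roll(state, -1)

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 16)
b = rng.integers(0, 2, 16)
# Linearity over Z_2: T(a XOR b) == T(a) XOR T(b), so the evolution of any
# initial condition decomposes over a basis of single-cell states.
linear = np.array_equal(step_rule90(a ^ b), step_rule90(a) ^ step_rule90(b))
```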
Genus Ranges of 4-Regular Rigid Vertex Graphs.
Buck, Dorothy; Dolzhenko, Egor; Jonoska, Nataša; Saito, Masahico; Valencia, Karin
2015-01-01
A rigid vertex of a graph is one that has a prescribed cyclic order of its incident edges. We study orientable genus ranges of 4-regular rigid vertex graphs. The (orientable) genus range is the set of genera values over all orientable surfaces into which a graph is embedded cellularly, and the embeddings of rigid vertex graphs are required to preserve the prescribed cyclic order of incident edges at every vertex. The genus ranges of 4-regular rigid vertex graphs are sets of consecutive integers, and we address two questions: which intervals of integers appear as genus ranges of such graphs, and what types of graphs realize a given genus range. For graphs with 2n vertices (n > 1), we prove that all intervals [a, b] (for suitable a and b) appear as genus ranges. For graphs with 2n - 1 vertices (n ≥ 1), we prove that all intervals [a, b] (for suitable a and b) appear as genus ranges. We also provide constructions of graphs that realize these ranges.
From inactive to regular jogger
DEFF Research Database (Denmark)
Lund-Cramer, Pernille; Brinkmann Løite, Vibeke; Bredahl, Thomas Viskum Gjelstrup
Title: From inactive to regular jogger - a qualitative study of achieved behavioral change among recreational joggers. Authors: Pernille Lund-Cramer & Vibeke Brinkmann Løite. Purpose: Despite extensive knowledge of barriers to physical activity, most interventions promoting physical activity have proven...... study was conducted using individual semi-structured interviews on how a successful long-term behavior change had been achieved. Ten informants were purposely selected from participants in the DANO-RUN research project (7 men, 3 women, average age 41.5). Interviews were performed on the basis of Theory of Planned Behavior (TPB) and The Transtheoretical Model (TTM). Coding and analysis of interviews were performed using NVivo 10 software. Results TPB: During the behavior change process, the intention to jogging shifted from a focus on weight loss and improved fitness to both physical health, psychological......
Tessellating the Sphere with Regular Polygons
Soto-Johnson, Hortensia; Bechthold, Dawn
2004-01-01
Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.
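The admissible regular tessellations {p, q} of the sphere follow from the spherical angle-excess condition 1/p + 1/q > 1/2; enumerating it recovers exactly the triangle-, square- and pentagon-faced cases named in the abstract:

```python
# Regular tessellations {p, q} of the sphere (q regular p-gons meeting at
# each vertex) must satisfy 1/p + 1/q > 1/2, with p, q >= 3.
spherical = [(p, q) for p in range(3, 10) for q in range(3, 10)
             if 1 / p + 1 / q > 1 / 2]
print(spherical)  # → [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)]
```

These five symbols correspond to the Platonic solids; the faces are triangles (p = 3), squares (p = 4) or pentagons (p = 5), and no other regular polygon qualifies.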
On the equivalence of different regularization methods
International Nuclear Information System (INIS)
Brzezowski, S.
1985-01-01
The R̂-operation preceded by the regularization procedure is discussed. Some arguments are given according to which the results may depend on the method of regularization introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)
The uniqueness of the regularization procedure
International Nuclear Information System (INIS)
Brzezowski, S.
1981-01-01
On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)
Application of Turchin's method of statistical regularization
Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey
2018-04-01
During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on the Bayesian approach to the regularization strategy.
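A deterministic cousin of this idea is Tikhonov smoothing, which makes the role of the regularization term concrete; a sketch on a toy well-conditioned system (Turchin's method instead chooses the smoothing strength from a Bayesian prior rather than a fixed parameter):

```python
import numpy as np

def tikhonov_deconvolve(A, y, lam):
    """Regularized solution of the ill-posed system A x ≈ y:
    argmin_x ||A x - y||^2 + lam * ||D x||^2, with D a first-difference
    operator that penalizes non-smooth solutions."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)   # rows e_{i+1} - e_i
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)

# Toy apparatus function: a tridiagonal smoothing matrix
n = 5
A = np.eye(n) + 0.3 * np.eye(n, k=1) + 0.3 * np.eye(n, k=-1)
x_true = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
x_hat = tikhonov_deconvolve(A, A @ x_true, lam=1e-10)
```

With noisy data, increasing `lam` trades fidelity to the measurements for smoothness of the restored signal; choosing that trade-off is exactly what the regularization strategy formalizes.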
Regular extensions of some classes of grammars
Nijholt, Antinus
Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular
Heterogeneous cellular networks
Hu, Rose Qingyang
2013-01-01
A timely publication providing coverage of radio resource management, mobility management and standardization in heterogeneous cellular networks The topic of heterogeneous cellular networks has gained momentum in industry and the research community, attracting the attention of standardization bodies such as 3GPP LTE and IEEE 802.16j, whose objectives are looking into increasing the capacity and coverage of the cellular networks. This book focuses on recent progresses, covering the related topics including scenarios of heterogeneous network deployment, interference management i
Cellular decomposition in vikalloys
International Nuclear Information System (INIS)
Belyatskaya, I.S.; Vintajkin, E.Z.; Georgieva, I.Ya.; Golikov, V.A.; Udovenko, V.A.
1981-01-01
Austenite decomposition in Fe-Co-V and Fe-Co-V-Ni alloys at 475-600 deg C is investigated. The cellular decomposition in ternary alloys results in the formation of bcc (ordered) and fcc structures, and in quaternary alloys in bcc (ordered) and 12R structures. The cellular 12R structure results from the emergence of stacking faults in the fcc lattice with irregular spacing in four layers. The cellular decomposition results in a high-dispersion structure and magnetic properties approaching the level of well-known vikalloys.
Cellular Reflectarray Antenna
Romanofsky, Robert R.
2010-01-01
The cellular reflectarray antenna is intended to replace conventional parabolic reflectors that must be physically aligned with a particular satellite in geostationary orbit. These arrays are designed for specified geographical locations, defined by latitude and longitude, each called a "cell." A particular cell occupies nominally 1,500 square miles (3,885 sq. km), but this varies according to latitude and longitude. The cellular reflectarray antenna designed for a particular cell is simply positioned to align with magnetic North, and the antenna surface is level (parallel to the ground). A given cellular reflectarray antenna will not operate in any other cell.
Regularities development of entrepreneurial structures in regions
Directory of Open Access Journals (Sweden)
Julia Semenovna Pinkovetskaya
2012-12-01
Full Text Available This paper considers regularities and tendencies for three types of entrepreneurial structures: small enterprises, medium enterprises and individual entrepreneurs. The aim of the research was to confirm that indicators of aggregate entrepreneurial structures can be described with normal-law distribution functions. The author's methodological approach is presented, together with the resulting density-distribution functions for the main indicators across various objects: the Russian Federation, its regions, and aggregates of entrepreneurial structures specialized in certain forms of economic activity. Logical and statistical analysis shows that all the developed functions are of high quality and approximate the original data well. In general, the proposed methodological approach is versatile and can be used in further studies of aggregates of entrepreneurial structures. The results can be applied to a wide range of problems, such as justifying personnel and financial resource needs at the federal, regional and municipal levels, and forming plans and forecasts for the development of entrepreneurship and improvement of this sector of the economy.
Class of regular bouncing cosmologies
Vasilić, Milovan
2017-06-01
In this paper, I construct a class of everywhere regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so by employing one scalar field has failed due to the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that graviton mass can be made arbitrarily small.
Magnetohydrodynamics cellular automata
International Nuclear Information System (INIS)
Hatori, Tadatsugu.
1990-02-01
There has been a renewal of interest in cellular automata, partly because they give an architecture for a special purpose computer with parallel processing optimized to solve a particular problem. The lattice gas cellular automata are briefly surveyed, which are recently developed to solve partial differential equations such as hydrodynamics or magnetohydrodynamics. A new model is given in the present paper to implement the magnetic Lorentz force in a more deterministic and local procedure than the previous one. (author)
Epigenetics and Cellular Metabolism
Wenyi Xu; Fengzhong Wang; Zhongsheng Yu; Fengjiao Xin
2016-01-01
Living eukaryotic systems evolve delicate cellular mechanisms for responding to various environmental signals. Among them, epigenetic machinery (DNA methylation, histone modifications, microRNAs, etc.) is the hub in transducing external stimuli into transcriptional response. Emerging evidence reveals the concept that epigenetic signatures are essential for the proper maintenance of cellular metabolism. On the other hand, the metabolite, a main environmental input, can also influence the proce...
Modeling Cellular Systems
Matthäus, Franziska; Pahle, Jürgen
2017-01-01
This contributed volume comprises research articles and reviews on topics connected to the mathematical modeling of cellular systems. These contributions cover signaling pathways, stochastic effects, cell motility and mechanics, pattern formation processes, as well as multi-scale approaches. All authors attended the workshop on "Modeling Cellular Systems" which took place in Heidelberg in October 2014. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.
Cellular MR Imaging
Directory of Open Access Journals (Sweden)
Michel Modo
2005-07-01
Full Text Available Cellular MR imaging is a young field that aims to visualize targeted cells in living organisms. In order to provide a different signal intensity of the targeted cell, they are either labeled with MR contrast agents in vivo or prelabeled in vitro. Either (ultrasmall) superparamagnetic iron oxide [(U)SPIO] particles or (polymeric) paramagnetic chelates can be used for this purpose. For in vivo cellular labeling, Gd3+- and Mn2+- chelates have mainly been used for targeted hepatobiliary imaging, and (U)SPIO-based cellular imaging has been focused on imaging of macrophage activity. Several of these magneto-pharmaceuticals have been FDA-approved or are in late-phase clinical trials. As for prelabeling of cells in vitro, a challenge has been to induce a sufficient uptake of contrast agents into nonphagocytic cells, without affecting normal cellular function. It appears that this issue has now largely been resolved, leading to active research on monitoring the cellular biodistribution in vivo following transplantation or transfusion of these cells, including cell migration and trafficking. New applications of cellular MR imaging will be directed, for instance, towards our understanding of hematopoietic (immune) cell trafficking and of novel guided (stem) cell-based therapies aimed to be translated to the clinic in the future.
The relationship between synchronization and percolation for regular networks
Li, Zhe; Ren, Tao; Xu, Yanjie; Jin, Jianyu
2018-02-01
Synchronization and percolation are two essential phenomena in complex dynamical networks. They have been studied widely, but previously treated as unrelated. In this paper, the relationship between synchronization and percolation is revealed for regular networks. First, we discover a bridge between synchronization and percolation by using the eigenvalues of the Laplacian matrix to describe synchronizability and the eigenvalues of the adjacency matrix to describe the percolation threshold. Then, we propose a method to find this relationship for regular networks based on network topology. In particular, if the degree distribution of the network follows a delta function, we show that only the eigenvalues of the adjacency matrix need to be calculated. Finally, several examples are provided to demonstrate how to apply the proposed method to discover the relationship between synchronization and percolation for regular networks.
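The "bridge" quantities named in this abstract are standard ones: the Laplacian eigenratio λ_max/λ_2 as a synchronizability measure and 1/λ_max(A) as the mean-field bond-percolation threshold. A sketch on a 4-regular ring lattice (the specific network is an illustrative choice, not one from the paper):

```python
import numpy as np

def ring_lattice_adjacency(n, k):
    """Regular ring lattice: each node linked to its k nearest neighbours
    on each side, so every node has degree 2k."""
    A = np.zeros((n, n))
    for i in range(n):
        for d in range(1, k + 1):
            A[i, (i + d) % n] = A[i, (i - d) % n] = 1
    return A

A = ring_lattice_adjacency(20, 2)          # 4-regular network
L = np.diag(A.sum(axis=1)) - A             # graph Laplacian
lap = np.sort(np.linalg.eigvalsh(L))
adj_max = np.max(np.linalg.eigvalsh(A))

sync_ratio = lap[-1] / lap[1]   # eigenratio: smaller means easier to synchronize
perc_threshold = 1.0 / adj_max  # mean-field bond-percolation threshold
```

For a connected d-regular graph the largest adjacency eigenvalue is exactly d, which is why, for delta-function degree distributions, the adjacency spectrum alone suffices.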
Selecting protein families for environmental features based on manifold regularization.
Jiang, Xingpeng; Xu, Weiwei; Park, E K; Li, Guangrong
2014-06-01
Recently, statistical and machine learning methods have been developed to identify functional or taxonomic features associated with environmental conditions or physiological status. Proteins (or other functional and taxonomic entities) that are important to environmental features can potentially be used as biosensors. A major challenge is how the distribution of protein and gene functions embodies the adaptation of microbial communities across environments and host habitats. In this paper, we propose a novel regularization method for linear regression to address this challenge. The approach is inspired by locally linear embedding (LLE), and we call it manifold-constrained regularization for linear regression (McRe). The novel regularization procedure also has potential to be used in solving other linear systems. We demonstrate the efficiency and the performance of the approach in both simulation and real data.
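A common way to realize such a manifold constraint is a graph-Laplacian penalty added to ridge regression, keeping predictions similar for samples linked in an affinity graph; a sketch of that general construction (the exact McRe formulation may differ in how the graph weights are built):

```python
import numpy as np

def manifold_regularized_regression(X, y, W_aff, lam_ridge=1e-2, lam_manifold=1e-1):
    """Linear regression with a graph-Laplacian penalty w^T X^T L X w that
    pulls predictions of graph-linked samples together (an LLE-inspired
    manifold constraint in the spirit of McRe)."""
    Lap = np.diag(W_aff.sum(axis=1)) - W_aff   # graph Laplacian of affinities
    G = (X.T @ X + lam_ridge * np.eye(X.shape[1])
         + lam_manifold * X.T @ Lap @ X)
    return np.linalg.solve(G, X.T @ y)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
W_aff = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy affinity graph
w = manifold_regularized_regression(X, y, W_aff)
```

Setting `lam_manifold=0` recovers ordinary ridge regression, which makes the contribution of the manifold term easy to isolate.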
Dynamics of coherent states in regular and chaotic regimes of the non-integrable Dicke model
Lerma-Hernández, S.; Chávez-Carlos, J.; Bastarrachea-Magnani, M. A.; López-del-Carpio, B.; Hirsch, J. G.
2018-04-01
The quantum dynamics of initial coherent states is studied in the Dicke model and correlated with the dynamics, regular or chaotic, of their classical limit. Analytical expressions for the survival probability, i.e. the probability of finding the system in its initial state at time t, are provided in the regular regions of the model. The results for regular regimes are compared with those of the chaotic ones. It is found that initial coherent states in regular regions have a much longer equilibration time than those located in chaotic regions. The properties of the distributions of the initial coherent states in the Hamiltonian eigenbasis are also studied. It is found that for regular states the components with non-negligible contributions are organized in sequences of energy levels distributed according to Gaussian functions. In the case of chaotic coherent states, the energy components do not have a simple structure and the number of participating energy levels is larger than in the regular cases.
Numerical Study on Critical Wedge Angle of Cellular Detonation Reflections
International Nuclear Information System (INIS)
Gang, Wang; Kai-Xin, Liu; De-Liang, Zhang
2010-01-01
The critical wedge angle (CWA) for the transition from regular reflection (RR) to Mach reflection (MR) of a cellular detonation wave is studied numerically by an improved space-time conservation element and solution element method together with a two-step chemical reaction model. The accuracy of the numerical method is verified by simulating cellular detonation reflections at a 19.3° wedge. Planar and cellular detonation reflections over 45°–55° wedges are also simulated. When the cellular detonation wave passes over a 50° wedge, the numerical results reveal a new phenomenon: RR and MR occur alternately. The transition process between RR and MR is investigated with local pressure contours. The numerical analysis shows that the cellular structure is the essential reason for this phenomenon and that the CWA of detonation reflection is not a single angle but an angle range.
Iterative Method of Regularization with Application of Advanced Technique for Detection of Contours
International Nuclear Information System (INIS)
Niedziela, T.; Stankiewicz, A.
2000-01-01
This paper proposes a novel iterative method of regularization with the application of an advanced technique for the detection of contours. To eliminate noise, the properties of convolution of functions are utilized. The method can be implemented in a simple cellular neural network, which creates the possibility of contour extraction by automatic image recognition equipment. (author)
International Nuclear Information System (INIS)
Mori, N.; Kobayashi, K.
1996-01-01
A two-dimensional neutron diffusion equation is solved for regular polygonal regions by the finite Fourier transformation, and geometrical bucklings are calculated for regular polygons with 3 to 10 sides. In the case of the regular triangular region, it is found that a simple and rigorous analytic solution is obtained for the geometrical buckling and the distribution of the neutron current along the outer boundary. (author)
Geosocial process and its regularities
Vikulina, Marina; Vikulin, Alexander; Dolgaya, Anna
2015-04-01
Natural disasters and social events (wars, revolutions, genocides, epidemics, fires, etc.) accompany each other throughout human civilization, reflecting the close relationship of these seemingly different phenomena. In order to study this relationship, the authors compiled and analyzed a list of 2,400 natural disasters and social phenomena, weighted by magnitude, that occurred during the last XXXVI centuries of our history. Statistical analysis was performed separately for each aggregate (natural disasters and social phenomena) and for each statistically representative type of event; there were 5 + 5 = 10 types. It is shown that the numbers of events in the list are distributed according to a logarithmic law: the bigger the event, the less likely it happens. For each type of event and each aggregate, the existence of periodicities with periods of 280 ± 60 years was established. Statistical analysis of the time intervals between adjacent events for both aggregates showed good agreement with the Weibull-Gnedenko distribution with a shape parameter less than 1, which is equivalent to the conclusion that events group at small time intervals. Modeling the statistics of time intervals with a Pareto distribution allowed us to identify an emergent property of all events in the aggregate. This result allowed the authors to conclude that natural disasters and social phenomena interact. The list of events compiled by the authors, and the properties of cyclicity, grouping and interaction identified here for the first time, form the basis for modeling an essentially unified geosocial process at a sufficiently high statistical level. Proof of interaction between "lifeless" Nature and Society is fundamental and provides a new approach to forecasting demographic crises that takes into account both natural disasters and social phenomena.
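The shape-parameter diagnostic described above can be reproduced on synthetic data (a sketch only; the shape 0.7, scale 10 and sample size are made-up values, not the paper's data):

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(42)
# Synthetic inter-event intervals drawn with shape c < 1 (clustered events)
intervals = weibull_min.rvs(0.7, scale=10.0, size=5000, random_state=rng)

# Fit shape and scale by maximum likelihood, with the location fixed at zero
c_hat, loc, scale_hat = weibull_min.fit(intervals, floc=0)

# c_hat < 1 means a decreasing hazard rate: short intervals between events
# are over-represented, i.e. events group in time, as the abstract concludes.
```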
Higher derivative regularization and chiral anomaly
International Nuclear Information System (INIS)
Nagahama, Yoshinori.
1985-02-01
A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)
Regularity effect in prospective memory during aging
Directory of Open Access Journals (Sweden)
Geoffrey Blondelle
2016-10-01
Full Text Available Background: Regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, shifting), binding, short-term memory, and retrospective episodic memory, to identify those involved in PM according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling regular activities involved only planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical
Regularity effect in prospective memory during aging
Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique
2016-01-01
Background: Regularity effect can affect performance in prospective memory (PM), but little is known on the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults.Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...
Cellularized Cellular Solids via Freeze-Casting.
Christoph, Sarah; Kwiatoszynski, Julien; Coradin, Thibaud; Fernandes, Francisco M
2016-02-01
The elaboration of metabolically active cell-containing materials is a decisive step toward the successful application of cell-based technologies. The present work unveils a new process that simultaneously encapsulates living cells and shapes the cell-containing materials into solid-state macroporous foams with precisely controlled morphology. Our strategy is based on freeze casting, an ice-templating materials processing technique that has recently emerged for the structuring of colloids into macroporous materials. Our results indicate that it is possible to combine precise structuring of the materials with cellular metabolic activity for the model organism Saccharomyces cerevisiae. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Cellular modeling of fault-tolerant multicomputers
Energy Technology Data Exchange (ETDEWEB)
Morgan, G
1987-01-01
The work described was concerned with a novel method for the investigation of fault tolerance in large regular networks of computers. The motivation was to provide a technique useful in the rapid evaluation of highly reliable systems that exploit the low cost and ease of volume production of simple microcomputer components. First, a system model and simulator based upon cellular automata are developed. This model is characterized by its simplicity and ease of modification when adapting to new types of network. Second, in order to test and verify the predictive capabilities of the cellular system, a more detailed simulation is performed based upon an existing computational model, that of the Transputer. An example application is used to exercise various systems designed using the cellular model. Using this simulator, experimental results are obtained both for existing well-understood configurations and for more novel types also developed here. In all cases it was found that the cellular model and simulator successfully predicted the ranking in reliability improvement of the systems studied.
Iterative Regularization with Minimum-Residual Methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2007-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
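The role of the Krylov projection as a regularizer can be sketched with SciPy's MINRES, where the iteration count plays the part of the regularization parameter (the symmetric ill-conditioned test problem below is invented for illustration; MINRES's minimum-residual property guarantees the residual norm is non-increasing in the iteration number):

```python
import numpy as np
from scipy.sparse.linalg import minres

# Symmetric, ill-conditioned toy problem with a rapidly decaying spectrum
rng = np.random.default_rng(0)
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(1.0 / (1.0 + np.arange(n)) ** 2) @ Q.T
x_true = Q[:, 0] + 0.5 * Q[:, 1]                 # smooth "exact" solution
b = A @ x_true + 1e-8 * rng.standard_normal(n)   # slightly noisy data

# Truncating the iteration regularizes: few iterations keep the iterate in a
# small Krylov subspace spanned by smooth basis vectors; many iterations
# eventually start fitting the noise (semiconvergence).
x_early, _ = minres(A, b, maxiter=3)
x_late, _ = minres(A, b, maxiter=50)
```

Plotting the error ||x_k − x_true|| against k for such a problem typically shows the semiconvergence curve that motivates choosing the stopping index as a regularization parameter.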
Iterative regularization with minimum-residual methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2006-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
Energy Technology Data Exchange (ETDEWEB)
Rojas C, E. L. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Santos C, C. L. [Universidad Autonoma del Estado de Mexico, Paseo Tollocan y Jesus Carranza, Toluca 50120, Estado de Mexico (Mexico)], e-mail: leticia.rojas@inin.gob.mx
2009-10-15
{sup 188}Re is a gamma-emitting radionuclide useful for obtaining gamma-graphic images, but it also emits beta radiation and Auger electrons. A biomolecule directed to a specific receptor of a cancer cell and labeled with a radionuclide that emits beta particles and Auger electrons, such as {sup 188}Re-Tat-Bombesin, has the potential to be used in molecular targeted radiotherapy because of its capacity to penetrate the cell nucleus. In this system, the radiation dose is deposited locally at microscopic levels in specific subcellular sites, where the Auger emissions contribute significantly to the absorbed dose. Cellular dosimetry is performed in most cases using analytic or semi-analytic methods, for example the cellular MIRD methodology. However, these calculations must be complemented by simulating the electron transport and considering experimental biokinetic data. Therefore, this work presents preliminary results of dosimetric calculations at the subcellular level for {sup 188}Re-Tat-Bombesin by Monte Carlo simulation, using the 2008 version of the PENELOPE/penEasy code. The spatial distribution of absorbed dose in membrane, cytoplasm and nucleus was calculated for the geometry of a cell of 10 {mu}m diameter, with a nucleus of 2 {mu}m radius and a membrane 0.2 {mu}m thick, considering the elemental composition proposed in the literature for each cellular compartment. The total number of disintegrations at the subcellular level was evaluated by integrating the activity as a function of time, starting from experimental biokinetic data in MDA-MB231 breast cancer cells. The preliminary results show that 46.4% of the total disintegrations per unit of activity captured by the cell occur in the nucleus, 38.4% in the membrane and 15.2% in the cytoplasm. The absorbed doses due to Auger electrons for 1 Bq of {sup 188}Re located in the cell membrane were 1.32E-1 and 1.43E-1 Gy in the cytoplasm and nucleus, respectively. (Author)
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
Energy Technology Data Exchange (ETDEWEB)
Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China) and School of Life Sciences and Technology, Xidian University, Xi' an 710071 (China)
2011-11-15
Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise, so regularization methods are commonly used to find a regularized solution. Nevertheless, the quality of the reconstructed bioluminescent source obtained by regularization methods depends crucially on the choice of the regularization parameters, and to date their selection remains challenging. To address these problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation to model the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. The relationship between the unknown source distribution and the multiview, multispectral boundary measurements is then established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated with an l{sub 2} data fidelity term and a general regularization term. For choosing the regularization parameters for BLT, an efficient model function approach is proposed which does not require knowledge of the noise level; it requires only the computation of the residual norm and the regularized solution norm. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used
Epigenetics and Cellular Metabolism
Directory of Open Access Journals (Sweden)
Wenyi Xu
2016-01-01
Full Text Available Living eukaryotic systems evolve delicate cellular mechanisms for responding to various environmental signals. Among them, the epigenetic machinery (DNA methylation, histone modifications, microRNAs, etc.) is the hub for transducing external stimuli into a transcriptional response. Emerging evidence supports the concept that epigenetic signatures are essential for the proper maintenance of cellular metabolism. On the other hand, metabolites, a main environmental input, can also influence the processing of epigenetic memory. Here, we summarize recent research progress on the epigenetic regulation of cellular metabolism and discuss how dysfunction of the epigenetic machineries influences the development of metabolic disorders such as diabetes and obesity; we then discuss the notion that manipulating metabolites, the fuel of cell metabolism, can serve as a strategy for interfering with the epigenetic machinery and the progression of related diseases.
Regularization of the Coulomb scattering problem
International Nuclear Information System (INIS)
Baryshevskii, V.G.; Feranchuk, I.D.; Kats, P.B.
2004-01-01
The exact solution of the Schroedinger equation for the Coulomb potential is used within the scope of both stationary and time-dependent scattering theories in order to find the parameters which determine the regularization of the Rutherford cross section when the scattering angle tends to zero but the distance r from the center remains finite. The angular distribution of the particles scattered in the Coulomb field is studied at a large but finite distance r from the center. It is shown that the standard asymptotic representation of the wave functions is inapplicable when small scattering angles are considered. The unitary property of the scattering matrix is analyzed and the 'optical' theorem for this case is discussed. The total and transport cross sections for scattering of the particle by the Coulomb center prove to be finite and are calculated in analytical form. It is shown that the effects under consideration can be important for the observed characteristics of transport processes in semiconductors, which are determined by electron and hole scattering in the field of charged impurity centers.
TRANSIENT LUNAR PHENOMENA: REGULARITY AND REALITY
International Nuclear Information System (INIS)
Crotts, Arlin P. S.
2009-01-01
Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables which should produce differing results if determining factors involve humans, and not reflecting phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ∼50% of reports originate from near Aristarchus, ∼16% from Plato, ∼6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a 'feature' as defined). TLP count consistency for these features indicates that ∼80% of these may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi. These reports begin almost exclusively after 1955, when TLPs became widely known and many more (and inexperienced) observers searched for TLPs. In a companion paper, we compare the spatial distribution of robust TLP sites to transient outgassing (seen by Apollo and Lunar Prospector instruments). To a high confidence, robust TLP sites and those of lunar outgassing correlate strongly, further arguing for the reality of TLPs.
The cellular basis of organ ageing
Knook, D.L.
1978-01-01
Ageing is associated with declines in the functional capacities of several organs. General causes for the decline can be divided into: 1. intrinsic cellular causes and 2. extracellular causes, e.g., changes in blood circulation and distribution. For the first group of causes, there is evidence for a
Wireless Cellular Mobile Communications
Directory of Open Access Journals (Sweden)
V. Zalud
2002-12-01
Full Text Available In this article the history of wireless cellular mobile communications is briefly reviewed, the progress in current second-generation (2G) cellular standards is examined, and their migration to the third generation (3G) is discussed. The European 2G cellular standard GSM and its evolution phases GPRS and EDGE are described in some detail. The third-generation standard UMTS, which takes up the GSM/GPRS core network and is equipped with a new advanced access network based on code division multiple access (CDMA), is investigated as well. A sketch of the perspective of mobile communication beyond 3G concludes this article.
Electrically heated 3D-macro cellular SiC structures for ignition and combustion application
International Nuclear Information System (INIS)
Falgenhauer, Ralf; Rambacher, Patrick; Schlier, Lorenz; Volkert, Jochen; Travitzky, Nahum; Greil, Peter; Weclas, Miroslaw
2017-01-01
Highlights: • 3D-printed macro-cellular SiC structure. • Directly integrated electrically heated ignition element used in a combustion reactor. • Experimental investigation of the ignition process. - Abstract: The paper describes different aspects of porous combustion reactor operation, especially under cold-start conditions. Under cold-start conditions it is necessary to increase the internal energy of the combustion reactor, to accumulate enough energy inside its solid phase and to reach at least the ignition temperature on the reactor's inner surface. The most practicable method to preheat a cold porous reactor is to use its surface as a flame holder and to apply free-flame combustion as a heat source for the preheating process. This paper presents a new electrically heated ignition element, which is integrated in a three-dimensional macro-cellular SiSiC reactor structure. For the development of the ignition element it was assumed that the element is made of the same material as the combustion reactor itself and is fully integrated within the three-dimensional macro-cellular structure of the combustion reactor. Additive manufacturing such as three-dimensional (3D) printing permits the production of regular SiSiC structures with constant strut thickness and a defined current flow path. To obtain a controlled temperature distribution on the ignition element it is necessary to control the current density distribution in the three-dimensional macro-cellular reactor structure. The ignition element is designed as an electrical resistance in an electric circuit, converting the flowing current into heat with the goal of reaching the highest temperature in the ignition region (glow plug). First experiments show that the ignition element integrated in a combustion reactor exhibits high dynamics and can be heated to temperatures well above 1000 °C in a very short time (approx. 800 ms) for a current of I = 150 A.
Cellular vs. organ approaches to dose estimates
International Nuclear Information System (INIS)
Adelstein, S.J.; Kassis, A.I.; Sastry, K.S.R.
1986-01-01
The cellular distribution of tissue-incorporated radionuclides has generally been neglected in the dosimetry of internal emitters. Traditional dosimetry assumes homogeneous distribution of radionuclides in organs of interest, while presuming that the ranges of particulate radiations are large relative to typical cell diameters. The macroscopic distribution of dose thus calculated has generally served as a sufficient approximation for the energy deposited within radiosensitive sites. However, with the increasing utilization of intracellular agents, such as thallium-201, it has become necessary to examine the microscopic distribution of energy at the cellular level. This is particularly important in the instance of radionuclides that decay by electron capture or by internal conversion with the release of Auger and Coster-Kronig electrons. In many instances, these electrons are released as a dense shower of low-energy particles with ranges of subcellular dimensions. The high electron density in the immediate vicinity of the decaying atom produces a focal deposition of energy that far exceeds the average dose taken over several cell diameters. These studies point out the increasing need to take into account the microscopic distribution of dose on the cellular level as radionuclides distributed in cells become more commonplace, especially if the decay involves electron capture or internal conversion. As radiotracers are developed for the measurement of intracellular functions these factors should be given greater consideration. 16 references, 5 figures, 5 tables
Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation
International Nuclear Information System (INIS)
Bardsley, Johnathan M; Goldes, John
2009-01-01
In image processing applications, image intensity is often measured via the counting of incident photons emitted by the object of interest. In such cases, image data noise is accurately modeled by a Poisson distribution. This motivates the use of Poisson maximum likelihood estimation for image reconstruction. However, when the underlying model equation is ill-posed, regularization is needed. Regularized Poisson likelihood estimation has been studied extensively by the authors, though a problem of high importance remains: the choice of the regularization parameter. We will present three statistically motivated methods for choosing the regularization parameter, and numerical examples will be presented to illustrate their effectiveness
Manifold regularized multitask feature learning for multimodality disease classification.
Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang
2015-02-01
Multimodality-based methods have shown great advantages in classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Recently, multitask feature selection methods are typically used for joint selection of common features across multiple modalities. However, one disadvantage of existing multimodality-based methods is that they ignore the useful data distribution information in each modality, which is essential for subsequent classification. Accordingly, in this paper we propose a manifold regularized multitask feature learning method to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information in each modality. Specifically, we denote the feature learning on each modality as a single task, and use a group-sparsity regularizer to capture the intrinsic relatedness among multiple tasks (i.e., modalities) and jointly select the common features from multiple tasks. Furthermore, we introduce a new manifold-based Laplacian regularizer to preserve the data distribution information from each task. Finally, we use the multikernel support vector machine method to fuse multimodality data for eventual classification. Moreover, we also extend our method to the semisupervised setting, where only partial data are labeled. We evaluate our method using the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the AD neuroimaging initiative database. The experimental results demonstrate that our proposed method can not only achieve improved classification performance, but also help to discover the disease-related brain regions useful for disease diagnosis. © 2014 Wiley Periodicals, Inc.
A regularized stationary mean-field game
Yang, Xianjin
2016-01-01
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
A regularized stationary mean-field game
Yang, Xianjin
2016-04-19
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
On infinite regular and chiral maps
Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán
2015-01-01
We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.
From recreational to regular drug use
DEFF Research Database (Denmark)
Järvinen, Margaretha; Ravn, Signe
2011-01-01
This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...
Automating InDesign with Regular Expressions
Kahrel, Peter
2006-01-01
If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.
Regularization modeling for large-eddy simulation
Geurts, Bernardus J.; Holm, D.D.
2003-01-01
A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of
2010-07-01
... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...
Cellular dosimetry in nuclear medicine imaging: training
International Nuclear Information System (INIS)
Gardin, I.; Faraggi, M.; Stievenart, J.L.; Le Guludec, D.; Bok, B.
1998-01-01
The radionuclides used in nuclear medicine imaging emit not only diagnostically useful photons but also electron emissions responsible for dose heterogeneity at the cellular level. The mean dose delivered to the cell nucleus by the electron emissions of 99mTc, 123I, 111In, 67Ga and 201Tl has been calculated for cell nucleus, cytoplasmic and cell membrane distributions of radioactivity. This model takes into account both the self-dose, which results from the radionuclide located in the target cell, and the cross-dose, which comes from the surrounding cells. The results obtained by cellular dosimetry (D_cel) have been compared with those obtained with conventional dosimetry (D_conv), assuming the same amount of radioactivity per cell. Cellular dosimetry shows that, for cytoplasmic and cell membrane distributions of radioactivity, the main contribution to the dose to the cell nucleus comes from the surrounding cells. On the other hand, for a cell nucleus distribution of radioactivity, the self-dose is not negligible and may be the main contribution. The comparison between cellular and conventional dosimetry shows that the D_cel/D_conv ratio ranges from 0.61 to 0.89 for cytoplasmic and cell membrane distributions of radioactivity, depending on the radionuclide and cell dimensions. Thus, conventional dosimetry slightly overestimates the mean dose to the cell nucleus. On the other hand, D_cel/D_conv ranges from 1.1 to 75 for a cell nucleus distribution of radioactivity. Conventional dosimetry may therefore strongly underestimate the absorbed dose to the nucleus when radioactivity is located in the nucleus. The study indicates that in nuclear medicine imaging, cellular dosimetry may lead to a better understanding of the biological effects of radiopharmaceuticals. (authors)
Radiolabelled cellular blood elements
International Nuclear Information System (INIS)
Sinzinger, H.
1990-01-01
This book reports on radiolabelled cellular blood elements, covering new advances made during the past several years, in particular the use of Tc-99 as a tracer for blood elements. Coverage extends to several radiolabelled monoclonal antibodies that are specific for blood components and may label blood elements in vivo
Building synthetic cellular organization
Polka, Jessica K.; Silver, Pamela A.
2013-01-01
The elaborate spatial organization of cells enhances, restricts, and regulates protein–protein interactions. However, the biological significance of this organization has been difficult to study without ways of directly perturbing it. We highlight synthetic biology tools for engineering novel cellular organization, describing how they have been, and can be, used to advance cell biology.
Claman, Henry N.
1973-01-01
Discusses the nature of the immune response and traces many of the discoveries that have led to the present state of knowledge in immunology. The new cellular immunology is directing its efforts toward improving health by proper manipulation of the immune mechanisms of the body. (JR)
Electromagnetic cellular interactions.
Cifra, Michal; Fields, Jeremy Z; Farhadi, Ashkan
2011-05-01
Chemical and electrical interaction within and between cells is well established. Just the opposite is true of cellular interactions via other physical fields. The most probable candidate for another form of cellular interaction is the electromagnetic field. We review theories and experiments on how cells can generate and detect electromagnetic fields generally, and on whether the cell-generated electromagnetic field can mediate cellular interactions. We do not limit ourselves here to specialized electro-excitable cells; rather, we describe physical processes that are of a more general nature and probably present in almost every type of living cell. The spectral range included is broad, from kHz to the visible part of the electromagnetic spectrum. We show that there is a rather large number of theories on how cells can generate and detect electromagnetic fields, and we discuss experimental evidence on electromagnetic cellular interactions in the modern scientific literature. Although small, it is continuously accumulating. Copyright © 2010 Elsevier Ltd. All rights reserved.
Genetic Dominance & Cellular Processes
Seager, Robert D.
2014-01-01
In learning genetics, many students misunderstand and misinterpret what "dominance" means. Understanding is easier if students realize that dominance is not a mechanism, but rather a consequence of underlying cellular processes. For example, metabolic pathways are often little affected by changes in enzyme concentration. This means that…
[Cellular subcutaneous tissue. Anatomic observations].
Marquart-Elbaz, C; Varnaison, E; Sick, H; Grosshans, E; Cribier, B
2001-11-01
We showed in a companion paper that the definition of the French "subcutaneous cellular tissue" varied considerably from the 18th to the end of the 20th centuries and has not yet reached a consensus. To address the anatomic reality of this "subcutaneous cellular tissue", we investigated the anatomic structures underlying the fat tissue in normal human skin. Sixty specimens were excised from the surface to the deep structures (bone, muscle, cartilage) on different body sites of 3 cadavers from the Institut d'Anatomie Normale de Strasbourg. Samples were paraffin-embedded, stained and analysed with a binocular microscope taking ×1 photographs. Specimens were also excised and fixed after subcutaneous injection of India ink, after mechanical tissue splitting and after performing artificial skin folds. The aspects of the deep parts of the skin varied greatly according to their anatomic localisation. Below the adipose tissue, we often found a lamellar fibrous layer which extended from the interlobular septa and contained horizontally distributed fat cells. No specific tissue below the hypodermis was observed. Artificial skin folds involved either exclusively the dermis, when they were superficial, or included the hypodermis, but no specific structure was apparent in the center of the fold. India ink diffused to the adipose tissue, mainly along the septa, but did not localise in a specific subcutaneous compartment. This study shows that the histologic aspects of the deep part of the skin depend mainly on the anatomic localisation. Skin is composed of epidermis, dermis and hypodermis, and thus the hypodermis cannot be considered as being "subcutaneous". A fibrous lamellar structure, difficult to individualise and in continuity with the interlobular septa, is often found under the fat lobules. This structure is a cleavage line, as is always the case with loose connective tissues, but belongs to the hypodermis (i.e. fat tissue). No specific tissue nor any virtual space was
International Nuclear Information System (INIS)
Alili, Smail; Rugh, Hans Henrik
2008-01-01
We consider a supercritical branching process in a time-dependent environment ξ. We assume that the offspring distributions depend regularly (C^k or real-analytically) on real parameters λ. We show that the extinction probability q_λ(ξ), given the environment ξ, 'inherits' this regularity whenever the offspring distributions satisfy a condition of contraction type. Our proof makes use of the Poincaré metric on the complex unit disc and a real-analytic implicit function theorem.
An iterative method for Tikhonov regularization with a general linear regularization operator
Hochstenbach, M.E.; Reichel, L.
2010-01-01
Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan
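The Golub-Kahan-based iteration itself is beyond a short sketch, but the underlying Tikhonov problem min_x ||Ax - b||² + λ²||Lx||² can be illustrated directly. The toy below is an assumption-laden illustration, not the paper's algorithm: it solves the equivalent augmented least-squares system with NumPy, using the identity as regularization operator L and an arbitrary λ.

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """Solve min_x ||A x - b||^2 + lam^2 ||L x||^2 by stacking the
    augmented system [A; lam*L] x ~ [b; 0] and calling lstsq."""
    A_aug = np.vstack([A, lam * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Ill-conditioned toy problem: singular values decay like 10^-k.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10)) @ np.diag(10.0 ** -np.arange(10))
x_true = rng.normal(size=10)
b = A @ x_true + 1e-6 * rng.normal(size=20)
L = np.eye(10)            # identity regularization operator (assumption)
x_reg = tikhonov(A, b, L, lam=1e-4)
```

With a general operator L (e.g. a discrete derivative), the same stacking works unchanged, which is why the choice of L is treated as a free design parameter.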
Cellular autofluorescence imaging for early diagnosis of cancers
Steenkeste, Karine; Deniset, Ariane; Lecart, Sandrine; Leveque-Fort, Sandrine; Fontaine-Aupart, Marie-Pierre; Ferlicot, Sophie; Eschwege, Pascal
2005-08-01
Urinary cytology is employed in the diagnostic guidelines for bladder cancer in anatomo-pathological laboratories, mostly for its ability to diagnose cancers not detectable by cystoscopy, but also because it is a non-invasive and non-constraining technique for regular follow-up of the more exposed populations. The inability of cystoscopy to detect such cancers is mainly due to their localization either in the bladder or in the upper urinary tract and the prostate. However, urinary cytology lacks sensitivity, especially for the detection of low-grade, low-stage tumors, due to the inherent limitation of morphological criteria in distinguishing low-grade tumor cells from normal urothelial cells. For this purpose, we developed, in addition to urinary cytology, an original screening of these cytological slides using spectrally resolved and time-resolved fluorescence as a contrast factor, without changing any parameters of the cytological slide preparation. This method takes advantage of a femtosecond Ti:sapphire laser, continuously tunable in the spectral range 700-950 nm, allowing the observation of most endogenous cellular chromophores by biphotonic excitation. A commercial confocal microscope was also used in the measurements, allowing excitation of the samples between 458 nm and 633 nm. We observed that the fluorescence emission is differentially distributed in normal and pathological urothelial cells. Spectrally and time-resolved measurements attested to this difference over about one hundred cases, which were tested to confirm the high accuracy of this non-invasive technique.
Regularization Techniques for Linear Least-Squares Problems
Suliman, Mohamed
2016-04-01
Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most celebrated among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed. These new criteria allow, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix. As a result, the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard) and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
Regular graph construction for semi-supervised learning
International Nuclear Information System (INIS)
Vega-Oliveros, Didier A; Berton, Lilian; Eberle, Andre Mantini; Lopes, Alneu de Andrade; Zhao, Liang
2014-01-01
Semi-supervised learning (SSL) stands out for using a small amount of labeled points for data clustering and classification. In this scenario, graph-based methods allow the analysis of local and global characteristics of the available data by identifying classes or groups regardless of the data distribution and by representing submanifolds in Euclidean space. Most methods used in the SSL classification literature pay little attention to graph construction. However, regular graphs can achieve better classification accuracy than traditional methods such as k-nearest neighbors (kNN), since kNN favors the generation of hubs and is not appropriate for high-dimensional data. Nevertheless, methods commonly used for generating regular graphs have high computational cost. We tackle this problem by introducing an alternative method for the generation of regular graphs with better runtime performance than methods usually found in the area. Our technique is based on the preferential selection of vertices according to some topological measures, such as closeness, generating a regular graph at the end of the process. Experiments using the global and local consistency method for label propagation show that our method provides a better or equal classification rate in comparison with kNN
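As a rough illustration of why degree-capped construction avoids kNN hubs (this is a greedy toy sketch, not the authors' preferential-selection algorithm based on closeness), the code below adds candidate edges in order of increasing distance while capping every vertex's degree at k; the data, vertex count and k are assumptions.

```python
import numpy as np

def build_capped_graph(X, k):
    """Greedy sketch: add shortest candidate edges first, but never let
    any vertex exceed degree k, so hubs cannot form (unlike plain kNN).
    The result is near-regular; some vertices may end below degree k."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    deg = np.zeros(n, dtype=int)
    adj = {i: set() for i in range(n)}
    # candidate edges (i < j) sorted by distance
    pairs = sorted((D[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    for _, i, j in pairs:
        if deg[i] < k and deg[j] < k:
            adj[i].add(j); adj[j].add(i)
            deg[i] += 1; deg[j] += 1
    return adj, deg

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))   # 40 toy points in the plane (assumption)
adj, deg = build_capped_graph(X, k=4)
```

A plain kNN graph on the same data would give some vertices in-degree well above k; the cap is what keeps the degree distribution flat.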
A Novel Kernel-Based Regularization Technique for PET Image Reconstruction
Directory of Open Access Journals (Sweden)
Abdelwahhab Boudjelal
2017-06-01
Full Text Available Positron emission tomography (PET) is an imaging technique that generates 3D detail of physiological processes at the cellular level. The technique requires a radioactive tracer, which decays and releases a positron that collides with an electron; consequently, annihilation photons are emitted, which can be measured. The purpose of PET is to use the measurement of photons to reconstruct the distribution of radioisotopes in the body. Currently, PET is undergoing a revamp, with advancements in data measurement instruments and in the computing methods used to create the images. These computational methods are required to solve the inverse problem of "image reconstruction from projection". This paper proposes a novel kernel-based regularization technique for maximum-likelihood expectation-maximization (κ-MLEM) to reconstruct the image. Compared to standard MLEM, the proposed algorithm is more robust and more effective in removing background noise while preserving edges; this suppresses image artifacts, such as out-of-focus slice blur.
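The standard MLEM update that κ-MLEM builds on can be sketched in a few lines; the kernel regularization itself is the paper's contribution and is omitted here. The system matrix and data below are toy assumptions, not a PET geometry.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Plain MLEM iterate: x <- x / (A^T 1) * A^T (y / (A x)).
    The kernel-regularized variant replaces the image by a kernel
    expansion; this sketch shows only the baseline update."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x = x / sens * (A.T @ ratio)          # multiplicative update
    return x

# Toy "scanner": 30 measurements of an 8-voxel image (assumptions).
rng = np.random.default_rng(2)
A = rng.random((30, 8))
x_true = rng.random(8) + 0.5
y = rng.poisson(A @ x_true * 50) / 50.0       # Poisson-noisy data
x_est = mlem(A, y)
```

The multiplicative form guarantees nonnegative iterates, which is the property that makes MLEM natural for count data.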
Sensing Phosphatidylserine in Cellular Membranes
Directory of Open Access Journals (Sweden)
Jason G. Kay
2011-01-01
Phosphatidylserine, a phospholipid with a negatively charged head-group, is an important constituent of eukaryotic cellular membranes. On the plasma membrane, rather than being evenly distributed, phosphatidylserine is found preferentially in the inner leaflet. Disruption of this asymmetry, leading to the appearance of phosphatidylserine on the surface of the cell, is known to play a central role in both apoptosis and blood clotting. Despite its importance, comparatively little is known about phosphatidylserine in cells: its precise subcellular localization, transmembrane topology and intracellular dynamics are poorly characterized. The recent development of new, genetically-encoded probes able to detect phosphatidylserine within live cells, however, is leading to a more in-depth understanding of the biology of this phospholipid. This review aims to give an overview of the current methods for phosphatidylserine detection within cells, and some of the recent realizations derived from their use.
Hierarchical regular small-world networks
International Nuclear Information System (INIS)
Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan
2008-01-01
Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log₂ N²), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. It suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)
Molecular and Cellular Signaling
Beckerman, Martin
2005-01-01
A small number of signaling pathways, no more than a dozen or so, form a control layer that is responsible for all signaling in and between cells of the human body. The signaling proteins belonging to the control layer determine what kinds of cells are made during development and how they function during adult life. Malfunctions in the proteins belonging to the control layer are responsible for a host of human diseases ranging from neurological disorders to cancers. Most drugs target components in the control layer, and difficulties in drug design are intimately related to the architecture of the control layer. Molecular and Cellular Signaling provides an introduction to molecular and cellular signaling in biological systems with an emphasis on the underlying physical principles. The text is aimed at upper-level undergraduates, graduate students and individuals in medicine and pharmacology interested in broadening their understanding of how cells regulate and coordinate their core activities and how diseases ...
International Nuclear Information System (INIS)
Quasthoff, U.
1985-07-01
Cellular automata by definition consist of a finite or infinite number of cells, say of unit length, with each cell having the same transition function. These cells are usually considered as the smallest elements and so the space filled with these cells becomes discrete. Nevertheless, large pictures created by such cellular automata look very fractal. So we try to replace each cell by a couple of smaller cells, which have the same transition functions as the large ones. There are automata where this replacement does not destroy the macroscopic structure. In these cases this nesting process can be iterated. The paper contains large classes of automata with the above properties. In the case of one dimensional automata with two states and next neighbour interaction and a nesting function of the same type a complete classification is given. (author)
Diagrammatic methods in phase-space regularization
International Nuclear Information System (INIS)
Bern, Z.; Halpern, M.B.; California Univ., Berkeley
1987-11-01
Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the nogrowth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)
J-regular rings with injectivities
Shen, Liang
2010-01-01
A ring R is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an n-generated small right ideal of R to R_R can be extended to one from R_R to R_R; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.
Wavefront cellular learning automata.
Moradabadi, Behnaz; Meybodi, Mohammad Reza
2018-02-01
This paper proposes a new cellular learning automaton, called a wavefront cellular learning automaton (WCLA). The proposed WCLA has a set of learning automata mapped to a connected structure and uses this structure to propagate the state changes of the learning automata over the structure using waves. In the WCLA, after one learning automaton chooses its action, if this chosen action is different from the previous action, it can send a wave to its neighbors and activate them. Each neighbor receiving the wave is activated and must choose a new action. This structure for the WCLA is necessary in many dynamic areas such as social networks, computer networks, grid computing, and web mining. In this paper, we introduce the WCLA framework as an optimization tool with diffusion capability, study its behavior over time using ordinary differential equation solutions, and present its accuracy using expediency analysis. To show the superiority of the proposed WCLA, we compare the proposed method with some other types of cellular learning automata using two benchmark problems.
Algorithm for cellular reprogramming.
Ronquist, Scott; Patterson, Geoff; Muir, Lindsey A; Lindsly, Stephen; Chen, Haiming; Brown, Markus; Wicha, Max S; Bloch, Anthony; Brockett, Roger; Rajapakse, Indika
2017-11-07
The day we understand the time evolution of subcellular events at a level of detail comparable to physical systems governed by Newton's laws of motion seems far away. Even so, quantitative approaches to cellular dynamics add to our understanding of cell biology. With data-guided frameworks we can develop better predictions about, and methods for, control over specific biological processes and system-wide cell behavior. Here we describe an approach for optimizing the use of transcription factors (TFs) in cellular reprogramming, based on a device commonly used in optimal control. We construct an approximate model for the natural evolution of a cell-cycle-synchronized population of human fibroblasts, based on data obtained by sampling the expression of 22,083 genes at several time points during the cell cycle. To arrive at a model of moderate complexity, we cluster gene expression based on division of the genome into topologically associating domains (TADs) and then model the dynamics of TAD expression levels. Based on this dynamical model and additional data, such as known TF binding sites and activity, we develop a methodology for identifying the top TF candidates for a specific cellular reprogramming task. Our data-guided methodology identifies a number of TFs previously validated for reprogramming and/or natural differentiation and predicts some potentially useful combinations of TFs. Our findings highlight the immense potential of dynamical models, mathematics, and data-guided methodologies for improving strategies for control over biological processes. Copyright © 2017 the Author(s). Published by PNAS.
EIT Imaging Regularization Based on Spectral Graph Wavelets.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut
2017-09-01
The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved within the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of the conductivity distribution between neighboring elements and is sensitive to noise. As an effect, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
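A minimal sketch of the graph-spectral idea, under strong simplifications: instead of multiscale spectral graph wavelets, it soft-thresholds a mesh signal in the eigenbasis of the graph Laplacian (a single "scale"), which already illustrates how graph-spectral sparsity smooths between neighboring elements. The path graph standing in for an FEM mesh, the signal, and the threshold are all assumptions.

```python
import numpy as np

def graph_spectral_denoise(L, signal, thresh):
    """Transform a mesh signal into the Laplacian eigenbasis,
    soft-threshold the coefficients (sparsity prior), and transform
    back. The paper uses spectral graph *wavelets*; this single-basis
    thresholding only illustrates the underlying mechanism."""
    w, U = np.linalg.eigh(L)                      # graph Fourier basis
    c = U.T @ signal                              # analysis
    c = np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)  # shrinkage
    return U @ c                                  # synthesis

# Path graph of 6 "elements" as a stand-in for an FEM mesh graph.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A                    # combinatorial Laplacian
sig = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0]) + 0.05
out = graph_spectral_denoise(L, sig, thresh=0.05)
```

Because the shrinkage acts on spectral coefficients rather than on individual elements, neighboring mesh elements are smoothed jointly, which is the behavior elementwise sparsity fails to provide.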
Environment Aware Cellular Networks
Ghazzai, Hakim
2015-02-01
The unprecedented rise of mobile user demand over the years has led to an enormous growth in the energy consumption of wireless networks, as well as in greenhouse gas emissions, currently estimated at around 70 million tons per year. This significant growth in energy consumption compels network companies to pay huge bills, which represent around half of their operating expenditures. Therefore, many service providers, including mobile operators, are looking for new and modern green solutions to help reduce their expenses as well as the level of their CO2 emissions. Base stations are the most power-hungry elements in cellular networks: they drain around 80% of the total network energy consumption even during low-traffic periods. Thus, there is a growing need to develop more energy-efficient techniques to enhance the green performance of future 4G/5G cellular networks. Because traffic load in cellular networks fluctuates between different periods of the day and between different areas (shopping or business districts versus residential areas), the base station sleeping strategy has been one of the most popular research topics in green communications. In this presentation, we present several practical green techniques that provide significant gains for mobile operators. Indeed, combined with the base station sleeping strategy, these techniques achieve not only a minimization of fossil fuel consumption but also an enhancement of mobile operator profits. We start with an optimized cell planning method that considers varying spatial and temporal user densities. We then use optimal transport theory to define the cell boundaries such that the network's total transmit power is reduced. Afterwards, we exploit the features of the modern electrical grid, the smart grid, as a new tool of power management for cellular networks, and we optimize the energy procurement from multiple energy retailers characterized by different prices and pollutant
Thermal expansion behavior in fabricated cellular structures
International Nuclear Information System (INIS)
Oruganti, R.K.; Ghosh, A.K.; Mazumder, J.
2004-01-01
Thermal expansion behavior of cellular structures is of interest in applications where undesirable deformation and failure are caused by thermal expansion mismatch. This report describes the role of processing-induced effects and metallurgical aspects of melt-processed cellular structures, such as a bi-material structure designed to contract on heating, as well as uni-material structures of regular and stochastic topology. This bi-material structure utilized the principle of internal geometric constraints to alter the expansion behavior of the internal ligaments to create overall contraction of the structure. Homogenization design method was used to design the structure, and fabrication was by direct metal deposition by laser melting of powder in another part of a joint effort. The degree of porosity and grain size in the fabricated structure are characterized and related to the laser deposition parameters. The structure was found to contract upon heating over a short range of temperature subsequent to which normal expansion ensued. Also examined in this report are uni-material cellular structures, in which internal constraints arise from residual stress variations caused by the fabrication process, and thereby alter their expansion characteristics. A simple analysis of thermal strain of this material supports the observed thermal expansion behavior
Generalized regular genus for manifolds with boundary
Directory of Open Access Journals (Sweden)
Paola Cristofori
2003-05-01
We introduce a generalization of the regular genus, a combinatorial invariant of PL manifolds ([10]), which is proved to be strictly related, in dimension three, to generalized Heegaard splittings defined in [12].
Geometric regularizations and dual conifold transitions
International Nuclear Information System (INIS)
Landsteiner, Karl; Lazaroiu, Calin I.
2003-01-01
We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)
Fast and compact regular expression matching
DEFF Research Database (Denmark)
Bille, Philip; Farach-Colton, Martin
2008-01-01
We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how to improve the space and/or remove a dependency on the alphabet size for each problem, using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way.
Regular-fat dairy and human health
DEFF Research Database (Denmark)
Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas
2016-01-01
In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular-fat dairy products and human health. In an effort to ..., cheese and yogurt can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted.
Deterministic automata for extended regular expressions
Directory of Open Access Journals (Sweden)
Syzdykov Mirzakhmet
2017-12-01
Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of "overriding" the source NFA (an NFA not defined by the subset-construction rules) is used. Past work described only the algorithm for the AND-operator (intersection of regular languages); in this paper the construction for the MINUS-operator (and the complement) is shown.
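The intersection behind the AND-operator can be sketched with the classical product construction on DFAs. The following is a minimal illustrative sketch, not the paper's NFA-"overriding" method; the helper names and the toy automata are invented for the example:

```python
from itertools import product

def intersect_dfa(d1, d2, alphabet):
    """Product construction: the returned DFA accepts L(d1) ∩ L(d2).

    A DFA is a triple (start, accepting, delta), where delta maps
    (state, symbol) -> state.  States of the product are pairs, and a
    pair is accepting iff both components are accepting.
    """
    (s1, acc1, t1), (s2, acc2, t2) = d1, d2
    states1 = {s for s, _ in t1}
    states2 = {s for s, _ in t2}
    delta = {((p, q), a): (t1[p, a], t2[q, a])
             for p, q in product(states1, states2) for a in alphabet}
    return (s1, s2), {(p, q) for p in acc1 for q in acc2}, delta

def accepts(dfa, word):
    state, accepting, delta = dfa
    for ch in word:
        state = delta[state, ch]
    return state in accepting

# d1 accepts words with an even number of 1s; d2 accepts words ending in 1
d1 = (0, {0}, {(0, '0'): 0, (0, '1'): 1, (1, '0'): 1, (1, '1'): 0})
d2 = ('A', {'B'}, {('A', '0'): 'A', ('A', '1'): 'B',
                   ('B', '0'): 'A', ('B', '1'): 'B'})
both = intersect_dfa(d1, d2, '01')
```

For example, `accepts(both, '101')` holds (two 1s, ends in 1) while `accepts(both, '1')` does not. Complement is even simpler in this representation: swap accepting and non-accepting states of a complete DFA.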
Regularities of intermediate adsorption complex relaxation
International Nuclear Information System (INIS)
Manukova, L.A.
1982-01-01
The experimental data characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N₂ system at 77 K are given. The molecular-beam method was used in the investigation. Analytical expressions are obtained for the change, during relaxation, of the full and specific rates of transition from the intermediate state into the "non-reversible" state, of desorption into the gas phase, and of accumulation of particles in the intermediate state.
Online Manifold Regularization by Dual Ascending Procedure
Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui
2013-01-01
We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is the key to transferring manifold regularization from the offline to the online setting in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches.
A new approach to nonlinear constrained Tikhonov regularization
Ito, Kazufumi
2011-09-16
We present a novel approach to nonlinear constrained Tikhonov regularization from the viewpoint of optimization theory. A second-order sufficient optimality condition is suggested as a nonlinearity condition to handle the nonlinearity of the forward operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a general class of parameter identification problems, for which (new) source and nonlinearity conditions are derived and the structural property of the nonlinearity term is revealed. A number of examples including identifying distributed parameters in elliptic differential equations are presented. © 2011 IOP Publishing Ltd.
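The discrepancy principle mentioned above as an a posteriori choice rule can be illustrated in the simplest linear, unconstrained setting. This is a toy numpy sketch under those assumptions, not the paper's nonlinear constrained method; the grid search and the test problem are invented for illustration:

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Closed-form minimizer of ||Ax - y||^2 + alpha * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def discrepancy_alpha(A, y, delta, alphas):
    """Morozov's discrepancy principle on a grid: return the largest
    regularization strength whose residual stays within the noise
    level delta, together with the corresponding solution."""
    for alpha in sorted(alphas, reverse=True):
        x = tikhonov(A, y, alpha)
        if np.linalg.norm(A @ x - y) <= delta:
            return alpha, x
    raise ValueError("no alpha on the grid meets the discrepancy bound")

# mildly ill-conditioned test problem with known noise level
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 40), 8)
x_true = np.ones(8)
noise = 1e-3 * rng.standard_normal(40)
y = A @ x_true + noise
delta = np.linalg.norm(noise)
alpha, x_reg = discrepancy_alpha(A, y, delta, np.logspace(-12, 0, 25))
```

Scanning from large to small alpha means the first feasible alpha is the most strongly regularized solution consistent with the data, which is exactly the trade-off the discrepancy principle encodes.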
High-resolution seismic data regularization and wavefield separation
Cao, Aimin; Stump, Brian; DeShon, Heather
2018-04-01
We present a new algorithm, non-equispaced fast antileakage Fourier transform (NFALFT), for irregularly sampled seismic data regularization. Synthetic tests from 1-D to 5-D show that the algorithm may efficiently remove leaked energy in the frequency wavenumber domain, and its corresponding regularization process is accurate and fast. Taking advantage of the NFALFT algorithm, we suggest a new method (wavefield separation) for the detection of the Earth's inner core shear wave with irregularly distributed seismic arrays or networks. All interfering seismic phases that propagate along the minor arc are removed from the time window around the PKJKP arrival. The NFALFT algorithm is developed for seismic data, but may also be used for other irregularly sampled temporal or spatial data processing.
Cosserat modeling of cellular solids
Onck, P.R.
Cellular solids inherit their macroscopic mechanical properties directly from the cellular microstructure. However, the characteristic material length scale is often not small compared to macroscopic dimensions, which limits the applicability of classical continuum-type constitutive models. Cosserat
Evaluation of Structural Cellular Glass
Adams, M. A.; Zwissler, J. G.
1984-01-01
Preliminary design information is presented. The first report discusses the state of structural-cellular-glass programs as of June 1979. The second report gives further details of a program to develop improved cellular glasses and to characterize the properties of these glasses and commercially available materials.
Connection machine: a computer architecture based on cellular automata
Energy Technology Data Exchange (ETDEWEB)
Hillis, W D
1984-01-01
This paper describes the connection machine, a programmable computer based on cellular automata. The essential idea behind the connection machine is that a regular locally-connected cellular array can be made to behave as if the processing cells are connected into any desired topology. When the topology of the machine is chosen to match the topology of the application program, the result is a fast, powerful computing engine. The connection machine was originally designed to implement knowledge retrieval operations in artificial intelligence programs, but the hardware and the programming techniques are apparently applicable to a much larger class of problems. A machine with 100000 processing cells is currently being constructed. 27 references.
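The idea of a regular, locally connected cellular array can be illustrated with its simplest member, a one-dimensional cellular automaton in which each cell updates from only its two neighbours. This is an illustrative sketch of local cellular computation, not the connection machine's actual hardware or routing:

```python
def step(cells, rule):
    """One synchronous update of an elementary cellular automaton on a
    ring: each cell reads its left neighbour, itself and its right
    neighbour, mirroring the local connectivity of a cellular array."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(width, steps, rule=110):
    """Evolve a single seed cell for a number of steps; returns the
    full history, one row per time step."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history
```

Every cell applies the same rule simultaneously, so the whole array advances in one "machine cycle" regardless of width; topology-dependent behavior comes entirely from which neighbours each cell is wired to.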
Cellular communication through light.
Directory of Open Access Journals (Sweden)
Daniel Fels
Full Text Available Information transfer is a fundamental of life. A few studies have reported that cells use photons (from an endogenous source) as information carriers. This study finds that cells can have an influence on other cells even when separated by a glass barrier, thereby disabling molecule diffusion through the cell-containing medium. As there is still very little known about the potential of photons for intercellular communication, this study is designed to test for non-molecule-based triggering of two fundamental properties of life: cell division and energy uptake. The study was performed with a cellular organism, the ciliate Paramecium caudatum. Mutual exposure of cell populations occurred under conditions of darkness and separation with cuvettes (vials) allowing photon but not molecule transfer. The cell populations were separated either with glass, allowing photon transmission from 340 nm to longer waves, or quartz, transmitting from 150 nm, i.e. from UV light to longer waves. Even through glass, the cells affected cell division and energy uptake in neighboring cell populations. Depending on the cuvette material and the number of cells involved, these effects were positive or negative. Also, while paired populations with lower growth rates grew uncorrelated, growth of the better-growing populations was correlated. As there were significant differences when separating the populations with glass or quartz, it is suggested that the cell populations use two (or more) frequencies for cellular information transfer, which influences at least energy uptake, cell division rate and growth correlation. Altogether the study strongly supports a cellular communication system, which is different from a molecule-receptor-based system, and hints that photon triggering is a fine-tuning principle in cell chemistry.
Engineering Cellular Metabolism
DEFF Research Database (Denmark)
Nielsen, Jens; Keasling, Jay
2016-01-01
Metabolic engineering is the science of rewiring the metabolism of cells to enhance production of native metabolites or to endow cells with the ability to produce new products. The potential applications of such efforts are wide ranging, including the generation of fuels, chemicals, foods, feeds...... of metabolic engineering and will discuss how new technologies can enable metabolic engineering to be scaled up to the industrial level, either by cutting off the lines of control for endogenous metabolism or by infiltrating the system with disruptive, heterologous pathways that overcome cellular regulation....
Cellular mechanics and motility
Hénon, Sylvie; Sykes, Cécile
2015-10-01
The term motility defines the movement of a living organism. One widely known example is the motility of sperm cells, or that of flagellar bacteria. The propulsive element of such organisms is a cilium (or flagellum) that beats. Although cells in our tissues do not have a flagellum in general, they are still able to move, as we will discover in this chapter. In fact, in both cases of movement, with or without a flagellum, cell motility is due to a dynamic re-arrangement of polymers inside the cell. Let us first have a closer look at the propulsion mechanism in the case of a flagellum or a cilium, which is the best known, but also the simplest, and which will help us to define the general hydrodynamic conditions of cell movement. A flagellum is sustained by cellular polymers arranged in semi-flexible bundles, and flagellar beating generates cell displacement. These polymers or filaments are part of the cellular skeleton, or "cytoskeleton", which is, in this case, external to the cellular main body of the organism. In fact, bacteria move in a hydrodynamic regime in which viscosity dominates over inertia. The system is thus in a hydrodynamic regime of low Reynolds number (Box 5.1), which is nearly exclusively the case in all cell movements. Bacteria and their propulsion mode by flagellar beating were our unicellular ancestors 3.5 billion years ago. Since then, we have evolved to form pluricellular organisms. However, to keep the ability of displacement, to heal our wounds for example, our cells lost their flagellum, since it was not optimal in a dense cell environment: cells are too close to each other to leave enough space for the flagella to accomplish propulsion. The cytoskeleton thus developed inside the cell body to ensure cell shape changes and movement, and also mechanical strength within a tissue. The cytoskeleton of our cells, like the polymers or filaments that sustain the flagellum, is also composed of semi-flexible filaments arranged in bundles, and also in
Improvements in GRACE Gravity Fields Using Regularization
Save, H.; Bettadpur, S.; Tapley, B. D.
2008-12-01
The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or
Regular Expression Matching and Operational Semantics
Directory of Open Access Journals (Sweden)
Asiri Rathnayake
2011-08-01
Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
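Thompson's lockstep construction mentioned above can be sketched by advancing the entire set of active NFA states one input symbol at a time, so no backtracking is ever needed. The tiny NFA below (for `a(b|c)*`) is hand-built for illustration; the representation and names are invented for this sketch, not taken from the paper's abstract machines:

```python
def eps_closure(states, eps):
    """Expand a state set through epsilon edges (depth-first)."""
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def lockstep_match(nfa, word):
    """Thompson's lockstep simulation: keep the set of all states the
    NFA could be in, and advance the whole set per input symbol.
    Matching is O(len(word) * #states), with no backtracking."""
    start, accept, delta, eps = nfa
    current = eps_closure({start}, eps)
    for ch in word:
        nxt = {t for s in current for t in delta.get((s, ch), ())}
        current = eps_closure(nxt, eps)
    return accept in current

# hand-built NFA for a(b|c)*:  0 --a--> 1,  1 --b--> 1,  1 --c--> 1
nfa = (0, 1, {(0, 'a'): {1}, (1, 'b'): {1}, (1, 'c'): {1}}, {})
```

Because each step computes a *set* of successors, the non-deterministic choices of the Kleene star are explored in parallel rather than by trial and error, which is also what makes the GPU-parallel variant described above natural.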
Regularities, Natural Patterns and Laws of Nature
Directory of Open Access Journals (Sweden)
Stathis Psillos
2014-02-01
Full Text Available The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology. Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.
A local cellular model for growth on quasicrystals
International Nuclear Information System (INIS)
Chidyagwai, Prince; Reiter, Clifford A.
2005-01-01
The growth of real valued cellular automata using a deterministic algorithm on 2-dimensional quasicrystalline structures is investigated. Quasicrystals are intermediate between the rigid organization of crystals and disorganized random structures. Since the quasicrystalline structures may be highly symmetric or not, we are able to obtain highly organized and relatively random growth patterns. This deterministic growth produces dendrite, sector, stellar, regular polygons, round, and random DLA-like structures
Fractional Regularization Term for Variational Image Registration
Directory of Open Access Journals (Sweden)
Rafael Verdú-Monedero
2009-01-01
Full Text Available Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional-order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, and is applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a real gradual transition from a diffusion registration to a curvature registration, which is best suited to some applications and is not possible in the spatial domain. Results with 3D actual images show the validity of this approach.
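The frequency-domain fractional derivative on which such a regularization term is built can be sketched with the Fourier symbol (iω)^α, which interpolates continuously between integer-order derivatives. This is a toy numpy sketch on a periodic 1-D grid, not the registration code itself:

```python
import numpy as np

def fractional_derivative(f, alpha, dx=1.0):
    """Fractional-order derivative via the Fourier symbol (i*omega)**alpha.

    alpha=1 gives the ordinary derivative and alpha=2 the second
    derivative; non-integer alpha interpolates between them, which is
    what lets a fractional regularizer move gradually between
    diffusion-like and curvature-like penalties.  The real part is
    taken because, for non-integer alpha, the branch cut can leave a
    small imaginary residue.
    """
    omega = 2 * np.pi * np.fft.fftfreq(len(f), d=dx)
    return np.real(np.fft.ifft((1j * omega) ** alpha * np.fft.fft(f)))

# demo on a periodic grid: order 2 recovers the second derivative
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
d2 = fractional_derivative(np.sin(x), 2, dx=x[1] - x[0])   # ≈ -sin(x)
```

Because the operator is just a pointwise multiplication in the frequency domain, raising its order to a non-integer value costs nothing extra, which is the computational appeal of formulating the regularizer there.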
International Nuclear Information System (INIS)
Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.
2004-01-01
We construct a family of time- and angular-dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general Lorentzian symmetry. Several generalizations of this regular solution are derived, which include a charged S-brane and an additional dilatonic field. (author)
Online Manifold Regularization by Dual Ascending Procedure
Directory of Open Access Journals (Sweden)
Boliang Sun
2013-01-01
Full Text Available We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is the key to transferring manifold regularization from the offline to the online setting in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle the settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves the way to the design and analysis of online manifold regularization algorithms.
DEFF Research Database (Denmark)
Borregaard, Michael Krabbe; Hendrichsen, Ditte Katrine; Nachman, Gøsta Støger
2008-01-01
Living organisms are distributed over the entire surface of the planet. The distribution of the individuals of each species is not random; on the contrary, they are strongly dependent on the biology and ecology of the species, and vary over different spatial scales. The structure of whole populations reflects the location and fragmentation pattern of the habitat types preferred by the species, and the complex dynamics of migration, colonization, and population growth taking place over the landscape. Within these, individuals are distributed among each other in regular or clumped patterns, depending on the nature of intraspecific interactions between them: while the individuals of some species repel each other and partition the available area, others form groups of varying size, determined by the fitness of each group member. The spatial distribution pattern of individuals again strongly...
SAR image regularization with fast approximate discrete minimization.
Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc
2009-07-01
Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task, and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modelization provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on the large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the α-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to joint regularization of the amplitude and interferometric phase in urban-area SAR images.
Regular transport dynamics produce chaotic travel times.
Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro
2014-06-01
In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.
Regularity of difference equations on Banach spaces
Agarwal, Ravi P; Lizama, Carlos
2014-01-01
This work introduces readers to the topic of maximal regularity for difference equations. The authors systematically present the method of maximal regularity, outlining basic linear difference equations along with relevant results. They address recent advances in the field, as well as basic semigroup and cosine operator theories in the discrete setting. The authors also identify some open problems that readers may wish to take up for further research. This book is intended for graduate students and researchers in the area of difference equations, particularly those with advanced knowledge of and interest in functional analysis.
PET regularization by envelope guided conjugate gradients
International Nuclear Information System (INIS)
Kaufman, L.; Neumaier, A.
1996-01-01
The authors propose a new way to iteratively solve large-scale ill-posed problems, and in particular the image reconstruction problem in positron emission tomography, by exploiting the relation between Tikhonov regularization and multiobjective optimization to iteratively obtain approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows the regularization parameter to be adjusted adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations
Matrix regularization of embedded 4-manifolds
International Nuclear Information System (INIS)
Trzetrzelewski, Maciej
2012-01-01
We consider products of two 2-manifolds, such as S²×S², embedded in Euclidean space, and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)⊗SU(N), i.e. functions on the manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N²×N² matrix representations of the 4-algebra (and, as a byproduct, of the 3-algebra, which makes the regularization of S³ also possible).
Intractable problems in reversible cellular automata
International Nuclear Information System (INIS)
Vatan, F.
1988-01-01
The billiard ball model, a classical mechanical system in which all parameters are real variables, can perform all digital computations. An eight-state, 11-neighbor reversible cellular automaton (an entirely discrete system in which all parameters are integer variables) can simulate this model. One of the natural problems for this system is to determine the shape of a container so that the initial specific distribution of gas molecules eventually leads to a predetermined distribution. This problem is PSPACE-complete. Related intractable and decidable problems are discussed as well.
Temporal organization of cellular self-replication
Alexandrov, Victor; Pugatch, Rami
Recent experiments demonstrate that single cells grow exponentially in time. A coarse-grained model of cellular self-replication is presented, based on a novel concept: the cell is viewed as a self-replicating queue. This allows a more fundamental look into various temporal organizations and, importantly, the inherent non-Markovianity of noise distributions. As an example, the distribution of doubling times can be inferred and compared to single-cell experiments in bacteria. We observe data collapse upon scaling by the average doubling time for different environments and present an inherent task-allocation trade-off. Support from the Simons Center for Systems Biology, IAS, Princeton.
International Nuclear Information System (INIS)
Anon.
1982-01-01
Testing the applicability of mathematical models with carefully designed experiments is a powerful tool in the investigations of the effects of ionizing radiation on cells. The modeling and cellular studies complement each other, for modeling provides guidance for designing critical experiments which must provide definitive results, while the experiments themselves provide new input to the model. Based on previous experimental results the model for the accumulation of damage in Chlamydomonas reinhardi has been extended to include various multiple two-event combinations. Split dose survival experiments have shown that models tested to date predict most but not all the observed behavior. Stationary-phase mammalian cells, required for tests of other aspects of the model, have been shown to be at different points in the cell cycle depending on how they were forced to stop proliferating. These cultures also demonstrate different capacities for repair of sublethal radiation damage
Multi-omic data integration enables discovery of hidden biological regularities
DEFF Research Database (Denmark)
Ebrahim, Ali; Brunk, Elizabeth; Tan, Justin
2016-01-01
Rapid growth in the size and complexity of biological data sets has led to the 'Big Data to Knowledge' challenge. We develop advanced data integration methods for multi-level analysis of genomic, transcriptomic, ribosomal profiling, proteomic and fluxomic data. First, we show that pairwise integration of primary omics data reveals regularities that tie cellular processes together in Escherichia coli: the number of protein molecules made per mRNA transcript and the number of ribosomes required per translated protein molecule. Second, we show that genome-scale models, based on genomic and bibliomic data, enable quantitative synchronization of disparate data types. Integrating omics data with models enabled the discovery of two novel regularities: condition-invariant in vivo turnover rates of enzymes and the correlation of protein structural motifs and translational pausing. These regularities can...
Endpoint singularities in unintegrated parton distributions
Hautmann, F
2007-01-01
We examine the singular behavior from the endpoint region x → 1 in parton distributions unintegrated in both longitudinal and transverse momenta. We identify and regularize the singularities by using the subtraction method, and compare this with the cut-off regularization method. The counterterms for the distributions with subtractive regularization are given in coordinate space by compact all-order expressions in terms of eikonal-line operators. We carry out an explicit calculation at one loop for the unintegrated quark distribution. We discuss the relation of the unintegrated parton distributions in subtractive regularization with the ordinary parton distributions.
Formation factor of regular porous pattern in poly-α-methylstyrene film
International Nuclear Information System (INIS)
Yang Ruizhuang; Xu Jiajing; Gao Cong; Ma Shuang; Chen Sufen; Luo Xuan; Fang Yu; Li Bo
2015-01-01
Regular poly-α-methylstyrene (PAMS) porous film with micron-sized cells was prepared by casting the solution under high-humidity conditions. In this paper, the effects of the molecular weight of PAMS, the PAMS concentration, humidity, temperature, volatile solvents and the thickness of the solution layer on the formation of a regular porous pattern in PAMS film are discussed. The results show that these factors significantly affect the pore size and the pore distribution. The capillary force and Bénard-Marangoni convection are the main driving forces for water-droplet movement and the regular arrangement of the pores. (authors)
On a correspondence between regular and non-regular operator monotone functions
DEFF Research Database (Denmark)
Gibilisco, P.; Hansen, Frank; Isola, T.
2009-01-01
We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....
Preference mapping of lemon lime carbonated beverages with regular and diet beverage consumers.
Leksrisompong, P P; Lopetcharat, K; Guthrie, B; Drake, M A
2013-02-01
The drivers of liking of lemon-lime carbonated beverages were investigated with regular and diet beverage consumers. Ten beverages were selected from a category survey of commercial beverages using a D-optimal procedure. Beverages were subjected to consumer testing (n = 101 regular beverage consumers, n = 100 diet beverage consumers). Segmentation of consumers was performed on overall liking scores, followed by external preference mapping of selected samples. Diet beverage consumers liked 2 diet beverages more than regular beverage consumers did. There were no differences in overall liking scores between diet and regular beverage consumers for the other products, except for a sparkling beverage sweetened with juice, which was more liked by regular beverage consumers. Three subtle but distinct consumer preference clusters were identified. Two segments had evenly distributed diet and regular beverage consumers, but one segment had a greater percentage of regular beverage consumers. Consumer status (diet or regular) did not have a large impact on carbonated beverage liking. Instead, mouthfeel attributes were major drivers of liking when these beverages were tested in a blind tasting. Preference mapping of lemon-lime carbonated beverages with diet and regular beverage consumers allowed the determination of drivers of liking for both populations. An understanding of how mouthfeel attributes, aromatics, and basic tastes impact liking or disliking of products was achieved. The preference drivers established in this study provide product developers of carbonated lemon-lime beverages with additional information to develop beverages that may be suitable for different groups of consumers. © 2013 Institute of Food Technologists®
Regularity and irreversibility of weekly travel behavior
Kitamura, R.; van der Hoorn, A.I.J.M.
1987-01-01
Dynamic characteristics of travel behavior are analyzed in this paper using weekly travel diaries from two waves of panel surveys conducted six months apart. An analysis of activity engagement indicates the presence of significant regularity in weekly activity participation between the two waves.
Regular and context-free nominal traces
DEFF Research Database (Denmark)
Degano, Pierpaolo; Ferrari, Gian-Luigi; Mezzetti, Gianluca
2017-01-01
Two kinds of automata are presented, for recognising new classes of regular and context-free nominal languages. We compare their expressive power with analogous proposals in the literature, showing that they express novel classes of languages. Although many properties of classical languages hold ...
Faster 2-regular information-set decoding
Bernstein, D.J.; Lange, T.; Peters, C.P.; Schwabe, P.; Chee, Y.M.
2011-01-01
Fix positive integers B and w. Let C be a linear code over F_2 of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and ...
Complexity in union-free regular languages
Czech Academy of Sciences Publication Activity Database
Jirásková, G.; Masopust, Tomáš
2011-01-01
Roč. 22, č. 7 (2011), s. 1639-1653 ISSN 0129-0541 Institutional research plan: CEZ:AV0Z10190503 Keywords : Union-free regular language * one-cycle-free-path automaton * descriptional complexity Subject RIV: BA - General Mathematics Impact factor: 0.379, year: 2011 http://www.worldscinet.com/ijfcs/22/2207/S0129054111008933.html
Regular Gleason Measures and Generalized Effect Algebras
Dvurečenskij, Anatolij; Janda, Jiří
2015-12-01
We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied in the frame of generalized effect algebras.
Regularization of finite temperature string theories
International Nuclear Information System (INIS)
Leblanc, Y.; Knecht, M.; Wallet, J.C.
1990-01-01
The tachyonic divergences occurring in the free energy of various string theories at finite temperature are eliminated through the use of regularization schemes and analytic continuations. For closed strings, we obtain finite expressions which, however, develop an imaginary part above the Hagedorn temperature, whereas open string theories are still plagued with dilatonic divergences. (orig.)
A Sim(2) invariant dimensional regularization
Directory of Open Access Journals (Sweden)
J. Alfaro
2017-09-01
We introduce a Sim(2)-invariant dimensional regularization of loop integrals. We then compute the one-loop quantum corrections to the photon self-energy, electron self-energy and vertex in the electrodynamics sector of the Very Special Relativity Standard Model (VSRSM).
Continuum regularized Yang-Mills theory
International Nuclear Information System (INIS)
Sadun, L.A.
1987-01-01
Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions.
Gravitational lensing by a regular black hole
International Nuclear Information System (INIS)
Eiroa, Ernesto F; Sendra, Carlos M
2011-01-01
In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.
Gravitational lensing by a regular black hole
Energy Technology Data Exchange (ETDEWEB)
Eiroa, Ernesto F; Sendra, Carlos M, E-mail: eiroa@iafe.uba.ar, E-mail: cmsendra@iafe.uba.ar [Instituto de Astronomia y Fisica del Espacio, CC 67, Suc. 28, 1428, Buenos Aires (Argentina)
2011-04-21
Analytic stochastic regularization and gauge invariance
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1986-05-01
A proof that analytic stochastic regularization breaks gauge invariance is presented. This is done by an explicit one-loop calculation of the vacuum polarization tensor in scalar electrodynamics, which turns out not to be transverse. The counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization are also analysed. (Author)
Stabilization, pole placement, and regular implementability
Belur, MN; Trentelman, HL
In this paper, we study control by interconnection of linear differential systems. We give necessary and sufficient conditions for regular implementability of a given linear differential system. We formulate the problems of stabilization and pole placement as problems of finding a suitable ...
12 CFR 725.3 - Regular membership.
2010-01-01
... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit....5(b) of this part, and forwarding with its completed application funds equal to one-half of this... 1, 1979, is not required to forward these funds to the Facility until October 1, 1979. (3...
Supervised scale-regularized linear convolutionary filters
DEFF Research Database (Denmark)
Loog, Marco; Lauze, Francois Bernard
2017-01-01
... can also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic...
On regular riesz operators | Raubenheimer | Quaestiones ...
African Journals Online (AJOL)
The r-asymptotically quasi finite rank operators on Banach lattices are examples of regular Riesz operators. We characterise Riesz elements in a subalgebra of a Banach algebra in terms of Riesz elements in the Banach algebra. This enables us to characterise r-asymptotically quasi finite rank operators in terms of adjoint ...
Regularized Discriminant Analysis: A Large Dimensional Study
Yang, Xiaoke
2018-04-28
In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis is assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that only depends on the data statistical parameters and dimensions. This result not only establishes mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on synthetic data show a good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the true statistics are unknown. We benchmark the performance of our proposed consistent estimator against the classical estimator on synthetic data. The observations demonstrate that the general estimator outperforms others in terms of mean squared error (MSE).
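The shrinkage at the heart of the RLDA special case can be sketched in a few lines. The toy below is illustrative only (the data model, parameter names and the fixed shrinkage value are assumptions, not the thesis' setup): the pooled sample covariance is blended with the identity before inverting, which keeps the discriminant stable when the dimension is comparable to the sample size.

```python
import numpy as np

rng = np.random.default_rng(4)

def rlda_fit(X0, X1, gamma):
    """Regularized LDA sketch: shrink the pooled covariance toward the
    identity, Sigma_reg = (1-gamma)*Sigma_hat + gamma*I, then form the
    usual Gaussian linear discriminant w, b."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Xc = np.vstack([X0 - mu0, X1 - mu1])
    Sigma = Xc.T @ Xc / (Xc.shape[0] - 2)
    Sigma_reg = (1 - gamma) * Sigma + gamma * np.eye(Sigma.shape[0])
    w = np.linalg.solve(Sigma_reg, mu1 - mu0)
    b = -0.5 * w @ (mu0 + mu1)
    return w, b

def rlda_predict(X, w, b):
    return (X @ w + b > 0).astype(int)

# Dimension comparable to training size, where shrinkage matters.
p, n = 40, 60
mu = np.zeros(p); mu[:5] = 1.0
X0 = rng.standard_normal((n, p))
X1 = rng.standard_normal((n, p)) + mu
w, b = rlda_fit(X0, X1, gamma=0.5)
Xt0 = rng.standard_normal((200, p))
Xt1 = rng.standard_normal((200, p)) + mu
acc = (np.mean(rlda_predict(Xt0, w, b) == 0)
       + np.mean(rlda_predict(Xt1, w, b) == 1)) / 2
print(acc)
```

In the thesis the shrinkage parameter is chosen by minimizing the asymptotic error expression rather than fixed by hand as here.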
Bit-coded regular expression parsing
DEFF Research Database (Denmark)
Nielsen, Lasse; Henglein, Fritz
2011-01-01
the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli's greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings...
Tetravalent one-regular graphs of order 4p²
DEFF Research Database (Denmark)
Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan
2014-01-01
A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p², where p is a prime, are classified.
Increasing cellular coverage within integrated terrestrial/satellite mobile networks
Castro, Jonathan P.
1995-01-01
When applying the hierarchical cellular concept, the satellite acts as a giant umbrella cell covering a region containing some terrestrial cells. If a mobile terminal traversing the region arrives at the borderline or limits of regular cellular ground service, a network transition occurs and the satellite system continues the mobile coverage. To adequately assess the boundaries of service of a mobile satellite system and a cellular network within an integrated environment, this paper provides an optimized scheme to predict when a network transition may be necessary. Under the assumption of a classified propagation phenomenon and Lognormal shadowing, the study applies an analytical approach to estimate the location of a mobile terminal based on reception of the signal strength emitted by a base station.
Statistical mechanics of cellular automata
International Nuclear Information System (INIS)
Wolfram, S.
1983-01-01
Cellular automata are used as simple mathematical models to investigate self-organization in statistical mechanics. A detailed analysis is given of "elementary" cellular automata consisting of a sequence of sites with values 0 or 1 on a line, with each site evolving deterministically in discrete time steps according to definite rules involving the values of its nearest neighbors. With simple initial configurations, the cellular automata either tend to homogeneous states, or generate self-similar patterns with fractal dimensions of approximately 1.59 or 1.69. With "random" initial configurations, the irreversible character of the cellular automaton evolution leads to several self-organization phenomena. Statistical properties of the structures generated are found to lie in two universality classes, independent of the details of the initial state or the cellular automaton rules. More complicated cellular automata are briefly considered, and connections with dynamical systems theory and the formal theory of computation are discussed
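The "elementary" automata described above are easy to reproduce. The sketch below (plain Python; periodic boundary conditions and Wolfram's rule numbering are assumed) runs rule 90, whose single-seed evolution generates the self-similar Sierpinski pattern with fractal dimension log 3 / log 2 ≈ 1.585, matching the ≈1.59 quoted in the abstract.

```python
def step(cells, rule):
    """One synchronous update of an elementary cellular automaton.
    `rule` is the Wolfram rule number (0-255); boundary is periodic."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right   # neighborhood as a 3-bit index
        out.append((rule >> idx) & 1)               # look up the rule's output bit
    return out

# Rule 90 (each cell becomes the XOR of its neighbors) from a single seed.
cells = [0] * 31
cells[15] = 1
history = [cells]
for _ in range(15):
    cells = step(cells, 90)
    history.append(cells)
for row in history[:5]:
    print("".join(".#"[c] for c in row))
```

Swapping the rule number for 30 or 110 reproduces the chaotic and complex behavior classes discussed in the abstract.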
Khan, Muhammad Sadiq Ali; Yousuf, Sidrah
2016-03-01
Cardiac electrical activity is distributed through the three dimensions of cardiac tissue (myocardium) and evolves over time. Indicators of heart disease can occur randomly at any time of day, so heart rate, conduction and each electrical event of the cardiac cycle should be monitored non-invasively to assess regular ("action potential") and irregular ("arrhythmia") rhythms. Many heart diseases can be examined through automata models built on cellular automata concepts. This paper deals with the different states of cardiac rhythm using cellular automata, with a comparison to neural networks, and provides fast and highly effective simulation of the contraction of cardiac muscle in the atria resulting from the genesis of an electrical spark or wave. The formulated model, named "States of Automaton Proposed Model for CEA (Cardiac Electrical Activity)", uses cellular automata methodology to capture the three conduction states of cardiac tissue: (i) resting (relaxed and excitable), (ii) ARP (excited but absolutely refractory, i.e., unable to excite neighboring cells) and (iii) RRP (excited but relatively refractory, i.e., able to excite neighboring cells). The results indicate efficient modeling of the action potential during the pumping of blood in the cardiac cycle, with little computational burden.
Quantification of fetal heart rate regularity using symbolic dynamics
van Leeuwen, P.; Cysarz, D.; Lange, S.; Geue, D.; Groenemeyer, D.
2007-03-01
Fetal heart rate complexity was examined on the basis of RR interval time series obtained in the second and third trimester of pregnancy. In each fetal RR interval time series, short term beat-to-beat heart rate changes were coded in 8-bit binary sequences. Redundancies of the 2⁸ different binary patterns were reduced by two different procedures. The complexity of these sequences was quantified using the approximate entropy (ApEn), resulting in discrete ApEn values which were used for classifying the sequences into 17 pattern sets. Also, the sequences were grouped into 20 pattern classes with respect to identity after rotation or inversion of the binary value. There was a specific, nonuniform distribution of the sequences in the pattern sets and this differed from the distribution found in surrogate data. In the course of gestation, the number of sequences increased in seven pattern sets, decreased in four and remained unchanged in six. Sequences that occurred less often over time, both regular and irregular, were characterized by patterns reflecting frequent beat-to-beat reversals in heart rate. They were also predominant in the surrogate data, suggesting that these patterns are associated with stochastic heart beat trains. Sequences that occurred more frequently over time were relatively rare in the surrogate data. Some of these sequences had a high degree of regularity and corresponded to prolonged heart rate accelerations or decelerations which may be associated with directed fetal activity or movement or baroreflex activity. Application of the pattern classes revealed that those sequences with a high degree of irregularity correspond to heart rate patterns resulting from complex physiological activity such as fetal breathing movements. The results suggest that the development of the autonomic nervous system and the emergence of fetal behavioral states lead to increases in not only irregular but also regular heart rate patterns. Using symbolic dynamics to...
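The approximate entropy used above is straightforward to compute for symbolic strings. Below is a minimal sketch, not the authors' code; it assumes an exact-match tolerance r = 0, the natural choice for binary symbol sequences. A perfectly regular sequence scores near zero, an irregular one scores higher.

```python
import math

def apen(seq, m=2, r=0):
    """Approximate entropy ApEn(m, r) of a symbolic sequence.
    With r=0 a template matches another only on exact equality."""
    def phi(m):
        n = len(seq) - m + 1
        templates = [tuple(seq[i:i + m]) for i in range(n)]
        total = 0.0
        for t in templates:
            # Count templates within tolerance r (self-match included).
            matches = sum(1 for u in templates
                          if max(abs(a - b) for a, b in zip(t, u)) <= r)
            total += math.log(matches / n)
        return total / n
    return phi(m) - phi(m + 1)

print(apen([0] * 16))           # constant: no new information per symbol
print(apen([0, 1] * 8))         # strictly alternating: still highly regular
print(apen([0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1]))  # irregular
```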
Probabilistic cellular automata: Some statistical mechanical considerations
International Nuclear Information System (INIS)
Lebowitz, J.L.; Maes, C.; Speer, E.R.
1990-01-01
Spin systems evolving in continuous or discrete time under the action of stochastic dynamics are used to model phenomena as diverse as the structure of alloys and the functioning of neural networks. While in some cases the dynamics are secondary, designed to produce a specific stationary measure whose properties one is interested in studying, there are other cases in which the only available information is the dynamical rule. Prime examples of the former are computer simulations, via Glauber dynamics, of equilibrium Gibbs measures with a specified interaction potential. Examples of the latter include various types of majority rule dynamics used as models for pattern recognition and for error-tolerant computations. The present note discusses ways in which techniques found useful in equilibrium statistical mechanics can be applied to a particular class of models of the latter types. These are cellular automata with noise: systems in which the spins are updated stochastically at integer times, simultaneously at all sites of some regular lattice. These models were first investigated in detail in the Soviet literature of the late sixties and early seventies. They are now generally referred to as Stochastic or Probabilistic Cellular Automata (PCA), and may be considered to include deterministic automata (CA) as special limits. 16 refs., 3 figs
Statistical regularities in the rank-citation profile of scientists.
Petersen, Alexander M; Stanley, H Eugene; Succi, Sauro
2011-01-01
Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile c(i)(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each c(i)(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different c(i)(r) profiles, our results demonstrate the utility of the β(i) scaling parameter in conjunction with h(i) for quantifying individual publication impact. We show that the total number of citations C(i) tallied from a scientist's N(i) papers scales as [Formula: see text]. Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress.
47 CFR 22.909 - Cellular markets.
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false Cellular markets. 22.909 Section 22.909... Cellular Radiotelephone Service § 22.909 Cellular markets. Cellular markets are standard geographic areas used by the FCC for administrative convenience in the licensing of cellular systems. Cellular markets...
Save, H.; Bettadpur, S. V.
2013-12-01
It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
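As an illustration of what Tikhonov regularization buys on an ill-posed inversion, here is a generic toy sketch (the smoothing operator, noise level and regularization strength are invented for illustration; this is not the CSR/GRACE processing chain): the unregularized inverse amplifies noise along the small singular values, while the damped normal equations recover the signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed problem: a smooth Gaussian "blurring" operator.
n = 50
t = np.linspace(0, 1, n)
A = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2)
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 0.01 * rng.standard_normal(n)

def tikhonov(A, b, alpha):
    """Solve min ||Ax - b||^2 + alpha^2 ||x||^2 via the damped normal equations."""
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(A.shape[1]), A.T @ b)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # unregularized: noise blows up
x_reg = tikhonov(A, b, alpha=0.05)
err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
print(err_naive, err_reg)
```

Choosing alpha is the hard part in practice; the paper's two-step procedure addresses exactly the residual artifacts (stripes) that a single global choice leaves behind.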
Directory of Open Access Journals (Sweden)
Patrick W. Keeley
2014-10-01
Retinal neurons are often arranged as non-random distributions called mosaics, as their somata minimize proximity to neighboring cells of the same type. The horizontal cells serve as an example of such a mosaic, but little is known about the developmental mechanisms that underlie their patterning. To identify genes involved in this process, we have used three different spatial statistics to assess the patterning of the horizontal cell mosaic across a panel of genetically distinct recombinant inbred strains. To avoid the confounding effect of cell density, which varies two-fold across these different strains, we computed the real/random regularity ratio, expressing the regularity of a mosaic relative to a randomly distributed simulation of similarly sized cells. To test whether this latter statistic better reflects the variation in biological processes that contribute to horizontal cell spacing, we subsequently compared the genetic linkage for each of these two traits, the regularity index and the real/random regularity ratio, each computed from the distribution of nearest neighbor (NN) distances and from the Voronoi domain (VD) areas. Finally, we compared each of these analyses with another index of patterning, the packing factor. Variation in the regularity indexes, as well as their real/random regularity ratios, and the packing factor, mapped quantitative trait loci (QTL) to the distal ends of Chromosomes 1 and 14. For the NN and VD analyses, we found that the degree of linkage was greater when using the real/random regularity ratio rather than the respective regularity index. Using informatic resources, we narrow the list of prospective genes positioned at these two intervals to a small collection of six genes that warrant further investigation to determine their potential role in shaping the patterning of the horizontal cell mosaic.
Cellular telephone use among primary school children in Germany
International Nuclear Information System (INIS)
Boehler, Eva; Schuez, Joachim
2004-01-01
Background: There is some concern about potential health risks of cellular telephone use to children. We assessed data on how many children own a cellular telephone and on how often they use it in a population-based sample. Methods: We carried out a cross-sectional study among children in their fourth elementary school year, with a median-age of 10 years. The study was carried out in Mainz (Germany), a city with about 200,000 inhabitants. The study base comprised all 37 primary schools in Mainz and near surroundings. Altogether, 1933 children from 34 primary schools took part in the survey (participation rate of 87.8%). Results: Roughly a third of all children (n = 671, 34.7%) reported to own a cellular telephone. Overall, 119 (6.2%) children used a cellular telephone for making calls at least once a day, 123 (6.4%) used it several times a week and 876 (45.3%) children used it only once in a while. The remaining 805 (41.6%) children had never used a cellular telephone. The probability of owning a cellular telephone among children was associated with older age, being male, having no siblings, giving full particulars to height and weight, more time spent watching TV and playing computer games, being picked up by their parents from school by car (instead of walking or cycling) and going to bed late. The proportion of cellular telephone owners was somewhat higher in classes with more children from socially disadvantaged families. Conclusions: Our study shows that both ownership of a cellular telephone as well as the regular use of it are already quite frequent among children in the fourth grade of primary school. With regard to potential long-term effects, we recommend follow-up studies with children
Stream Processing Using Grammars and Regular Expressions
DEFF Research Database (Denmark)
Rasmussen, Ulrik Terp
disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs...... as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as is required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present...... Kleenex, a language for expressing high-performance streaming string processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle...
Describing chaotic attractors: Regular and perpetual points
Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz
2018-03-01
We study the concepts of regular and perpetual points for describing the behavior of chaotic attractors in dynamical systems. The idea of these points, which have been recently introduced to theoretical investigations, is thoroughly discussed and extended into new types of models. We analyze the correlation between regular and perpetual points, as well as their relation with phase space, showing the potential usefulness of both types of points in the qualitative description of co-existing states. The ability of perpetual points in finding attractors is indicated, along with its potential cause. The location of chaotic trajectories and sets of considered points is investigated and the study on the stability of systems is shown. The statistical analysis of the observing desired states is performed. We focus on various types of dynamical systems, i.e., chaotic flows with self-excited and hidden attractors, forced mechanical models, and semiconductor superlattices, exhibiting the universality of appearance of the observed patterns and relations.
Chaos regularization of quantum tunneling rates
International Nuclear Information System (INIS)
Pecora, Louis M.; Wu Dongho; Lee, Hoshik; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward
2011-01-01
Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics the tunneling rates fluctuate greatly with eigenenergies of the states sometimes by over two orders of magnitude. Contrarily, shapes that lead to completely chaotic trajectories lead to tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency component of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of basic RKHSs. For sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we tradeoff the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
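The reduction to a single linear system can be sketched directly. In the sum space f = f₁ + f₂ with penalty λ₁‖f₁‖² + λ₂‖f₂‖², the representer theorem gives fⱼ = Σᵢ cⱼᵢ kⱼ(·, xᵢ), and stationarity forces every cⱼ to be proportional to the same residual r, which satisfies (I + Σⱼ Kⱼ/λⱼ) r = y. All names, kernels and parameters below are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_kernel(x, z, scale):
    return np.exp(-((x[:, None] - z[None, :]) / scale) ** 2)

# Target with a low- and a high-frequency component.
x = np.linspace(0, 1, 80)
y = np.sin(2 * np.pi * x) + 0.3 * np.sin(20 * np.pi * x)
y_noisy = y + 0.05 * rng.standard_normal(x.size)

# Sum-space estimate f = f1 + f2: one wide kernel for the low-frequency
# part, one narrow kernel for the high-frequency part.
K1 = gauss_kernel(x, x, scale=0.30)
K2 = gauss_kernel(x, x, scale=0.03)
lam1, lam2 = 0.1, 0.1

# Shared residual r solves (I + K1/lam1 + K2/lam2) r = y; then c_j = r/lam_j
# and the fitted values are f(x_i) = (K1/lam1 + K2/lam2) r.
n = x.size
r = np.linalg.solve(np.eye(n) + K1 / lam1 + K2 / lam2, y_noisy)
f_hat = (K1 / lam1 + K2 / lam2) @ r
rmse = np.sqrt(np.mean((f_hat - y) ** 2))
print(round(rmse, 4))
```

Note the system is exactly kernel ridge regression with the effective kernel Σⱼ Kⱼ/λⱼ, which is why a single linear solve suffices.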
Contour Propagation With Riemannian Elasticity Regularization
DEFF Research Database (Denmark)
Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.
2011-01-01
Purpose/Objective(s): Adaptive techniques allow for correction of spatial changes during the time course of the fractionated radiotherapy. Spatial changes include tumor shrinkage and weight loss, causing tissue deformation and residual positional errors even after translational and rotational image...... the planning CT onto the rescans and correcting to reflect actual anatomical changes. For deformable registration, a free-form, multi-level, B-spline deformation model with Riemannian elasticity, penalizing non-rigid local deformations, and volumetric changes, was used. Regularization parameters were defined...... on the original delineation and tissue deformation in the time course between scans form a better starting point than rigid propagation. There was no significant difference between locally and globally defined regularization. The method used in the present study suggests that deformed contours need to be reviewed...
Thin accretion disk around regular black hole
Directory of Open Access Journals (Sweden)
QIU Tianqi
2014-08-01
Full Text Available The Penrose′s cosmic censorship conjecture says that naked singularities do not exist in nature.So,it seems reasonable to further conjecture that not even a singularity exists in nature.In this paper,a regular black hole without singularity is studied in detail,especially on its thin accretion disk,energy flux,radiation temperature and accretion efficiency.It is found that the interaction of regular black hole is stronger than that of the Schwarzschild black hole. Furthermore,the thin accretion will be more efficiency to lost energy while the mass of black hole decreased. These particular properties may be used to distinguish between black holes.
Convex nonnegative matrix factorization with manifold regularization.
Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong
2015-03-01
Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
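The graph-regularized term can be illustrated with the standard (non-convex-variant) graph-regularized NMF multiplicative updates, used here as a hedged stand-in for the GCNMF of the abstract; the cluster data, graph construction and parameters are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)

def gnmf(X, W_adj, k, lam=0.1, iters=200, eps=1e-9):
    """Graph-regularized NMF sketch (multiplicative updates): X ~ U @ V.T
    with penalty lam * tr(V.T L V), L = D - W_adj, encouraging nearby
    samples (columns of X) to share similar encodings V."""
    m, n = X.shape
    D = np.diag(W_adj.sum(axis=1))
    U = rng.random((m, k))
    V = rng.random((n, k))
    for _ in range(iters):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * W_adj @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
    return U, V

# Tiny demo: two noisy clusters of nonnegative samples, block adjacency graph.
X = np.abs(np.hstack([rng.normal(5, 0.5, (4, 10)), rng.normal(1, 0.5, (4, 10))]))
A = np.zeros((20, 20))
A[:10, :10] = 1; A[10:, 10:] = 1
np.fill_diagonal(A, 0)
U, V = gnmf(X, A, k=2)
rel_err = np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)
print(round(rel_err, 3))
```

The multiplicative form preserves nonnegativity of U and V automatically, which is the property the convex variant relaxes on the data side.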
A short proof of increased parabolic regularity
Directory of Open Access Journals (Sweden)
Stephen Pankavich
2015-08-01
We present a short proof of the increased regularity obtained by solutions to uniformly parabolic partial differential equations. Though this setting is fairly introductory, our new method of proof, which uses a priori estimates and an inductive method, can be extended to prove analogous results for problems with time-dependent coefficients, advection-diffusion or reaction-diffusion equations, and nonlinear PDEs even when other tools, such as semigroup methods or the use of explicit fundamental solutions, are unavailable.
Regular black hole in three dimensions
Myung, Yun Soo; Yoon, Myungseok
2008-01-01
We find a new black hole in three-dimensional anti-de Sitter space by introducing an anisotropic perfect fluid inspired by the noncommutative black hole. This is a regular black hole with two horizons. We compare the thermodynamics of this black hole with that of the non-rotating BTZ black hole. The first law of thermodynamics is not compatible with the Bekenstein-Hawking entropy.
Sparse regularization for force identification using dictionaries
Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng
2016-04-01
The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
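The l1-norm problem at the core of this approach can be sketched compactly. SpaRSA itself is more elaborate; the snippet below uses plain iterative soft-thresholding (ISTA), which solves the same l1-regularized least-squares problem, with an illustrative transfer matrix G and penalty weight lam standing in for the paper's dictionary and data.

```python
import numpy as np

def ista(G, y, lam=0.1, iters=2000):
    """Iterative soft-thresholding for min_c ||G c - y||^2 / 2 + lam ||c||_1.
    G maps basis-function coefficients c to the measured response y.
    A minimal stand-in for SpaRSA: both target the same sparse problem."""
    L = np.linalg.norm(G, 2) ** 2              # Lipschitz constant of the gradient
    c = np.zeros(G.shape[1])
    for _ in range(iters):
        g = c - (G.T @ (G @ c - y)) / L        # gradient step on the data fit
        c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return c
```

The soft-threshold step is what drives most coefficients exactly to zero, so the number of active basis functions is determined by the iteration itself rather than chosen in advance.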
Analytic stochastic regularization and gauge theories
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1987-04-01
We prove that analytic stochastic regularization breaks gauge invariance. This is done by an explicit one-loop calculation of the two-, three- and four-point vertex functions of the gluon field in scalar chromodynamics, which turn out not to be gauge invariant. We analyse the counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization. (author) [pt
Preconditioners for regularized saddle point matrices
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe
2011-01-01
Roč. 19, č. 2 (2011), s. 91-112 ISSN 1570-2820 Institutional research plan: CEZ:AV0Z30860518 Keywords : saddle point matrices * preconditioning * regularization * eigenvalue clustering Subject RIV: BA - General Mathematics Impact factor: 0.533, year: 2011 http://www.degruyter.com/view/j/jnma.2011.19.issue-2/jnum.2011.005/jnum.2011.005.xml
Analytic stochastic regularization: gauge and supersymmetry theories
International Nuclear Information System (INIS)
Abdalla, M.C.B.
1988-01-01
Analytic stochastic regularization for gauge and supersymmetric theories is considered. Gauge invariance in spinor and scalar QCD is verified to break down by an explicit one-loop computation of the two-, three- and four-point vertex functions of the gluon field. As a result, non-gauge-invariant counterterms must be added. However, in the supersymmetric multiplets there is a cancellation rendering the counterterms gauge invariant. The calculation is considered at one-loop order. (author) [pt
Regularized forecasting of chaotic dynamical systems
International Nuclear Information System (INIS)
Bollt, Erik M.
2017-01-01
While local models of dynamical systems have been highly successful in using extensive data sets, even from chaotic systems, to produce useful forecasts, a typical problem arises. Specifically, with the k-nearest neighbors (kNN) method, local observations occur due to recurrences in a chaotic system, and this allows local models to be built by regression to low-dimensional polynomial approximations of the underlying system, estimating a Taylor series. This has been a popular approach, particularly in the context of scalar data observations represented by time-delay embedding methods. However, such local models can allow for spatial discontinuities of forecasts when considered globally, meaning jumps in predictions, because the collected near neighbors vary from point to point. The source of these discontinuities is that the set of near neighbors varies discontinuously with respect to the position of the sample point, and therefore so does the model built from the near neighbors. It is possible to utilize local information inferred from near neighbors as usual while at the same time imposing a degree of regularity on a global scale. We present here a new global perspective extending the general local modeling concept. We then show how this perspective allows us to impose presumed prior regularity on the model by invoking Tikhonov regularity theory, since this classic perspective on optimization in ill-posed problems naturally balances fitting an objective with some prior assumed form of the result, such as continuity or derivative regularity. This all reduces to matrix manipulations, which we demonstrate on a simple data set, with the implication that the approach may find much broader application.
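The local-modeling starting point can be sketched in a few lines: embed a scalar series by time delays, find the k nearest delay vectors, and fit a Tikhonov-regularized (ridge) local linear model. This is only the baseline the paper builds on, not its global regularization scheme; the delay dimension, k, and ridge weight alpha are illustrative choices.

```python
import numpy as np

def local_ridge_forecast(series, query, k=10, alpha=1e-3, dim=3):
    """One-step forecast for a scalar series: k nearest delay vectors
    feed a ridge-regularized local affine model. The Tikhonov term
    alpha stabilizes the fit when neighbors are nearly collinear."""
    # delay-embedded states and their one-step successors
    X = np.array([series[i:i + dim] for i in range(len(series) - dim)])
    y = series[dim:]
    d = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(d)[:k]                      # k nearest neighbors of the query
    Xk, yk = X[idx], y[idx]
    Xk1 = np.hstack([Xk, np.ones((k, 1))])       # affine local model
    # ridge solution: (X^T X + alpha I)^{-1} X^T y
    w = np.linalg.solve(Xk1.T @ Xk1 + alpha * np.eye(dim + 1), Xk1.T @ yk)
    return np.hstack([query, 1.0]) @ w
```

Because the neighbor set changes abruptly as the query moves, forecasts from this baseline can jump between nearby queries, which is exactly the discontinuity the paper's global Tikhonov perspective is designed to remove.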
Minimal length uncertainty relation and ultraviolet regularization
Kempf, Achim; Mangano, Gianpiero
1997-06-01
Studies in string theory and quantum gravity suggest the existence of a finite lower limit Δx0 to the possible resolution of distances, at the latest on the scale of the Planck length of 10-35 m. Within the framework of the Euclidean path integral we explicitly show ultraviolet regularization in field theory through this short distance structure. Both rotation and translation invariance can be preserved. An example is studied in detail.
Regularity and chaos in cavity QED
International Nuclear Information System (INIS)
Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G
2017-01-01
The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified by calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (PR) of coherent states on the eigenenergy basis plays a role equivalent to the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones. (paper)
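The participation ratio used as the chaos diagnostic is simple to compute: expand a normalized state in an orthonormal eigenbasis and invert the sum of fourth powers of the coefficient magnitudes. The sketch below assumes the eigenbasis is supplied as the columns of a matrix; it is a generic implementation of the quantity, not code from the paper.

```python
import numpy as np

def participation_ratio(state, basis):
    """PR = 1 / sum_k |c_k|^4 for a state expanded in an orthonormal
    eigenbasis (columns of `basis`). PR ~ 1 for a state concentrated on
    one eigenstate, ~ dimension for a uniformly spread state, mirroring
    the regular/chaotic distinction described above."""
    c = basis.conj().T @ state              # expansion coefficients c_k
    p = np.abs(c) ** 2
    p /= p.sum()                            # normalize the distribution
    return 1.0 / np.sum(p ** 2)
```

A coherent state centered in a chaotic region spreads over many eigenstates (large PR), while one in a regular region stays concentrated (small PR), which is why PR per atom separates the two regimes in the thermodynamic limit.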
Solution path for manifold regularized semisupervised classification.
Wang, Gang; Wang, Fei; Chen, Tao; Yeung, Dit-Yan; Lochovsky, Frederick H
2012-04-01
Traditional learning algorithms use only labeled data for training. However, labeled examples are often difficult or time consuming to obtain since they require substantial human labeling efforts. On the other hand, unlabeled data are often relatively easy to collect. Semisupervised learning addresses this problem by using large quantities of unlabeled data with labeled data to build better learning algorithms. In this paper, we use the manifold regularization approach to formulate the semisupervised learning problem where a regularization framework which balances a tradeoff between loss and penalty is established. We investigate different implementations of the loss function and identify the methods which have the least computational expense. The regularization hyperparameter, which determines the balance between loss and penalty, is crucial to model selection. Accordingly, we derive an algorithm that can fit the entire path of solutions for every value of the hyperparameter. Its computational complexity after preprocessing is quadratic only in the number of labeled examples rather than the total number of labeled and unlabeled examples.
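The loss-plus-penalty tradeoff at the heart of manifold regularization can be sketched in its simplest transductive form: a squared loss on the labeled points balanced against a graph-Laplacian smoothness penalty over all points. This is a deliberately simplified stand-in for the paper's framework (one fixed hyperparameter, no solution path); the Gaussian-kernel graph and the gamma values are illustrative assumptions.

```python
import numpy as np

def laprls_transductive(X, y_labeled, labeled_idx, gamma_a=1e-2, gamma_i=1.0, k_sigma=1.0):
    """Transductive Laplacian-regularized least squares:
    min_f sum_labeled (f_i - y_i)^2 + gamma_a ||f||^2 + gamma_i f^T L f,
    solved in closed form over the function values f at all points."""
    n = X.shape[0]
    # Gaussian-kernel adjacency and graph Laplacian over all points
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * k_sigma ** 2))
    L = np.diag(W.sum(1)) - W
    J = np.zeros((n, n))                      # selector for labeled points
    J[labeled_idx, labeled_idx] = 1.0
    y = np.zeros(n)
    y[labeled_idx] = y_labeled
    # stationarity condition: (J + gamma_a I + gamma_i L) f = y
    return np.linalg.solve(J + gamma_a * np.eye(n) + gamma_i * L, y)
```

The hyperparameter gamma_i plays the role of the regularization weight whose entire solution path the paper shows how to trace efficiently.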
Regularizations: different recipes for identical situations
International Nuclear Information System (INIS)
Gambin, E.; Lobo, C.O.; Battistel, O.A.
2004-03-01
We present a discussion where the choice of the regularization procedure and the routing of the internal-line momenta are put at the same level of arbitrariness in the analysis of Ward identities involving simple and well-known problems in QFT. They are the complex self-interacting scalar field and two simple models where the SVV and AVV processes are pertinent. We show that, in all these problems, the conditions for the preservation of symmetry relations are put in terms of the same combination of divergent Feynman integrals, which are evaluated in the context of a very general calculational strategy concerning the manipulations and calculations involving divergences. Within the adopted strategy, all the arbitrariness intrinsic to the problem is still maintained in the final results and, consequently, a perfect map can be obtained with the corresponding results of the traditional regularization techniques. We show that, when we require a universal interpretation for the arbitrariness involved, in order to get consistency with all stated physical constraints, a strong condition is imposed on regularizations which automatically eliminates the ambiguities associated with the routing of the internal-line momenta of loops. The conclusion is clean and sound: the association between ambiguities and unavoidable symmetry violations in Ward identities cannot be maintained if a unique recipe is required for identical situations in the evaluation of divergent physical amplitudes. (author)
MSAT and cellular hybrid networking
Baranowsky, Patrick W., II
Westinghouse Electric Corporation is developing both the Communications Ground Segment and the Series 1000 Mobile Phone for American Mobile Satellite Corporation's (AMSC's) Mobile Satellite (MSAT) system. The success of the voice services portion of this system depends, to some extent, upon the interoperability of the cellular network and the satellite circuit-switched communication channels. This paper will describe the set of user-selectable cellular interoperable modes (cellular first/satellite second, etc.) provided by the Mobile Phone and describe how they are implemented with the ground segment. Topics including roaming registration and cellular-to-satellite 'seamless' call handoff will be discussed, along with the relevant Interim Standard IS-41 Revision B Cellular Radiotelecommunications Intersystem Operations and IOS-553 Mobile Station - Land Station Compatibility Specification.
Cellular automata analysis and applications
Hadeler, Karl-Peter
2017-01-01
This book focuses on a coherent representation of the main approaches to analyze the dynamics of cellular automata. Cellular automata are an inevitable tool in mathematical modeling. In contrast to classical modeling approaches such as partial differential equations, cellular automata are straightforward to simulate but hard to analyze. In this book we present a review of approaches and theories that allow the reader to understand the behavior of cellular automata beyond simulations. The first part consists of an introduction of cellular automata on Cayley graphs, and their characterization via the fundamental Curtis-Hedlund-Lyndon theorems in the context of different topological concepts (Cantor, Besicovitch and Weyl topology). The second part focuses on classification results: What classification follows from topological concepts (Hurley classification), Lyapunov stability (Gilman classification), and the theory of formal languages and grammars (Kůrka classification). These classifications suggest to cluster cel...
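The local-update dynamics such analyses target can be shown in its simplest instance, a one-dimensional elementary cellular automaton on a cyclic lattice (a Cayley graph of a finite cyclic group). This is a generic illustration, not code from the book; the rule-number encoding follows the standard Wolfram convention.

```python
import numpy as np

def eca_step(state, rule):
    """One synchronous update of a 1-D elementary cellular automaton with
    periodic boundary; `rule` is the Wolfram rule number (0-255). Each
    cell's next value depends only on its left neighbor, itself, and its
    right neighbor, looked up in the rule's truth table."""
    table = [(rule >> i) & 1 for i in range(8)]     # output for neighborhood i
    l = np.roll(state, 1)                           # left neighbors (periodic)
    r = np.roll(state, -1)                          # right neighbors (periodic)
    idx = 4 * l + 2 * state + r                     # neighborhood as a 3-bit code
    return np.array([table[i] for i in idx], dtype=int)
```

Iterating `eca_step` is exactly the "straightforward to simulate" part; the book's topological and language-theoretic machinery is what lets one say something about such maps without simulating them.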
MIMO Communication for Cellular Networks
Huang, Howard; Venkatesan, Sivarama
2012-01-01
As the theoretical foundations of multiple-antenna techniques evolve and as these multiple-input multiple-output (MIMO) techniques become essential for providing high data rates in wireless systems, there is a growing need to understand the performance limits of MIMO in practical networks. To address this need, MIMO Communication for Cellular Networks presents a systematic description of MIMO technology classes and a framework for MIMO system design that takes into account the essential physical-layer features of practical cellular networks. In contrast to works that focus on the theoretical performance of abstract MIMO channels, MIMO Communication for Cellular Networks emphasizes the practical performance of realistic MIMO systems. A unified set of system simulation results highlights relative performance gains of different MIMO techniques and provides insights into how best to use multiple antennas in cellular networks under various conditions. MIMO Communication for Cellular Networks describes single-user,...
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
Optimizing Cellular Networks Enabled with Renewable Energy via Strategic Learning.
Sohn, Insoo; Liu, Huaping; Ansari, Nirwan
2015-01-01
An important issue in the cellular industry is the rising energy cost and carbon footprint due to the rapid expansion of the cellular infrastructure. Greening cellular networks has thus attracted attention. Among the promising green cellular network techniques, the renewable energy-powered cellular network has drawn increasing attention as a critical element towards reducing carbon emissions due to massive energy consumption in the base stations deployed in cellular networks. Game theory is a branch of mathematics that is used to evaluate and optimize systems with multiple players with conflicting objectives and has been successfully used to solve various problems in cellular networks. In this paper, we model the green energy utilization and power consumption optimization problem of a green cellular network as a pilot power selection strategic game and propose a novel distributed algorithm based on a strategic learning method. The simulation results indicate that the proposed algorithm achieves correlated equilibrium of the pilot power selection game, resulting in optimum green energy utilization and power consumption reduction.
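Strategic-learning procedures that reach correlated equilibrium can be illustrated with regret matching on a toy two-player game. This is only a hedged stand-in for the paper's method: the game below is a generic bimatrix game, not the pilot power selection game, and the uniform fallback when no regret is positive is a simplification.

```python
import numpy as np

def regret_matching(U1, U2, iters=3000, seed=0):
    """Regret matching (in the spirit of Hart & Mas-Colell): each player
    samples actions with probability proportional to positive cumulative
    regret; the empirical joint play approaches the set of correlated
    equilibria. Returns the empirical joint action distribution."""
    rng = np.random.default_rng(seed)
    n1, n2 = U1.shape
    R1, R2 = np.zeros(n1), np.zeros(n2)
    counts = np.zeros((n1, n2))
    for _ in range(iters):
        p1 = np.maximum(R1, 0)
        p1 = p1 / p1.sum() if p1.sum() > 0 else np.full(n1, 1 / n1)
        p2 = np.maximum(R2, 0)
        p2 = p2 / p2.sum() if p2.sum() > 0 else np.full(n2, 1 / n2)
        a1 = rng.choice(n1, p=p1)
        a2 = rng.choice(n2, p=p2)
        counts[a1, a2] += 1
        R1 += U1[:, a2] - U1[a1, a2]   # regret vs. each fixed alternative
        R2 += U2[a1, :] - U2[a1, a2]
    return counts / iters
```

In a distributed network setting each base station would run such an update on its own pilot-power payoffs, which is the flavor of algorithm the simulation results above evaluate.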
Sparsity regularization for parameter identification problems
International Nuclear Information System (INIS)
Jin, Bangti; Maass, Peter
2012-01-01
The investigation of regularization schemes with sparsity promoting penalty terms has been one of the dominant topics in the field of inverse problems over the last years, and Tikhonov functionals with ℓ p -penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓ p sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some
Programmable cellular arrays. Faults testing and correcting in cellular arrays
International Nuclear Information System (INIS)
Cercel, L.
1978-03-01
A review of some recent research on programmable cellular arrays in computing and digital information processing systems is presented, including both combinational and sequential arrays, with fully arbitrary behaviour or realizing better implementations of specialized blocks such as arithmetic units, counters, comparators, control systems, memory blocks, etc. The paper also presents applications of cellular arrays in microprogramming, in the implementation of a specialized computer for matrix operations, and in the modeling of universal computing systems. The last section deals with problems of fault testing and correction in cellular arrays. (author)
Learning Sparse Visual Representations with Leaky Capped Norm Regularizers
Wangni, Jianqiao; Lin, Dahua
2017-01-01
Sparsity inducing regularization is an important part for learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper, we investigate the usage of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly as opposed to those above, therefore imposes strong sparsity and...
Temporal regularity of the environment drives time perception
van Rijn, H; Rhodes, D; Di Luca, M
2016-01-01
It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...
High-strength cellular ceramic composites with 3D microarchitecture.
Bauer, Jens; Hengsbach, Stefan; Tesari, Iwiza; Schwaiger, Ruth; Kraft, Oliver
2014-02-18
To enhance the strength-to-weight ratio of a material, one may try to either improve the strength or lower the density, or both. The lightest solid materials have a density in the range of 1,000 kg/m³; only cellular materials, such as technical foams, can reach considerably lower values. However, compared with corresponding bulk materials, their specific strength generally is significantly lower. Cellular topologies may be divided into bending- and stretching-dominated ones. Technical foams are structured randomly and behave in a bending-dominated way, which is less weight efficient, with respect to strength, than stretching-dominated behavior, such as in regular braced frameworks. Cancellous bone and other natural cellular solids have an optimized architecture. Their basic material is structured hierarchically and consists of nanometer-size elements, providing a benefit from size effects in the material strength. Designing cellular materials with a specific microarchitecture would allow one to exploit the structural advantages of stretching-dominated constructions as well as size-dependent strengthening effects. In this paper, we demonstrate that such materials may be fabricated. Applying 3D laser lithography, we produced and characterized micro-truss and -shell structures made from alumina-polymer composite. Size-dependent strengthening of alumina shells has been observed, particularly when applied with a characteristic thickness below 100 nm. The presented artificial cellular materials reach compressive strengths up to 280 MPa with densities well below 1,000 kg/m³.
Directory of Open Access Journals (Sweden)
Dustin Kai Yan Lau
2014-03-01
Full Text Available Background Unlike alphabetic languages, Chinese uses a logographic script. However, the phonetic radical of many characters has the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions, resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of the phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% name agreement in Chinese) were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character) as repeated measures (F1 or between subject
Representing and computing regular languages on massively parallel networks
Energy Technology Data Exchange (ETDEWEB)
Miller, M.I.; O' Sullivan, J.A. (Electronic Systems and Research Lab., of Electrical Engineering, Washington Univ., St. Louis, MO (US)); Boysam, B. (Dept. of Electrical, Computer and Systems Engineering, Rensselaer Polytechnic Inst., Troy, NY (US)); Smith, K.R. (Dept. of Electrical Engineering, Southern Illinois Univ., Edwardsville, IL (US))
1991-01-01
This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor, consisting of 1024 mesh-connected bit-serial processing elements, for performing automated segmentation of electron-micrograph images.
Behrooz, Ali; Zhou, Hao-Min; Eftekhar, Ali A.; Adibi, Ali
2011-02-01
Depth-resolved localization and quantification of fluorescence distribution in tissue, called Fluorescence Molecular Tomography (FMT), is highly ill-conditioned as depth information should be extracted from limited number of surface measurements. Inverse solvers resort to regularization algorithms that penalize Euclidean norm of the solution to overcome ill-posedness. While these regularization algorithms offer good accuracy, their smoothing effects result in continuous distributions which lack high-frequency edge-type features of the actual fluorescence distribution and hence limit the resolution offered by FMT. We propose an algorithm that penalizes the total variation (TV) norm of the solution to preserve sharp transitions and high-frequency components in the reconstructed fluorescence map while overcoming ill-posedness. The hybrid algorithm is composed of two levels: 1) An Algebraic Reconstruction Technique (ART), performed on FMT data for fast recovery of a smooth solution that serves as an initial guess for the iterative TV regularization, 2) A time marching TV regularization algorithm, inspired by the Rudin-Osher-Fatemi TV image restoration, performed on the initial guess to further enhance the resolution and accuracy of the reconstruction. The performance of the proposed method in resolving fluorescent tubes inserted in a liquid tissue phantom imaged by a non-contact CW trans-illumination FMT system is studied and compared to conventional regularization schemes. It is observed that the proposed method performs better in resolving fluorescence inclusions at higher depths.
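The time-marching TV step can be sketched in one dimension. This is a minimal Rudin-Osher-Fatemi-style gradient flow on a smoothed TV functional, not the paper's two-level tomographic algorithm; the fidelity weight lam, step size tau, and smoothing eps are illustrative values chosen so the explicit scheme stays stable.

```python
import numpy as np

def tv_denoise(f, lam=1.0, tau=0.02, iters=500, eps=1e-2):
    """Explicit time marching on min_u TV(u) + (lam/2) ||u - f||^2 for a
    1-D signal, with TV smoothed as sum sqrt((du)^2 + eps). Edges
    (large |du|) are penalized linearly, so they survive the smoothing
    that removes noise, which is the point of TV regularization."""
    u = f.copy()
    for _ in range(iters):
        du = np.diff(u)                               # forward differences
        w = du / np.sqrt(du ** 2 + eps)               # smoothed TV "flux"
        div = np.concatenate(([w[0]], np.diff(w), [-w[-1]]))
        u += tau * (div - lam * (u - f))              # descend the functional
    return u
```

In the paper's setting the same kind of step is applied to the ART initial guess of the fluorescence map, so that the sharp, edge-type features an L2 penalty would blur are preserved.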
Energy Technology Data Exchange (ETDEWEB)
Wu, A Y; Rosenfeld, A
1983-10-01
A cellular pyramid is an exponentially tapering stack of arrays of processors (cells), where each cell is connected to its neighbors (siblings) on its own level, to a parent on the level above, and to its children on the level below. It is shown that in some situations, if information flows top-down only, from parents to children, then a cellular pyramid may be no faster than a one-level cellular array; but it may be possible to use simpler cells in the pyramid case. 23 references.
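The tapering structure is easy to picture with a toy bottom-up pass, the complement of the top-down flow analyzed above: each parent combines its 2x2 block of children, halving the array per level up to a single apex cell. The max-combining rule is an arbitrary illustrative choice.

```python
import numpy as np

def build_pyramid(base):
    """Bottom-up pass of a cellular pyramid: each parent cell takes the
    max of its 2x2 block of children, so an n x n base (n a power of 2)
    tapers exponentially to a 1 x 1 apex. Returns all levels."""
    levels = [base]
    a = base
    while a.shape[0] > 1:
        n = a.shape[0] // 2
        a = a.reshape(n, 2, n, 2).max(axis=(1, 3))   # parent = max of children
        levels.append(a)
    return levels
```

A base of side n reaches the apex in log2(n) levels, which is the source of the pyramid's potential speedup when information is allowed to flow bottom-up as well.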
The use of regularization in inferential measurements
International Nuclear Information System (INIS)
Hines, J. Wesley; Gribok, Andrei V.; Attieh, Ibrahim; Uhrig, Robert E.
1999-01-01
Inferential sensing is the prediction of a plant variable through the use of correlated plant variables. A correct prediction of the variable can be used to monitor sensors for drift or other failures, making periodic instrument calibrations unnecessary. This move from periodic to condition-based maintenance can reduce costs and increase the reliability of the instrument. Having accurate, reliable measurements is important for signals that may impact safety or profitability. This paper investigates how collinearity adversely affects inferential sensing by making the results inconsistent and unrepeatable, and presents regularization as a potential solution. (author) (ml)
Regularization ambiguities in loop quantum gravity
International Nuclear Information System (INIS)
Perez, Alejandro
2006-01-01
One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation which is free of ultraviolet divergences. However, ambiguities associated with the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of a UV problem--the existence of well-behaved regularizations of the constraints--is intimately linked with the ambiguities arising in the quantum theory. Among these ambiguities is the one associated with the SU(2) unitary representation used in the diffeomorphism covariant 'point-splitting' regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and, here, it is referred to as the m ambiguity. The aim of this paper is to investigate the important implications of this ambiguity. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory that conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity, exhibiting the existence of spurious solutions for higher representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions - due to the difficulties associated with the definition of the physical inner product - it provides evidence supporting the definition of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibility. If the gauge group is SO(3) we find
Effort variation regularization in sound field reproduction
DEFF Research Database (Denmark)
Stefanakis, Nick; Jacobsen, Finn; Sarris, Ioannis
2010-01-01
In this paper, active control is used in order to reproduce a given sound field in an extended spatial region. A method is proposed which minimizes the reproduction error at a number of control positions with the reproduction sources holding a certain relation within their complex strengths......), and adaptive wave field synthesis (AWFS), both under free-field conditions and in reverberant rooms. It is shown that effort variation regularization overcomes the problems associated with small spaces and with a low ratio of direct to reverberant energy, thus improving the reproduction accuracy...
New regularities in mass spectra of hadrons
International Nuclear Information System (INIS)
Kajdalov, A.B.
1989-01-01
The properties of bosonic and baryonic Regge trajectories for hadrons composed of light quarks are considered. Experimental data agree with the existence of daughter trajectories consistent with string models. It is pointed out that the parity doubling for baryonic trajectories, observed experimentally, is not understood in the existing quark models. The mass spectrum of bosons and baryons indicates an approximate supersymmetry in the mass region M>1 GeV. These regularities indicate a high degree of symmetry for the dynamics in the confinement region. 8 refs.; 5 figs
Total-variation regularization with bound constraints
International Nuclear Information System (INIS)
Chartrand, Rick; Wohlberg, Brendt
2009-01-01
We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
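The splitting idea described above, decoupling TV minimization from constraint enforcement, can be sketched in a few lines: a descent step on the data-fidelity plus TV objective alternates with projection onto the bounds, so the TV machinery needs no modification. The Python sketch below is an illustration of the decoupling, not the authors' algorithm; it substitutes a simple smoothed-TV gradient step for a dedicated TV solver.

```python
import numpy as np

def tv_grad(u, eps=1e-8):
    """Gradient of a smoothed 1-D total-variation term sum sqrt(du^2 + eps)."""
    du = np.diff(u)
    w = du / np.sqrt(du**2 + eps)
    g = np.zeros_like(u)
    g[:-1] -= w        # each difference term pulls on its left endpoint...
    g[1:] += w         # ...and pushes on its right endpoint
    return g

def bound_constrained_tv(f, lam=0.5, lo=0.0, hi=1.0, iters=500, step=0.1):
    """Minimize 0.5*||u - f||^2 + lam*TV(u) subject to lo <= u <= hi.
    The TV step and the bound constraints are handled in alternation
    (the splitting idea): descend on the unconstrained objective, then
    project onto the box."""
    u = f.copy()
    for _ in range(iters):
        u -= step * ((u - f) + lam * tv_grad(u))
        u = np.clip(u, lo, hi)   # enforce bounds by projection
    return u

# Denoise a noisy step signal while keeping the result inside [0, 1].
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.3 * rng.standard_normal(100)
u = bound_constrained_tv(f)
```

Because the constraint handling is a plain projection, swapping in an off-the-shelf TV solver for the gradient step leaves the outer loop unchanged, which is the flexibility the abstract emphasizes.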
Bayesian regularization of diffusion tensor images
DEFF Research Database (Denmark)
Frandsen, Jesper; Hobolth, Asger; Østergaard, Leif
2007-01-01
Diffusion tensor imaging (DTI) is a powerful tool in the study of the course of nerve fibre bundles in the human brain. Using DTI, the local fibre orientation in each image voxel can be described by a diffusion tensor which is constructed from local measurements of diffusion coefficients along...... several directions. The measured diffusion coefficients and thereby the diffusion tensors are subject to noise, leading to possibly flawed representations of the three dimensional fibre bundles. In this paper we develop a Bayesian procedure for regularizing the diffusion tensor field, fully utilizing...
Indefinite metric and regularization of electrodynamics
International Nuclear Information System (INIS)
Gaudin, M.
1984-06-01
The invariant regularization of Pauli and Villars in quantum electrodynamics can be considered as deriving from a local and causal Lagrangian theory for spin-1/2 bosons, by introducing an indefinite metric and a condition on the allowed states similar to the Lorentz condition. A consequence is the asymptotic freedom of the photon's propagator. We present a calculation of the effective charge to fourth order in the coupling as a function of the auxiliary masses, the theory avoiding all mass divergences to this order.
Strategies for regular segmented reductions on GPU
DEFF Research Database (Denmark)
Larsen, Rasmus Wriedt; Henriksen, Troels
2017-01-01
We present and evaluate an implementation technique for regular segmented reductions on GPUs. Existing techniques tend to be either consistent in performance but relatively inefficient in absolute terms, or optimised for specific workloads and thereby exhibiting bad performance for certain input...... is in the context of the Futhark compiler, the implementation technique is applicable to any library or language that has a need for segmented reductions. We evaluate the technique on four microbenchmarks, two of which we also compare to implementations in the CUB library for GPU programming, as well as on two...
Entanglement in coined quantum walks on regular graphs
International Nuclear Information System (INIS)
Carneiro, Ivens; Loo, Meng; Xu, Xibai; Girerd, Mathieu; Kendon, Viv; Knight, Peter L
2005-01-01
Quantum walks, both discrete (coined) and continuous time, form the basis of several recent quantum algorithms. Here we use numerical simulations to study the properties of discrete, coined quantum walks. We investigate the variation in the entanglement between the coin and the position of the particle by calculating the entropy of the reduced density matrix of the coin. We consider both dynamical evolution and asymptotic limits for coins of dimensions from two to eight on regular graphs. For low coin dimensions, quantum walks which spread faster (as measured by the mean square deviation of their distribution from uniform) also exhibit faster convergence towards the asymptotic value of the entanglement between the coin and particle's position. For high-dimensional coins, the DFT coin operator is more efficient at spreading than the Grover coin. We study the entanglement of the coin on regular finite graphs such as cycles, and also show that on complete bipartite graphs, a quantum walk with a Grover coin is always periodic with period four. We generalize the 'glued trees' graph used by Childs et al (2003 Proc. STOC, pp 59-68) to higher branching rate (fan out) and verify that the scaling with branching rate and with tree depth is polynomial
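The quantities studied above are easy to reproduce numerically: a coined walk alternates a coin operation with a coin-conditioned shift, and the coin-position entanglement is the von Neumann entropy of the coin's reduced density matrix. The sketch below (an illustration, not the paper's code) runs a Hadamard-coined walk on an N-cycle with a symmetric initial coin state.

```python
import numpy as np

def coined_walk_entropy(N=32, steps=20):
    """Hadamard-coined quantum walk on an N-cycle; returns the von Neumann
    entropy (in bits) of the coin's reduced density matrix after `steps`."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = np.zeros((N, 2), dtype=complex)       # amplitude[position, coin]
    psi[0] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # symmetric initial coin state
    for _ in range(steps):
        psi = psi @ H.T                          # coin toss on the coin index
        psi = np.stack([np.roll(psi[:, 0], -1),  # coin 0 steps left
                        np.roll(psi[:, 1], +1)], # coin 1 steps right
                       axis=1)
    rho = psi.conj().T @ psi                     # 2x2 reduced coin density matrix
    evals = np.linalg.eigvalsh(rho).clip(1e-12)  # guard against log(0)
    return float(-(evals * np.log2(evals)).sum())

S = coined_walk_entropy()
```

For a two-dimensional coin the entropy is bounded by 1 bit; tracking `S` over time reproduces the convergence toward an asymptotic entanglement value that the paper measures.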
GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES
International Nuclear Information System (INIS)
Rogers, Adam; Fiege, Jason D.
2012-01-01
Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
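For Tikhonov-type problems, the GCV function used above to select the regularization parameter has a closed form via the SVD of the forward operator. The following is a minimal sketch for a generic linear problem (not the lensing code itself); the test matrix and noise level are illustrative assumptions.

```python
import numpy as np

def gcv(A, b, lam):
    """Generalized cross-validation score for Tikhonov regularization,
    x(lam) = argmin ||Ax - b||^2 + lam*||x||^2. Smaller is better."""
    n = len(b)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam)               # spectral filter factors
    x = Vt.T @ ((f / s) * (U.T @ b))      # regularized solution
    resid = np.linalg.norm(A @ x - b)**2
    dof = n - f.sum()                     # n minus trace of influence matrix
    return n * resid / dof**2

# Pick lam by minimizing GCV over a logarithmic grid (illustrative data).
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
b = A @ x_true + 0.5 * rng.standard_normal(50)
lams = np.logspace(-3, 3, 25)
best = lams[int(np.argmin([gcv(A, b, l) for l in lams]))]
```

The term `f.sum()` is the effective number of parameters, which is how a GCV- or UPRE-based choice simultaneously yields an estimate of the source degrees of freedom, as the abstract notes.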
Regularized inversion of controlled source and earthquake data
International Nuclear Information System (INIS)
Ramachandran, Kumar
2012-01-01
Estimation of the seismic velocity structure of the Earth's crust and upper mantle from travel-time data has advanced greatly in recent years. Forward modelling trial-and-error methods have been superseded by tomographic methods which allow more objective analysis of large two-dimensional and three-dimensional refraction and/or reflection data sets. The fundamental purpose of travel-time tomography is to determine the velocity structure of a medium by analysing the time it takes for a wave generated at a source point within the medium to arrive at a distribution of receiver points. Tomographic inversion of first-arrival travel-time data is a nonlinear problem since both the velocity of the medium and ray paths in the medium are unknown. The solution for such a problem is typically obtained by repeated application of linearized inversion. Regularization of the nonlinear problem reduces the ill posedness inherent in the tomographic inversion due to the under-determined nature of the problem and the inconsistencies in the observed data. This paper discusses the theory of regularized inversion for joint inversion of controlled source and earthquake data, and results from synthetic data testing and application to real data. The results obtained from tomographic inversion of synthetic data and real data from the northern Cascadia subduction zone show that the velocity model and hypocentral parameters can be efficiently estimated using this approach. (paper)
Directory of Open Access Journals (Sweden)
Giovanni Dalmasso
Full Text Available Mitochondria are semi-autonomous organelles that supply energy for cellular biochemistry through oxidative phosphorylation. Within a cell, hundreds of mobile mitochondria undergo fusion and fission events to form a dynamic network. These morphological and mobility dynamics are essential for maintaining mitochondrial functional homeostasis, and alterations both impact and reflect cellular stress states. Mitochondrial homeostasis is further dependent on production (biogenesis) and the removal of damaged mitochondria by selective autophagy (mitophagy). While mitochondrial function, dynamics, biogenesis and mitophagy are highly-integrated processes, it is not fully understood how systemic control in the cell is established to maintain homeostasis, or respond to bioenergetic demands. Here we used agent-based modeling (ABM) to integrate molecular and imaging knowledge sets, and simulate population dynamics of mitochondria and their response to environmental energy demand. Using high-dimensional parameter searches we integrated experimentally-measured rates of mitochondrial biogenesis and mitophagy, and using sensitivity analysis we identified parameter influences on population homeostasis. By studying the dynamics of cellular subpopulations with distinct mitochondrial masses, our approach uncovered system properties of mitochondrial populations: (1) mitochondrial fusion and fission activities rapidly establish mitochondrial sub-population homeostasis, and total cellular levels of mitochondria alter fusion and fission activities and subpopulation distributions; (2) restricting the directionality of mitochondrial mobility does not alter morphology subpopulation distributions, but increases network transmission dynamics; and (3) maintaining mitochondrial mass homeostasis and responding to bioenergetic stress requires the integration of mitochondrial dynamics with the cellular bioenergetic state. Finally, (4) our model suggests sources of, and stress conditions
Numerical calculations of effective elastic properties of two cellular structures
International Nuclear Information System (INIS)
Tuncer, Enis
2005-01-01
Young's moduli of regular two-dimensional truss-like and eye-shaped structures are simulated using the finite element method. The structures are idealizations of soft polymeric materials used in ferro-electret applications. In the simulations, the length scales of the smallest representative units are varied, which changes the dimensions of the cell walls in the structures. A power-law expression with a quadratic as the exponent term is proposed for the effective Young's moduli of the systems as a function of the solid volume fraction. The data are divided into three regions with respect to the volume fraction: low, intermediate and high. The parameters of the proposed power-law expression in each region are later represented as a function of the structural parameters, the unit-cell dimensions. The expression presented can be used to predict a structure/property relationship in materials with similar cellular structures. The contribution of the cell-wall thickness to the elastic properties becomes significant at concentrations >0.15. The cell-wall thickness is the most significant factor in predicting the effective Young's modulus of regular cellular structures at high volume fractions of solid. At lower concentrations of solid, the eye-shaped structure yields a lower Young's modulus than a truss-like structure with similar anisotropy. Comparison of the numerical results with experimental data for poly(propylene) shows good agreement regarding the influence of cell-wall thickness on elastic properties of thin cellular films
Additive Cellular Automata and Volume Growth
Directory of Open Access Journals (Sweden)
Thomas B. Ward
2000-08-01
Full Text Available Abstract: A class of dynamical systems associated to rings of S-integers in rational function fields is described. General results about these systems give a rather complete description of the well-known dynamics in one-dimensional additive cellular automata with prime alphabet, including simple formulæ for the topological entropy and the number of periodic configurations. For these systems the periodic points are uniformly distributed along some subsequence with respect to the maximal measure, and in particular are dense. Periodic points may be constructed arbitrarily close to a given configuration, and rationality of the dynamical zeta function is characterized. Throughout the emphasis is to place this particular family of cellular automata into the wider context of S-integer dynamical systems, and to show how the arithmetic of rational function fields determines their behaviour. Using a covering space the dynamics of additive cellular automata are related to a form of hyperbolicity in completions of rational function fields. This expresses the topological entropy of the automata directly in terms of volume growth in the covering space.
Inter-cellular transport of ran GTPase.
Directory of Open Access Journals (Sweden)
Deepak Khuperkar
Full Text Available Ran, a member of the Ras-GTPase superfamily, has a well-established role in regulating the transport of macromolecules across the nuclear envelope (NE. Ran has also been implicated in mitosis, cell cycle progression, and NE formation. Over-expression of Ran is associated with various cancers, although the molecular mechanism underlying this phenomenon is unclear. Serendipitously, we found that Ran possesses the ability to move from cell-to-cell when transiently expressed in mammalian cells. Moreover, we show that the inter-cellular transport of Ran is GTP-dependent. Importantly, Ran displays a similar distribution pattern in the recipient cells as that in the donor cell and co-localizes with the Ran binding protein Nup358 (also called RanBP2. Interestingly, leptomycin B, an inhibitor of CRM1-mediated export, or siRNA mediated depletion of CRM1, significantly impaired the inter-cellular transport of Ran, suggesting a function for CRM1 in this process. These novel findings indicate a possible role for Ran beyond nucleo-cytoplasmic transport, with potential implications in inter-cellular communication and cancers.
Cellular senescence and organismal aging.
Jeyapalan, Jessie C; Sedivy, John M
2008-01-01
Cellular senescence, first observed and defined using in vitro cell culture studies, is an irreversible cell cycle arrest which can be triggered by a variety of factors. Emerging evidence suggests that cellular senescence acts as an in vivo tumor suppression mechanism by limiting aberrant proliferation. It has also been postulated that cellular senescence can occur independently of cancer and contribute to the physiological processes of normal organismal aging. Recent data have demonstrated the in vivo accumulation of senescent cells with advancing age. Some characteristics of senescent cells, such as the ability to modify their extracellular environment, could play a role in aging and age-related pathology. In this review, we examine current evidence that links cellular senescence and organismal aging.
Origami interleaved tube cellular materials
International Nuclear Information System (INIS)
Cheung, Kenneth C; Tachi, Tomohiro; Calisch, Sam; Miura, Koryo
2014-01-01
A novel origami cellular material based on a deployable cellular origami structure is described. The structure is bi-directionally flat-foldable in two orthogonal (x and y) directions and is relatively stiff in the third orthogonal (z) direction. While such mechanical orthotropicity is well known in cellular materials with extruded two dimensional geometry, the interleaved tube geometry presented here consists of two orthogonal axes of interleaved tubes with high interfacial surface area and relative volume that changes with fold-state. In addition, the foldability still allows for fabrication by a flat lamination process, similar to methods used for conventional expanded two dimensional cellular materials. This article presents the geometric characteristics of the structure together with corresponding kinematic and mechanical modeling, explaining the orthotropic elastic behavior of the structure with classical dimensional scaling analysis. (paper)
Origami interleaved tube cellular materials
Cheung, Kenneth C.; Tachi, Tomohiro; Calisch, Sam; Miura, Koryo
2014-09-01
A novel origami cellular material based on a deployable cellular origami structure is described. The structure is bi-directionally flat-foldable in two orthogonal (x and y) directions and is relatively stiff in the third orthogonal (z) direction. While such mechanical orthotropicity is well known in cellular materials with extruded two dimensional geometry, the interleaved tube geometry presented here consists of two orthogonal axes of interleaved tubes with high interfacial surface area and relative volume that changes with fold-state. In addition, the foldability still allows for fabrication by a flat lamination process, similar to methods used for conventional expanded two dimensional cellular materials. This article presents the geometric characteristics of the structure together with corresponding kinematic and mechanical modeling, explaining the orthotropic elastic behavior of the structure with classical dimensional scaling analysis.
Cellular Angiofibroma of the Nasopharynx.
Erdur, Zülküf Burak; Yener, Haydar Murat; Yilmaz, Mehmet; Karaaltin, Ayşegül Batioğlu; Inan, Hakki Caner; Alaskarov, Elvin; Gozen, Emine Deniz
2017-11-01
Angiofibroma is a common tumor of the nasopharyngeal region, but the cellular type is extremely rare in the head and neck. A 13-year-old boy presented with frequent epistaxis and nasal obstruction persisting for 6 months. Based on the clinical symptoms and imaging studies, juvenile angiofibroma was suspected. Following angiographic embolization, total excision of the lesion by a midfacial degloving approach was performed. Histological examination revealed that the tumor consisted of staghorn blood vessels and irregular fibrous stroma. Stellate fibroblasts with small pyknotic to large vesicular nuclei were seen in a highly cellular stroma. These findings identified cellular angiofibroma mimicking juvenile angiofibroma. This article reports a very rare case of cellular angiofibroma of the nasopharynx.
Emotion regulation deficits in regular marijuana users.
Zimmermann, Kaeli; Walz, Christina; Derckx, Raissa T; Kendrick, Keith M; Weber, Bernd; Dore, Bruce; Ochsner, Kevin N; Hurlemann, René; Becker, Benjamin
2017-08-01
Effective regulation of negative affective states has been associated with mental health. Impaired regulation of negative affect represents a risk factor for dysfunctional coping mechanisms such as drug use and thus could contribute to the initiation and development of problematic substance use. This study investigated behavioral and neural indices of emotion regulation in regular marijuana users (n = 23) and demographically matched nonusing controls (n = 20) by means of an fMRI cognitive emotion regulation (reappraisal) paradigm. Relative to nonusing controls, marijuana users demonstrated increased neural activity in a bilateral frontal network comprising precentral, middle cingulate, and supplementary motor regions during reappraisal of negative affect (P marijuana users relative to controls. Together, the present findings could reflect an unsuccessful attempt of compensatory recruitment of additional neural resources in the context of disrupted amygdala-prefrontal interaction during volitional emotion regulation in marijuana users. As such, impaired volitional regulation of negative affect might represent a consequence of, or risk factor for, regular marijuana use. Hum Brain Mapp 38:4270-4279, 2017. © 2017 Wiley Periodicals, Inc.
Efficient multidimensional regularization for Volterra series estimation
Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan
2018-05-01
This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need of long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid the excessive memory needs in case of long measurements or large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
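The linear building block that the Volterra extension above generalizes is kernel-regularized FIR (impulse response) estimation: the regularization matrix encodes smoothness and exponential decay of the response. The sketch below uses the first-order stable-spline (TC) kernel with illustrative hyperparameter values, which are assumptions, not values from the paper.

```python
import numpy as np

def tc_kernel(n, lam=0.9):
    """First-order stable-spline (TC) kernel, P[i, j] = lam**max(i, j):
    encodes exponentially decaying, correlated impulse response taps."""
    i = np.arange(n)
    return lam ** np.maximum.outer(i, i)

def regularized_fir(u, y, n=40, lam=0.9, sigma2=0.01):
    """Regularized FIR estimate g = (Phi'Phi + sigma2 * inv(P))^-1 Phi'y,
    the standard kernel-based impulse response estimator."""
    N = len(y)
    # Phi[:, k] is the input delayed by k samples.
    Phi = np.column_stack([np.r_[np.zeros(k), u[:N - k]] for k in range(n)])
    P = tc_kernel(n, lam)
    return np.linalg.solve(Phi.T @ Phi + sigma2 * np.linalg.inv(P), Phi.T @ y)

# Identify a decaying oscillatory impulse response from noisy data.
rng = np.random.default_rng(2)
g_true = 0.8 ** np.arange(40) * np.sin(0.5 * np.arange(40))
u = rng.standard_normal(400)
y = np.convolve(u, g_true)[:400] + 0.05 * rng.standard_normal(400)
g_hat = regularized_fir(u, y)
```

The multidimensional extension in the paper replaces the one-dimensional tap index with tuples of lags, so the kernel must encode decay and smoothness along every lag dimension of each Volterra kernel.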
Multiple graph regularized nonnegative matrix factorization
Wang, Jim Jing-Yan
2013-10-01
Non-negative matrix factorization (NMF) has been widely used as a data representation method based on components. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph regularized NMF (GrNMF) has been proposed by Cai et al. by constructing an affinity graph and searching for a matrix factorization that respects graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a GrNMF, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters inspired by ensemble manifold regularization. Factorization metrics and linear combination coefficients of graphs are determined simultaneously within a unified object function. They are alternately optimized in an iterative algorithm, thus resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.
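The single-graph GrNMF objective that MultiGrNMF builds on, ||X - UV'||² + λ·tr(V'LV) with graph Laplacian L = D - W, can be minimized with multiplicative updates. A small sketch on synthetic data follows; the graph weights, dimensions, and λ are illustrative assumptions.

```python
import numpy as np

def grnmf(X, W, k=2, lam=0.1, iters=200, seed=0):
    """Graph-regularized NMF: min ||X - U V'||^2 + lam * tr(V' L V)
    over U, V >= 0, via the standard multiplicative updates. W is the
    affinity graph over the columns of X; L = D - W is its Laplacian."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    D = np.diag(W.sum(axis=1))
    U, V = rng.random((m, k)), rng.random((n, k))
    for _ in range(iters):
        U *= (X @ V) / (U @ (V.T @ V) + 1e-9)
        V *= (X.T @ U + lam * W @ V) / (V @ (U.T @ U) + lam * D @ V + 1e-9)
    return U, V

# Two blobs of columns; the graph links neighbours within each blob.
X = np.block([[np.ones((5, 4)), 0.1 * np.ones((5, 4))],
              [0.1 * np.ones((5, 4)), np.ones((5, 4))]])
W = np.kron(np.eye(2), np.ones((4, 4)) - np.eye(4))
U, V = grnmf(X, W)
```

MultiGrNMF's contribution is to replace the single fixed W with a learned convex combination of candidate graphs, which removes the cross-validation step this sketch would otherwise need for choosing W and its parameters.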
Accelerating Large Data Analysis By Exploiting Regularities
Moran, Patrick J.; Ellsworth, David
2003-01-01
We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical to Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
EIT image reconstruction with four dimensional regularization.
Dai, Tao; Soleimani, Manuchehr; Adler, Andy
2008-09-01
Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although data are actually highly correlated especially in high speed EIT systems. This paper proposes a 4-D EIT image reconstruction for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance, in comparison to simpler image models.
Determine point-to-point networking interactions using regular expressions
Directory of Open Access Journals (Sweden)
Konstantin S. Deev
2015-06-01
Full Text Available As the Internet grows and becomes more popular, the number of concurrent data flows increases, which raises bandwidth demands. Providers and corporate customers need the ability to identify point-to-point interactions. The best approach is to use dedicated software and hardware implementations that distribute the load internally, using the principles and approaches described in this paper. This paper presents the principles of building a system that searches for regular expression matches using computation on the graphics adapter of a server station. The significant computing power and parallel-execution capability of a modern graphics processor allow inspection of large amounts of data against sets of rules. Exploiting these characteristics can increase computing throughput by a factor of 30 to 40 compared to the same setup on the central processing unit. The potential increase in bandwidth capacity could be used in systems that provide packet analysis, firewalls and network anomaly detectors.
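At its core, the per-record matching such a system performs amounts to applying a compiled regular expression to connection records; the paper's contribution is parallelizing this on the GPU. A minimal CPU-side illustration in Python, with an assumed "src_ip:src_port -> dst_ip:dst_port" log format:

```python
import re

# Assumed record format: "src_ip:src_port -> dst_ip:dst_port".
FLOW = re.compile(
    r'(?P<src>\d{1,3}(?:\.\d{1,3}){3}):(?P<sport>\d+)\s*->\s*'
    r'(?P<dst>\d{1,3}(?:\.\d{1,3}){3}):(?P<dport>\d+)')

lines = [
    "10.0.0.5:51234 -> 192.168.1.9:443",
    "not a flow record",
    "10.0.0.5:51235 -> 192.168.1.9:80",
]

# Keep only lines that parse as flows; extract the point-to-point pair.
flows = [(m['src'], m['dst']) for m in map(FLOW.fullmatch, lines) if m]
```

Each record is independent, which is exactly why the workload maps well onto thousands of GPU threads: every thread can match one record (or one rule) with no synchronization.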
Laplacian embedded regression for scalable manifold regularization.
Chen, Lin; Tsang, Ivor W; Xu, Dong
2012-06-01
Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using ∈-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real
Effects of Initial Symmetry on the Global Symmetry of One-Dimensional Legal Cellular Automata
Directory of Open Access Journals (Sweden)
Ikuko Tanaka
2015-09-01
Full Text Available To examine the development of pattern formation from the viewpoint of symmetry, we applied a two-dimensional discrete Walsh analysis to a one-dimensional cellular automata model under two types of regular initial conditions. The amount of symmetropy of cellular automata (CA) models under regular and random initial conditions corresponds to three of Wolfram’s classes of CAs, identified as Classes II, III, and IV. Regular initial conditions occur in two groups. One group that makes a broken, regular pattern formation has four types of symmetry, whereas the other group that makes a higher hierarchy pattern formation has only two types. Additionally, both final pattern formations show an increased amount of symmetropy as time passes. Moreover, the final pattern formations are affected by iterations of base rules of CA models of chaos dynamical systems. The growth design formations limit possibilities: the ratio of developing final pattern formations under a regular initial condition decreases in the order of Classes III, II, and IV. This might be related to the difference in degree in reference to surrounding conditions. These findings suggest that calculations of symmetries of the structures of one-dimensional cellular automata models are useful for revealing rules of pattern generation for animal bodies.
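One-dimensional two-state CA of the kind analyzed here are straightforward to simulate. The sketch below (illustrative, not the paper's code) evolves Wolfram rule 90, an additive Class III rule, from a regular initial condition of the "every fourth cell alive" type:

```python
import numpy as np

def ca_evolve(rule, state, steps):
    """One-dimensional two-state CA with periodic boundary conditions;
    `rule` is the Wolfram rule number (0-255)."""
    table = [(rule >> i) & 1 for i in range(8)]   # lookup by neighborhood code
    rows = [state]
    for _ in range(steps):
        l, r = np.roll(state, 1), np.roll(state, -1)   # left/right neighbors
        state = np.array([table[4 * a + 2 * b + c]
                          for a, b, c in zip(l, state, r)])
        rows.append(state)
    return np.array(rows)   # space-time diagram: one row per time step

# Regular initial condition: every fourth cell alive.
init = np.zeros(64, dtype=int)
init[::4] = 1
hist = ca_evolve(90, init, 32)
```

Rule 90 updates each cell to the XOR of its two neighbors, so regular initial conditions produce the highly symmetric, self-similar space-time patterns whose symmetropy the Walsh analysis quantifies.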
A cellular automata model of bone formation.
Van Scoy, Gabrielle K; George, Estee L; Opoku Asantewaa, Flora; Kerns, Lucy; Saunders, Marnie M; Prieto-Langarica, Alicia
2017-04-01
Bone remodeling is an elegantly orchestrated process by which osteocytes, osteoblasts and osteoclasts function as a syncytium to maintain or modify bone. On the microscopic level, bone consists of cells that create, destroy and monitor the bone matrix. These cells interact in a coordinated manner to maintain a tightly regulated homeostasis. It is this regulation that is responsible for the observed increase in bone gain in the dominant arm of a tennis player and the observed increase in bone loss associated with spaceflight and osteoporosis. The manner in which these cells interact to bring about a change in bone quality and quantity has yet to be fully elucidated. But efforts to understand the multicellular complexity can ultimately lead to eradication of metabolic bone diseases such as osteoporosis and improved implant longevity. Experimentally validated mathematical models that simulate functional activity and offer eventual predictive capabilities offer tremendous potential in understanding multicellular bone remodeling. Here we undertake the initial challenge to develop a mathematical model of bone formation validated with in vitro data obtained from osteoblastic bone cells induced to mineralize and quantified at 26 days of culture. A cellular automata model was constructed to simulate the in vitro characterization. Permutation tests were performed to compare the distribution of the mineralization in the cultures and the distribution of the mineralization in the mathematical models. The results of the permutation test show the distribution of mineralization from the characterization and mathematical model come from the same probability distribution, therefore validating the cellular automata model. Copyright © 2017 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Jinping Tang
2017-01-01
Full Text Available Optical tomography is an emerging and important molecular imaging modality. The aim of optical tomography is to reconstruct the optical properties of human tissues. In this paper, we focus on reconstructing the absorption coefficient based on the radiative transfer equation (RTE). It is an ill-posed parameter identification problem. Regularization methods have been broadly applied to reconstruct the optical coefficients, such as the total variation (TV) regularization and the L1 regularization. In order to better reconstruct the piecewise constant and sparse coefficient distributions, TV and L1 norms are combined as the regularization. The forward problem is discretized with the discontinuous Galerkin method on the spatial space and the finite element method on the angular space. The minimization problem is solved by a Jacobian-based Levenberg-Marquardt type method which is equipped with a split Bregman algorithm for the L1 regularization. We use the adjoint method to compute the Jacobian matrix, which dramatically improves the computation efficiency. Compared with other image reconstruction methods based on TV and L1 regularization, the simulation results show the validity and efficiency of the proposed method.
Constrained least squares regularization in PET
International Nuclear Information System (INIS)
Choudhury, K.R.; O'Sullivan, F.O.
1996-01-01
Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood estimators, at a fraction of the computational effort.
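As a generic illustration of the idea (not the paper's rapid algorithm), a nonnegativity-constrained least squares deconvolution with a stacked Tikhonov term suppresses the negative artifacts by construction; the kernel and spike signal below are hypothetical:

```python
import numpy as np
from scipy.optimize import nnls

def regularized_nnls_deconv(kernel, y, lam=0.01):
    """Nonnegativity-constrained, Tikhonov-regularized 1-D deconvolution:
    solves min_{x >= 0} ||Kx - y||^2 + lam*||x||^2 by stacking the
    regularizer under the convolution matrix and calling NNLS once."""
    n = len(y)
    # Column i of K is the kernel's response to a unit impulse at i
    K = np.column_stack([np.convolve(e, kernel, mode="same") for e in np.eye(n)])
    A = np.vstack([K, np.sqrt(lam) * np.eye(n)])
    x, _ = nnls(A, np.concatenate([y, np.zeros(n)]))
    return x

# Two spikes blurred by a symmetric kernel, plus a little noise
kernel = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
x_true = np.zeros(40); x_true[12] = 1.0; x_true[25] = 0.6
rng = np.random.default_rng(0)
y = np.convolve(x_true, kernel, mode="same") + 0.005 * rng.normal(size=40)
x_hat = regularized_nnls_deconv(kernel, y)
```

Unlike an unconstrained inverse, the NNLS estimate cannot contain the negative background values the abstract complains about.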
Regularization destriping of remote sensing imagery
Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle
2017-07-01
We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method by giving weights spatially to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method on a benchmark data set which represents the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
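For intuition only, a far simpler moment-matching destriping sketch (not the variational method above): additive per-column offsets are estimated from column medians and subtracted, which assumes the true scene has no systematic column-wise trend.

```python
import numpy as np

def destripe_columns(img):
    """Toy destriping: remove additive per-column offsets (vertical stripes)
    by matching each column's median to the global median.
    Assumes the true scene has no systematic column-wise trend."""
    img = np.asarray(img, float)
    col_offset = np.median(img, axis=0) - np.median(img)
    return img - col_offset[None, :]

# Smooth synthetic field that varies only along rows, plus column stripes
rng = np.random.default_rng(0)
row_profile = 0.5 * np.cos(np.arange(64) / 7.0)
field = np.repeat(row_profile[:, None], 64, axis=1)
stripes = np.tile(rng.normal(0.0, 0.5, 64), (64, 1))
clean = destripe_columns(field + stripes)
```

The variational formulation is needed precisely because real scenes do have along-stripe structure that such moment matching would destroy.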
The Regularity of Optimal Irrigation Patterns
Morel, Jean-Michel; Santambrogio, Filippo
2010-02-01
A branched structure is observable in draining and irrigation systems, in electric power supply systems, and in natural objects like blood vessels, river basins or trees. Recent approaches to these networks derive their branched structure from an energy functional whose essential feature is to favor wide routes. Given a flow s in a river, a road, a tube or a wire, the transportation cost per unit length is supposed in these models to be proportional to s^α with 0 < α < 1. We consider the case in which the irrigated measure is the Lebesgue density on a smooth open set and the irrigating measure is a single source. In that case we prove that all branches of optimal irrigation trees satisfy an elliptic equation and that their curvature is a bounded measure. In consequence all branching points in the network have a tangent cone made of a finite number of segments, and all other points have a tangent. An explicit counterexample disproves these regularity properties for non-Lebesgue irrigated measures.
Singular tachyon kinks from regular profiles
International Nuclear Information System (INIS)
Copeland, E.J.; Saffin, P.M.; Steer, D.A.
2003-01-01
We demonstrate how Sen's singular kink solution of the Born-Infeld tachyon action can be constructed by taking the appropriate limit of initially regular profiles. It is shown that the order in which different limits are taken plays an important role in determining whether or not such a solution is obtained for a wide class of potentials. Indeed, by introducing a small parameter into the action, we are able to circumvent the results of a recent paper which derived two conditions on the asymptotic tachyon potential such that the singular kink could be recovered in the large amplitude limit of periodic solutions. We show that this is explained by the non-commuting nature of two limits, and that Sen's solution is recovered if the order of the limits is chosen appropriately.
Two-pass greedy regular expression parsing
DEFF Research Database (Denmark)
Grathwohl, Niels Bjørn Bugge; Henglein, Fritz; Nielsen, Lasse
2013-01-01
We present new algorithms for producing greedy parses for regular expressions (REs) in a semi-streaming fashion. Our lean-log algorithm executes in time O(mn) for REs of size m and input strings of size n and outputs a compact bit-coded parse tree representation. It improves on previous algorithms...... by: operating in only 2 passes; using only O(m) words of random-access memory (independent of n); requiring only kn bits of sequentially written and read log storage, where k ... and not requiring it to be stored at all. Previous RE parsing algorithms do not scale linearly with input size, or require substantially more log storage and employ 3 passes where the first consists of reversing the input, or do not or are not known to produce a greedy parse. The performance of our unoptimized C...
Discriminative Elastic-Net Regularized Linear Regression.
Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen
2017-03-01
In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations to make the final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods are available at http://www.yongxu.org/lunwen.html.
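The elastic-net regularizer itself can be illustrated with a generic coordinate-descent solver. This is a sketch of the regularizer on coefficients, not the paper's ENLR model, which applies the elastic net to singular values and has a closed-form per-iteration solution:

```python
import numpy as np

def elastic_net(A, b, lam1=0.5, lam2=0.5, n_sweeps=200):
    """Coordinate descent for the generic elastic net
       min_x 0.5*||Ax - b||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2."""
    x = np.zeros(A.shape[1])
    r = b - A @ x                       # running residual
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(A.shape[1]):
            rho = A[:, j] @ r + col_sq[j] * x[j]
            # Soft-threshold (L1), then shrink (L2)
            new = np.sign(rho) * max(abs(rho) - lam1, 0.0) / (col_sq[j] + lam2)
            r += A[:, j] * (x[j] - new)
            x[j] = new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[[2, 7, 15]] = [1.5, -2.0, 1.0]
x_hat = elastic_net(A, A @ x_true)
```

The L1 term yields sparsity while the L2 term stabilizes correlated features, the same trade-off ENLR exploits for compact projection matrices.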
Regularization of Instantaneous Frequency Attribute Computations
Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.
2014-12-01
We compare two different methods of computation of a temporally local frequency: 1) a stabilized instantaneous frequency using the theory of the analytic signal; 2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes. Fomel, Sergey. "Local seismic attributes." Geophysics, 72.3 (2007): A29-A33. Cohen, Leon. "Time Frequency Analysis: Theory and Applications." USA: Prentice Hall, (1995). Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics, 44.6 (1979): 1041-1063.
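The first method's core quantity, the derivative of the instantaneous phase of the analytic signal, can be sketched as follows. This bare phase-difference version sidesteps the division whose stabilization motivates the regularization discussed above:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) as the finite-difference derivative of
    the unwrapped phase of the analytic signal."""
    phase = np.unwrap(np.angle(hilbert(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 50.0 * t)        # a pure 50 Hz tone
f_inst = instantaneous_frequency(x, fs)
```

For a clean tone the estimate sits at the true frequency; on noisy field data the estimate becomes rough, which is where the roughness penalty enters.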
Wang, Jim Jing-Yan
2014-09-20
Nonnegative matrix factorization (NMF), a popular part-based representation technique, does not capture the intrinsic local geometric structure of the data space. Graph regularized NMF (GNMF) was recently proposed to avoid this limitation by regularizing NMF with a nearest neighbor graph constructed from the input data set. However, GNMF has two main bottlenecks. First, using the original feature space directly to construct the graph is not necessarily optimal because of the noisy and irrelevant features and nonlinear distributions of data samples. Second, one possible way to handle the nonlinear distribution of data samples is by kernel embedding. However, it is often difficult to choose the most suitable kernel. To solve these bottlenecks, we propose two novel graph-regularized NMF methods, AGNMFFS and AGNMFMK, by introducing feature selection and multiple-kernel learning to the graph regularized NMF, respectively. Instead of using a fixed graph as in GNMF, the two proposed methods learn the nearest neighbor graph that is adaptive to the selected features and learned multiple kernels, respectively. For each method, we propose a unified objective function to conduct feature selection/multi-kernel learning, NMF and adaptive graph regularization simultaneously. We further develop two iterative algorithms to solve the two optimization problems. Experimental results on two challenging pattern classification tasks demonstrate that the proposed methods significantly outperform state-of-the-art data representation methods.
Regularized Regression and Density Estimation based on Optimal Transport
Burger, M.; Franek, M.; Schonlieb, C.-B.
2012-01-01
for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations
Incremental projection approach of regularization for inverse problems
Energy Technology Data Exchange (ETDEWEB)
Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)
2016-10-15
This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in the place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
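The projection idea can be illustrated with a toy projected-gradient solver, where a norm ball stands in for the convex set of regularized candidates; the paper's setting (motion estimation for fluid images) is of course far larger, so this is only a sketch of the mechanism:

```python
import numpy as np

def projected_gradient_ls(A, b, radius, n_iter=2000):
    """Projected gradient for min ||Ax - b||^2 subject to ||x|| <= radius.
    Each iterate is projected onto the convex set instead of adding a
    penalty term to the objective."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - b) / L      # gradient step on the data term
        nrm = np.linalg.norm(x)
        if nrm > radius:                   # projection onto the norm ball
            x *= radius / nrm
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_true = rng.normal(size=10)
x_hat = projected_gradient_ls(A, A @ x_true, radius=1.1 * np.linalg.norm(x_true))
```

Because the data term alone drives the iterates while the set membership is enforced by projection, no regularization weight has to be tuned into the objective.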
Dimensional regularization and analytical continuation at finite temperature
International Nuclear Information System (INIS)
Chen Xiangjun; Liu Lianshou
1998-01-01
The relationship between dimensional regularization and analytical continuation of infrared divergent integrals at finite temperature is discussed and a method of regularization of infrared divergent integrals and infrared divergent sums is given
Bounded Perturbation Regularization for Linear Least Squares Estimation
Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.
2017-01-01
This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded
Regular Generalized Star Star closed sets in Bitopological Spaces
K. Kannan; D. Narasimhan; K. Chandrasekhara Rao; R. Ravikumar
2011-01-01
The aim of this paper is to introduce the concepts of τ1τ2-regular generalized star star closed sets, τ1τ2-regular generalized star star open sets and study their basic properties in bitopological spaces.
The long Tramp from Cellular Pathology to Molecular Pathology
Directory of Open Access Journals (Sweden)
Hans Guski
2017-05-01
Derivatives: The observation of principal identity of biologically meaningful elements can be agglutinated into a 'general theory of life' and its manifestations. All of the investigated elements possess the same regularities, which are altered, destroyed or newly built by external influences such as disease and physical and psychological forces. Not all magnification levels that display these elements are of the same significance. Already Virchow suggested that 'smaller elements' (molecules) might be responsible for changes that are visible 'in larger elements' (at the cellular level). The reflection on these ideas can be associated with the implementation of molecular techniques, which were developed in the 20th century and are still ongoing today. Perspectives: Thus, cellular and molecular pathology can be integrated under one umbrella. This umbrella will lead to newly man-formed structures, such as artificial DNA and gene components or functional chip implantations.
Exclusion of children with intellectual disabilities from regular ...
African Journals Online (AJOL)
The study investigated why teachers exclude children with intellectual disability (ID) from regular classrooms in Nigeria. Participants were 169 regular teachers randomly selected from Oyo and Ogun states. A questionnaire was used to collect data. Results revealed that 57.4% of regular teachers could not cope with children with ID ...
39 CFR 6.1 - Regular meetings, annual meeting.
2010-07-01
... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...
Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis
Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.
2007-01-01
Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…
5 CFR 551.421 - Regular working hours.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Regular working hours. 551.421 Section... Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...
20 CFR 226.35 - Deductions from regular annuity rate.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Deductions from regular annuity rate. 226.35... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...
20 CFR 226.34 - Divorced spouse regular annuity rate.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Divorced spouse regular annuity rate. 226.34... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.34 Divorced spouse regular annuity rate. The regular annuity rate of a divorced spouse is equal to...
20 CFR 226.14 - Employee regular annuity rate.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...
Cellular-based preemption system
Bachelder, Aaron D. (Inventor)
2011-01-01
A cellular-based preemption system that uses existing cellular infrastructure to transmit preemption related data to allow safe passage of emergency vehicles through one or more intersections. A cellular unit in an emergency vehicle is used to generate position reports that are transmitted to the one or more intersections during an emergency response. Based on this position data, the one or more intersections calculate an estimated time of arrival (ETA) of the emergency vehicle, and transmit preemption commands to traffic signals at the intersections based on the calculated ETA. Additional techniques may be used for refining the position reports, ETA calculations, and the like. Such techniques include, without limitation, statistical preemption, map-matching, dead-reckoning, augmented navigation, and/or preemption optimization techniques, all of which are described in further detail in the above-referenced patent applications.
Novel Materials for Cellular Nanosensors
DEFF Research Database (Denmark)
Sasso, Luigi
The monitoring of cellular behavior is useful for the advancement of biomedical diagnostics, drug development and the understanding of a cell as the main unit of the human body. Micro- and nanotechnology allow for the creation of functional devices that enhance the study of cellular dynamics...... modifications for electrochemical nanosensors for the detection of analytes released from cells. Two types of materials were investigated, each pertaining to one of the two aspects of such devices: peptide nanostructures were studied for the creation of cellular sensing substrates that mimic in vivo surfaces...... and that offer advantages of functionalization, and conducting polymers were used as electrochemical sensor surface modifications for increasing the sensitivity towards relevant analytes, with focus on the detection of dopamine released from cells via exocytosis. Vertical peptide nanowires were synthesized from...
Cellular membrane trafficking of mesoporous silica nanoparticles
Energy Technology Data Exchange (ETDEWEB)
Fang, I-Ju [Iowa State Univ., Ames, IA (United States)
2012-01-01
the specific organelle that mesoporous silica nanoparticles could approach via the identification of proteins harvested from the exocytosis process. Based on the study of the endo- and exocytosis behavior of mesoporous silica nanoparticle materials, we can design smarter drug delivery vehicles for cancer therapy that can be effectively controlled. The destination, uptake efficiency and cellular distribution of mesoporous silica nanoparticle materials can be made programmable. As a result, the release mechanism and release rate of drug delivery systems can be well controlled. Deep investigation of the endo- and exocytosis of mesoporous silica nanoparticle materials promotes the development of drug delivery applications.
Three-dimensional analysis of cellular microstructures by computer simulation
International Nuclear Information System (INIS)
Hanson, K.; Morris, J.W. Jr.
1977-06-01
For microstructures of the "cellular" type (isotropic growth from a distribution of nuclei which form simultaneously), it is possible to construct an efficient code which will completely analyze the microstructure in three dimensions. Such a computer code for creating and storing the connected graph was constructed
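Isotropic growth from simultaneously formed nuclei yields a Voronoi tessellation, so the connected graph of such a microstructure can be sketched with standard tools; the toy below is two-dimensional, whereas the paper treats the full 3-D case:

```python
import numpy as np
from scipy.spatial import Voronoi

# Nuclei that all form at the same instant; isotropic growth assigns each
# point of space to its nearest nucleus, i.e. a Voronoi cell.
rng = np.random.default_rng(0)
nuclei = rng.random((50, 2))
vor = Voronoi(nuclei)

# The "connected graph" of the microstructure: every Voronoi ridge
# separates exactly two cells, so ridge_points lists neighbouring pairs.
neighbors = {i: set() for i in range(len(nuclei))}
for i, j in vor.ridge_points:
    neighbors[i].add(j)
    neighbors[j].add(i)
mean_neighbors = np.mean([len(v) for v in neighbors.values()])
```

Storing the adjacency sets is all that is needed to answer grain-contiguity questions about the simulated microstructure.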
On Optimal Geographical Caching in Heterogeneous Cellular Networks
Serbetci, Berksan; Goseling, Jasper
2017-01-01
In this work we investigate optimal geographical caching in heterogeneous cellular networks where different types of base stations (BSs) have different cache capacities. Users request files from a content library according to a known probability distribution. The performance metric is the total hit
Global properties of cellular automata
International Nuclear Information System (INIS)
Jen, E.
1986-01-01
Cellular automata are discrete mathematical systems that generate diverse, often complicated, behavior using simple deterministic rules. Analysis of the local structure of these rules makes possible a description of the global properties of the associated automata. A class of cellular automata that generate infinitely many aperiodic temporal sequences is defined, as is the set of rules for which inverses exist. Necessary and sufficient conditions are derived characterizing the classes of "nearest-neighbor" rules for which arbitrary finite initial conditions (i) evolve to a homogeneous state; (ii) generate at least one constant temporal sequence
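A nearest-neighbor rule of the kind analyzed here can be simulated in a few lines; the sketch below evolves Wolfram's rule 90 (an additive rule) from a single seed:

```python
import numpy as np

def step(row, rule):
    """One synchronous update of an elementary (nearest-neighbor) CA with
    periodic boundaries; `rule` is Wolfram's 0-255 rule number."""
    table = [(rule >> i) & 1 for i in range(8)]   # lookup for each neighborhood
    left, right = np.roll(row, 1), np.roll(row, -1)
    idx = 4 * left + 2 * row + right
    return np.array([table[i] for i in idx], dtype=np.uint8)

# Rule 90 (each cell becomes the XOR of its two neighbors) grown from a
# single seed: its space-time diagram is a Sierpinski triangle.
row = np.zeros(65, dtype=np.uint8)
row[32] = 1
history = [row]
for _ in range(16):
    row = step(row, 90)
    history.append(row)
```

Because rule 90 is additive, the occupied cells at step t are exactly the binomial coefficients C(t, k) that are odd, which is easy to verify on `history`.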
Cellular structures with interconnected microchannels
Shaefer, Robert Shahram; Ghoniem, Nasr M.; Williams, Brian
2018-01-30
A method for fabricating a cellular tritium breeder component includes obtaining a reticulated carbon foam skeleton comprising a network of interconnected ligaments. The foam skeleton is then melt-infiltrated with a tritium breeder material, for example, lithium zirconate or lithium titanate. The foam skeleton is then removed to define a cellular breeder component having a network of interconnected tritium purge channels. In an embodiment the ligaments of the foam skeleton are enlarged by adding carbon using chemical vapor infiltration (CVI) prior to melt-infiltration. In an embodiment the foam skeleton is coated with a refractory material, for example, tungsten, prior to melt infiltration.
A Markov Process Inspired Cellular Automata Model of Road Traffic
Wang, Fa; Li, Li; Hu, Jianming; Ji, Yan; Yao, Danya; Zhang, Yi; Jin, Xuexiang; Su, Yuelong; Wei, Zheng
2008-01-01
To provide a more accurate description of the driving behaviors in vehicle queues, a cellular automata model named the Markov-Gap model is proposed in this paper. It views the variation of the gap between two consecutive vehicles as a Markov process whose stationary distribution corresponds to the observed distribution of practical gaps. The multiformity of this Markov process provides the model enough flexibility to describe various driving behaviors. Two examples are given to show how to specialize i...
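The model's premise, that gap transitions form a Markov process whose stationary distribution is matched to the observed gap distribution, can be illustrated with a small, uncalibrated transition matrix:

```python
import numpy as np

# A small gap-transition matrix: states are discretized headways (gaps)
# between consecutive vehicles. The entries are illustrative only, not
# calibrated to any observed traffic data.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

def stationary(P, n_iter=200):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(n_iter):
        pi = pi @ P
    return pi

pi = stationary(P)   # this is what the model matches to observed gaps
```

Calibration then amounts to choosing P so that this stationary vector reproduces the empirical gap histogram.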
The research on the failure regularity of GM counter tubes
International Nuclear Information System (INIS)
Li Jiyuan; Huai Guangli; Xie Bo; Zhang Hao
2002-01-01
The reliability of GM counter tubes should be described by the useful time before failure (life) and the failure rate during that life. A new method to study the failure regularity of GM counter tubes is advanced and adopted. The essential point of the method is that after the GM counter tubes of the instruments in use are tested, both the performance parameters and other collected information about the GM counter tubes and the instruments are recorded. A database is then created and a failure criterion is ascertained. The GM counter tubes are inspected to determine whether they have failed, and the failure mode is determined for those that have. GM counter tubes with the same years of use are grouped together to make up a subsample. According to the relevant information, the size of each subsample is restored to the number initially put into use. The number of failed tubes is then counted, and at the same time the distribution of failure modes is obtained. The parameters m, γ, t₀ of the Weibull distribution function are calculated by linear fitting; thus the mean life, failure rate and other characteristic values are obtained. Using this method, the useful life and failure rate are determined. The conclusion is that the useful life is 18-20 years and the failure rate is 5 × 10⁻⁶/h and 4 × 10⁻⁶/h respectively during the course
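The Weibull fit by linearization can be sketched as follows; for simplicity this two-parameter version (shape m, scale t₀) omits the location parameter γ used in the study, and the lifetimes are synthetic:

```python
import numpy as np

def weibull_fit(failure_times):
    """Two-parameter Weibull fit via the linearized least-squares method:
    ln(-ln(1 - F)) = m*ln(t) - m*ln(t0), so a straight-line fit on the
    transformed median ranks yields shape m and scale t0."""
    t = np.sort(np.asarray(failure_times, float))
    n = len(t)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank CDF estimate
    slope, intercept = np.polyfit(np.log(t), np.log(-np.log(1.0 - F)), 1)
    return slope, np.exp(-intercept / slope)       # shape m, scale t0

# Synthetic lifetimes drawn from a known Weibull law (shape 2, scale 10)
rng = np.random.default_rng(0)
m_hat, t0_hat = weibull_fit(rng.weibull(2.0, 500) * 10.0)
```

From the fitted parameters, characteristic values such as the mean life follow in closed form, e.g. t₀·Γ(1 + 1/m).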
MRI reconstruction with joint global regularization and transform learning.
Tanc, A Korhan; Eksioglu, Ender M
2016-10-01
Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for the Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms to the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to the algorithms which use either of the patchwise transform learning or global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.
Regular and platform switching: bone stress analysis varying implant type.
Gurgel-Juarez, Nália Cecília; de Almeida, Erika Oliveira; Rocha, Eduardo Passos; Freitas, Amílcar Chagas; Anchieta, Rodolfo Bruniera; de Vargas, Luis Carlos Merçon; Kina, Sidney; França, Fabiana Mantovani Gomes
2012-04-01
This study aimed to evaluate stress distribution on peri-implant bone simulating the influence of platform switching in external and internal hexagon implants using three-dimensional finite element analysis. Four mathematical models of a central incisor supported by an implant were created: External Regular model (ER) with 5.0 mm × 11.5 mm external hexagon implant and 5.0 mm abutment (0% abutment shifting), Internal Regular model (IR) with 4.5 mm × 11.5 mm internal hexagon implant and 4.5 mm abutment (0% abutment shifting), External Switching model (ES) with 5.0 mm × 11.5 mm external hexagon implant and 4.1 mm abutment (18% abutment shifting), and Internal Switching model (IS) with 4.5 mm × 11.5 mm internal hexagon implant and 3.8 mm abutment (15% abutment shifting). The models were created by SolidWorks software. The numerical analysis was performed using ANSYS Workbench. Oblique forces (100 N) were applied to the palatal surface of the central incisor. The maximum (σ(max)) and minimum (σ(min)) principal stress, equivalent von Mises stress (σ(vM)), and maximum principal elastic strain (ε(max)) values were evaluated for the cortical and trabecular bone. For cortical bone, the highest stress values (σ(max) and σ(vM)) (MPa) were observed in IR (87.4 and 82.3), followed by IS (83.3 and 72.4), ER (82 and 65.1), and ES (56.7 and 51.6). For ε(max), IR showed the highest stress (5.46e-003), followed by IS (5.23e-003), ER (5.22e-003), and ES (3.67e-003). For the trabecular bone, the highest stress values (σ(max)) (MPa) were observed in ER (12.5), followed by IS (12), ES (11.9), and IR (4.95). For σ(vM), the highest stress values (MPa) were observed in IS (9.65), followed by ER (9.3), ES (8.61), and IR (5.62). For ε(max), ER showed the highest stress (5.5e-003), followed by ES (5.43e-003), IS (3.75e-003), and IR (3.15e-003). The influence of platform switching was more evident for cortical bone than for trabecular bone, mainly for the external hexagon
Multimodal manifold-regularized transfer learning for MCI conversion prediction.
Cheng, Bo; Liu, Mingxia; Suk, Heung-Il; Shen, Dinggang; Zhang, Daoqiang
2015-12-01
As the early stage of Alzheimer's disease (AD), mild cognitive impairment (MCI) has high chance to convert to AD. Effective prediction of such conversion from MCI to AD is of great importance for early diagnosis of AD and also for evaluating AD risk pre-symptomatically. Unlike most previous methods that used only the samples from a target domain to train a classifier, in this paper, we propose a novel multimodal manifold-regularized transfer learning (M2TL) method that jointly utilizes samples from another domain (e.g., AD vs. normal controls (NC)) as well as unlabeled samples to boost the performance of the MCI conversion prediction. Specifically, the proposed M2TL method includes two key components. The first one is a kernel-based maximum mean discrepancy criterion, which helps eliminate the potential negative effect induced by the distributional difference between the auxiliary domain (i.e., AD and NC) and the target domain (i.e., MCI converters (MCI-C) and MCI non-converters (MCI-NC)). The second one is a semi-supervised multimodal manifold-regularized least squares classification method, where the target-domain samples, the auxiliary-domain samples, and the unlabeled samples can be jointly used for training our classifier. Furthermore, with the integration of a group sparsity constraint into our objective function, the proposed M2TL has a capability of selecting the informative samples to build a robust classifier. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database validate the effectiveness of the proposed method by significantly improving the classification accuracy of 80.1 % for MCI conversion prediction, and also outperforming the state-of-the-art methods.
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples for alleviating the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
Color correction optimization with hue regularization
Zhang, Heng; Liu, Huaping; Quan, Shuxue
2011-01-01
Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from memory. Generally agreed-upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors are well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction in a digital color pipeline is to transform the image data from a device-dependent color space to a target color space, usually through a color correction matrix which, in its most basic form, is optimized by linear regression between the two sets of data in the two color spaces so as to minimize Euclidean color error. Unfortunately, this method can result in objectionable distortions if the color error biases certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind, through hue regularization, and present experimental results.
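The basic least-squares step described above has a closed form. In this sketch the regularizer is approximated by softly pinning a set of protected memory-color patches to themselves; the paper's actual hue regularization constrains hue angle rather than full color values, so `protected` and `lam` are illustrative assumptions:

```python
import numpy as np

def color_matrix(src, tgt, protected=None, lam=0.0):
    # Least-squares 3x3 correction matrix M minimizing ||M @ src - tgt||^2,
    # with an optional soft constraint pinning `protected` memory-color
    # patches (3 x p) to themselves, weighted by `lam`.
    A = tgt @ src.T
    B = src @ src.T
    if protected is not None and lam > 0.0:
        A = A + lam * (protected @ protected.T)
        B = B + lam * (protected @ protected.T)
    return A @ np.linalg.inv(B)

# With no regularization, the matrix that generated the data is recovered:
rng = np.random.default_rng(1)
M_true = rng.normal(size=(3, 3))
src = rng.normal(size=(3, 20))
M = color_matrix(src, M_true @ src)
```

Raising `lam` trades overall Euclidean accuracy for fidelity of the protected colors, which is the trade-off the paper's hue regularization is designed to manage.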
Regularities and irregularities in order flow data
Theissen, Martin; Krause, Sebastian M.; Guhr, Thomas
2017-11-01
We identify and analyze statistical regularities and irregularities in the recent order flow of different NASDAQ stocks, focusing on the positions where orders are placed in the order book. This includes limit orders being placed outside of the spread, inside the spread and (effective) market orders. Based on the pairwise comparison of the order flow of different stocks, we perform a clustering of stocks into groups with similar behavior. This is useful to assess systemic aspects of stock price dynamics. We find that limit order placement inside the spread is strongly determined by the dynamics of the spread size. Most orders, however, arrive outside of the spread. While for some stocks order placement on or next to the quotes is dominating, deeper price levels are more important for other stocks. As market orders are usually adjusted to the quote volume, the impact of market orders depends on the order book structure, which we find to be quite diverse among the analyzed stocks as a result of the way limit order placement takes place.
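The placement categories used in the analysis (orders outside the spread, inside the spread, and marketable/effective market orders) can be made concrete with a small classifier of order price relative to the best quotes; the function name and category labels here are illustrative:

```python
def placement_class(price, best_bid, best_ask, side):
    # Classify a limit order by where its price lands relative to the
    # prevailing best quotes.
    if side == "buy":
        if price >= best_ask:
            return "marketable"          # crosses the spread
        if price > best_bid:
            return "inside_spread"
        return "at_or_behind_quote"      # on or outside the best bid
    if price <= best_bid:
        return "marketable"
    if price < best_ask:
        return "inside_spread"
    return "at_or_behind_quote"

print(placement_class(100.4, 100.0, 101.0, "buy"))  # -> inside_spread
```

Counting these categories per stock gives the placement distributions whose pairwise comparison underlies the clustering described above.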
Library search with regular reflectance IR spectra
International Nuclear Information System (INIS)
Staat, H.; Korte, E.H.; Lampen, P.
1989-01-01
In situ characterisation of coatings and other surface layers is generally favourable, and a prerequisite for precious items such as art objects. In infrared spectroscopy only reflection techniques are applicable here. However, for attenuated total reflection (ATR) it is difficult to obtain the necessary optical contact between the crystal and the sample when the latter is not perfectly plane or flexible. The measurement of diffuse reflectance demands a scattering sample, and usually the reflectance is very poor. Therefore in most cases one is left with regular reflectance. Such spectra consist of dispersion-like features instead of bands, impeding their interpretation in the way the analyst is used to. Furthermore, for computer searches in common spectral libraries compiled from transmittance or absorbance spectra, a transformation of the reflectance spectra is needed. The correct conversion is based on the Kramers-Kronig transformation. This somewhat time-consuming procedure can be sped up by using appropriate approximations. A coarser conversion may be obtained from the first derivative of the reflectance spectrum, which resembles the second derivative of a transmittance spectrum. The resulting distorted spectra can still be used successfully for searches in peak table libraries. Experiences with both transformations are presented. (author)
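The coarser conversion mentioned above is straightforward to sketch: differentiate the regular-reflectance spectrum once with respect to wavenumber (the function name is assumed for illustration):

```python
import numpy as np

def derivative_spectrum(wavenumber, reflectance):
    # First derivative of a regular-reflectance spectrum with respect to
    # wavenumber; it resembles the second derivative of a transmittance
    # spectrum and can feed a peak-table library search directly.
    return np.gradient(reflectance, wavenumber)

wn = np.linspace(400.0, 4000.0, 1800)            # wavenumbers in cm^-1
refl = 0.1 + 0.05 / (1.0 + ((wn - 1700.0) / 20.0) ** 2)  # toy dispersion-like band
deriv = derivative_spectrum(wn, refl)
```

The full Kramers-Kronig conversion would instead recover the absorption index from the reflectance phase, at considerably higher computational cost.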
Regularities of praseodymium oxide dissolution in acids
International Nuclear Information System (INIS)
Savin, V.D.; Elyutin, A.V.; Mikhajlova, N.P.; Eremenko, Z.V.; Opolchenova, N.L.
1989-01-01
The regularities of Pr2O3, Pr2O5 and Pr(OH)3 interaction with inorganic acids are studied. The pH of the solution and the oxidation-reduction potential, recorded at 20±1 deg C, are the working parameters of the study. It is found that the amount of oxide dissolved increases in the series of acids nitric, hydrochloric and sulfuric; for hydrochloric and sulfuric acid it also increases in the series of oxides Pr2O3, Pr2O5 and Pr(OH)3. It is noted that Pr2O5 has a high positive oxidation-reduction potential over the whole dissolution range. Pr(OH)3 shows a low positive redox potential during dissolution, while for Pr2O3 the redox potential is negative. Schemes of the dissolution processes, which do not agree with classical assumptions, are presented
Regular expressions compiler and some applications
International Nuclear Information System (INIS)
Saldana A, H.
1978-01-01
We deal with the high-level programming language of a Regular Expressions Compiler (REC). The first chapter is an introduction covering the history of REC's development and the problems related to its numerous applications. The syntactic and semantic rules as well as the language features are discussed just after the introduction. As example applications, one adaptation is given to solve numerical problems and another for data manipulation. The last chapter is an exposition of ideas and techniques for compiler construction. Examples of the adaptation to numerical problems show applications in education, vector analysis, quantum mechanics, physics, mathematics and other sciences. The rudiments of an operating system for a minicomputer exemplify the adaptation to symbolic data manipulation. REC is a programming language that could be applied to solve problems in almost any human activity. Handling of computer graphics, control equipment, research on languages, microprocessors and general research are some of the fields in which this programming language can be applied and developed. (author)
Sparsity-regularized HMAX for visual recognition.
Directory of Open Access Journals (Sweden)
Xiaolin Hu
Full Text Available About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models that explicitly address global structure of images in every layer, sparse HMAX addresses local to global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.
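The max-pooling step that introduces the higher-order regularities can be sketched as a non-overlapping spatial maximum over the previous layer's responses (the HMAX C-layer operation); the helper name is illustrative:

```python
import numpy as np

def max_pool(responses, size):
    # Non-overlapping spatial max pooling (the HMAX C-layer operation);
    # trailing rows/columns that do not fill a window are dropped.
    h, w = responses.shape
    h2, w2 = h // size, w // size
    r = responses[: h2 * size, : w2 * size]
    return r.reshape(h2, size, w2, size).max(axis=(1, 3))

pooled = max_pool(np.arange(16).reshape(4, 4), 2)
```

Because the maximum is a nonlinear function of its inputs, the pooled outputs carry statistical dependencies that linear methods such as SSC or ICA can then exploit at the next layer.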
Quantum implications of a scale invariant regularization
Ghilencea, D. M.
2018-04-01
We study scale invariance at the quantum level in a perturbative approach. For a scale-invariant classical theory, the scalar potential is computed at three-loop level while keeping this symmetry manifest. Spontaneous scale symmetry breaking is transmitted at the quantum level to the visible sector (of ϕ) by the associated Goldstone mode (dilaton σ), which enables a scale-invariant regularization and whose vacuum expectation value ⟨σ⟩ generates the subtraction scale (μ). While the hidden (σ) and visible (ϕ) sectors are classically decoupled in d = 4 due to an enhanced Poincaré symmetry, they interact through (a series of) evanescent couplings ∝ ɛ, dictated by the scale invariance of the action in d = 4 - 2ɛ. At the quantum level, these couplings generate new corrections to the potential, as scale-invariant nonpolynomial effective operators ϕ^(2n+4)/σ^(2n). These are comparable in size to "standard" loop corrections and are important for values of ϕ close to ⟨σ⟩. For n = 1, 2, the beta functions of their coefficients are computed at three loops. In the IR limit, dilaton fluctuations decouple, the effective operators are suppressed by large ⟨σ⟩, and the effective potential becomes that of a renormalizable theory with explicit scale symmetry breaking by the DR scheme (with μ = constant).
An improved cellular automaton method to model multispecies biofilms.
Tang, Youneng; Valocchi, Albert J
2013-10-01
Biomass-spreading rules used in previous cellular automaton methods to simulate multispecies biofilms introduced extensive mixing between different biomass species or resulted in spatially discontinuous biomass concentration and distribution; this caused results based on the cellular automaton methods to deviate from experimental results and from those of the more computationally intensive continuous method. To overcome these problems, we propose new biomass-spreading rules in this work: excess biomass spreads by pushing a line of grid cells that lie on the shortest path from the source grid cell to the destination grid cell, and the fractions of different biomass species in the grid cells on the path change due to the spreading. To evaluate the new rules, three two-dimensional simulation examples are used to compare the biomass distribution computed using the continuous method and three cellular automaton methods, one based on the new rules and the other two based on rules presented in two previous studies. The relationship between the biomass species is syntrophic in one example and competitive in the other two. Simulation results generated using the cellular automaton method based on the new rules agree much better with the continuous method than do results using the other two cellular automaton methods. The new biomass-spreading rules are no more complex to implement than the existing rules.
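The push rule can be illustrated in one dimension: along the shortest path from source to destination, each cell's content shifts one step toward the (empty) destination cell and the source receives the excess. This sketch omits the per-cell species-fraction bookkeeping that the paper's rule performs:

```python
def push_line(path_values, excess):
    # path_values[0] is the source cell, path_values[-1] the (empty)
    # destination cell; shift every cell one step toward the destination
    # and deposit the excess biomass in the source cell.
    return [excess] + path_values[:-1]

# Source holds 5 units, a neighbor holds 3, the destination is empty;
# pushing 2 units of excess shifts everything one cell along the path:
print(push_line([5, 3, 0], 2))  # -> [2, 5, 3]
```

Because cells are shifted rather than redistributed, total biomass along the path is conserved and neighboring species are not artificially mixed, which is the property the new rules are designed to preserve.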
Cellular uptake of metallated cobalamins
DEFF Research Database (Denmark)
Tran, Mai Thanh Quynh; Stürup, Stefan; Lambert, Ian Henry
2016-01-01
Cellular uptake of vitamin B12-cisplatin conjugates was estimated via detection of their metal constituents (Co, Pt, and Re) by inductively coupled plasma mass spectrometry (ICP-MS). Vitamin B12 (cyano-cob(iii)alamin) and aquo-cob(iii)alamin [Cbl-OH2](+), which differ in the β-axial ligands (CN...
Repaglinide at a cellular level
DEFF Research Database (Denmark)
Krogsgaard Thomsen, M; Bokvist, K; Høy, M
2002-01-01
To investigate the hormonal and cellular selectivity of the prandial glucose regulators, we have undertaken a series of experiments, in which we characterised the effects of repaglinide and nateglinide on ATP-sensitive potassium ion (KATP) channel activity, membrane potential and exocytosis in ra...