International Nuclear Information System (INIS)
Anzengruber, Stephan W; Hofmann, Bernd; Ramlau, Ronny
2013-01-01
The convergence rates results for ℓ1-regularization when the sparsity assumption is narrowly missed, presented by Burger et al (2013 Inverse Problems 29 025013), are based on a crucial condition requiring that all basis elements belong to the range of the adjoint of the forward operator. It has partly been conjectured that such a condition is very restrictive. In this context, we study sparsity-promoting variants of Tikhonov regularization for linear ill-posed problems with respect to an orthonormal basis in a separable Hilbert space, using ℓ1 and sublinear penalty terms. In particular, we show that the corresponding range condition is always satisfied for all basis elements if the problems are well-posed in a certain weaker topology and the basis elements are chosen appropriately in relation to an associated Gelfand triple. The Radon transform, Symm’s integral equation and linear integral operators of Volterra type are examples of such behaviour, which allows us to apply convergence rates results for non-sparse solutions, and we further extend these results to the case of non-convex ℓq-regularization with 0 < q < 1. (paper)
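In a finite-dimensional discretization, an ℓ1-penalized Tikhonov functional of the kind discussed above can be minimized by iterative soft-thresholding. The sketch below is a minimal illustration assuming a generic matrix A, data y and penalty weight alpha; it is not the paper's analysis machinery, which concerns infinite-dimensional operators and an orthonormal Hilbert-space basis.

```python
import numpy as np

def ista(A, y, alpha, n_iter=200):
    """Iterative soft-thresholding for min 0.5*||A x - y||^2 + alpha*||x||_1.

    Minimal sketch of l1-penalized Tikhonov regularization; the step size
    is set from the spectral norm of A so the iteration converges.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of 0.5*||A x - y||^2
        z = x - grad / L                   # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)  # soft threshold
    return x
```

When A is the identity, the iteration reduces to componentwise soft-thresholding of y, which makes the sparsifying effect of the ℓ1 penalty directly visible.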
Directory of Open Access Journals (Sweden)
F. G. Lovshenko
2014-01-01
Full Text Available Experimentally determined regularities and mechanisms of structure formation in mechanically alloyed compositions based on metals widely applied in mechanical engineering – iron, nickel, aluminum and copper – are given.
Directory of Open Access Journals (Sweden)
André da Silva Mello
2016-09-01
Full Text Available This paper aims at discussing the organization of Children's Education within the National Common Curricular Base (BNCC), focusing on the continuities and advances in relation to the preceding documents, and analyzing the presence of Physical Education in Children's Education based on the assumptions that guide the Base, in interface with research on pedagogical experiences in this field of knowledge. To do so, it carries out a documental-bibliographic analysis, using as sources the BNCC, the National Curricular Referential for Children's Education, the National Curricular Guidelines for Children's Education, and academic-scientific production from the Physical Education area that addresses Children's Education. In the analysis process, the work establishes categories that allow interlocution among the different sources used in this study. The data analyzed indicate that the assumptions present in the BNCC dialogue, though not explicitly, with the movements of the curricular component and with the Physical Education academic-scientific production regarding Children's Education.
Learning Errors by Radial Basis Function Neural Networks and Regularization Networks
Czech Academy of Sciences Publication Activity Database
Neruda, Roman; Vidnerová, Petra
2009-01-01
Roč. 1, č. 2 (2009), s. 49-57 ISSN 2005-4262 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : neural network * RBF networks * regularization * learning Subject RIV: IN - Informatics, Computer Science http://www.sersc.org/journals/IJGDC/vol2_no1/5.pdf
International Nuclear Information System (INIS)
Randic, M.; Wilkins, C.L.
1979-01-01
Selected molecular data on alkanes have been reexamined in a search for general regularities in isomeric variations. In contrast to the prevailing approaches concerned with fitting data by searching for optimal parameterization, the present work is primarily aimed at establishing trends, i.e., searching for relative magnitudes and their regularities among the isomers. Such an approach is complementary to curve-fitting or correlation-seeking procedures. It is particularly useful when there are incomplete data which allow trends to be recognized but no quantitative correlation to be established. One proceeds by first ordering structures. One way is to consider molecular graphs and enumerate paths of different length as the basic graph invariant. It can be shown that, for several thermodynamic molecular properties, the numbers of paths of length two (p2) and length three (p3) are critical. Hence, an ordering based on p2 and p3 indicates possible trends and behavior for many molecular properties, some of which relate to others and some of which do not. By considering a grid graph derived by attributing to each isomer the coordinates (p2, p3) and connecting points along the coordinate axes, one obtains a simple presentation useful for isomer structural interrelations. This skeletal frame is one upon which possible trends for different molecular properties may be conveniently represented. The significance of the results and their conceptual value are discussed. 16 figures, 3 tables
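The path counts that order the isomers above are easy to compute for hydrogen-suppressed alkane graphs. The sketch below counts p2 and p3 for a molecular graph given as an adjacency dict; for trees (which alkane carbon skeletons are), p2 is the sum over atoms of deg(deg − 1)/2 and p3 is the sum over bonds of (deg u − 1)(deg v − 1). The butane and isobutane graphs are illustrative examples, not data from the paper.

```python
def path_counts(adj):
    """Count paths of length two (p2) and three (p3) in an acyclic
    molecular graph given as {atom: set_of_neighbours}."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    # p2: every pair of distinct bonds sharing an atom forms a path of length 2
    p2 = sum(d * (d - 1) // 2 for d in deg.values())
    # p3 (trees): for each bond (u, v), pick one extra neighbour on each side
    seen = set()
    p3 = 0
    for u, nbrs in adj.items():
        for v in nbrs:
            if (v, u) in seen:
                continue
            seen.add((u, v))
            p3 += (deg[u] - 1) * (deg[v] - 1)
    return p2, p3

# n-butane: C1-C2-C3-C4 (hydrogen-suppressed)
butane = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
# isobutane: central carbon bonded to three methyl carbons
isobutane = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1}}
```

For the C4 isomers this reproduces the familiar ordering: n-butane has (p2, p3) = (2, 1) and isobutane has (3, 0).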
Seghouane, Abd-Krim; Iqbal, Asif
2017-09-01
Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis that account for this prior information. These algorithms differ from the existing ones in their dictionary update stage. The steps of this stage are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem in which temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Experiments on synthetic data and real fMRI data sets illustrating the performance of the proposed algorithms are provided.
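A left regularized rank-one approximation of the kind described can be sketched with alternating power-method-style updates, where the atom step solves a smoothed least-squares problem. The sketch below is a simplified stand-in, assuming a second-difference roughness penalty on the atom; it is not the paper's exact dictionary update stage (which uses regularization through basis expansion and sparse basis expansion).

```python
import numpy as np

def smooth_rank_one(E, lam=1.0, n_iter=50, seed=0):
    """Rank-one approximation E ~ d x^T with temporal smoothness on the atom d.

    Alternating power-method steps: the atom update solves
    (I + lam * D^T D) d = E x, where D is the second-difference operator,
    then d is normalized and the code x is refit.
    """
    T, _ = E.shape
    D = np.diff(np.eye(T), n=2, axis=0)    # second-difference (roughness) operator
    M = np.eye(T) + lam * D.T @ D          # smoothing system matrix
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(E.shape[1])
    for _ in range(n_iter):
        d = np.linalg.solve(M, E @ x)      # smoothed atom update
        d /= np.linalg.norm(d)             # keep the atom at unit norm
        x = E.T @ d                        # code (loading) update
    return d, x
```

On a rank-one matrix built from a smooth time course, the iteration recovers the generating atom up to sign, since a smooth atom is barely penalized by the roughness term.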
Preventive maintenance basis: Volume 31 -- Relays -- timing. Final report
International Nuclear Information System (INIS)
Worledge, D.; Hinchcliffe, G.
1998-07-01
US nuclear power plants are implementing preventive maintenance (PM) tasks with little documented basis beyond fundamental vendor information to support the tasks or their intervals. The Preventive Maintenance Basis project provides utilities with the technical basis for PM tasks and task intervals associated with 40 specific components such as valves, electric motors, pumps, and HVAC equipment. This document provides a program of preventive maintenance tasks suitable for application to timing relays. The PM tasks that are recommended provide a cost-effective way to intercept the causes and mechanisms that lead to degradation and failure. They can be used, in conjunction with material from other sources, to develop a complete PM program or to improve an existing program.
Preventive maintenance basis: Volume 30 -- Relays -- control. Final report
International Nuclear Information System (INIS)
Worledge, D.; Hinchcliffe, G.
1998-07-01
US nuclear power plants are implementing preventive maintenance (PM) tasks with little documented basis beyond fundamental vendor information to support the tasks or their intervals. The Preventive Maintenance Basis project provides utilities with the technical basis for PM tasks and task intervals associated with 40 specific components such as valves, electric motors, pumps, and HVAC equipment. This document provides a program of preventive maintenance tasks suitable for application to control relays. The PM tasks that are recommended provide a cost-effective way to intercept the causes and mechanisms that lead to degradation and failure. They can be used, in conjunction with material from other sources, to develop a complete PM program or to improve an existing program.
Briner, Alexandra E; Barrangou, Rodolphe
2014-02-01
Clustered regularly interspaced short palindromic repeats (CRISPR) in combination with associated sequences (cas) constitute the CRISPR-Cas immune system, which takes up DNA from invasive genetic elements as novel "spacers" that provide a genetic record of immunization events. We investigated the potential of CRISPR-based genotyping of Lactobacillus buchneri, a species relevant for commercial silage, bioethanol, and vegetable fermentations. Upon investigating the occurrence and diversity of CRISPR-Cas systems in Lactobacillus buchneri genomes, we observed a ubiquitous occurrence of CRISPR arrays containing a 36-nucleotide (nt) type II-A CRISPR locus adjacent to four cas genes, including the universal cas1 and cas2 genes and the type II signature gene cas9. Comparative analysis of CRISPR spacer content in 26 L. buchneri pickle fermentation isolates associated with spoilage revealed 10 unique locus genotypes that contained between 9 and 29 variable spacers. We observed a set of conserved spacers at the ancestral end, reflecting a common origin, as well as leader-end polymorphisms, reflecting recent divergence. Some of these spacers showed perfect identity with phage sequences, and many spacers showed homology to Lactobacillus plasmid sequences. Following a comparative analysis of sequences immediately flanking protospacers that matched CRISPR spacers, we identified a novel putative protospacer-adjacent motif (PAM), 5'-AAAA-3'. Overall, these findings suggest that type II-A CRISPR-Cas systems are valuable for genotyping of L. buchneri.
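CRISPR-based genotyping of the kind described starts by splitting each array into its spacers at the direct repeats. The sketch below assumes an exact repeat match; real loci show repeat degeneracy, so practical tools use fuzzy matching. Both the 5-nt repeat and the array sequence are made up for illustration and are much shorter than the 36-nt repeats of the L. buchneri loci.

```python
def extract_spacers(array_seq, repeat):
    """Split a CRISPR array into spacers by cutting at each occurrence
    of the (assumed identical) direct repeat."""
    parts = array_seq.split(repeat)
    # drop leader/trailer fragments outside the first and last repeat
    return [p for p in parts[1:-1] if p]

repeat = "GTTTT"  # hypothetical 5-nt repeat, illustration only
array_seq = "AAA" + repeat + "ACGTACGT" + repeat + "TTGGCCAA" + repeat + "CC"
```

Comparing the resulting spacer lists between isolates (shared ancestral-end spacers, divergent leader-end spacers) is what yields the locus genotypes the abstract describes.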
Final disposal of spent nuclear fuel - basis for site selection
International Nuclear Information System (INIS)
Anttila, P.
1995-05-01
International organizations, e.g. the IAEA, have published several recommendations and guides for the safe disposal of radioactive waste. There are three major groups of issues affecting the site selection process, i.e. geological, environmental and socioeconomic. The first step of the site selection process is an inventory of potential host rock formations. After that, potential study areas are screened to identify sites for detailed investigations, with regard to geological conditions and overall suitability for safe disposal. This kind of stepwise site selection procedure has been used in Finland and in Sweden, and a similar approach has been proposed in Canada. In accordance with the amendment to the Nuclear Energy Act, which entered into force at the beginning of 1995, Imatran Voima Oy has to make preparations for the final disposal of spent fuel in the Finnish bedrock. In relation to the possible site selection, the following geological factors, as internationally recommended and used in the Nordic countries, should be taken into account: topography, stability of bedrock, brokenness and fracturing of bedrock, size of bedrock blocks, rock type, predictability and natural resources. The bedrock of the Loviisa NPP site is a part of the Vyborg rapakivi massif. As a whole, the rapakivi granite area forms a potential target area, although other rock types or areas cannot be excluded from possible site selection studies. (25 refs., 7 figs.)
SENTINEL trademark technical basis report for Limerick. Final report
International Nuclear Information System (INIS)
Burns, E.T.; Lee, L.K.; Mitman, J.T.; Vanover, D.E.; Wilson, D.K.
1997-12-01
PECO Energy, in cooperation with the Electric Power Research Institute (EPRI), installed the SENTINEL trademark software at its Limerick Generating Station. This software incorporates models of the safety and support systems which are used to display the defense in depth present in the plant and a quantitative assessment of the plant risks during proposed on-line maintenance. During the past year, PECO Energy personnel have used this display to evaluate the safety of proposed on-line maintenance schedules. The report describes the motivation for and the development of the SENTINEL software. It describes the generation of Safety Function Assessment Trees and Plant Transient Assessment Trees and their use in evaluating the level of defense-in-depth of key plant safety functions and the susceptibility of the plant to critical transient events. Their results are displayed by color indicators ranging from green, through yellow and orange, to red to show increasingly hazardous conditions. The report describes the use of the Limerick Probabilistic Safety Assessment within the SENTINEL code to calculate an instantaneous core damage frequency and the criteria by which this frequency is translated to a color indicator. Finally, the report describes the Performance Criteria Assessment, which tracks and trends system/train unavailability to document conformance to the requirements of the Maintenance Rule.
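Translating an instantaneous core damage frequency into a green/yellow/orange/red indicator amounts to a simple threshold mapping. The sketch below uses placeholder thresholds; the actual SENTINEL criteria are plant-specific and are not reproduced in the report summary above.

```python
def risk_color(cdf_per_year, thresholds=(1e-5, 1e-4, 1e-3)):
    """Map an instantaneous core damage frequency (per reactor-year) to a
    qualitative color band, green through red.

    The threshold values are illustrative placeholders, not the actual
    SENTINEL acceptance criteria.
    """
    green_max, yellow_max, orange_max = thresholds
    if cdf_per_year < green_max:
        return "green"
    if cdf_per_year < yellow_max:
        return "yellow"
    if cdf_per_year < orange_max:
        return "orange"
    return "red"
```

A schedule evaluation would recompute the frequency as equipment is taken out of service and re-run this mapping, flagging any maintenance window that pushes the indicator past yellow.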
Loris, Ignace; Simons, Frederik J.; Daubechies, Ingrid; Nolet, Guust; Fornasier, Massimo; Vetter, Philip; Judd, Stephen; Voronin, Sergey; Vonesch, Cédric; Charléty, Jean
2010-05-01
Global seismic wavespeed models are routinely parameterized in terms of spherical harmonics, networks of tetrahedral nodes, rectangular voxels, or spherical splines. Up to now, Earth model parametrizations by wavelets on the three-dimensional ball remain uncommon. Here we propose such a procedure with the following three goals in mind: (1) The multiresolution character of a wavelet basis allows the models to be represented with an effective spatial resolution that varies as a function of position within the Earth. (2) This property can be used to great advantage in the regularization of seismic inversion schemes by seeking the most sparse solution vector, in wavelet space, through iterative minimization of a combination of the ℓ2 norm (to fit the data) and the ℓ1 norm (to promote sparsity in wavelet space). (3) With the continuing increase in high-quality seismic data, our focus is also on numerical efficiency and the ability to use parallel computing in reconstructing the model. In this presentation we propose a new wavelet basis to take advantage of these three properties. To form the numerical grid we begin with a surface tessellation known as the 'cubed sphere', a construction popular in fluid dynamics and computational seismology, coupled with a semi-regular radial subdivision that honors the major seismic discontinuities between the core-mantle boundary and the surface. This mapping first divides the volume of the mantle into six portions. In each 'chunk', two angular and one radial variable are used for parametrization. In the new variables, standard 'cartesian' algorithms can more easily be used to perform the wavelet transform (or other common transforms). Edges between chunks are handled by special boundary filters. We highlight the benefits of this construction and use it to analyze the information present in several published seismic compressional-wavespeed models of the mantle, paying special attention to the statistics of wavelet and scaling coefficients.
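Within each chunk, a standard 'cartesian' wavelet transform can be applied along the separate variables. The sketch below shows a one-dimensional multilevel orthonormal Haar transform as a minimal stand-in for such an algorithm; the authors' actual basis construction, boundary filters and cubed-sphere mapping are not reproduced here.

```python
import numpy as np

def haar_forward(x):
    """Multilevel orthonormal Haar transform of a 1-D signal whose length
    is a power of two: scaling (average) coefficients move to the front,
    wavelet (detail) coefficients to the back, level by level."""
    out = np.asarray(x, dtype=float).copy()
    n = out.size
    while n > 1:
        a = (out[:n:2] + out[1:n:2]) / np.sqrt(2)  # scaling coefficients
        d = (out[:n:2] - out[1:n:2]) / np.sqrt(2)  # wavelet coefficients
        out[:n // 2] = a
        out[n // 2:n] = d
        n //= 2
    return out
```

Because the transform is orthonormal, it preserves the ℓ2 norm, and smooth (here, constant) signals compress to a few nonzero coefficients, which is exactly the sparsity that the ℓ1 penalty of goal (2) exploits.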
Ukrainian National System of MC&A Training on Regular Basis at the George Kuzmych Training Center
International Nuclear Information System (INIS)
Kyryshchuk, V.; Gavrilyuk, V.; Drapey, S.; Romanova, O.; Levina, E.; Proskurin, D.; Gavrilyuk-Burakova, A.; Parkhomenko, V.; Van Dassen, L.; Delalic, Z.
2015-01-01
The George Kuzmych Training Center (GKTC) was created at the Kyiv Institute for Nuclear Research in 1998 as a result of collaborative efforts between the United States and Ukraine. Later the European Commission (EC) and Sweden joined the USA in supporting the MC&A aspects of the GKTC's activity. The GKTC was designated by the Ukrainian Government to provide MPC&A training and methodological assistance to nuclear facilities and nuclear specialists. In order to increase the efficiency of the State MC&A system, a substantial number of new regulations, norms and rules was developed, demanding regular and more intensive training of MC&A experts from the Regulatory Body of Ukraine and all nuclear facilities. For this purpose, ten training courses were developed by the GKTC under the EC contract, taking into account both the specifics of Ukrainian nuclear facilities and the expertise level of their personnel. Along with the NDA training laboratory created in 2003 with US DOE financial support and methodological assistance, a new surveillance and containment laboratory was created under the EC contract, likewise with US DOE financial support. Moreover, under the EC contract the laboratory was equipped with state-of-the-art, advanced means of surveillance and containment, further strengthening the GKTC's training opportunities. As a result, MC&A experts from all nuclear facilities and the Regulatory Body of Ukraine can regularly be trained on practically all MC&A issues. This paper briefly describes the practical efforts applied to improve Ukrainian MC&A systems at both the State and facility levels, the real results achieved on the way to developing the National System for regular MC&A training at the GKTC, the problems encountered and their solutions, and comments, suggestions and recommendations for future activity to promote and improve nuclear security culture in Ukraine. (author)
Regular Recycling of Wood Ash to Prevent Waste Production (RecAsh). Technical Final Report
Energy Technology Data Exchange (ETDEWEB)
Andersson, Lars E-mail: lars.t.andersson@skogsstyreslen.se
2007-03-15
At present, the extraction of harvest residues is predicted to increase in Sweden and Finland. As an effect of the intensified harvesting, the export of nutrients and acid-buffering substances from the growth site is also increased. Wood ash could be used to compensate forest soils for such losses. Today, most wood-fuel ash is deposited in landfills. If the wood ash is recycled, wood energy is produced without any significant waste production. Ash recycling would therefore contribute to decreasing the production of waste, and to maintaining the chemical quality of forest waters and the biological productivity of forest soils in the long term. The project has developed, analysed and demonstrated two regular ash-recycling systems. It has also distributed the knowledge gathered about motives for ash recycling, as well as technical and administrative solutions, through a range of media (handbooks, workshops, field demonstrations, reports, a web page and information videos). Hopefully, the project will contribute to decreasing waste problems related to bio-energy production in the EU at large. The project has been organised as a separate structure at the beneficiary and divided into four geographically defined subprojects, one in Finland and three in Sweden (Central Sweden, Northern Sweden, and South-western Sweden). The work in each subproject has been led by a subproject leader. Each subproject has organised a regional reference group. A project steering committee has been established consisting of senior officials from all concerned partners. The project had nine main tasks with the following main expected deliverables and output: 1. Development of two complete full-scale ash-recycling systems; 2. Production of handbooks of the ash recycling system; 3. Ash classification study to support national actions for recommendations; 4. Organise regional demonstrations of various technical options for ash treatment and spreading; 5. Organise national seminars and demonstrations of
Medicare program; clarification of Medicare's accrual basis of accounting policy--HCFA. Final rule.
1995-06-27
This final rule revises the Medicare regulations to clarify the concept of "accrual basis of accounting" to indicate that expenses must be incurred by a provider of health care services before Medicare will pay its share of those expenses. This rule does not signify a change in policy but, rather, incorporates into the regulations Medicare's longstanding policy regarding the circumstances under which we recognize, for the purposes of program payment, a provider's claim for costs for which it has not actually expended funds during the current cost reporting period.
Grid fault and design-basis for wind turbines - Final report
DEFF Research Database (Denmark)
Hansen, Anca Daniela; Cutululis, Nicolaos Antonio; Markou, Helen
This is the final report of a Danish research project “Grid fault and design-basis for wind turbines”. The objective of this project has been to assess and analyze the consequences of the new grid connection requirements for the fatigue and ultimate structural loads of wind turbines. The fulfillment of the grid connection requirements poses challenges for the design of both the electrical system and the mechanical structure of wind turbines. The development of wind turbine models and novel control strategies to fulfill the TSO’s requirements is of vital importance in this design. Dynamic models and fault ride-through control strategies have been developed, and analyses of fatigue and ultimate structural loads have been performed and compared for two cases, i.e. one when the turbine is immediately disconnected from the grid when a grid fault occurs and one when the turbine is equipped with a fault ride-through controller and therefore is able to remain connected to the grid during the grid fault.
Grid fault and design-basis for wind turbines. Final report
Energy Technology Data Exchange (ETDEWEB)
Hansen, A.D.; Cutululis, N.A.; Markou, H.; Soerensen, Poul; Iov, F.
2010-01-15
This is the final report of a Danish research project 'Grid fault and design-basis for wind turbines'. The objective of this project has been to assess and analyze the consequences of the new grid connection requirements for the fatigue and ultimate structural loads of wind turbines. The fulfillment of the grid connection requirements poses challenges for the design of both the electrical system and the mechanical structure of wind turbines. The development of wind turbine models and novel control strategies to fulfill the TSO's requirements is of vital importance in this design. Dynamic models and different fault ride-through control strategies have been developed and assessed in this project for three different wind turbine concepts (active stall wind turbine, variable speed doubly-fed induction generator wind turbine, variable speed multipole permanent magnet wind turbine). A computer approach for the quantification of the wind turbine structural loads caused by the fault ride-through grid requirement has been proposed and exemplified for the case of an active stall wind turbine. This approach relies on the combination of knowledge from complementary simulation tools, which have expertise in different specialized design areas for wind turbines. In order to quantify the impact of the grid faults and grid requirements fulfillment on wind turbine structural loads and thus on their lifetime, a rainflow and a statistical analysis for fatigue and ultimate structural loads, respectively, have been performed and compared for two cases, i.e. one when the turbine is immediately disconnected from the grid when a grid fault occurs and one when the turbine is equipped with a fault ride-through controller and therefore is able to remain connected to the grid during the grid fault. Different storm control strategies, that enable variable speed wind turbines to produce power at wind speeds higher than 25 m/s and up to 50 m/s without substantially increasing
Preventive maintenance basis: Volume 15 -- Rotary screw air compressors. Final report
International Nuclear Information System (INIS)
Worledge, D.; Hinchcliffe, G.
1997-07-01
US nuclear plants are implementing preventive maintenance (PM) tasks with little documented basis beyond fundamental vendor information to support the tasks or their intervals. The Preventive Maintenance Basis project provides utilities with the technical basis for PM tasks and task intervals associated with 40 specific components such as valves, electric motors, pumps, and HVAC equipment. This report provides an overview of the PM Basis project and describes use of the PM Basis database. Volume 15 of the report provides a program of PM tasks suitable for application to rotary screw air compressors in nuclear power plants. The PM tasks that are recommended provide a cost-effective way to intercept the causes and mechanisms that lead to degradation and failure. They can be used, in conjunction with material from other sources, to develop a complete PM program or to improve an existing program. Users of this information will be utility managers, supervisors, craft technicians, and training instructors responsible for developing, optimizing, or fine-tuning PM programs.
Preventive maintenance basis: Volume 1 -- Air-operated valves. Final report
International Nuclear Information System (INIS)
Worledge, D.; Hinchcliffe, G.
1997-07-01
US nuclear plants are implementing preventive maintenance (PM) tasks with little documented basis beyond fundamental vendor information to support the tasks or their intervals. The Preventive Maintenance Basis project provides utilities with the technical basis for PM tasks and task intervals associated with 40 specific components such as valves, electric motors, pumps, and HVAC equipment. This report provides an overview of the PM Basis project and describes use of the PM Basis database. This document provides a program of PM tasks suitable for application to air-operated valves (AOVs) in nuclear power plants. The PM tasks that are recommended provide a cost-effective way to intercept the causes and mechanisms that lead to degradation and failure. They can be used, in conjunction with material from other sources, to develop a complete PM program or to improve an existing program. Users of this information will be utility managers, supervisors, craft technicians, and training instructors responsible for developing, optimizing, or fine-tuning PM programs.
International Nuclear Information System (INIS)
Worledge, D.; Hinchcliffe, G.
1997-07-01
US nuclear plants are implementing preventive maintenance (PM) tasks with little documented basis beyond fundamental vendor information to support the tasks or their intervals. The Preventive Maintenance Basis project provides utilities with the technical basis for PM tasks and task intervals associated with 40 specific components such as valves, electric motors, pumps, and HVAC equipment. This report provides an overview of the PM Basis project and describes use of the PM Basis database. Volume 16 of the report provides a program of PM tasks suitable for application to power-operated relief valves (PORVs) that are solenoid actuated. The PM tasks that are recommended provide a cost-effective way to intercept the causes and mechanisms that lead to degradation and failure. They can be used, in conjunction with material from other sources, to develop a complete PM program or to improve an existing program. Users of this information will be utility managers, supervisors, craft technicians, and training instructors responsible for developing, optimizing, or fine-tuning PM programs.
Preventive maintenance basis: Volume 21 -- HVAC, air handling equipment. Final report
International Nuclear Information System (INIS)
Worledge, D.; Hinchcliffe, G.
1997-12-01
US nuclear plants are implementing preventive maintenance (PM) tasks with little documented basis beyond fundamental vendor information to support the tasks or their intervals. The Preventive Maintenance Basis project provides utilities with the technical basis for PM tasks and task intervals associated with 40 specific components such as valves, electric motors, pumps, and HVAC equipment. This report provides an overview of the PM Basis project and describes use of the PM Basis database. Volume 21 of the report provides a program of PM tasks suitable for application to HVAC air-handling equipment. The PM tasks that are recommended provide a cost-effective way to intercept the causes and mechanisms that lead to degradation and failure. They can be used, in conjunction with material from other sources, to develop a complete PM program or to improve an existing program. Users of this information will be utility managers, supervisors, craft technicians, and training instructors responsible for developing, optimizing, or fine-tuning PM programs.
Preventive maintenance basis: Volume 37 -- Main turbine EHC hydraulics. Final report
International Nuclear Information System (INIS)
Worledge, D.; Hinchcliffe, G.
1998-11-01
US nuclear power plants are implementing preventive maintenance (PM) tasks with little documented basis beyond fundamental vendor information to support the tasks or their intervals. The Preventive Maintenance Basis project provides utilities with the technical basis for PM tasks and task intervals associated with 40 specific components such as valves, electric motors, pumps, and HVAC equipment. This document provides a program of preventive maintenance tasks suitable for application to the main turbine EHC hydraulic fluid and associated components. The PM tasks that are recommended provide a cost-effective way to intercept the causes and mechanisms that lead to degradation and failure. They can be used, in conjunction with material from other sources, to develop a complete PM program or to improve an existing program.
Preventive maintenance basis: Volume 19 -- HVAC -- chillers and compressors. Final report
International Nuclear Information System (INIS)
Worledge, D.; Hinchcliffe, G.
1997-12-01
US nuclear power plants are implementing preventive maintenance (PM) tasks with little documented basis beyond fundamental vendor information to support the tasks or their intervals. The Preventive Maintenance Basis project provides utilities with the technical basis for PM tasks and task intervals associated with 40 specific components such as valves, electric motors, pumps, and HVAC equipment. This document provides a program of preventive maintenance tasks suitable for application to HVAC chillers and compressors. The PM tasks that are recommended provide a cost-effective way to intercept the causes and mechanisms that lead to degradation and failure. They can be used, in conjunction with material from other sources, to develop a complete PM program or to improve an existing program.
Energy Technology Data Exchange (ETDEWEB)
Tschaplinski, T.J.; Tuskan, G.A. [Oak Ridge National Lab., TN (United States); Wierman, C. [Boise Cascade Corp., Wallula, WA (United States)
1997-04-01
The purpose of this cooperative effort was to assess the use of osmotically active compounds as molecular selection criteria for drought tolerance in Populus in a large-scale field trial. It is known that some plant species, and individuals within a plant species, can tolerate increasing stress associated with reduced moisture availability by accumulating solutes. The biochemical matrix of such metabolites varies among species and among individuals. The ability of Populus clones to tolerate drought has equal value to other fiber producers, i.e., the wood products industry, where irrigation is used in combination with other cultural treatments to obtain high dry weight yields. The research initially involved an assessment of drought stress under field conditions and characterization of changes in osmotic constitution among the seven clones across the six moisture levels. The near-term goal was to provide a mechanistic basis for clonal differences in productivity under various irrigation treatments over time.
Technical basis for the ITER final design report, cost review and safety analysis (FDR)
Energy Technology Data Exchange (ETDEWEB)
NONE
1998-12-01
The ITER final design report, cost review and safety analysis (FDR) is the 4th major milestone, representing the progress made in the ITER Engineering Design Activities. With the approval of the Detailed Design Report (DDR), the design work was concentrated on the requirements of operation, with only relatively minor changes to design concepts of major components. The FDR is the culmination of almost 6 years of collaborative design and supporting technical work by the ITER Joint Central Team and Home Teams under the terms of the ITER EDA Agreement. Refs, figs, tabs
Technical basis for the ITER final design report, cost review and safety analysis (FDR)
International Nuclear Information System (INIS)
1998-01-01
The ITER final design report, cost review and safety analysis (FDR) is the 4th major milestone, representing the progress made in the ITER Engineering Design Activities. With the approval of the Detailed Design Report (DDR), the design work was concentrated on the requirements of operation, with only relatively minor changes to design concepts of major components. The FDR is the culmination of almost 6 years of collaborative design and supporting technical work by the ITER Joint Central Team and Home Teams under the terms of the ITER EDA Agreement
Green Train. Basis for a Scandinavian high-speed train concept. Final report, Pt. A
Energy Technology Data Exchange (ETDEWEB)
Froeidh, Oskar
2012-11-01
The Green Train (in Swedish 'Groena Taaget') is a high-speed train concept that is economical, environmentally friendly and attractive to travellers. It is suited to specific Nordic conditions with a harsh winter climate, often varying demand and mixed passenger and freight operations on non-perfect track. The main proposal is a train for speeds up to 250 km/h equipped with carbody tilt for short travelling times on electrified mainlines. The concept is intended to be a flexible platform for long-distance and fast regional passenger trains, interoperable in Scandinavia, i.e. Denmark, Norway and Sweden. The Groena Taaget programme delivers a collection of ideas, proposals and technical solutions for rail operators, infrastructure managers and industry. This is part A of the final report, dealing with market, economy and service aspects, with an emphasis on the areas where research has been done within the Groena Taaget research and development programme. Passenger valuations and economy in train traffic exposed to competition are controlling factors in the design of the train concept. One important measure to achieve better economy in the train traffic with 15% lower total costs and the possibility to reduce fares is to use wide-bodied trains that can accommodate more seats with good comfort. Travel on some studied routes in Sweden may increase by 30% compared to today's express trains through shorter travelling times, lower fares and more direct connections, which are possible with shorter, flexible trainsets. Groena Taaget will be designed to give good punctuality even during peak load periods. Doors, interior design, luggage handling and vestibules with lifts for disabled travellers must be dimensioned for full trains. A well-considered design reduces dwell times and delays. Capacity utilisation on the lines increases with greater speed differences between express trains and slower trains in mixed traffic. Punctual stops and skip-stop operation
Ion Fast Ignition-Establishing a Scientific Basis for Inertial Fusion Energy --- Final Report
Energy Technology Data Exchange (ETDEWEB)
Stephens, Richard Burnite [General Atomics; Foord, Mark N. [Lawrence Livermore National Laboratory; Wei, Mingsheng [General Atomics; Beg, Farhat N. [University of California, San Diego; Schumacher, Douglass W. [The Ohio State University
2013-10-31
The Fast Ignition (FI) Concept for Inertial Confinement Fusion (ICF) has the potential to provide a significant advance in the technical attractiveness of Inertial Fusion Energy reactors. FI differs from conventional 'central hot spot' (CHS) target ignition by decoupling compression from heating: using a laser (or heavy ion beam or Z pinch) drive pulse (tens of nanoseconds) to create a dense fuel and a second, much shorter (~10 picoseconds) high intensity pulse to ignite a small volume within the dense fuel. The compressed fuel is opaque to laser light. The ignition laser energy must be converted to a jet of energetic charged particles to deposit energy in the dense fuel. The original concept called for a spray of laser-generated hot electrons to deliver the energy; lack of ability to focus the electrons put great weight on minimizing the electron path. An alternative concept, proton-ignited FI, used those electrons as intermediaries to create a jet of protons that could be focused to the ignition spot from a more convenient distance. Our program focused on the generation and directing of the proton jet, and its transport toward the fuel, none of which were well understood at the onset of our program. We have developed new experimental platforms, diagnostic packages, computer modeling analyses, and taken advantage of the increasing energy available at laser facilities to create a self-consistent understanding of the fundamental physics underlying these issues. Our strategy was to examine the new physics emerging as we added the complexity necessary to use proton beams in an inertial fusion energy (IFE) application. From the starting point of a proton beam accelerated from a flat, isolated foil, we 1) curved it to focus the beam, 2) attached the foil to a superstructure, 3) added a side sheath to protect it from the surrounding plasma, and finally 4) studied the proton beam behavior as it passed through a protective end cap into plasma. We built up, as we proceeded
1999-09-27
Medicare policy provides that payroll taxes that a provider becomes obligated to remit to governmental agencies are included in allowable costs only in the cost reporting period in which payment (upon which the payroll taxes are based) is actually made to an employee. Therefore, for payroll accrued in 1 year but not paid until the next year, the associated payroll taxes are not an allowable cost until the next year. This final rule provides for an exception when payment would be made to the employee in the current year but for the fact that the regularly scheduled payment date is after the end of the year. In that case, the rule requires allowance in the current year of accrued taxes on payroll that is accrued through the end of the year but not paid until the beginning of the next year, thus allowing accrued taxes on end-of-the-year payroll in the same year that the accrual of the payroll itself is allowed. The effect of this rule is not on the allowability of cost but rather only on the timing of payment; that is, the cost of payroll taxes on end-of-the-year payroll is allowable in the current period rather than in the following period.
International Nuclear Information System (INIS)
1987-12-01
The National Institute of Radiological Sciences, Japan, has undertaken a special study of ''biological effects of tritium as a basis of research and development in nuclear fusion'' over a 5-year period from April 1981 through March 1986. This is a final report, covering incorporation and metabolism of tritium, physical, chemical, and cellular effects of tritium, tritium damage to the mammalian tissue, and human exposure to tritium. The report is organized into five chapters, including ''Study of incorporation of tritium into the living body and its in vivo behavior''; ''Physical and chemical studies for the determination of relative biological effectiveness''; ''Analytical study on biological effects of tritium in cultured mammalian cells''; ''Study of tritium effects on the mammalian tissue, germ cells, and cell transformation''; and ''Changes in the hemopoietic stem cells and lymphocyte subsets in humans after exposure to some internal emitters''. (Namekawa, K.)
International Nuclear Information System (INIS)
Bauer, C.
1982-01-01
The dissertation comprises two separate parts. The first part presents the basic conditions and concepts of the process leading to the development of a waste form, such as: origin, composition and characteristics of the high-level radioactive waste; evaluation of the methods available for the final disposal of radioactive waste, especially the disposal in a geological formation, including the resulting consequences for the conditions of state in the surroundings of the waste package; essential options for the conception of a waste form and presentation of the waste forms developed and examined on an international level up to now. The second part describes the production of a waste form on TiO 2 basis, in which calcined radioactive waste particles in the submillimeter range are embedded in a rutile matrix. This waste form is produced by uniaxial pressure sintering in the temperature range of 1223 K to 1423 K and pressures between 5 MPa and 20 MPa. Microstructure, mechanical properties and leaching rates of the waste form are presented. Moreover, a method is explained allowing compacting of the rutile matrix and also integration of a waste-free overpack of titanium or TiO 2 into the waste form. (orig.) [de
International Nuclear Information System (INIS)
2011-02-01
SKB will submit applications for permits and admissibility under the Environmental Act and under the Nuclear Activities Act to construct and operate a disposal facility for spent nuclear fuel at Forsmark. In the final repository the spent nuclear fuel from Swedish nuclear power plants is placed in order to protect human health and the environment against harmful effects of ionizing radiation. Construction and operation of the disposal facility in Forsmark will make an impact, give effects and consequences for the natural environment. Utilization of land for the construction of the facility and the impact on ground water as a result of groundwater drainage is expected to have negative consequences for the species included in the species protection regulation. Thus, the planned activity requires exemption from the species protection regulation (SFS 2007:845). The purpose of this document is to provide a basis for an application for exemption under 14 paragraph species protection regulation from the prohibitions of 4, 6, 7 and 8 paragraph species protection regulation. A basis for the exemption application is that the proposed activity is considered to have an 'overriding public interest' prescribed in 14 paragraph species protection regulation. The document reports the impact, effects and consequences of the planned activities on species covered in the species protection regulation. The impact on protected species can be divided into two categories: - Direct effects on protected species and their habitats by utilization of the land. - Indirect effects on protected species and their habitats in the drainage of groundwater and the effect on groundwater levels. The document also includes a description of planned actions to prevent, restrict and compensate for the effects and consequences that the activity may cause. By applying for an exemption under 14 paragraph species protection regulation in a separate order from the application for permit according to chapters 9 and 11
DEFF Research Database (Denmark)
Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.
1994-01-01
Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient descent...
Sparse regularization for force identification using dictionaries
Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng
2016-04-01
The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in other basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries including Db6 wavelets, Sym4 wavelets and cubic B-spline functions can also accurately identify both the single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces including the sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
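The l1-norm minimization at the heart of this approach can be sketched with a basic iterative shrinkage-thresholding (ISTA) loop, a simpler relative of the SpaRSA solver named in the abstract. The transfer matrix and sparse force below are made-up toy data, not the paper's experimental setup.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1-norm: component-wise shrinkage.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    # Minimize 0.5*||A x - y||_2^2 + lam*||x||_1 by iterative
    # shrinkage-thresholding; the nonzero pattern of x emerges
    # automatically, so the "number of basis functions" is adaptive.
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * lam)
    return x

# Toy demo: a force that is 2-sparse in the Dirac (identity) dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 128))           # hypothetical transfer matrix
x_true = np.zeros(128)
x_true[[10, 40]] = [3.0, -2.0]
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = ista(A, y, lam=0.2)
```

The recovered coefficient vector concentrates on the two true support indices even though the system is underdetermined, which is the behaviour the l2-based expansion method cannot deliver without hand-tuning the basis size.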
Manifold Regularized Reinforcement Learning.
Li, Hongliang; Liu, Derong; Wang, Ding
2018-04-01
This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.
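The unsupervised feature-construction step can be illustrated with Laplacian eigenvector features on a k-nearest-neighbour graph of sampled states (the proto-value-function idea). This is a generic sketch of manifold-regularized feature learning, not the authors' exact algorithm; all parameters below are assumptions.

```python
import numpy as np

def laplacian_features(X, k=4, d=3):
    # Build a symmetric kNN graph over the sampled states X (rows),
    # form the combinatorial graph Laplacian L = D - W, and return the
    # d smoothest eigenvectors as basis functions for value-function
    # approximation.
    n = X.shape[0]
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.zeros((n, n))
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]   # k nearest neighbours (skip self)
    for i in range(n):
        W[i, idx[i]] = 1.0
    W = np.maximum(W, W.T)                     # symmetrize the adjacency
    L = np.diag(W.sum(axis=1)) - W             # combinatorial Laplacian
    w, V = np.linalg.eigh(L)                   # eigenvalues in ascending order
    return V[:, :d]                            # smoothest eigenvectors first

# States sampled along a 1-D manifold embedded in the state space.
X = np.arange(30.0).reshape(-1, 1)
F = laplacian_features(X, k=4, d=3)
```

For a connected graph the first feature is the constant eigenvector (eigenvalue 0), and the following ones are increasingly oscillatory, data-driven basis functions adapted to the geometry of the sampled states.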
International Nuclear Information System (INIS)
Worledge, D.; Hinchcliffe, G.
1997-07-01
US nuclear plants are implementing preventive maintenance (PM) tasks with little documented basis beyond fundamental vendor information to support the tasks or their intervals. The Preventive Maintenance Basis project provides utilities with the technical basis for PM tasks and task intervals associated with 40 specific components such as valves, electric motors, pumps, and HVAC equipment. This report provides an overview of the PM Basis project and describes use of the PM Basis database. Volume 10 of the report provides a program of PM tasks suitable for application to high voltage (5 kV and greater) electric motors in nuclear power plants. The PM tasks that are recommended provide a cost-effective way to intercept the causes and mechanisms that lead to degradation and failure. They can be used, in conjunction with material from other sources, to develop a complete PM program or to improve an existing program. Users of this information will be utility managers, supervisors, craft technicians, and training instructors responsible for developing, optimizing, or fine-tuning PM programs. Reactor coolant pump (RCP) motors are not excluded from this report in so far as good PM practices for motors of the appropriate class are concerned. However, the special auxiliary equipment normally associated with RCPs has not been included. Consequently, this report does not provide a complete PM program for RCPs. Industry and vendor programs for RCPs should be consulted for complete definition of RCP motor PM programs.
International Nuclear Information System (INIS)
Worledge, D.; Hinchcliffe, G.
1997-12-01
US nuclear power plants are implementing preventive maintenance (PM) tasks with little documented basis beyond fundamental vendor information to support the tasks or their intervals. The Preventive Maintenance Basis project provides utilities with the technical basis for PM tasks and task intervals associated with 40 specific components such as valves, electric motors, pumps, and HVAC equipment. This document provides a program of preventive maintenance (PM) tasks suitable for application to flooded lead-acid batteries. The PM tasks that are recommended provide a cost-effective way to intercept the causes and mechanisms that lead to degradation and failure. They can be used, in conjunction with material from other sources, to develop a complete PM program or to improve an existing program. Users of this information will be utility managers, supervisors, system engineers, craft technicians, and training instructors responsible for developing, optimizing, or fine-tuning PM programs
Energy Technology Data Exchange (ETDEWEB)
Braathen, O.A.
1996-12-31
As part of the cooperation between Norway and the Czech Republic on environment protection, a project was carried out in Ostrava, Czech Republic, to transfer competence to Ostrava such that selected organic contaminants in air could be measured. The focus was on volatile organic compounds (VOC), polycyclic aromatic hydrocarbons (PAH), polychlorinated biphenyls (PCB) and dioxin. This work also included acquiring and establishing equipment and analysis methodology. This is the final report from the project. 9 figs., 12 tabs.
International Nuclear Information System (INIS)
Fujimura, Kimiya; Nagayama, Shigeru; Watarumi, Chikae; Toudou, Tsugihiko
2011-01-01
The Research Project on Technical Information Basis for Aging Management was initiated in FY2006 by the Nuclear and Industrial Safety Agency (NISA) of the Ministry of Economy, Trade and Industry (METI) as a five-year program effectively, to promote aging management of domestic nuclear power plants. Its main objective was to improve the technical basis on which aging nuclear power plants are regulated. Upon taking part in the technical strategy map for Aging Management and Safe Long Term Operation, the experiences and achievements of the participating organizations were taken into account and the following four topics were chosen. The regional characteristics of the Fukui and Kinki area where 15 nuclear power plants, mainly PWRs, and many nuclear related research institutes and universities are located, were also considered. 1) The improvement of pipe thinning management in nuclear power plants, 2) The development of inspection techniques to monitor the initiation and propagation of defects, 3) The development of a guideline for evaluating weld repair methods, 4) The development of a guideline for evaluating the degradation of main structures. To promote this research project, INSS has established a regional consortium (called the 'Fukui Regional Cluster') in coordination with universities, research institutes, electric utilities and vendors in the Fukui and Kinki area. INSS is acting as a coordinator to make contracts, facilitate execution, and compile annual reports. In FY2010, 11 continuing research subjects were proposed for this project and all were accepted. Of these, 5 subjects were related to the first topic (pipe thinning), 4 subjects to the second topic (inspection technique) and 1 subject to each of the other two topics (weld repair and main structures). All the subjects have been completed, fulfilling the requirements and expectations. (author)
International Nuclear Information System (INIS)
Kim, J.I.; Artinger, R.; Buckau, G.; Kardinal, C.; Geyer, S.; Wolf, M.; Halder, H.; Fritz, P.
1995-05-01
The groundwater dating on the basis of the 14 C content of dissolved organic carbon (DOC) is studied. Fulvic acids (FA) and humic acids (HA) are used as DOC fractions. In addition, the groundwaters are dated with the 14 C content of the dissolved inorganic carbon (DIC). The isotopic contents of 2 H, 3 H, 13 C, 15 N, 18 O, and 34 S of groundwater and humic substances are also determined. The isolated humic substances are characterized with regard to their chemical composition as well as their molecular size and spectroscopic properties. For aquifer systems which have a negligible content of sedimentary organic carbon (SOC), the 14 C dating of FA shows plausible groundwater ages. In aquifer systems with a high SOC content, the mixing of 14 C free FA from sediment partly falsifies the 14 C groundwater age as determined by dissolved FA. Due to the high transfer of HA from sediment to groundwater, HA are less suitable for groundwater dating. The FA characterization allows the distinction between FA of sedimentary origin and FA which infiltrate with seepage water. Several starting points for a correction of the calculated 14 C ages of FA exist. The results indicate that 14 C groundwater dating with fulvic acids is a valuable expansion of groundwater dating methods. (orig.) [de
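The age calculation underlying 14 C dating of the DOC fractions can be illustrated with the standard decay relation. The function below uses the textbook Libby mean life of 8033 years; it is for illustration only and does not apply the report's corrections for admixed 14 C-free sedimentary fulvic acid.

```python
import math

def c14_age(pmc):
    # Conventional radiocarbon age (years) from percent modern carbon
    # (pMC), t = -8033 * ln(pMC/100), with 8033 a the Libby mean life
    # (half-life 5568 a). No reservoir or admixture corrections applied.
    return -8033.0 * math.log(pmc / 100.0)

# e.g. a groundwater fulvic-acid fraction measured at 50 pMC:
age = c14_age(50.0)   # one half-life, about 5568 years
```

Mixing in 14 C-free fulvic acid from the sediment lowers the measured pMC and therefore inflates this apparent age, which is exactly the falsification effect the abstract describes for high-SOC aquifers.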
International Nuclear Information System (INIS)
2002-01-01
A link has been established between increasing levels of greenhouse gases in the atmosphere and the rise in global temperatures. The burning of fossil fuels, land use changes, agricultural and industrial activities play a large part in the increase of greenhouse gases and result in changes to temperature, precipitation and weather patterns. The two methods that can be used to reduce the buildup of greenhouse gases in the atmosphere are the reduction of the gases and the sequestration of carbon dioxide (carbon dioxide is absorbed) into terrestrial processes. Several policy options are being considered to effect this reduction in buildup, and one of those includes the implementation of a tradable system of emission permits. Such a scenario would involve the agricultural sector removing and reducing on-farm emissions of greenhouse gases, thereby earning it credits that could then be sold to those industries that face tougher greenhouse gases control costs. The study led to several findings: (1) trades in carbon dioxide in the Albertan agricultural sector and changes in agricultural practices could lead to reductions of up to 5 million tonnes per year to 2008, (2) the sector is in a good position to trade carbon removals and credits into a large final emitter cap and trade system, (3) some uncertainties in the policy area remain, (4) the early years of trading are not risk-free, and (5) the risks are being hedged through a number of mechanisms and tools that have already been identified. 18 refs., 3 tabs., 3 figs
UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA
Directory of Open Access Journals (Sweden)
IONIŢĂ Elena
2015-06-01
Full Text Available This paper proposes a presentation of unfolding regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. The modeling and unfolding of the Platonic and Archimedean polyhedra is done using the 3dsMAX program. This paper is intended as an example of descriptive geometry applications.
Coordinate-invariant regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-01-01
A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
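The core idea, shrinking the sample covariance's eigenvalues until the condition number is bounded, can be sketched as follows. Note the paper selects the truncation interval by maximum likelihood; the toy version below simply clips the spectrum from below, which is a simplification made for brevity.

```python
import numpy as np

def condreg(S, kappa):
    # Clip the eigenvalues of the sample covariance S so that the
    # resulting estimate has condition number at most kappa. (The
    # paper chooses the truncation interval by maximum likelihood;
    # clipping only from below is an illustrative shortcut.)
    w, V = np.linalg.eigh(S)
    floor = w.max() / kappa          # smallest eigenvalue allowed
    w_reg = np.clip(w, floor, None)
    return (V * w_reg) @ V.T         # reassemble with shrunk spectrum

# "Large p, small n": the sample covariance is singular.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 50))    # n=20 samples, p=50 variables
S = np.cov(X, rowvar=False)          # rank <= 19, not invertible
S_reg = condreg(S, kappa=30.0)       # invertible and well-conditioned
```

Because the eigenvectors are untouched and only the spectrum is shrunk toward an interval, the result is a Steinian-type shrinkage estimator that is guaranteed invertible even when p exceeds n.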
Condition Number Regularized Covariance Estimation*
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2012-01-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime
2016-01-01
This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN' [Brouwer, A.E., Cohen, A.M., Neumaier,
Nijholt, Antinus
1980-01-01
Culik II and Cogen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular
Iterative Regularization with Minimum-Residual Methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2007-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES: their success as regularization methods is highly problem dependent.
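The regularizing effect of early termination can be reproduced with SciPy's MINRES on a small synthetic symmetric ill-posed problem. The construction below (spectrum, noise level, iteration cap) is an illustrative assumption, not an example taken from the paper.

```python
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(2)
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.logspace(0, -12, n)              # rapidly decaying spectrum
A = (Q * eigs) @ Q.T                       # symmetric, severely ill-conditioned

# A smooth exact solution living mostly in the dominant eigenspace.
x_true = Q @ np.exp(-np.arange(n) / 5.0)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy right-hand side

# Early termination: the Krylov-subspace projection is the regularizer.
x_reg, _ = minres(A, b, maxiter=20)
x_naive = np.linalg.solve(A, b)            # noise amplified by tiny eigenvalues

err_reg = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
err_naive = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
```

After 20 iterations the Krylov basis captures the dominant spectral components but cannot yet invert the tiny eigenvalues where the noise lives, so the truncated iterate is far more accurate than the naive solve; running MINRES to convergence would eventually exhibit the usual semiconvergence and degrade again.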
Manifold Regularized Correlation Object Tracking.
Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling
2018-05-01
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.
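The block-circulant trick this abstract relies on, ridge regression over all cyclic shifts solved frequency-by-frequency in the Fourier domain, can be sketched in one dimension with a MOSSE-style linear filter. The manifold term and the semisupervised samples are omitted, and all data below are toy assumptions.

```python
import numpy as np

def train_filter(samples, labels, lam=0.1):
    # Ridge regression over all cyclic shifts of each base sample
    # (rows), solved independently per frequency because circulant
    # matrices diagonalize under the DFT.
    X = np.fft.fft(samples, axis=1)
    Y = np.fft.fft(labels, axis=1)
    num = np.sum(np.conj(X) * Y, axis=0)
    den = np.sum(np.abs(X) ** 2, axis=0) + lam
    return num / den                       # filter in the Fourier domain

def response(h_hat, z):
    # Correlation response over every cyclic shift of a new sample z.
    return np.real(np.fft.ifft(h_hat * np.fft.fft(z)))

rng = np.random.default_rng(3)
x = rng.standard_normal(64)                # one base sample (the "target")
y = np.zeros(64)
y[0] = 1.0                                 # desired response: peak at shift 0
h_hat = train_filter(x[None, :], y[None, :])
r = response(h_hat, np.roll(x, 5))         # probe with the target shifted by 5
```

The response peaks at the applied shift, which is how the learned filter localizes the target; the full tracker stacks many base samples (positive, negative, and unlabeled) into the rows of `samples` and adds the manifold penalty.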
Regular Expression Pocket Reference
Stubblebine, Tony
2007-01-01
This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp
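As a flavour of the material such a reference covers, here is a small Python example; the pattern and test string are ours, not the book's.

```python
import re

# A pattern for ISO-8601 calendar dates (YYYY-MM-DD) with basic
# range checks on month and day, anchored at word boundaries.
date_re = re.compile(r"\b(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])\b")

text = "Released 2007-01-01, updated 2009-13-40."
matches = date_re.findall(text)
# The malformed second date (month 13) is rejected by the alternation.
```

The same alternation-and-boundary idioms carry over to the other regex flavours the book covers (PCRE, Java, .NET, JavaScript), modulo small syntax differences that such a pocket reference tabulates.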
A regularized stationary mean-field game
Yang, Xianjin
2016-01-01
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
A regularized stationary mean-field game
Yang, Xianjin
2016-04-19
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
Regularization by External Variables
DEFF Research Database (Denmark)
Bossolini, Elena; Edwards, R.; Glendinning, P. A.
2016-01-01
Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well-known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization: by external variables.
Regular Expressions Cookbook
Goyvaerts, Jan
2009-01-01
This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a
Temporal regularity of the environment drives time perception
van Rijn, H; Rhodes, D; Di Luca, M
2016-01-01
It's reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...
Regularities of Multifractal Measures
Indian Academy of Sciences (India)
First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...
Stochastic analytic regularization
International Nuclear Information System (INIS)
Alfaro, J.
1984-07-01
Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)
Sparse structure regularized ranking
Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin
2014-01-01
Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse
Regular expression containment
DEFF Research Database (Denmark)
Henglein, Fritz; Nielsen, Lasse
2011-01-01
We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We...
Supersymmetric dimensional regularization
International Nuclear Information System (INIS)
Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.
1980-01-01
There is a simple modification of dimension regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) First do all algebra exactly as in D = 4; (2) Then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules needed for superconformal anomalies are discussed. Problems associated with renormalizability and higher order loops are also discussed
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
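The robustness to noisy labels that the abstract describes comes from the shape of the correntropy objective: a Gaussian kernel of the prediction error saturates for large errors, so outliers contribute almost nothing. A minimal sketch of the empirical quantity (the variable names are illustrative, and the full MCC learning algorithm is not reproduced here):

```python
import numpy as np

def correntropy(pred, true, sigma=1.0):
    """Empirical correntropy: mean Gaussian kernel of the errors.

    Unlike squared loss, the kernel value decays to zero for large
    errors, so outlying (mislabeled) samples contribute little.
    """
    err = pred - true
    return np.mean(np.exp(-err**2 / (2 * sigma**2)))

true = np.array([1.0, 1.0, -1.0, -1.0])
clean = np.array([0.9, 1.1, -0.8, -1.2])
noisy = np.array([0.9, 1.1, -0.8, 5.0])   # one outlying prediction

# The outlier lowers the correntropy only mildly, whereas it would
# dominate a squared-error loss.
assert correntropy(clean, true) > correntropy(noisy, true)
```

Maximizing this quantity over the predictor parameters, plus the complexity regularizer on those parameters, gives the objective the paper optimizes alternately.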
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
International Nuclear Information System (INIS)
Reddy, G.M.
1986-01-01
The stability of the high productivity of modern rice varieties is greatly affected by insect pests. Rice gall midge is a serious insect pest of rice that is prevalent in several south-east Asian countries. Gall midge resistance has been mainly attributed to antibiosis. No progress has so far been made in identifying the exact biochemical nature of resistance. In the Indica subspecies, an understanding of the chemical nature of the disease would be helpful in controlling the disease and also in breeding programmes aimed at developing resistant varieties. Studies were undertaken to establish the biochemical basis of resistance. Biochemical characterization of resistant and susceptible varieties was carried out. The parameters considered were: total sugar and reducing sugar content, total phenol content, amino acid profile, post-infectional changes in sugar and phenol, and isozyme studies. 2 figs, 6 tabs
Energy Technology Data Exchange (ETDEWEB)
Reddy, G M [Osmania Univ., Hyderabad (India). Dept. of Genetics
1987-12-31
The stability of the high productivity of modern rice varieties is greatly affected by insect pests. Rice gall midge is a serious insect pest of rice that is prevalent in several south-east Asian countries. Gall midge resistance has been mainly attributed to antibiosis. No progress has so far been made in identifying the exact biochemical nature of resistance. In the Indica subspecies, an understanding of the chemical nature of the disease would be helpful in controlling the disease and also in breeding programmes aimed at developing resistant varieties. Studies were undertaken to establish the biochemical basis of resistance. Biochemical characterization of resistant and susceptible varieties was carried out. The parameters considered were: total sugar and reducing sugar content, total phenol content, amino acid profile, post-infectional changes in sugar and phenol, and isozyme studies. 2 figs, 6 tabs.
From inactive to regular jogger
DEFF Research Database (Denmark)
Lund-Cramer, Pernille; Brinkmann Løite, Vibeke; Bredahl, Thomas Viskum Gjelstrup
Title: From inactive to regular jogger - a qualitative study of achieved behavioral change among recreational joggers. Authors: Pernille Lund-Cramer & Vibeke Brinkmann Løite. Purpose: Despite extensive knowledge of barriers to physical activity, most interventions promoting physical activity have proven... The study was conducted using individual semi-structured interviews on how a successful long-term behavior change had been achieved. Ten informants were purposely selected from participants in the DANO-RUN research project (7 men, 3 women, average age 41.5). Interviews were performed on the basis of the Theory of Planned Behavior (TPB) and the Transtheoretical Model (TTM). Coding and analysis of interviews were performed using NVivo 10 software. Results (TPB): During the behavior change process, the intention to jog shifted from a focus on weight loss and improved fitness to both physical health, psychological...
Diverse Regular Employees and Non-regular Employment (Japanese)
MORISHIMA Motohiro
2011-01-01
Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...
Sparse structure regularized ranking
Wang, Jim Jing-Yan
2014-04-17
Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
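The "sparse linear combination of all other objects" step the abstract describes is a per-object lasso problem. A minimal sketch with ISTA (the function name `sparse_codes`, the penalty `lam`, and the toy data are illustrative, and the joint learning of codes and ranking scores is not reproduced):

```python
import numpy as np

def sparse_codes(X, lam=0.1, n_iter=200):
    """Code each object (row of X) as a sparse combination of the others.

    Solves the lasso problem min_w 0.5*||x_i - D w||^2 + lam*||w||_1
    with ISTA, where the dictionary D stacks all objects except x_i.
    The coefficients act as a learned similarity between objects.
    """
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        D = X[idx].T                          # other objects as columns
        t = 1.0 / np.linalg.norm(D, 2) ** 2   # step = 1 / Lipschitz constant
        w = np.zeros(len(idx))
        for _ in range(n_iter):
            w = w - t * (D.T @ (D @ w - X[i]))
            w = np.sign(w) * np.maximum(np.abs(w) - t * lam, 0.0)  # soft threshold
        W[i, idx] = w
    return W

rng = np.random.default_rng(1)
# Two clusters with nearly orthogonal directions; each object should
# select its same-cluster neighbors in its sparse code.
base_a = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
base_b = np.array([0.0, 1.0, 0.0, 0.0, 0.0])
X = np.vstack([base_a + 0.05 * rng.standard_normal((3, 5)),
               base_b + 0.05 * rng.standard_normal((3, 5))])
X /= np.linalg.norm(X, axis=1, keepdims=True)
W = sparse_codes(X)
# Object 0's code concentrates on its cluster mates (objects 1 and 2).
```

In the full algorithm, W would then appear in a regularization term that pulls the ranking scores of strongly coupled objects together.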
'Regular' and 'emergency' repair
International Nuclear Information System (INIS)
Luchnik, N.V.
1975-01-01
Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)
Regularization of divergent integrals
Felder, Giovanni; Kazhdan, David
2016-01-01
We study the Hadamard finite part of divergent integrals of differential forms with singularities on submanifolds. We give formulae for the dependence of the finite part on the choice of regularization and express them in terms of a suitable local residue map. The cases where the submanifold is a complex hypersurface in a complex manifold and where it is a boundary component of a manifold with boundary, arising in string perturbation theory, are treated in more detail.
Regularizing portfolio optimization
International Nuclear Information System (INIS)
Still, Susanne; Kondor, Imre
2010-01-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
Regularizing portfolio optimization
Still, Susanne; Kondor, Imre
2010-07-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
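The role of the L2 "diversification pressure" described above can be illustrated on the simplest case, a minimum-variance portfolio with a budget constraint, which has a closed form. This is a toy sketch under that simplification (the paper's actual setting uses expected shortfall and support vector regression, which is not reproduced here):

```python
import numpy as np

def regularized_min_variance(returns, lam=0.1):
    """Minimum-variance weights with an L2 (diversification) penalty.

    Solves  min_w  w' C w + lam * ||w||^2   s.t.  sum(w) = 1,
    whose closed form is w proportional to (C + lam*I)^{-1} 1.
    The ridge term stabilizes the solution when C is estimated
    from a limited sample.
    """
    C = np.cov(returns, rowvar=False)
    n = C.shape[0]
    w = np.linalg.solve(C + lam * np.eye(n), np.ones(n))
    return w / w.sum()

rng = np.random.default_rng(42)
# 250 days of returns for 10 statistically identical assets:
# the estimated covariance is noisy.
R = rng.normal(0.0005, 0.01, (250, 10))
w_raw = regularized_min_variance(R, lam=0.0)
w_reg = regularized_min_variance(R, lam=0.1)
# The penalty pushes the weights toward the diversified 1/n portfolio,
# which is the true optimum for identical assets.
assert np.abs(w_reg - 0.1).max() < np.abs(w_raw - 0.1).max()
```

The size of `lam` plays exactly the trade-off role discussed in the abstract: it should shrink as the available dataset grows.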
Regular Single Valued Neutrosophic Hypergraphs
Directory of Open Access Journals (Sweden)
Muhammad Aslam Malik
2016-12-01
Full Text Available In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.
The geometry of continuum regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-03-01
This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations
Annotation of Regular Polysemy
DEFF Research Database (Denmark)
Martinez Alonso, Hector
Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words... and metonymic. We have conducted an analysis in English, Danish and Spanish. Later on, we have tried to replicate the human judgments by means of unsupervised and semi-supervised sense prediction. The automatic sense-prediction systems have been unable to find empiric evidence for the underspecified sense, even...
Regularity of Minimal Surfaces
Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht
2010-01-01
"Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t
Regularities of radiation heredity
International Nuclear Information System (INIS)
Skakov, M.K.; Melikhov, V.D.
2001-01-01
The regularities of radiation heredity in metals and alloys are analyzed. It is concluded that irradiation produces thermodynamically irreversible changes in the structure of materials. Possible ways in which radiation effects are inherited through high-temperature transformations in the materials are proposed. The phenomenon of radiation heredity may be put to practical use to control the structure of liquid metal and, correspondingly, the structure of the ingot, via preliminary radiation treatment of the charge. Concentration microheterogeneities in the material defect structure induced by preliminary irradiation represent the genetic factor of radiation heredity [ru]
Effective field theory dimensional regularization
International Nuclear Information System (INIS)
Lehmann, Dirk; Prezeau, Gary
2002-01-01
A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Greens functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed
Effective field theory dimensional regularization
Lehmann, Dirk; Prézeau, Gary
2002-01-01
A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Greens functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.
2010-12-07
... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...
Understanding Regular Expressions
Directory of Open Access Journals (Sweden)
Doug Knox
2013-06-01
Full Text Available In this exercise we will use advanced find-and-replace capabilities in a word processing application in order to make use of structure in a brief historical document that is essentially a table in the form of prose. Without using a general programming language, we will gain exposure to some aspects of computational thinking, especially pattern matching, that can be immediately helpful to working historians (and others using word processors), and can form the basis for subsequent learning with more general programming environments.
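The pattern-matching idea in the exercise above carries over directly to a programming environment. A small sketch with Python's `re` module, using a hypothetical prose-table line (not the lesson's actual document): one pattern captures the implicit columns, and a replace rewrites the row as CSV.

```python
import re

# A hypothetical line from a "table in the form of prose":
line = "Arundel and Sussex, Marquess of, 21 June 1902."

# One pattern captures the three implicit columns of the row.
pattern = re.compile(r"^(.*?), (.*?), (\d{1,2} \w+ \d{4})\.$")
m = pattern.match(line)

# The same idea as a find-and-replace: rewrite the row as CSV,
# quoting the text fields via backreferences.
csv_line = pattern.sub(r'"\1","\2",\3', line)
```

The lazy quantifiers (`.*?`) play the same role as a word processor's non-greedy wildcard: each group stops at the next comma rather than swallowing the whole line.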
Selection of regularization parameter for l1-regularized damage detection
Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing
2018-06-01
The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
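The second strategy above, the discrepancy principle, can be sketched on a toy sparse recovery problem: scan a grid of regularization parameters and keep the one whose residual variance best matches the known noise variance. The solver (`ista`), problem sizes, and noise level are illustrative assumptions, not the paper's beam or frame examples:

```python
import numpy as np

def ista(A, b, lam, n_iter=300):
    """min_x 0.5*||Ax - b||^2 + lam*||x||_1 via iterative soft thresholding."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - t * (A.T @ (A @ x - b))
        x = np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)
    return x

rng = np.random.default_rng(0)
m, n, sigma = 80, 40, 0.05
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 17]] = [1.0, -0.8]            # sparse "damage" vector
b = A @ x_true + sigma * rng.standard_normal(m)

# Discrepancy principle: choose the lam whose residual variance is
# closest to the (assumed known) noise variance sigma^2.
grid = np.logspace(-3, 1, 30)
scores = [abs(np.mean((A @ ista(A, b, g) - b) ** 2) - sigma**2) for g in grid]
lam_star = grid[int(np.argmin(scores))]
x_hat = ista(A, b, lam_star)
# The two damaged components dominate the recovered vector.
```

As the abstract notes, a whole range of parameters near `lam_star` typically gives comparable identification, so the scan localizes a region rather than a single critical value.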
Ensemble manifold regularization.
Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng
2012-06-01
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and the overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
20 CFR 226.33 - Spouse regular annuity rate.
2010-04-01
20 CFR, Employees' Benefits, Part 226, Computing Employee, Spouse, and Divorced Spouse Annuities; Computing a Spouse or Divorced Spouse Annuity; § 226.33 Spouse regular annuity rate. The final tier I and tier II rates, from §§ 226.30 and 226.32, are...
Adaptive Regularization of Neural Classifiers
DEFF Research Database (Denmark)
Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai
1997-01-01
We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method.
Convex nonnegative matrix factorization with manifold regularization.
Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong
2015-03-01
Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
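The graph-regularized term described above can be sketched with the plain (nonnegative-data) variant and standard multiplicative updates; the convex, mixed-sign formulation of GCNMF is more involved and not reproduced here. The function name `gnmf`, the weight `lam`, and the toy chain graph are illustrative assumptions:

```python
import numpy as np

def gnmf(X, W, k=2, lam=0.1, n_iter=200, eps=1e-9, seed=0):
    """Graph-regularized NMF via multiplicative updates.

    Minimizes ||X - U V^T||_F^2 + lam * Tr(V^T L V), where L = D - W
    is the graph Laplacian over the columns (samples) of X. The graph
    term keeps the encodings of neighboring samples close, preserving
    the intrinsic manifold structure.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    D = np.diag(W.sum(axis=1))
    U = rng.random((m, k))
    V = rng.random((n, k))
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V

rng = np.random.default_rng(3)
X = rng.random((20, 12))                 # nonnegative data, 12 samples
# Toy neighborhood graph: link each sample to the next one.
W = np.zeros((12, 12))
for i in range(11):
    W[i, i + 1] = W[i + 1, i] = 1.0
U, V = gnmf(X, W, k=3)
err = np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)
```

The multiplicative form keeps U and V nonnegative throughout, which is exactly the constraint the convex variant relaxes on the data matrix side.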
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
2010-09-02
... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks.
Infants use temporal regularities to chunk objects in memory.
Kibbe, Melissa M; Feigenson, Lisa
2016-01-01
Infants, like adults, can maintain only a few items in working memory, but can overcome this limit by creating more efficient representations, or "chunks." Previous research shows that infants can form chunks using shared features or spatial proximity between objects. Here we asked whether infants also can create chunked representations using regularities that unfold over time. Thirteen-month old infants first were familiarized with four objects of different shapes and colors, presented in successive pairs. For some infants, the identities of objects in each pair varied randomly across familiarization (Experiment 1). For others, the objects within a pair always co-occurred, either in consistent relative spatial positions (Experiment 2a) or varying spatial positions (Experiment 2b). Following familiarization, infants saw all four objects hidden behind a screen and then saw the screen lifted to reveal either four objects or only three. Infants in Experiment 1, who had been familiarized with random object pairings, failed to look longer at the unexpected 3-object outcome; they showed the same inability to concurrently represent four objects as in other studies of infant working memory. In contrast, infants in Experiments 2a and 2b, who had been familiarized with regularly co-occurring pairs, looked longer at the unexpected outcome. These infants apparently used the co-occurrence between individual objects during familiarization to form chunked representations that were later deployed to track the objects as they were hidden at test. In Experiment 3, we confirmed that the familiarization affected infants' ability to remember the occluded objects rather than merely establishing longer-term memory for object pairs. Following familiarization to consistent pairs, infants who were not shown a hiding event (but merely saw the same test outcomes as in Experiments 2a and b) showed no preference for arrays of three versus four objects. Finally, in Experiments 4 and 5, we asked
Energy Technology Data Exchange (ETDEWEB)
2011-02-15
SKB will submit applications for permits and admissibility under the Environmental Act and under the Nuclear Activities Act to construct and operate a disposal facility for spent nuclear fuel at Forsmark. In the final repository the spent nuclear fuel from Swedish nuclear power plants is emplaced in order to protect human health and the environment against the harmful effects of ionizing radiation. Construction and operation of the disposal facility in Forsmark will have an impact and give rise to effects and consequences for the natural environment. Utilization of land for the construction of the facility and the impact on groundwater as a result of groundwater drainage are expected to have negative consequences for the species covered by the species protection regulation. Thus, the planned activity requires an exemption from the species protection regulation (SFS 2007:845). The purpose of this document is to provide a basis for an application for exemption under paragraph 14 of the species protection regulation from the prohibitions of paragraphs 4, 6, 7 and 8. A basis for the exemption application is that the proposed activity is considered to be of 'overriding public interest' as prescribed in paragraph 14 of the species protection regulation. The document reports the impact, effects and consequences of the planned activities on species covered by the species protection regulation. The impact on protected species can be divided into two categories: - Direct effects on protected species and their habitats through utilization of the land. - Indirect effects on protected species and their habitats through the drainage of groundwater and the effect on groundwater levels. The document also includes a description of planned actions to prevent, limit and compensate for the effects and consequences that the activity may cause. By applying for an exemption under paragraph 14 of the species protection regulation in a separate order from the application for a permit according to chapters 9 and 11
Likelihood ratio decisions in memory: three implied regularities.
Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T
2009-06-01
We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
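The likelihood-ratio decision axis in the equal-variance Gaussian model can be made concrete in a few lines. The sketch below uses illustrative values (d′ = 1.5 is an assumption, not a figure from the study) and shows that a criterion of LR = 1 is equivalent to a strength criterion at d′/2, which yields the symmetric hit and correct-rejection rates underlying the unbiased mirror effect.

```python
from statistics import NormalDist

# Equal-variance signal detection: "new" strengths ~ N(0, 1), "old" ~ N(d', 1).
# Responding "old" when the likelihood ratio f_old(x) / f_new(x) exceeds 1 is
# equivalent to placing a criterion at x = d'/2 on the strength axis.
def likelihood_ratio(x, d_prime):
    return NormalDist(d_prime, 1.0).pdf(x) / NormalDist(0.0, 1.0).pdf(x)

d_prime = 1.5                # illustrative sensitivity
criterion = d_prime / 2.0    # strength criterion equivalent to LR = 1

hit_rate = 1.0 - NormalDist(d_prime, 1.0).cdf(criterion)  # P("old" | old)
correct_rejection = NormalDist(0.0, 1.0).cdf(criterion)   # P("new" | new)
print(round(hit_rate, 4), round(correct_rejection, 4))    # equal, by symmetry
```

The equality of the hit rate and the correct-rejection rate at LR = 1 is the unbiased form of the mirror effect; shifting the likelihood-ratio criterion away from 1 introduces the bias examined in the companion study below.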
Continuum-regularized quantum gravity
International Nuclear Information System (INIS)
Chan Huesum; Halpern, M.B.
1987-01-01
The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)
Energy Technology Data Exchange (ETDEWEB)
NONE
2002-01-01
Following on from the Final Report of the EDA (DS/21) and the summary of the ITER Final Design Report (DS/22), the technical basis gives further details of the design of ITER. It is in two parts. The first, the Plant Design Specification, summarises the main constraints on the plant design and operation from the viewpoint of engineering and physics assumptions, compliance with safety regulations, and siting requirements and assumptions. The second, the Plant Description Document, describes the physics performance and engineering characteristics of the plant design, illustrates the potential operational consequences for the locality of a generic site, gives the construction, commissioning, exploitation and decommissioning schedule, and reports the estimated lifetime costing based on data from the industry of the EDA parties.
Regularity and chaos in cavity QED
International Nuclear Information System (INIS)
Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G
2017-01-01
The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified by calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (PR) of coherent states on the eigenenergy basis plays a role equivalent to the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones. (paper)
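The largest-Lyapunov-exponent diagnostic used in the abstract can be illustrated on a one-dimensional toy system. The sketch below uses the logistic map purely as a stand-in for the Dicke phase-space dynamics (the map, parameters, and function name are illustrative assumptions, not from the paper): a positive time-averaged log-derivative signals chaos, a negative one signals regular motion.

```python
import math

def lyapunov_logistic(r, n_steps=100_000, n_transient=1_000, x0=0.3):
    """Estimate the largest Lyapunov exponent of x -> r*x*(1-x) as the
    time average of ln|f'(x_n)| along a trajectory, after discarding a
    transient so the orbit settles onto the attractor."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_steps):
        total += math.log(abs(r * (1.0 - 2.0 * x)))  # ln|f'(x)|
        x = r * x * (1.0 - x)
    return total / n_steps

print(lyapunov_logistic(3.2))  # negative: regular (period-2) motion
print(lyapunov_logistic(4.0))  # positive: fully chaotic regime
```

The same positive/negative dichotomy is what separates the chaotic from the regular phase-space regions discussed in the abstract; the quantum participation ratio plays the analogous role on the quantum side.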
New regular black hole solutions
International Nuclear Information System (INIS)
Lemos, Jose P. S.; Zanchin, Vilson T.
2011-01-01
In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, whose exterior region is Reissner-Nordstroem, and with a charged thin layer in between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.
Regular variation on measure chains
Czech Academy of Sciences Publication Activity Database
Řehák, Pavel; Vitovec, J.
2010-01-01
Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475
Manifold Regularized Correlation Object Tracking
Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling
2017-01-01
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...
On geodesics in low regularity
Sämann, Clemens; Steinbauer, Roland
2018-02-01
We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.
Geometric continuum regularization of quantum field theory
International Nuclear Information System (INIS)
Halpern, M.B.
1989-01-01
An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs
Regularizations: different recipes for identical situations
International Nuclear Information System (INIS)
Gambin, E.; Lobo, C.O.; Battistel, O.A.
2004-03-01
We present a discussion where the choice of the regularization procedure and the routing for the internal lines momenta are put at the same level of arbitrariness in the analysis of Ward identities involving simple and well-known problems in QFT. They are the complex self-interacting scalar field and two simple models where the SVV and AVV processes are pertinent. We show that, in all these problems, the conditions for the preservation of symmetry relations are put in terms of the same combination of divergent Feynman integrals, which are evaluated in the context of a very general calculational strategy concerning the manipulations and calculations involving divergences. Within the adopted strategy, all the arbitrariness intrinsic to the problem is still maintained in the final results and, consequently, a perfect map can be obtained with the corresponding results of the traditional regularization techniques. We show that, when we require a universal interpretation for the arbitrariness involved, in order to get consistency with all stated physical constraints, a strong condition is imposed on regularizations which automatically eliminates the ambiguities associated with the routing of the internal lines momenta of loops. The conclusion is clean and sound: the association between ambiguities and unavoidable symmetry violations in Ward identities cannot be maintained if a unique recipe is required for identical situations in the evaluation of divergent physical amplitudes. (author)
Metric regularity and subdifferential calculus
International Nuclear Information System (INIS)
Ioffe, A D
2000-01-01
The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces
Dimensional regularization in configuration space
International Nuclear Information System (INIS)
Bollini, C.G.; Giambiagi, J.J.
1995-09-01
Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum-space Green functions. For this transformation, Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs
Regular algebra and finite machines
Conway, John Horton
2012-01-01
World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regular algebras, context-free languages, commutative regular alg
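One of the regular-events identities treated in this tradition, the denesting rule (a ∪ b)* = (a*b)*a*, can be spot-checked mechanically. The sketch below (an illustration of the idea, not an excerpt from the book) verifies the identity by brute force over all short strings on the alphabet {a, b}.

```python
import re
from itertools import product

# Denesting rule of regular events: (a|b)* = (a*b)*a*
lhs = re.compile(r'(?:a|b)*')
rhs = re.compile(r'(?:a*b)*a*')

# Exhaustively compare the two languages on all strings over {a, b}
# up to length 6 (2^0 + ... + 2^6 = 127 strings).
for length in range(7):
    for s in map(''.join, product('ab', repeat=length)):
        assert bool(lhs.fullmatch(s)) == bool(rhs.fullmatch(s))
print("identity holds on all strings up to length 6")
```

Brute-force checking is of course no substitute for the algebraic proofs the book develops, but it is a quick sanity check when manipulating regular expressions.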
Matrix regularization of 4-manifolds
Trzetrzelewski, M.
2012-01-01
We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...
Three regularities of recognition memory: the role of bias.
Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok
2015-12-01
A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
Regularization of Nonmonotone Variational Inequalities
International Nuclear Information System (INIS)
Konnov, Igor V.; Ali, M.S.S.; Mazurkevich, E.O.
2006-01-01
In this paper we extend the Tikhonov-Browder regularization scheme from monotone to rather a general class of nonmonotone multivalued variational inequalities. We show that their convergence conditions hold for some classes of perfectly and nonperfectly competitive economic equilibrium problems
Lattice regularized chiral perturbation theory
International Nuclear Information System (INIS)
Borasoy, Bugra; Lewis, Randy; Ouimet, Pierre-Philippe A.
2004-01-01
Chiral perturbation theory can be defined and regularized on a spacetime lattice. A few motivations are discussed here, and an explicit lattice Lagrangian is reviewed. A particular aspect of the connection between lattice chiral perturbation theory and lattice QCD is explored through a study of the Wess-Zumino-Witten term
2011-01-20
SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held at the offices of the Farm... The meeting of the Board will be open to the...
Forcing absoluteness and regularity properties
Ikegami, D.
2010-01-01
For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.
Globals of Completely Regular Monoids
Institute of Scientific and Technical Information of China (English)
Wu Qian-qian; Gan Ai-ping; Du Xian-kun
2015-01-01
An element of a semigroup S is called irreducible if it cannot be expressed as a product of two elements in S both distinct from itself. In this paper we show that the class C of all completely regular monoids with irreducible identity elements satisfies the strong isomorphism property and so it is globally determined.
Fluid queues and regular variation
Boxma, O.J.
1996-01-01
This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail gives rise to an even
Fluid queues and regular variation
O.J. Boxma (Onno)
1996-01-01
This paper considers a fluid queueing system, fed by $N$ independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index $\zeta$. We show that its fat tail
Empirical laws, regularity and necessity
Koningsveld, H.
1973-01-01
In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyses found in contemporary literature dealing with the subject.
I am referring especially to two well-known views, viz. the regularity and
Interval matrices: Regularity generates singularity
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří; Shary, S.P.
2018-01-01
Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords : interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016
Regularization in Matrix Relevance Learning
Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael
In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can
Regular and conformal regular cores for static and rotating solutions
Energy Technology Data Exchange (ETDEWEB)
Azreg-Aïnou, Mustapha
2014-03-07
Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.
Learning regularization parameters for general-form Tikhonov
International Nuclear Information System (INIS)
Chung, Julianne; Español, Malena I
2017-01-01
Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
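The empirical-Bayes-risk idea can be caricatured in a few lines: given training pairs of noisy data and true solutions, choose the Tikhonov parameter that minimizes the average reconstruction error over the training set. The sketch below is a grid search on a standard-form toy problem (the Hilbert-matrix operator, the smooth training solutions, and the noise level are all illustrative assumptions, not the authors' algorithm or data).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
# Classic ill-conditioned test operator: the n x n Hilbert matrix
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
L = np.eye(n)  # standard form; a derivative operator would give general form

def tikhonov(b, lam):
    """Solve min ||Ax - b||^2 + lam ||Lx||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

# Training data: smooth true solutions with noisy observations
train = []
for _ in range(10):
    x_true = np.sin(np.linspace(0, np.pi, n)) * rng.uniform(0.5, 2.0)
    b = A @ x_true + 1e-3 * rng.standard_normal(n)
    train.append((b, x_true))

# Empirical Bayes risk: average training error on a logarithmic grid of lam
grid = np.logspace(-12, 2, 40)
avg_err = [np.mean([np.linalg.norm(tikhonov(b, lam) - x) for b, x in train])
           for lam in grid]
lam_star = grid[int(np.argmin(avg_err))]
print("learned regularization parameter:", lam_star)
```

The learned `lam_star` can then be reused on new data from the same problem class, which is the point of the learning approach: the expensive parameter selection is amortized over many solves.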
Energy functions for regularization algorithms
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must verify certain properties such as invariance with Euclidean transformations or invariance with parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature for planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meet this condition as well as invariance with rotation and parameterization.
Physical model of dimensional regularization
Energy Technology Data Exchange (ETDEWEB)
Schonfeld, Jonathan F.
2016-12-15
We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
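The regularizer's central quantity, the mutual information between the classifier's response and the true label, can be estimated from joint label counts. The sketch below is a minimal plug-in estimator (an illustration of the quantity being maximized, not the authors' entropy-estimation scheme).

```python
import math
from collections import Counter

def mutual_information(y_pred, y_true):
    """Plug-in estimate (in nats) of I(response; label) from joint counts."""
    n = len(y_true)
    joint = Counter(zip(y_pred, y_true))
    p_pred = Counter(y_pred)
    p_true = Counter(y_true)
    mi = 0.0
    for (yp, yt), count in joint.items():
        p_xy = count / n
        mi += p_xy * math.log(p_xy / ((p_pred[yp] / n) * (p_true[yt] / n)))
    return mi

labels = [0, 0, 1, 1]
print(mutual_information(labels, labels))        # ln 2: response determines label
print(mutual_information([0, 1, 0, 1], labels))  # 0.0: response is uninformative
```

A classifier whose responses carry maximal information about the true labels drives this quantity toward the label entropy, which is the intuition behind adding it as a regularization term.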
Regularized strings with extrinsic curvature
International Nuclear Information System (INIS)
Ambjoern, J.; Durhuus, B.
1987-07-01
We analyze models of discretized string theories, where the path integral over world sheet variables is regularized by summing over triangulated surfaces. The inclusion of curvature in the action is a necessity for the scaling of the string tension. We discuss the physical properties of models with extrinsic curvature terms in the action and show that the string tension vanishes at the critical point where the bare extrinsic curvature coupling tends to infinity. Similar results are derived for models with intrinsic curvature. (orig.)
Circuit complexity of regular languages
Czech Academy of Sciences Publication Activity Database
Koucký, Michal
2009-01-01
Roč. 45, č. 4 (2009), s. 865-879 ISSN 1432-4350 R&D Projects: GA ČR GP201/07/P276; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : regular languages * circuit complexity * upper and lower bounds Subject RIV: BA - General Mathematics Impact factor: 0.726, year: 2009
General inverse problems for regular variation
DEFF Research Database (Denmark)
Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan
2014-01-01
Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...
Infants Learn Phonotactic Regularities from Brief Auditory Experience.
Chambers, Kyle E.; Onishi, Kristine H.; Fisher, Cynthia
2003-01-01
Two experiments investigated whether novel phonotactic regularities, not present in English, could be acquired by 16.5-month-olds from brief auditory experience. Subjects listened to consonant-vowel-consonant syllables in which particular consonants were artificially restricted to either initial or final position. Findings in a subsequent…
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng
2018-01-01
Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in the existing MFI methods. Interaction forces are complex because they contain both slowly varying harmonic signals and impact signals, due to bridge vibration and bumps on the bridge deck, respectively. Therefore, the interaction forces are usually hard to express completely and sparsely using a single basis function set. Based on a redundant concatenated dictionary and a weighted l1-norm regularization method, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions, used for matching the harmonic and impact signal features of the unknown moving forces. The weighted l1-norm regularization method is introduced for the formulation of the MFI equation, so that the signal features of the moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem. The optimal regularization parameter is chosen by the Bayesian information criterion (BIC) method. In order to assess the accuracy and feasibility of the proposed method, a simply supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in the laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with strong robustness, and that it has a better performance than the Tikhonov regularization method. Some related issues are discussed as well.
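A minimal FISTA solver for the weighted l1 problem min ½‖Ax − b‖² + λ Σᵢ wᵢ|xᵢ| can be sketched as below. This is a generic illustration of the algorithm named in the abstract; the dictionary `A`, the weights, and the sparse signal are toy stand-ins, not the bridge model or the trigonometric/rectangular dictionary of the paper.

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, w, lam, n_iter=500):
    """FISTA for min 0.5||Ax - b||^2 + lam * sum_i w_i |x_i| (weighted l1)."""
    Lc = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft(y - grad / Lc, lam * w / Lc)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# Toy problem: recover a 2-sparse signal from noiseless overdetermined data
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
w = np.ones(20)                             # uniform weights for simplicity
x_hat = fista(A, b, w, lam=0.01)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.5))
```

In the paper's setting the columns of `A` would be the concatenated trigonometric and rectangular dictionary atoms, and the weights would be tuned per atom; the iteration itself is unchanged.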
Discriminative Elastic-Net Regularized Linear Regression.
Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen
2017-03-01
In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to its weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework and develop two robust linear regression models with the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins between different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, so it can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as new discriminative representations for the final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB code of our methods is available at http://www.yongxu.org/lunwen.html.
Regularized Statistical Analysis of Anatomy
DEFF Research Database (Denmark)
Sjöstrand, Karl
2007-01-01
This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus...... and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge...... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications....
Regularization methods in Banach spaces
Schuster, Thomas; Hofmann, Bernd; Kazimierski, Kamil S
2012-01-01
Regularization methods aimed at finding stable approximate solutions are a necessary tool to tackle inverse and ill-posed problems. Usually the mathematical model of an inverse problem consists of an operator equation of the first kind, and often the associated forward operator acts between Hilbert spaces. However, for numerous problems the reasons for using a Hilbert space setting seem to be based on convention rather than on an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, sparsity constraints using general Lp-norms or the B
Academic Training Lecture - Regular Programme
PH Department
2011-01-01
Regular Lecture Programme 9 May 2011 ACT Lectures on Detectors - Inner Tracking Detectors by Pippa Wells (CERN) 10 May 2011 ACT Lectures on Detectors - Calorimeters (2/5) by Philippe Bloch (CERN) 11 May 2011 ACT Lectures on Detectors - Muon systems (3/5) by Kerstin Hoepfner (RWTH Aachen) 12 May 2011 ACT Lectures on Detectors - Particle Identification and Forward Detectors by Peter Krizan (University of Ljubljana and J. Stefan Institute, Ljubljana, Slovenia) 13 May 2011 ACT Lectures on Detectors - Trigger and Data Acquisition (5/5) by Dr. Brian Petersen (CERN) from 11:00 to 12:00 at CERN ( Bldg. 222-R-001 - Filtration Plant )
RES: Regularized Stochastic BFGS Algorithm
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
Regularized Label Relaxation Linear Regression.
Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu
2018-04-01
Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method with the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs a class compactness graph based on manifold learning and uses it as the regularization term to avoid overfitting. The class compactness graph ensures that samples sharing the same labels stay close after they are transformed. Two different algorithms, based on two different norm loss functions, are devised. Both algorithms have compact closed-form solutions in each iteration, so they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of classification accuracy and running time.
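The baseline that both of the regression papers above start from — least-squares regression onto a strict zero-one label matrix with a ridge penalty — has the closed form W = (XᵀX + λI)⁻¹XᵀY. A minimal sketch of that baseline on synthetic data (the relaxation matrix and compactness graph themselves are not reproduced; the data and λ are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3-class data: each class is a Gaussian cloud shifted along its own axis.
n_per, d, c = 30, 5, 3
X = np.vstack([rng.standard_normal((n_per, d)) + 3 * np.eye(c, d)[k]
               for k in range(c)])
labels = np.repeat(np.arange(c), n_per)
Y = np.eye(c)[labels]                        # strict zero-one target matrix

lam = 0.1
# Ridge closed form: W = (X^T X + lam*I)^{-1} X^T Y
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

pred = np.argmax(X @ W, axis=1)
print((pred == labels).mean())               # training accuracy
```

The relaxation methods replace the fixed `Y` with a slack target matrix updated in each iteration, which is what gives the labels "more freedom" while keeping a closed-form W-step of this same shape.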
International Nuclear Information System (INIS)
1993-09-01
This report forms part of the supporting documentation for the low- and intermediate-level waste repository site selection procedure. The aim of the report is to present the site-specific geological data, and the geosphere database derived therefrom, which were used as a basis for evaluating the long-term safety of a repository at Wellenberg. These data also form a key component of other reports appearing simultaneously with the present one, first on the intercomparison of the four potential sites, (NTB 93-02) and second, on the safety assessment of the Wellenberg site itself (NTB 93-26). The level of detail of the present report is determined by the requirements of the other two reports mentioned, which would include presenting, discussing and justifying the geosphere dataset used in the performance assessment model calculations. The introductory chapter discusses procedures and goals. The second chapter provides an overview of the geographical and geological situation at Wellenberg. Chapter 3 then discusses the planning and progress of the field programme, and the current status of investigations is presented. The fourth chapter presents the geological situation at the Wellenberg site and describes the concept and models formulated on the basis of this information. Chapter 5 derives the performance assessment and engineering datasets, based on the investigations, concepts and modelling exercises described in chapter 4. In summary, it can be said that, to date, the investigation results from Wellenberg have confirmed predictions in all relevant respects and, in some cases, have even exceeded expectations (e.g. in relation to the available volume of host rock). (author) figs., tabs., 141 refs
Tessellating the Sphere with Regular Polygons
Soto-Johnson, Hortensia; Bechthold, Dawn
2004-01-01
Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.
On the equivalence of different regularization methods
International Nuclear Information System (INIS)
Brzezowski, S.
1985-01-01
The R̂-operation preceded by the regularization procedure is discussed. Some arguments are given, according to which the results may depend on the method of regularization introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)
The uniqueness of the regularization procedure
International Nuclear Information System (INIS)
Brzezowski, S.
1981-01-01
On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)
Energy Technology Data Exchange (ETDEWEB)
Boehner, J; Haselein, F; Hoffmann, H; Klinge, M; Lehmkuhl, F
2001-07-01
During the research project, the scientific basis was established for the methodological coupling of GCM simulations and relief parametrisations in a spatially distributed downscaling scheme, and for the detection of climatically controlled geomorphologic process regions. The results of applying the downscaling procedure and the detected climatic determinants of the recent geomorphologic process regions serve as the actualistic basis for a proxy-based climatic reconstruction as well as for the prognosis of potential future climatic impacts on the environment of Central and High Mountain Asia. For the Last Glacial Maximum (LGM), the spatial distribution of temperature and precipitation in Central and High Mountain Asia was reconstructed and compared to the downscaling results of GCM paleo simulations (ECHAM). Because GCM-generated circulation variables and complex relief parameters can be parameterised directly for the regionalisation of climatic variables and geomorphologic process regions, validation of the ECHAM paleo simulations was also possible, by comparing the proxy-based reconstruction of the late Quaternary environment to the modelled environment derived from the ECHAM LGM simulations. For the assessment of potential future climatic impacts on the natural environment, alternative SRES emission scenarios are taken into account to detect the range of possible future changes in the distribution of Central Asian mountain belts and climatically controlled geomorphologic process regions. (orig.)
Application of Turchin's method of statistical regularization
Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey
2018-04-01
During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on a Bayesian approach to the regularization strategy.
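A minimal sketch of the general idea — stabilizing a deconvolution with a smoothness prior, in the spirit of Turchin's statistical regularization, though not the authors' full Bayesian implementation — assuming a Gaussian apparatus function and illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Forward model: convolution with a Gaussian apparatus function (illustrative).
n = 100
x = np.linspace(0, 1, n)
A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.03) ** 2)
A /= A.sum(axis=1, keepdims=True)             # row-normalized kernel matrix

signal = (np.exp(-0.5 * ((x - 0.3) / 0.05) ** 2)
          + 0.7 * np.exp(-0.5 * ((x - 0.7) / 0.08) ** 2))
data = A @ signal + 0.01 * rng.standard_normal(n)

# Naive inversion amplifies noise; regularization with a smoothness
# (second-difference) prior stabilizes it -- the prior plays the role of
# Turchin's a priori ensemble of smooth functions.
L = np.diff(np.eye(n), 2, axis=0)             # second-difference operator
alpha = 1e-4
recon = np.linalg.solve(A.T @ A + alpha * L.T @ L, A.T @ data)
naive = np.linalg.solve(A, data)              # unregularized, noise-dominated

print(np.linalg.norm(recon - signal), np.linalg.norm(naive - signal))
```

In the full statistical method the strength `alpha` is not hand-picked but inferred from the data via the posterior; here it is a fixed illustrative value.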
Regular extensions of some classes of grammars
Nijholt, Antinus
Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular
(2+1)-dimensional regular black holes with nonlinear electrodynamics sources
Directory of Open Access Journals (Sweden)
Yun He
2017-11-01
Full Text Available On the basis of two requirements: the avoidance of the curvature singularity and the Maxwell theory as the weak field limit of the nonlinear electrodynamics, we find two restricted conditions on the metric function of a (2+1)-dimensional regular black hole in general relativity coupled with nonlinear electrodynamics sources. By the use of the two conditions, we obtain a general approach to construct (2+1)-dimensional regular black holes. In this manner, we construct four (2+1)-dimensional regular black holes as examples. We also study the thermodynamic properties of the regular black holes and verify the first law of black hole thermodynamics.
The relationship between synchronization and percolation for regular networks
Li, Zhe; Ren, Tao; Xu, Yanjie; Jin, Jianyu
2018-02-01
Synchronization and percolation are two essential phenomena in complex dynamical networks. They have been studied widely, but previously treated as unrelated. In this paper, the relationship between synchronization and percolation is revealed for regular networks. First, we discover a bridge between synchronization and percolation by using the eigenvalues of the Laplacian matrix to describe the synchronizability and the eigenvalues of the adjacency matrix to describe the percolation threshold. Then, we propose a method to find the relationship for regular networks based on the topology of the networks. In particular, if the degree distribution of the network is a delta function, we show that only the eigenvalues of the adjacency matrix need to be calculated. Finally, several examples are provided to demonstrate how to apply our proposed method to discover the relationship between synchronization and percolation for regular networks.
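The bridge described above can be checked numerically on the simplest regular network, a k-nearest-neighbour ring lattice: because every node has the same degree k, the Laplacian and adjacency spectra are tied together by L = kI − A, so one spectrum determines the other. Sizes and degree are illustrative.

```python
import numpy as np

# k-nearest-neighbour ring lattice: every node connects to its k nearest
# neighbours (k/2 on each side), so the network is regular with degree k.
n, k = 20, 4
A = np.zeros((n, n))
for i in range(n):
    for d in range(1, k // 2 + 1):
        A[i, (i + d) % n] = A[i, (i - d) % n] = 1

Lap = np.diag(A.sum(axis=1)) - A              # graph Laplacian L = D - A

adj_eig = np.sort(np.linalg.eigvalsh(A))
lap_eig = np.sort(np.linalg.eigvalsh(Lap))

# Synchronizability is commonly summarized by the Laplacian eigenratio
# lambda_N / lambda_2 (smaller = easier to synchronize); a mean-field
# percolation threshold estimate is 1 / lambda_max(A).
print(lap_eig[-1] / lap_eig[1])               # eigenratio
print(1.0 / adj_eig[-1])                      # percolation threshold estimate
```

For any regular graph, lambda_max(A) equals the degree k, and each Laplacian eigenvalue is k minus an adjacency eigenvalue, which is exactly the kind of spectral link the paper exploits.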
The Matlab Radial Basis Function Toolbox
Directory of Open Access Journals (Sweden)
Scott A. Sarra
2017-03-01
Full Text Available Radial Basis Function (RBF) methods are important tools for scattered data interpolation and for the solution of Partial Differential Equations in complexly shaped domains. The most straightforward approach to evaluating the methods involves solving a linear system which is typically poorly conditioned. The Matlab Radial Basis Function toolbox features a regularization method for the ill-conditioned system, extended precision floating point arithmetic, and symmetry exploitation for the purpose of reducing flop counts of the associated numerical linear algebra algorithms.
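The conditioning problem and the regularization fix can be reproduced in a few lines outside MATLAB. The sketch below uses Gaussian RBFs with an illustrative shape parameter, and a small ridge term standing in for the toolbox's regularization method:

```python
import numpy as np

rng = np.random.default_rng(3)

# Gaussian RBF interpolation of scattered 1-D data. The system matrix is
# notoriously ill-conditioned for smooth (small shape parameter) RBFs.
xc = np.sort(rng.uniform(0, 1, 40))           # scattered centres
f = np.sin(2 * np.pi * xc)                    # sampled function values
eps = 3.0                                     # shape parameter (illustrative)
B = np.exp(-(eps * (xc[:, None] - xc[None, :])) ** 2)

print(np.linalg.cond(B))                      # very large

# Tikhonov/ridge regularization stabilizes the solve.
mu = 1e-8
coef = np.linalg.solve(B + mu * np.eye(len(xc)), f)

xe = np.linspace(0.05, 0.95, 200)             # evaluation points
Be = np.exp(-(eps * (xe[:, None] - xc[None, :])) ** 2)
err = np.max(np.abs(Be @ coef - np.sin(2 * np.pi * xe)))
print(err)                                    # small despite the conditioning
```

Despite a huge condition number, the regularized interpolant remains accurate because the sampled function lives almost entirely in the well-conditioned part of the spectrum; this is the effect the toolbox's regularization (and its extended-precision option) targets.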
Class of regular bouncing cosmologies
Vasilić, Milovan
2017-06-01
In this paper, I construct a class of everywhere regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so by employing one scalar field has failed due to the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that graviton mass can be made arbitrarily small.
Kaltenbacher, Barbara; Klassen, Andrej
2018-05-01
In this paper we provide a convergence analysis of some variational methods alternative to the classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of the residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and the Morozov method, convergence in the sense of regularization, as well as convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.
International Nuclear Information System (INIS)
Kalyanam, K.M.
1996-06-01
This document represents the final report for the global tritium source term analysis task initiated in 1995. The report presents a room-by-room map/table at the subsystem level for the ITER tritium systems, identifying the major equipment, secondary containments, tritium release sources, duration/frequency of tritium releases and the release pathways. The chronic tritium releases during normal operation, as well as tritium releases due to routine maintenance of the Water Distillation Unit, Isotope Separation System and Primary and Secondary Heat Transport Systems, have been estimated for most of the subsystems, based on the IDR design, the Design Description Documents (April - Jun 1995 issues) and the design updates up to December 1995. The report also outlines the methodology and the key assumptions that are adopted in preparing the tritium release estimates. The design parameters for the ITER Basic Performance Phase (BPP) have been used in estimating the tritium releases shown in the room-by-room map/table. The tritium release calculations and the room-by-room map/table have been prepared in EXCEL, so that the estimates can be refined easily as the design evolves and more detailed information becomes available. (author). 23 refs., tabs
International Nuclear Information System (INIS)
2011-05-01
In 2003, the Danish Parliament, in resolution No. B 48 on the dismantling of the nuclear facilities at Risoe, gave consent to the government to begin preparation of a decision basis for a Danish final repository for low and intermediate level waste. As a result, a working group under the Ministry of Health and Prevention in 2008 prepared the report 'Decision basis for a Danish final repository for low and medium level radioactive waste'. In this report it was recommended to prepare three parallel preliminary studies: one on repository concepts, with the aim of obtaining the decision-making basis needed to select which concepts to analyze in the process of establishing a final repository; one on transportation of radioactive waste to the depot; and one on regional mapping, with the aim of characterizing areas as suitable or unsuitable for locating a repository. The present report contains the main conclusions of each of the three parallel studies in relation to the further localization process. The preliminary studies suggest 22 areas, of which it is recommended to proceed with six in the selection process. The preliminary studies also show that all investigated storage concepts would be possible solutions from a safety standpoint. However, there will be greater risks associated with depots near the surface, because they are more susceptible to intentional or accidental intrusion. Overall, a medium-deep repository will be the most appropriate solution, but it is also more expensive than a near-surface repository. Both subsurface and deep repositories may be reversible, but this is estimated to increase overall costs and may increase risks related to accidents. The preliminary studies establish a set of conclusions and recommendations concerning future studies related to repository concepts and safety analyses, including in relation to the specific geology at the selected locations. The transportation studies show that radio
Reduction of Nambu-Poisson Manifolds by Regular Distributions
Das, Apurba
2018-03-01
The version of the Marsden-Ratiu reduction theorem for Nambu-Poisson manifolds by a regular distribution has been studied by Ibáñez et al. In this paper we show that the reduction is always ensured unless the distribution is zero. Next, we extend the more general Falceto-Zambon Poisson reduction theorem to Nambu-Poisson manifolds. Finally, we define gauge transformations of Nambu-Poisson structures and show that these transformations commute with the reduction procedure.
Probing community nurses' professional basis
DEFF Research Database (Denmark)
Schaarup, Clara; Pape-Haugaard, Louise; Jensen, Merete Hartun
2017-01-01
Complicated and long-lasting wound care of diabetic foot ulcers are moving from specialists in wound care at hospitals towards community nurses without specialist diabetic foot ulcer wound care knowledge. The aim of the study is to elucidate community nurses' professional basis for treating...... diabetic foot ulcers. A situational case study design was adopted in an archetypical Danish community nursing setting. Experience is a crucial component in the community nurses' professional basis for treating diabetic foot ulcers. Peer-to-peer training is the prevailing way to learn about diabetic foot...... ulcer, however, this contributes to the risk of low evidence-based practice. Finally, a frequent behaviour among the community nurses is to consult colleagues before treating the diabetic foot ulcers....
Sparsity-regularized HMAX for visual recognition.
Directory of Open Access Journals (Sweden)
Xiaolin Hu
Full Text Available About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models that explicitly address global structure of images in every layer, sparse HMAX addresses local to global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC or independent component analysis (ICA, two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in human inferior temporal cortex (ITC and medial temporal lobe (MTL. Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
Energy Technology Data Exchange (ETDEWEB)
Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); School of Life Sciences and Technology, Xidian University, Xi'an 710071 (China)]
2011-11-15
rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm demonstrated its computational efficiency over the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial guesses of the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrated the effectiveness of the proposed algorithm. Conclusions: Utilizing numerical, physical phantom and in vivo examples, we demonstrated that the bioluminescent sources could be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm exhibited superior performance to both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.
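As a hedged stand-in for automatic parameter choice (the paper's own adaptive scheme is not reproduced here), the sketch below selects a Tikhonov parameter by the discrepancy principle on an illustrative 1-D smoothing problem — one of the standard alternatives to the heuristic and L-curve methods mentioned above:

```python
import numpy as np

rng = np.random.default_rng(4)

# Ill-posed toy problem: a smoothing (Lorentzian-shaped) kernel.
# Everything here is illustrative -- not the bioluminescence forward model.
n = 60
s = np.linspace(0, 1, n)
A = 0.05 / ((s[:, None] - s[None, :]) ** 2 + 0.05 ** 2)
A /= A.sum(axis=1, keepdims=True)                  # row-normalized blur
x_true = np.maximum(0, 1 - np.abs(s - 0.5) / 0.2)  # triangular "source"
sigma = 1e-3
b = A @ x_true + sigma * rng.standard_normal(n)

# Discrepancy principle: choose the lambda whose residual norm best
# matches the known noise level.
target = sigma * np.sqrt(n)
best_lam, best_gap = None, np.inf
for lam in np.logspace(-10, 0, 60):
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    gap = abs(np.linalg.norm(A @ x - b) - target)
    if gap < best_gap:
        best_lam, best_gap = lam, gap

x_star = np.linalg.solve(A.T @ A + best_lam * np.eye(n), A.T @ b)
print(best_lam)
print(np.linalg.norm(x_star - x_true) / np.linalg.norm(x_true))
```

The L-curve method replaces the known noise level with the corner of the residual-norm versus solution-norm curve over the same lambda grid; adaptive methods like the paper's avoid scanning a grid at all.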
Adaptive regularization of noisy linear inverse problems
DEFF Research Database (Denmark)
Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue
2006-01-01
In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value under the posterior and prior distributions. We present three examples: two simulations, and an application in fMRI neuroimaging.
Higher derivative regularization and chiral anomaly
International Nuclear Information System (INIS)
Nagahama, Yoshinori.
1985-02-01
A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)
Regularity effect in prospective memory during aging
Directory of Open Access Journals (Sweden)
Geoffrey Blondelle
2016-10-01
Full Text Available Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, and shifting), binding, short-term memory, and retrospective episodic memory, to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling regular activities involved only planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical
Regularity effect in prospective memory during aging
Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique
2016-01-01
Regularization and error assignment to unfolded distributions
Zech, Gunter
2011-01-01
The commonly used approach to present unfolded data only in graphical form with the diagonal error depending on the regularization strength is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data, and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization, and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.
International Nuclear Information System (INIS)
R.J. Garrett
2002-01-01
As part of the internal Integrated Safety Management Assessment verification process, it was determined that there was a lack of documentation that summarizes the safety basis of the current Yucca Mountain Project (YMP) site characterization activities. It was noted that a safety basis would make it possible to establish a technically justifiable graded approach to the implementation of the requirements identified in the Standards/Requirements Identification Document. The Standards/Requirements Identification Documents commit a facility to compliance with specific requirements and, together with the hazard baseline documentation, provide a technical basis for ensuring that the public and workers are protected. This Safety Basis Report has been developed to establish and document the safety basis of the current site characterization activities, establish and document the hazard baseline, and provide the technical basis for identifying structures, systems, and components (SSCs) that perform functions necessary to protect the public, the worker, and the environment from hazards unique to the YMP site characterization activities. This technical basis for identifying SSCs serves as a grading process for the implementation of programs such as Conduct of Operations (DOE Order 5480.19) and the Suspect/Counterfeit Items Program. In addition, this report provides a consolidated summary of the hazards analyses processes developed to support the design, construction, and operation of the YMP site characterization facilities and, therefore, provides a tool for evaluating the safety impacts of changes to the design and operation of the YMP site characterization activities
A regularization method for extrapolation of solar potential magnetic fields
Gary, G. A.; Musielak, Z. E.
1992-01-01
The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
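A one-dimensional caricature of the scheme (not the authors' formulation): the boundary data are smoothed with a Gaussian filter in Fourier space before the exponentially growing continuation is applied, so measurement noise at high wavenumbers no longer overwhelms the extrapolated field. The filter width, height, and test field below are invented.

```python
import numpy as np

def continue_upward(bz0, dx, z, sigma):
    """Continue boundary data bz0(x) a height z along the unstable
    (exponentially growing) branch of the Cauchy problem.

    sigma > 0 applies Tikhonov-style smoothing, exp(-(sigma*k)^2 / 2),
    to the initial data before the growth factor exp(|k| z) is applied.
    """
    k = 2 * np.pi * np.fft.fftfreq(len(bz0), d=dx)
    modes = np.fft.fft(bz0)
    modes = modes * np.exp(-0.5 * (sigma * k) ** 2)   # regularize the data
    return np.fft.ifft(modes * np.exp(np.abs(k) * z)).real

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
dx = x[1] - x[0]
rng = np.random.default_rng(0)
noisy = np.cos(x) + 0.01 * rng.standard_normal(x.size)  # "magnetogram" + noise
exact = np.e * np.cos(x)   # growth of the single k = 1 mode at z = 1

raw = continue_upward(noisy, dx, z=1.0, sigma=0.0)   # unregularized: blows up
reg = continue_upward(noisy, dx, z=1.0, sigma=0.5)   # smoothed: stays close
```

With sigma = 0 the noise in the highest modes is amplified by exp(|k|z) and swamps the answer; the Gaussian filter caps that amplification at the cost of a small bias in the retained low-k signal, which is the trade-off behind the error bound mentioned in the abstract.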
Geosocial process and its regularities
Vikulina, Marina; Vikulin, Alexander; Dolgaya, Anna
2015-04-01
Natural disasters and social events (wars, revolutions, genocides, epidemics, fires, etc.) accompany each other throughout human civilization, reflecting the close relationship of these seemingly different phenomena. In order to study this relationship, the authors compiled and analyzed a list of 2,400 natural disasters and social phenomena, weighted by magnitude, that occurred during the last 36 centuries of our history. Statistical analysis was performed separately for each aggregate (natural disasters and social phenomena) and for particular statistically representative types of events; there were 5 + 5 = 10 such types. It is shown that the events in the list are distributed by a logarithmic law: the bigger the event, the less likely it is to happen. For each type of event and for each aggregate, the existence of periodicities with periods of 280 ± 60 years was established. Statistical analysis of the time intervals between adjacent events for both aggregates showed good agreement with the Weibull-Gnedenko distribution with a shape parameter less than 1, which is equivalent to the conclusion that events group at small time intervals. Modeling the statistics of time intervals with a Pareto distribution allowed the authors to identify an emergent property of all events in the aggregate. This result allowed the authors to conclude that natural disasters and social phenomena interact. The list of events compiled by the authors, and the properties of cyclicity, grouping and interaction first identified from it, form the basis for modeling an essentially unified geosocial process at a sufficiently high statistical level. Proof of interaction between "lifeless" Nature and Society is fundamental and provides a new approach to forecasting demographic crises that takes into account both natural disasters and social phenomena.
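The link the abstract draws between a Weibull shape parameter below 1 and the grouping of events at small time intervals can be sketched numerically: for shape k < 1, the coefficient of variation (CV) of inter-event times exceeds 1, the value for a memoryless (exponential) process. The shape value 0.6 and sample sizes below are illustrative choices, not estimates from the paper.

```python
import math
import numpy as np

def sample_weibull(shape, size, rng):
    """Unit-scale Weibull samples via the inverse-CDF transform."""
    return (-np.log(1.0 - rng.uniform(size=size))) ** (1.0 / shape)

rng = np.random.default_rng(1)
clustered = sample_weibull(0.6, 100_000, rng)     # shape < 1: grouped events
memoryless = sample_weibull(1.0, 100_000, rng)    # shape = 1: exponential

def cv(t):
    """Coefficient of variation of inter-event intervals."""
    return t.std() / t.mean()

# Theoretical CV of Weibull(k): sqrt(Gamma(1 + 2/k) / Gamma(1 + 1/k)^2 - 1)
g = math.gamma
cv_theory = math.sqrt(g(1 + 2 / 0.6) / g(1 + 1 / 0.6) ** 2 - 1)  # about 1.76
```

A CV above 1 means short gaps dominate relative to a Poisson stream of the same rate, i.e. the events cluster, which is the "grouping at small time intervals" the authors infer from the fitted shape parameter.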
Energy Technology Data Exchange (ETDEWEB)
Schroeder, B.
2002-01-22
- and p-type emitters have been fabricated. After a very short development time, conversion efficiencies have been obtained (η_max = 15.2%) which are reported for PECVD emitters. (orig.) [Translated from German:] Two new systems for the complete and for the large-area deposition of a-Si:H-based solar cells by so-called 'hot-wire (HW)' CVD were set up. The deposition conditions for suitable n- and p-doped a-Si:H and μc-Si:H layers were determined. For the first time worldwide, an a-Si:H pin cell was produced entirely by the HWCVD method; an initial efficiency of η_initial = 8.9% was achieved. After development of a p/n tunnel (recombination) junction, it also proved possible, again for the first time worldwide, to deposit pin-pin tandem structures with a-Si:H absorbers entirely by HWCVD. After partial degradation, efficiencies of η ≈ 7% were still measured. In general, the stability of the all-HWCVD cells is still unsatisfactory, which could be attributed to structurally unstable p-layers. The first nip solar cells on stainless-steel substrates were likewise prepared entirely by HWCVD (η_initial > 6%). The incorporation of μc-Si:H absorber layers, produced by HWCVD or ECWR-PECVD, into pin solar cells has so far met with little success. In a system for large-area HWCVD deposition, a-Si:H layers of good quality and with a thickness uniformity of Δd = ±2.5% were produced. For so-called 'hopping cells', in which only the i-layer was deposited in this system, very uniform initial efficiencies of η_initial = 6.1 ± 0.2% were also achieved for small-area cells on an area of 20 x 20 cm². These results can be regarded as a proof of concept for large-area HWCVD deposition of a-Si:H-based solar cells. For the first time, HWCVD was used to deposit emitter layers for heterojunction solar cells on a c-Si wafer basis
BWR NSSS design basis documentation
International Nuclear Information System (INIS)
Vij, R.S.; Bates, R.E.
2004-01-01
programs that GE has participated in and describes the different options and approaches that have been used by various utilities in their design basis programs. Some of these variations deal with the scope and depth of coverage of the information, while others are related to the process (how the work is done). Both of these topics can have a significant effect on the program cost. Some insight into these effects is provided. The final section of the paper presents a set of lessons learned and a recommendation for an optimum approach to a design basis information program. The lessons learned reflect the knowledge that GE has gained by participating in design basis programs with nineteen domestic and international BWR owner/operators. The optimum approach described in this paper is GE's attempt to define a set of information and a work process for a utility/GE NSSS Design Basis Information program that will maximize the cost effectiveness of the program for the utility. (author)
On infinite regular and chiral maps
Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán
2015-01-01
We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.
From recreational to regular drug use
DEFF Research Database (Denmark)
Järvinen, Margaretha; Ravn, Signe
2011-01-01
This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...
Automating InDesign with Regular Expressions
Kahrel, Peter
2006-01-01
If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.
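The kind of pattern-based find-and-replace the book teaches can be illustrated outside InDesign; the snippet below uses Python's re module (not InDesign's GREP dialect) on a made-up typesetting cleanup task.

```python
import re

# Collapse the space between a number and a measurement unit: "12 pt" -> "12pt".
# Group 1 captures the digits, group 2 the unit; \1\2 rejoins them.
text = "Set the body to 12 pt and captions to 9 pt."
fixed = re.sub(r"(\d+)\s+(pt|mm|in)\b", r"\1\2", text)
```

The same capture-group-and-backreference idea carries over directly to InDesign's GREP search fields, which is what the scripting in the book automates.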
Regularization modeling for large-eddy simulation
Geurts, Bernardus J.; Holm, D.D.
2003-01-01
A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of
2010-07-01
... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...
Regularized semiclassical limits: Linear flows with infinite Lyapunov exponents
Athanassoulis, Agissilaos; Katsaounis, Theodoros; Kyza, Irene
2016-01-01
Semiclassical asymptotics for Schrödinger equations with non-smooth potentials give rise to ill-posed formal semiclassical limits. These problems have attracted a lot of attention in the last few years, as a proxy for the treatment of eigenvalue crossings, i.e. general systems. It has recently been shown that the semiclassical limit for conical singularities is in fact well-posed, as long as the Wigner measure (WM) stays away from singular saddle points. In this work we develop a family of refined semiclassical estimates, and use them to derive regularized transport equations for saddle points with infinite Lyapunov exponents, extending the aforementioned recent results. In the process we answer a related question posed by P.L. Lions and T. Paul in 1993. If we consider more singular potentials, our rigorous estimates break down. To investigate whether conical saddle points, such as -|x|, admit a regularized transport asymptotic approximation, we employ a numerical solver based on a posteriori error control. Thus rigorous upper bounds for the asymptotic error in concrete problems are generated. In particular, specific phenomena which render invalid any regularized transport for -|x| are identified and quantified. In that sense our rigorous results are sharp. Finally, we use our findings to formulate a precise conjecture for the condition under which conical saddle points admit a regularized transport solution for the WM. © 2016 International Press.
Poisson image reconstruction with Hessian Schatten-norm regularization.
Lefkimmiatis, Stamatios; Unser, Michael
2013-11-01
Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
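The stated link between the proximal map of an ℓp norm and that of the Schatten norm of order p can be made concrete for p = 1: the prox of the nuclear (Schatten-1) norm applies ℓ1 soft-thresholding to the singular values. The 2×2 matrix below is an arbitrary illustration, not data from the paper.

```python
import numpy as np

def prox_l1(x, t):
    """Proximal map of t * ||x||_1: elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_nuclear(X, t):
    """Proximal map of t * ||X||_S1: apply the l1 prox to the singular
    values and rebuild the matrix from the same singular vectors."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(prox_l1(s, t)) @ Vt

X = np.array([[3.0, 1.0],
              [1.0, 2.0]])       # stand-in for one Hessian block
Y = prox_nuclear(X, 0.5)
s_after = np.linalg.svd(Y, compute_uv=False)
```

For this symmetric positive-definite X the singular values are (5 ± √5)/2, so after thresholding by 0.5 they shrink by exactly that amount; this spectrum-level reduction is what makes the matrix prox cheap inside an ADMM iteration.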
Ethical aspects of final disposal. Final report
International Nuclear Information System (INIS)
Baltes, B.; Leder, W.; Achenbach, G.B.; Spaemann, R.; Gerhardt, V.
2003-01-01
In fulfilment of this task the Federal Environmental Ministry has commissioned GRS to summarise the current national and international status of ethical aspects of the final disposal of radioactive wastes as part of the project titled ''Final disposal of radioactive wastes as seen from the viewpoint of ethical objectives''. The questions arising from the opinions, positions and publications presented in the report by GRS were to serve as a basis for an expert discussion or an interdisciplinary discussion forum for all concerned with the ethical aspects of an answerable approach to the final disposal of radioactive wastes. In April 2001 GRS held a one-day seminar at which leading ethicists and philosophers offered statements on the questions referred to above and joined in a discussion with experts on issues of final disposal. This report documents the questions that arose ahead of the workshop, the specialist lectures held there and a summary of the discussion results [de
SENTINEL trademark technical basis report for Peach Bottom. Final report
International Nuclear Information System (INIS)
1998-04-01
PECO Energy in cooperation with the Electric Power Research Institute (EPRI) installed the SENTINEL trademark software at its Peach Bottom Atomic Power Station (PBAPS). This software incorporates models of the safety and support systems which are used to display the defense in depth present in the plant and a quantitative assessment of the plant risks during proposed on-line maintenance. During the past nine months, PECO Energy personnel have used this display to evaluate the safety of proposed on-line maintenance schedules. The report describes the motivation for and the development of the SENTINEL software. It describes the generation of Safety Function Assessment Trees and Plant Transient Assessment Trees and their use in evaluating the level of defense-in-depth of key plant safety functions and the susceptibility of the plant to critical transient events. Their results are displayed by color indicators ranging from green, through yellow and orange, to red to show increasingly hazardous conditions. The report describes the use of the PBAPS Probabilistic Safety Assessment within the SENTINEL code to calculate an instantaneous core damage frequency and the criteria by which this frequency is translated to a color indicator
The Social Basis of Math Teaching and Learning. Final Report.
Orvik, James M.; Van Veldhuizen, Philip A.
This study was designed to identify a set of research questions and testable hypotheses to aid in planning long-range research. Five mathematics teachers were selected. These instructors enrolled in a special project-related seminar, video-taped sessions of their own mathematics classes, and kept field journals. The group met once a week to…
An iterative method for Tikhonov regularization with a general linear regularization operator
Hochstenbach, M.E.; Reichel, L.
2010-01-01
Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan
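For context, the non-iterative statement of the problem the method addresses, min ||Ax − b||² + λ²||Lx||² with a general regularization operator L, can be solved by stacking the two terms into one least-squares system. This is a generic illustration with an invented test problem, not the Golub-Kahan-based iteration of the paper.

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 by stacking
    [A; lam L] x ~ [b; 0] and solving one ordinary least-squares problem."""
    K = np.vstack([A, lam * L])
    rhs = np.concatenate([b, np.zeros(L.shape[0])])
    return np.linalg.lstsq(K, rhs, rcond=None)[0]

# Invented test problem: Hilbert matrix (severely ill-conditioned),
# smooth exact solution, small data noise, first-difference operator L.
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-5 * np.random.default_rng(0).standard_normal(n)
L = np.eye(n)[:-1] - np.eye(n, k=1)[:-1]     # discrete gradient, (n-1) x n

x_naive = np.linalg.pinv(A, rcond=0) @ b     # no regularization at all
x_reg = tikhonov(A, b, L, lam=1e-3)
err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

Choosing L as a difference operator penalizes rough solutions rather than large ones; the iterative method in the paper avoids forming the stacked system explicitly, which matters when A is large.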
Makarova, A. N.; Makarov, E. I.; Zakharov, N. S.
2018-03-01
In the article, the issue of correcting engineering servicing regularity on the basis of actual dependability data of cars in operation is considered. The purpose of the research is to increase the dependability of transport-technological machines by correcting the engineering servicing regularity. The subject of the research is the mechanism by which engineering servicing regularity influences the reliability measure. Based on an analysis of previous research, a method of nonparametric estimation of the car failure measure from actual time-to-failure data was chosen. The possibility of describing the dependence of the failure measure on engineering servicing regularity by various mathematical models is considered, and it is shown that the exponential model is the most appropriate for this purpose. The obtained results can be used as a stand-alone method of correcting engineering servicing regularity under given operational conditions, as well as for improving the technical-economical and economical-stochastic methods. Thus, on the basis of the conducted research, a method of correcting the engineering servicing regularity of transport-technological machines during operation was developed. The use of this method will allow decreasing the number of failures.
Multiple graph regularized protein domain ranking
Wang, Jim Jing-Yan
2012-11-19
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.
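A minimal sketch of graph-regularized ranking with several graphs (not the authors' MultiG-Rank implementation): scores f minimize ||f − y||² + α fᵀLf, where L is a weighted combination of candidate graph Laplacians. Here the graph weights are fixed by hand, whereas MultiG-Rank learns them jointly with the scores, and the toy graphs are invented.

```python
import numpy as np

def laplacian(W):
    """Unnormalized graph Laplacian of a symmetric affinity matrix."""
    return np.diag(W.sum(axis=1)) - W

def multi_graph_rank(y, laplacians, weights, alpha=1.0):
    """Ranking scores f minimizing ||f - y||^2 + alpha * f^T L f,
    with L a fixed convex combination of candidate Laplacians.
    Closed form: f = (I + alpha L)^{-1} y."""
    L = sum(w * Li for w, Li in zip(weights, laplacians))
    return np.linalg.solve(np.eye(len(y)) + alpha * L, y)

# Five domains, query is item 0; items 0-2 form a cluster in graph 1,
# while graph 2 is a deliberately uninformative pairing.
W1 = np.array([[0, 1, 1, 0, 0],
               [1, 0, 1, 0, 0],
               [1, 1, 0, 0, 0],
               [0, 0, 0, 0, 1],
               [0, 0, 0, 1, 0]], dtype=float)
W2 = np.eye(5)[::-1].copy()
y = np.array([1.0, 0.0, 0.0, 0.0, 0.0])      # query indicator vector

f = multi_graph_rank(y, [laplacian(W1), laplacian(W2)], weights=[0.9, 0.1])
```

The relevance scores diffuse from the query along the combined graph, so the query's cluster-mates outrank unrelated items; replacing the fixed weights with the paper's alternating minimization is what makes the combination robust to a badly chosen graph.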
Hierarchical regular small-world networks
International Nuclear Information System (INIS)
Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan
2008-01-01
Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log₂ N²), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. This suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)
Coupling regularizes individual units in noisy populations
International Nuclear Information System (INIS)
Ly Cheng; Ermentrout, G. Bard
2010-01-01
The regularity of a noisy system can modulate in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
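For the O-U case the claim can be verified in closed form. With dX = (−X + c(Y−X))dt + σ1 dW1 and dY = (−Y + c(X−Y))dt + σ2 dW2, the sum and difference coordinates decouple, giving Var(X) = [(σ1² + σ2²)/2 · (1 + 1/(1+2c)) + 2(σ1² − σ2²)/(2+2c)]/4, which for σ1 = 1, σ2 = 1.2 (Y noisier) and c = 5 is ≈ 0.314, below the uncoupled value σ1²/2 = 0.5. The parameter values are illustrative, not taken from the paper; a short Euler-Maruyama run confirms the reduction.

```python
import numpy as np

def stationary_var_x(c, s1=1.0, s2=1.2, dt=0.005, steps=500_000, seed=2):
    """Euler-Maruyama for two diffusively coupled O-U processes; returns
    the empirical stationary variance of X, the less noisy unit."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    xs = np.empty(steps)
    sq = np.sqrt(dt)
    for i in range(steps):
        w1, w2 = rng.standard_normal(2)
        x += (-x + c * (y - x)) * dt + s1 * sq * w1
        y += (-y + c * (x - y)) * dt + s2 * sq * w2
        xs[i] = x
    return xs[steps // 10:].var()      # drop the initial transient

var_uncoupled = stationary_var_x(c=0.0)   # analytic value: s1^2 / 2 = 0.5
var_coupled = stationary_var_x(c=5.0)     # analytic value: ~0.314
```

So even though X is tethered to a strictly noisier partner, strong diffusive coupling lowers its stationary variance, which is the counterintuitive regularization effect the abstract describes.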
Theoretical basis for dosimetry
International Nuclear Information System (INIS)
Carlsson, G.A.
1985-01-01
Radiation dosimetry is fundamental to all fields of science dealing with radiation effects and is concerned with problems which are often intricate as hinted above. A firm scientific basis is needed to face increasing demands on accurate dosimetry. This chapter is an attempt to review and to elucidate the elements for such a basis. Quantities suitable for radiation dosimetry have been defined in the unique work to coordinate radiation terminology and usage by the International Commission on Radiation Units and Measurements, ICRU. Basic definitions and terminology used in this chapter conform with the recent ''Radiation Quantities and Units, Report 33'' of the ICRU
Object Tracking via 2DPCA and l2-Regularization
Directory of Open Access Journals (Sweden)
Haijun Wang
2016-01-01
Full Text Available We present a fast and robust object tracking algorithm using 2DPCA and l2-regularization in a Bayesian inference framework. First, we model the challenging appearance of the tracked object using 2DPCA bases, which exploit the strength of subspace representation. Second, we adopt l2-regularization to solve the proposed representation model and remove the trivial templates used in sparse tracking methods, which yields faster tracking. Finally, we present a novel likelihood function that considers the reconstruction error derived from the orthogonal left-projection and right-projection matrices. Experimental results on several challenging image sequences demonstrate that the proposed method achieves favorable performance against state-of-the-art tracking algorithms.
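The speed advantage of l2-regularized representation over sparse (l1) coding comes from its closed-form solution. Below is a generic sketch of that coding step with invented template data, not the tracker's actual appearance model.

```python
import numpy as np

def ridge_code(A, y, lam=0.1):
    """Code observation y over template matrix A by
    min ||A c - y||^2 + lam ||c||^2; closed form, no iterative solver."""
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ y)

rng = np.random.default_rng(3)
templates = rng.standard_normal((64, 10))                  # 10 templates
target = templates[:, 0] + 0.05 * rng.standard_normal(64)  # template 0 + noise

c = ridge_code(templates, target)
recon_err = np.linalg.norm(templates @ c - target)
```

One small linear solve per candidate replaces the iterative optimization an l1 penalty would require, which is what makes evaluating many candidate windows per frame affordable.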
Diagrammatic methods in phase-space regularization
International Nuclear Information System (INIS)
Bern, Z.; Halpern, M.B.; California Univ., Berkeley
1987-11-01
Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the nogrowth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)
J-regular rings with injectivities
Shen, Liang
2010-01-01
A ring $R$ is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.
Iterative regularization in intensity-modulated radiation therapy optimization
International Nuclear Information System (INIS)
Carlsson, Fredrik; Forsgren, Anders
2006-01-01
A common way to solve intensity-modulated radiation therapy (IMRT) optimization problems is to use a beamlet-based approach. The approach is usually employed in a three-step manner: first a beamlet-weight optimization problem is solved, then the fluence profiles are converted into step-and-shoot segments, and finally postoptimization of the segment weights is performed. A drawback of beamlet-based approaches is that beamlet-weight optimization problems are ill-conditioned and have to be regularized in order to produce smooth fluence profiles that are suitable for conversion. The purpose of this paper is twofold: first, to explain the suitability of solving beamlet-based IMRT problems by a BFGS quasi-Newton sequential quadratic programming method with diagonal initial Hessian estimate, and second, to empirically show that beamlet-weight optimization problems should be solved in relatively few iterations when using this optimization method. The explanation of the suitability is based on viewing the optimization method as an iterative regularization method. In iterative regularization, the optimization problem is solved approximately by iterating long enough to obtain a solution close to the optimal one, but terminating before too much noise occurs. Iterative regularization requires an optimization method that initially proceeds in smooth directions and makes rapid initial progress. Solving ten beamlet-based IMRT problems with dose-volume objectives and bounds on the beamlet-weights, we find that the considered optimization method fulfills the requirements for performing iterative regularization. After segment-weight optimization, the treatments obtained using 35 beamlet-weight iterations outperform the treatments obtained using 100 beamlet-weight iterations, both in terms of objective value and of target uniformity. We conclude that iterating too long may in fact deteriorate the quality of the deliverable plan
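The "iterate long enough, but terminate before too much noise occurs" behavior described here is the classic semiconvergence of iterative regularization, and can be reproduced on a generic discrete ill-posed problem (a Hilbert matrix, not an IMRT system; CGLS stands in for the quasi-Newton method of the paper).

```python
import numpy as np

def cgls(A, b, iters):
    """CG applied to the normal equations A^T A x = A^T b; returns iterates."""
    x = np.zeros(A.shape[1])
    r = b.copy()
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    iterates = []
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
        iterates.append(x.copy())
    return iterates

# Invented discrete ill-posed problem: Hilbert matrix, smooth solution, noise.
n = 24
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-3 * np.random.default_rng(4).standard_normal(n)

errors = [np.linalg.norm(x - x_true) for x in cgls(A, b, 30)]
best = int(np.argmin(errors))   # semiconvergence: error dips, then rises
```

The error to the true solution decreases for the first iterations, reaches a minimum, and then grows as the iterates start fitting amplified noise, which mirrors why 35 beamlet-weight iterations can outperform 100 in the study above.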
DEFF Research Database (Denmark)
Tsapatsaris, Nikolaos; Willendrup, Peter Kjær; E. Lechner, Ruep
2015-01-01
Results based on virtual instrument models for the first high-flux, high-resolution, spallation based, backscattering spectrometer, BASIS are presented in this paper. These were verified using the Monte Carlo instrument simulation packages McStas and VITESS. Excellent agreement of the neutron count...... are pivotal to the conceptual design of the next generation backscattering spectrometer, MIRACLES at the European Spallation Source....
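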
Generalized regular genus for manifolds with boundary
Directory of Open Access Journals (Sweden)
Paola Cristofori
2003-05-01
Full Text Available We introduce a generalization of the regular genus, a combinatorial invariant of PL manifolds ([10]), which is proved to be strictly related, in dimension three, to generalized Heegaard splittings defined in [12].
Geometric regularizations and dual conifold transitions
International Nuclear Information System (INIS)
Landsteiner, Karl; Lazaroiu, Calin I.
2003-01-01
We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)
Fast and compact regular expression matching
DEFF Research Database (Denmark)
Bille, Philip; Farach-Colton, Martin
2008-01-01
We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how...... to improve the space and/or remove a dependency on the alphabet size for each problem using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way....
Regular-fat dairy and human health
DEFF Research Database (Denmark)
Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas
2016-01-01
In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular fat dairy products and human health. In an effort to......, cheese and yogurt, can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted....
Deterministic automata for extended regular expressions
Directory of Open Access Journals (Sweden)
Syzdykov Mirzakhmet
2017-12-01
Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of “overriding” the source NFA (an NFA not defined by the subset construction rules) is used. Past work described only the algorithm for the AND-operator (the intersection of regular languages); in this paper the construction for the MINUS-operator (and complement) is shown.
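The paper's NFA-"overriding" construction is not reproduced here, but the textbook alternative it relates to, the product construction for the AND-operator on DFAs, can be sketched as follows (the DFA encoding and the two example languages are our own illustration):

```python
from itertools import product

def dfa_intersection(d1, d2):
    """Product construction: the result accepts L(d1) intersected with L(d2).
    Each DFA is (states, alphabet, delta, start, accepting), with delta a
    dict mapping (state, symbol) -> state."""
    states1, alpha, delta1, s1, f1 = d1
    states2, _, delta2, s2, f2 = d2
    delta = {((p, q), a): (delta1[(p, a)], delta2[(q, a)])
             for p, q in product(states1, states2) for a in alpha}
    accepting = {(p, q) for p in f1 for q in f2}
    return (set(product(states1, states2)), alpha, delta, (s1, s2), accepting)

def accepts(dfa, word):
    _, _, delta, state, accepting = dfa
    for a in word:
        state = delta[(state, a)]
    return state in accepting

# DFA for "even number of a's" and DFA for "ends with b" over {a, b}.
even_a = ({0, 1}, {'a', 'b'},
          {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}, 0, {0})
ends_b = ({0, 1}, {'a', 'b'},
          {(0, 'a'): 0, (1, 'a'): 0, (0, 'b'): 1, (1, 'b'): 1}, 0, {1})

both = dfa_intersection(even_a, ends_b)
print(accepts(both, "aab"), accepts(both, "ab"))  # True False
```

The product automaton has |Q1|·|Q2| states, which is exactly the blow-up that specialized constructions such as the paper's try to manage.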
Regularities of intermediate adsorption complex relaxation
International Nuclear Information System (INIS)
Manukova, L.A.
1982-01-01
The experimental data characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N2 system at 77 K are given. The molecular beam method has been used in the investigation. Analytical expressions are obtained for the regular change, during relaxation, of the full and specific rates of transition from the intermediate state into the ''non-reversible'' state, of desorption into the gas phase, and of accumulation of particles in the intermediate state.
Online Manifold Regularization by Dual Ascending Procedure
Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui
2013-01-01
We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transfer manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches...
International Nuclear Information System (INIS)
Blanco, M.; Heller, E.J.
1985-01-01
A new Cartesian basis set is defined that is suitable for the representation of molecular vibration-rotation bound states. The Cartesian basis functions are superpositions of semiclassical states generated through the use of classical trajectories that conform to the intrinsic dynamics of the molecule. Although semiclassical input is employed, the method becomes ab initio through the standard matrix diagonalization variational method. Special attention is given to classical-quantum correspondences for angular momentum. In particular, it is shown that the use of semiclassical information preferentially leads to angular momentum eigenstates with magnetic quantum number |M| equal to the total angular momentum J. The present method offers a reliable technique for representing highly excited vibrational-rotational states where perturbation techniques are no longer applicable.
Energy Technology Data Exchange (ETDEWEB)
Larsen, G.; Soerensen, P. [Risoe National Lab., Roskilde (Denmark)
1996-09-01
Design Basis Program 2 (DBP2) is a comprehensive, fully coupled code which has the capability to operate in the time domain as well as in the frequency domain. The code was developed during the period 1991-93 and succeeds Design Basis 1, which is a one-blade model presuming a stiff tower, transmission system and hub. The package is designed for use on a personal computer and offers a user-friendly environment based on menu-driven editing and control facilities, with graphics used extensively for data presentation. Moreover, input data as well as results are dumped to files in ASCII format. The input data is organized in an input database with a structure that easily allows for arbitrary combinations of defined structural components and load cases. (au)
Directory of Open Access Journals (Sweden)
Wei Gao
2016-01-01
Full Text Available According to the regularization method in the inverse problem of load identification, a new method for determining the optimal regularization parameter is proposed. Firstly, the quotient function (QF) is defined by utilizing the regularization parameter as a variable based on the least squares solution of the minimization problem. Secondly, the quotient function method (QFM) is proposed to select the optimal regularization parameter based on the quadratic programming theory. For employing the QFM, the characteristics of the values of QF with respect to the different regularization parameters are taken into consideration. Finally, numerical and experimental examples are utilized to validate the performance of the QFM. Furthermore, the Generalized Cross-Validation (GCV) method and the L-curve method are taken as the comparison methods. The results indicate that the proposed QFM is adaptive to different measuring points, noise levels, and types of dynamic load.
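The QFM itself is specific to the paper, but one of its comparison methods, Generalized Cross-Validation for choosing the Tikhonov regularization parameter, can be sketched on a made-up discrete ill-posed problem (the Gaussian blur operator, grid, and noise level below are illustrative assumptions, not the paper's load-identification setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete ill-posed problem: a smoothing (blur-like) forward operator.
n = 40
t = np.linspace(0, 1, n)
A = np.exp(-80.0 * (t[:, None] - t[None, :]) ** 2)   # Gaussian kernel matrix
f_true = np.sin(2 * np.pi * t)
g = A @ f_true + 1e-3 * rng.standard_normal(n)       # noisy "measurement"

def tikhonov(A, g, lam):
    """Solve min ||A f - g||^2 + lam ||f||^2 via the normal equations."""
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ g)

def gcv(A, g, lam):
    """Generalized Cross-Validation score for parameter lam."""
    m = A.shape[0]
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(m), A.T)  # influence matrix
    r = g - H @ g
    return m * (r @ r) / (m - np.trace(H)) ** 2

lams = np.logspace(-10, 0, 60)
lam_best = lams[int(np.argmin([gcv(A, g, l) for l in lams]))]
f_gcv = tikhonov(A, g, lam_best)
print(lam_best, np.linalg.norm(f_gcv - f_true) / np.linalg.norm(f_true))
```

The GCV minimizer balances residual against effective degrees of freedom (the trace of the influence matrix), which is the role the QF plays in the paper's method.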
Directory of Open Access Journals (Sweden)
Jos C. M. Baeten
2010-11-01
Full Text Available The languages accepted by finite automata are precisely the languages denoted by regular expressions. In contrast, finite automata may exhibit behaviours that cannot be described by regular expressions up to bisimilarity. In this paper, we consider extensions of the theory of regular expressions with various forms of parallel composition and study the effect on expressiveness. First we prove that adding pure interleaving to the theory of regular expressions strictly increases its expressiveness up to bisimilarity. Then, we prove that replacing the operation for pure interleaving by ACP-style parallel composition gives a further increase in expressiveness. Finally, we prove that the theory of regular expressions with ACP-style parallel composition and encapsulation is expressive enough to express all finite automata up to bisimilarity. Our results extend the expressiveness results obtained by Bergstra, Bethke and Ponse for process algebras with (the binary variant of) Kleene's star operation.
Approximate Noether symmetries and collineations for regular perturbative Lagrangians
Paliathanasis, Andronikos; Jamal, Sameerah
2018-01-01
Regular perturbative Lagrangians that admit approximate Noether symmetries and approximate conservation laws are studied. Specifically, we investigate the connection between approximate Noether symmetries and collineations of the underlying manifold. In particular we determine the generic Noether symmetry conditions for the approximate point symmetries and we find that for a class of perturbed Lagrangians, Noether symmetries are related to the elements of the Homothetic algebra of the metric which is defined by the unperturbed Lagrangian. Moreover, we discuss how exact symmetries become approximate symmetries. Finally, some applications are presented.
Improvements in GRACE Gravity Fields Using Regularization
Save, H.; Bettadpur, S.; Tapley, B. D.
2008-12-01
The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or
Manifold regularized multitask feature learning for multimodality disease classification.
Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang
2015-02-01
Multimodality based methods have shown great advantages in classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Recently, multitask feature selection methods have typically been used for joint selection of common features across multiple modalities. However, one disadvantage of existing multimodality based methods is that they ignore the useful data distribution information in each modality, which is essential for subsequent classification. Accordingly, in this paper we propose a manifold regularized multitask feature learning method to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information in each modality. Specifically, we denote the feature learning on each modality as a single task, and use a group-sparsity regularizer to capture the intrinsic relatedness among multiple tasks (i.e., modalities) and jointly select the common features from multiple tasks. Furthermore, we introduce a new manifold-based Laplacian regularizer to preserve the data distribution information from each task. Finally, we use the multikernel support vector machine method to fuse multimodality data for eventual classification. In addition, we also extend our method to the semisupervised setting, where only partial data are labeled. We evaluate our method using the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the AD Neuroimaging Initiative database. The experimental results demonstrate that our proposed method can not only achieve improved classification performance, but also help to discover the disease-related brain regions useful for disease diagnosis. © 2014 Wiley Periodicals, Inc.
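The group-sparsity regularizer mentioned above is commonly the l2,1 norm, which couples each feature across tasks so that a feature is kept or discarded jointly in every modality. A minimal sketch of the norm and its proximal operator, with an invented weight matrix for illustration:

```python
import numpy as np

def l21_norm(W):
    """Group-sparsity (l2,1) norm: sum over rows (features) of row l2 norms.
    W has shape (n_features, n_tasks); each row couples one feature
    across all tasks/modalities."""
    return np.sum(np.linalg.norm(W, axis=1))

def prox_l21(W, t):
    """Proximal operator of t * l2,1: row-wise soft thresholding.
    Rows with small joint norm are zeroed, i.e., the feature is dropped
    jointly from every modality."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return scale * W

W = np.array([[3.0, 4.0],    # strong feature: row norm 5
              [0.3, 0.4],    # weak feature:   row norm 0.5
              [0.0, 2.0]])   # medium feature: row norm 2
print(l21_norm(W))           # 7.5
P = prox_l21(W, 1.0)
print(P)                     # weak row zeroed; other rows shrunk row-wise
```

In a proximal-gradient solver, this operator is applied after each gradient step on the data-fitting and Laplacian terms.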
Graph Regularized Auto-Encoders for Image Representation.
Yiyi Liao; Yue Wang; Yong Liu
2017-06-01
Image representation has been intensively explored in the domain of computer vision for its significant influence on related tasks such as image clustering and classification. It is valuable to learn a low-dimensional representation of an image which preserves its inherent information from the original image space. From the perspective of manifold learning, this is implemented with the local invariant idea to capture the intrinsic low-dimensional manifold embedded in the high-dimensional input space. Inspired by the recent successes of deep architectures, we propose a local invariant deep nonlinear mapping algorithm, called graph regularized auto-encoder (GAE). With the graph regularization, the proposed method preserves the local connectivity from the original image space to the representation space, while the stacked auto-encoders provide an explicit encoding model for fast inference and powerful expressive capacity for complex modeling. Theoretical analysis shows that the graph regularizer penalizes the weighted Frobenius norm of the Jacobian matrix of the encoder mapping, where the weight matrix captures the local property in the input space. Furthermore, the underlying effects on the hidden representation space are revealed, providing an insightful explanation of the advantage of the proposed method. Finally, the experimental results on both clustering and classification tasks demonstrate the effectiveness of our GAE as well as the correctness of the proposed theoretical analysis, and they also suggest that GAE is a superior solution to current deep representation learning techniques compared with variant auto-encoders and existing local invariant methods.
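Graph regularizers of this kind are usually written tr(H^T L H), with L the graph Laplacian and H the matrix of hidden representations, and this trace equals a weighted sum of squared differences between connected samples' representations. A small numerical check of that identity (the affinity matrix and representations are invented for illustration):

```python
import numpy as np

# Symmetric affinity matrix over 4 samples (e.g., a k-NN graph in input space).
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W          # graph Laplacian

# Hidden representations h_i (rows), e.g., encoder outputs.
H = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.2, 0.8],
              [0.1, 0.9]])

# Graph regularizer: tr(H^T L H) = 1/2 * sum_ij W_ij ||h_i - h_j||^2,
# i.e., connected samples are pushed toward similar representations.
reg_trace = np.trace(H.T @ L @ H)
reg_pairs = 0.5 * sum(W[i, j] * np.sum((H[i] - H[j]) ** 2)
                      for i in range(4) for j in range(4))
print(np.isclose(reg_trace, reg_pairs))  # True
```

Adding this term to an auto-encoder's reconstruction loss is what makes the learned mapping locally invariant.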
Enhanced manifold regularization for semi-supervised classification.
Gan, Haitao; Luo, Zhizeng; Fan, Yingle; Sang, Nong
2016-06-01
Manifold regularization (MR) has become one of the most widely used approaches in the semi-supervised learning field. It has shown superiority by exploiting the local manifold structure of both labeled and unlabeled data. The manifold structure is modeled by constructing a Laplacian graph and then incorporated in learning through a smoothness regularization term. Hence the labels of labeled and unlabeled data vary smoothly along the geodesics on the manifold. However, MR has ignored the discriminative ability of the labeled and unlabeled data. To address the problem, we propose an enhanced MR framework for semi-supervised classification in which the local discriminative information of the labeled and unlabeled data is explicitly exploited. To make full use of labeled data, we firstly employ a semi-supervised clustering method to discover the underlying data space structure of the whole dataset. Then we construct a local discrimination graph to model the discriminative information of labeled and unlabeled data according to the discovered intrinsic structure. Therefore, the data points that may be from different clusters, though similar on the manifold, are enforced far away from each other. Finally, the discrimination graph is incorporated into the MR framework. In particular, we utilize semi-supervised fuzzy c-means and Laplacian regularized Kernel minimum squared error for semi-supervised clustering and classification, respectively. Experimental results on several benchmark datasets and face recognition demonstrate the effectiveness of our proposed method.
Ensemble Kalman filter regularization using leave-one-out data cross-validation
Rayo Schiappacasse, Lautaro Jerónimo
2012-09-19
In this work, the classical leave-one-out cross-validation method for selecting a regularization parameter for the Tikhonov problem is implemented within the EnKF framework. Following the original concept, the regularization parameter is selected such that it minimizes the predictive error. Some ideas about the implementation, suitability and conceptual interest of the method are discussed. Finally, what will be called the data cross-validation regularized EnKF (dCVr-EnKF) is implemented in a 2D 2-phase synthetic oil reservoir experiment and the results analyzed.
Regularizations of two-fold bifurcations in planar piecewise smooth systems using blowup
DEFF Research Database (Denmark)
Kristiansen, Kristian Uldall; Hogan, S. J.
2015-01-01
type of limit cycle that does not appear to be present in the original PWS system. For both types of limit cycle, we show that the criticality of the Hopf bifurcation that gives rise to periodic orbits is strongly dependent on the precise form of the regularization. Finally, we analyse the limit cycles...... as locally unique families of periodic orbits of the regularization and connect them, when possible, to limit cycles of the PWS system. We illustrate our analysis with numerical simulations and show how the regularized system can undergo a canard explosion phenomenon...
Regular Expression Matching and Operational Semantics
Directory of Open Access Journals (Sweden)
Asiri Rathnayake
2011-08-01
Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
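Thompson's lockstep construction mentioned above advances the whole set of reachable NFA states one input symbol at a time, so no backtracking is needed. A minimal sketch with a hand-built NFA (the automaton for (a|b)*b is our own example, not one of the paper's derived machines):

```python
def eps_closure(states, eps):
    """Follow epsilon edges until no new states are added."""
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def lockstep_match(nfa, word):
    """Thompson-style lockstep simulation: advance the set of reachable
    NFA states one symbol at a time, giving O(len(word) * n_states)
    matching with no backtracking."""
    delta, eps, start, accept = nfa
    current = eps_closure({start}, eps)
    for a in word:
        moved = {t for s in current for t in delta.get((s, a), ())}
        current = eps_closure(moved, eps)
    return accept in current

# Hand-built NFA for the regular expression (a|b)*b:
# state 0 loops on a and b, and b can also move to the accepting state 1.
nfa = ({(0, 'a'): {0}, (0, 'b'): {0, 1}},  # delta: (state, symbol) -> states
       {},                                  # no epsilon edges in this example
       0, 1)

print(lockstep_match(nfa, "aabab"), lockstep_match(nfa, "aba"))  # True False
```

The independence of the per-state transitions within one lockstep step is what makes the parallel (e.g., GPU) machine in the paper natural.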
Regularities, Natural Patterns and Laws of Nature
Directory of Open Access Journals (Sweden)
Stathis Psillos
2014-02-01
Full Text Available The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology. Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.
Radioactive Waste Management Basis
International Nuclear Information System (INIS)
Perkins, B.K.
2009-01-01
The purpose of this Radioactive Waste Management Basis is to describe the systematic approach for planning, executing, and evaluating the management of radioactive waste at LLNL. The implementation of this document will ensure that waste management activities at LLNL are conducted in compliance with the requirements of DOE Order 435.1, Radioactive Waste Management, and the Implementation Guide for DOE Manual 435.1-1, Radioactive Waste Management Manual. Technical justification is provided where methods for meeting the requirements of DOE Order 435.1 deviate from the DOE Manual 435.1-1 and Implementation Guide.
Canister storage building design basis accident analysis documentation
International Nuclear Information System (INIS)
KOPELIC, S.D.
1999-01-01
This document provides the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report
Canister storage building design basis accident analysis documentation
Energy Technology Data Exchange (ETDEWEB)
KOPELIC, S.D.
1999-02-25
This document provides the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.
Canister Storage Building (CSB) Design Basis Accident Analysis Documentation
International Nuclear Information System (INIS)
CROWE, R.D.
1999-01-01
This document provides the detailed accident analysis to support ''HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A,'' ''Canister Storage Building Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.
Canister Storage Building (CSB) Design Basis Accident Analysis Documentation
International Nuclear Information System (INIS)
CROWE, R.D.; PIEPHO, M.G.
2000-01-01
This document provides the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report''. All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.
Fractional Regularization Term for Variational Image Registration
Directory of Open Access Journals (Sweden)
Rafael Verdú-Monedero
2009-01-01
Full Text Available Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, being applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a real gradual transition from a diffusion registration to a curvature registration which is best suited to some applications and it is not possible in the spatial domain. Results with 3D actual images show the validity of this approach.
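The frequency-domain formulation rests on the spectral definition of a fractional derivative: each Fourier mode is multiplied by (iω)^α. A minimal sketch for periodic 1D signals (the grid size and test function are illustrative, and this spectral definition is one of several fractional-derivative conventions; for a pure mode it gives D^α sin(x) = sin(x + απ/2)):

```python
import numpy as np

def fractional_derivative(f, alpha, L=2 * np.pi):
    """Spectral fractional derivative of a periodic signal:
    multiply each Fourier mode by (i*omega)**alpha."""
    n = len(f)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular frequencies
    mult = (1j * omega) ** alpha
    mult[0] = 0.0          # the mean has zero fractional derivative
    return np.real(np.fft.ifft(mult * np.fft.fft(f)))

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
half = fractional_derivative(np.sin(x), 0.5)
# D^0.5 sin(x) = sin(x + pi/4): a quarter-way phase shift between
# sin (alpha = 0) and its ordinary derivative cos (alpha = 1).
print(np.max(np.abs(half - np.sin(x + np.pi / 4))) < 1e-10)  # True
```

Varying α continuously is what allows the gradual transition from a diffusion-type to a curvature-type regularizer described in the abstract.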
International Nuclear Information System (INIS)
Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.
2004-01-01
We construct a family of time- and angular-dependent, regular S-brane solutions which correspond to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general Lorentzian symmetry. Several generalizations of this regular solution are derived which include a charged S-brane and an additional dilatonic field. (author)
Online Manifold Regularization by Dual Ascending Procedure
Directory of Open Access Journals (Sweden)
Boliang Sun
2013-01-01
Full Text Available We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transfer manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle the settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves the way for the design and analysis of online manifold regularization algorithms.
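The online dual-ascent algorithm itself is not reproduced here, but the offline manifold regularization objective it transfers online can be sketched in its simplest transductive form, where a graph Laplacian smoothness term propagates two labels along a chain (the graph, labels, and the small ridge weight gamma are illustrative assumptions):

```python
import numpy as np

# Chain graph of 5 points; only the endpoints are labeled (+1 and -1).
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W        # graph Laplacian

y = np.array([1.0, 0, 0, 0, -1.0])    # labels (0 = unlabeled)
J = np.diag([1.0, 0, 0, 0, 1.0])      # indicator of labeled points
gamma = 0.01                          # small ambient (ridge) regularizer

# Manifold-regularized least squares, transductive form:
#   min_f ||J(f - y)||^2 + f^T L f + gamma ||f||^2
# The Laplacian term forces f to vary smoothly along graph edges.
f = np.linalg.solve(J + L + gamma * np.eye(5), J @ y)
print(np.round(f, 2))   # labels interpolate smoothly along the chain
```

The squared losses here stand in for the hinge losses of the paper; it is precisely the hinge loss whose Fenchel conjugate enables the dual online formulation.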
Regularization of absorber or doorway states in heavy-particle collisions
International Nuclear Information System (INIS)
Errea, L.F.; Riera, A.; Sanchez, P.
1994-01-01
We present a unified theoretical basis of the recently proposed regularization method of absorber or doorway states. The theory is applicable to the close-coupling solutions of time-dependent Schroedinger equations corresponding to Hamiltonians containing singular terms and with a partial continuum spectrum. The presentation and illustration are restricted to the treatment of atomic collisions. (author)
Mehrdad, GOSHTASBPOUR; Center for Theoretical Physics and Mathematics, AEOI:Department of Physics, Shahid Beheshti University
1991-01-01
Extended D^†+D-DD^† Fujikawa regularization of anomaly and a method of integration of fermions for the chiral Schwinger model are criticized. On the basis of the corrected integration method, a new extended version of D^2 is obtained, resulting in the Jackiw-Rajaraman effective action.
Brink-Muinen, A. van den; Bensing, J.M.; Kerssens, J.J.
1998-01-01
Objectives: differences were investigated between general practitioners providing women's health care (4 women) and general practitioners providing regular health care (8 women and 8 men). Expectations were formulated on the basis of the principles of women's health care and literature about gender
Regular transport dynamics produce chaotic travel times.
Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro
2014-06-01
In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.
Regularity of difference equations on Banach spaces
Agarwal, Ravi P; Lizama, Carlos
2014-01-01
This work introduces readers to the topic of maximal regularity for difference equations. The authors systematically present the method of maximal regularity, outlining basic linear difference equations along with relevant results. They address recent advances in the field, as well as basic semigroup and cosine operator theories in the discrete setting. The authors also identify some open problems that readers may wish to take up for further research. This book is intended for graduate students and researchers in the area of difference equations, particularly those with advanced knowledge of and interest in functional analysis.
PET regularization by envelope guided conjugate gradients
International Nuclear Information System (INIS)
Kaufman, L.; Neumaier, A.
1996-01-01
The authors propose a new way to iteratively solve large scale ill-posed problems and in particular the image reconstruction problem in positron emission tomography by exploiting the relation between Tikhonov regularization and multiobjective optimization to obtain iteratively approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows us to adjust the regularization parameter adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations
Matrix regularization of embedded 4-manifolds
International Nuclear Information System (INIS)
Trzetrzelewski, Maciej
2012-01-01
We consider products of two 2-manifolds such as S²×S², embedded in Euclidean space, and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)⊗SU(N), i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N²×N² matrix representations of the 4-algebra (and as a byproduct of the 3-algebra, which makes the regularization of S³ also possible).
On the analysis of glycomics mass spectrometry data via the regularized area under the ROC curve
Directory of Open Access Journals (Sweden)
Lebrilla Carlito B
2007-12-01
The simulation proved the asymptotic property that the estimated AUC approaches the true AUC. Finally, mass spectrometry data of serum glycans for ovarian cancer diagnosis was analyzed. The optimal combination based on the TGDR-AUC algorithm yields a plausible result, and the detected biomarkers are confirmed by biological evidence. Conclusion: The TGDR-AUC algorithm relaxes the normality and independence assumptions of the previous literature. In addition to its flexibility and easy interpretability, the algorithm yields good performance in combining potential biomarkers and is computationally feasible. Thus, the TGDR-AUC approach is a plausible algorithm to classify disease status on the basis of multiple biomarkers.
On a correspondence between regular and non-regular operator monotone functions
DEFF Research Database (Denmark)
Gibilisco, P.; Hansen, Frank; Isola, T.
2009-01-01
We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....
Regularity and irreversibility of weekly travel behavior
Kitamura, R.; van der Hoorn, A.I.J.M.
1987-01-01
Dynamic characteristics of travel behavior are analyzed in this paper using weekly travel diaries from two waves of panel surveys conducted six months apart. An analysis of activity engagement indicates the presence of significant regularity in weekly activity participation between the two waves.
Regular and context-free nominal traces
DEFF Research Database (Denmark)
Degano, Pierpaolo; Ferrari, Gian-Luigi; Mezzetti, Gianluca
2017-01-01
Two kinds of automata are presented, for recognising new classes of regular and context-free nominal languages. We compare their expressive power with analogous proposals in the literature, showing that they express novel classes of languages. Although many properties of classical languages hold ...
Faster 2-regular information-set decoding
Bernstein, D.J.; Lange, T.; Peters, C.P.; Schwabe, P.; Chee, Y.M.
2011-01-01
Fix positive integers B and w. Let C be a linear code over F_2 of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and
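The paper's contribution is a faster information-set decoder; purely as an illustration of the problem statement, the following brute force enumerates all 2-regular words for tiny B and w and checks them against a parity-check matrix with a planted solution (all sizes are hypothetical toy parameters, not the cryptographic ones):

```python
import itertools
import numpy as np

B, w = 4, 3          # w blocks of length B each
n = B * w

def two_regular_words(B, w):
    """All nonzero words whose w length-B blocks each have weight 0 or 2."""
    block_opts = [np.zeros(B, dtype=int)] + [
        np.bincount(pair, minlength=B)
        for pair in itertools.combinations(range(B), 2)
    ]
    for blocks in itertools.product(block_opts, repeat=w):
        word = np.concatenate(blocks)
        if word.any():
            yield word

# Parity-check matrix with a planted 2-regular codeword, so that a
# solution is guaranteed to exist.
rng = np.random.default_rng(1)
planted = np.concatenate([np.bincount([0, 1], minlength=B)] * w)
H = rng.integers(0, 2, size=(6, n))
for row in H:
    if row @ planted % 2 == 1:
        row[0] ^= 1      # fix parity against the planted word (its bit 0 is set)

solutions = [c for c in two_regular_words(B, w) if not (H @ c % 2).any()]
```

Enumerating the (1 + C(B,2))^w candidates is only feasible for toy sizes, which is exactly why information-set decoding techniques matter.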
Complexity in union-free regular languages
Czech Academy of Sciences Publication Activity Database
Jirásková, G.; Masopust, Tomáš
2011-01-01
Roč. 22, č. 7 (2011), s. 1639-1653 ISSN 0129-0541 Institutional research plan: CEZ:AV0Z10190503 Keywords : Union-free regular language * one-cycle-free-path automaton * descriptional complexity Subject RIV: BA - General Mathematics Impact factor: 0.379, year: 2011 http://www.worldscinet.com/ijfcs/22/2207/S0129054111008933.html
Regular Gleason Measures and Generalized Effect Algebras
Dvurečenskij, Anatolij; Janda, Jiří
2015-12-01
We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied in the frame of generalized effect algebras.
Regularization of finite temperature string theories
International Nuclear Information System (INIS)
Leblanc, Y.; Knecht, M.; Wallet, J.C.
1990-01-01
The tachyonic divergences occurring in the free energy of various string theories at finite temperature are eliminated through the use of regularization schemes and analytic continuations. For closed strings, we obtain finite expressions which, however, develop an imaginary part above the Hagedorn temperature, whereas open string theories are still plagued with dilatonic divergences. (orig.)
A Sim(2) invariant dimensional regularization
Directory of Open Access Journals (Sweden)
J. Alfaro
2017-09-01
Full Text Available We introduce a Sim(2) invariant dimensional regularization of loop integrals. Then we can compute the one-loop quantum corrections to the photon self-energy, electron self-energy and vertex in the Electrodynamics sector of the Very Special Relativity Standard Model (VSRSM).
Continuum regularized Yang-Mills theory
International Nuclear Information System (INIS)
Sadun, L.A.
1987-01-01
Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions.
Gravitational lensing by a regular black hole
International Nuclear Information System (INIS)
Eiroa, Ernesto F; Sendra, Carlos M
2011-01-01
In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.
Gravitational lensing by a regular black hole
Energy Technology Data Exchange (ETDEWEB)
Eiroa, Ernesto F; Sendra, Carlos M, E-mail: eiroa@iafe.uba.ar, E-mail: cmsendra@iafe.uba.ar [Instituto de Astronomia y Fisica del Espacio, CC 67, Suc. 28, 1428, Buenos Aires (Argentina)
2011-04-21
In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.
Analytic stochastic regularization and gauge invariance
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1986-05-01
A proof that analytic stochastic regularization breaks gauge invariance is presented. This is done by an explicit one-loop calculation of the vacuum polarization tensor in scalar electrodynamics, which turns out not to be transversal. The counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization are also analysed. (Author)
Annotation of regular polysemy and underspecification
DEFF Research Database (Denmark)
Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria
2013-01-01
We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...
Stabilization, pole placement, and regular implementability
Belur, MN; Trentelman, HL
In this paper, we study control by interconnection of linear differential systems. We give necessary and sufficient conditions for regular implementability of a given linear differential system. We formulate the problems of stabilization and pole placement as problems of finding a suitable,
12 CFR 725.3 - Regular membership.
2010-01-01
... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit....5(b) of this part, and forwarding with its completed application funds equal to one-half of this... 1, 1979, is not required to forward these funds to the Facility until October 1, 1979. (3...
Supervised scale-regularized linear convolutionary filters
DEFF Research Database (Denmark)
Loog, Marco; Lauze, Francois Bernard
2017-01-01
also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic...
On regular riesz operators | Raubenheimer | Quaestiones ...
African Journals Online (AJOL)
The r-asymptotically quasi finite rank operators on Banach lattices are examples of regular Riesz operators. We characterise Riesz elements in a subalgebra of a Banach algebra in terms of Riesz elements in the Banach algebra. This enables us to characterise r-asymptotically quasi finite rank operators in terms of adjoint ...
Regularized Discriminant Analysis: A Large Dimensional Study
Yang, Xiaoke
2018-04-28
In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis are assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for the application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that only depends on the data statistical parameters and dimensions. This result not only reveals some mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on synthetic data show a good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the underlying statistics are unknown. We benchmark the performance of our proposed consistent estimator against classical estimators on synthetic data. The observations demonstrate that the general estimator outperforms the others in terms of mean squared error (MSE).
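The flavor of RDA can be sketched with a simple Friedman-style doubly regularized covariance estimate (a generic illustration, not the thesis's asymptotic analysis); the parameters lam and gam below shrink each class covariance toward the pooled covariance and toward a scaled identity, interpolating between QDA-like and LDA-like behavior:

```python
import numpy as np

def rda_fit(X, y, lam=0.5, gam=0.1):
    """Fit class means and regularized class covariances:
    lam pools covariances across classes, gam adds a ridge toward
    a scaled identity."""
    classes = np.unique(y)
    p = X.shape[1]
    covs, means, priors = {}, {}, {}
    pooled = np.zeros((p, p))
    for c in classes:
        Xc = X[y == c]
        means[c] = Xc.mean(axis=0)
        covs[c] = np.cov(Xc, rowvar=False)
        pooled += covs[c] * len(Xc)
        priors[c] = len(Xc) / len(X)
    pooled /= len(X)
    for c in classes:
        S = (1 - lam) * covs[c] + lam * pooled
        covs[c] = (1 - gam) * S + gam * (np.trace(S) / p) * np.eye(p)
    return classes, means, covs, priors

def rda_predict(model, X):
    """Gaussian discriminant scores with the regularized covariances."""
    classes, means, covs, priors = model
    scores = []
    for c in classes:
        diff = X - means[c]
        Sinv = np.linalg.inv(covs[c])
        _, logdet = np.linalg.slogdet(covs[c])
        maha = np.einsum('ij,jk,ik->i', diff, Sinv, diff)
        scores.append(-0.5 * (maha + logdet) + np.log(priors[c]))
    return classes[np.argmax(scores, axis=0)]

# Synthetic two-class Gaussian mixture with unequal covariances.
rng = np.random.default_rng(0)
X0 = rng.standard_normal((200, 5))
X1 = rng.standard_normal((200, 5)) * 1.5 + 2.0
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)
model = rda_fit(X, y)
acc = (rda_predict(model, X) == y).mean()
```

Setting lam=1 recovers an LDA-like pooled-covariance classifier and lam=0 a QDA-like one, which is the regularization path the thesis analyzes.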
Bit-coded regular expression parsing
DEFF Research Database (Denmark)
Nielsen, Lasse; Henglein, Fritz
2011-01-01
the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli’s greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings...
Tetravalent one-regular graphs of order 4p^2
DEFF Research Database (Denmark)
Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan
2014-01-01
A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p^2, where p is a prime, are classified.
Regular Network Class Features Enhancement Using an Evolutionary Synthesis Algorithm
Directory of Open Access Journals (Sweden)
O. G. Monahov
2014-01-01
Full Text Available This paper investigates the optimization problem of constructing diameter-optimal regular networks (graphs). Regular networks are of practical interest as graph-theoretical models of reliable communication networks of parallel supercomputer systems and as a basis of the small-world structure in optical and neural networks. It presents a new class of parametrically described regular networks: hypercirculant networks (graphs). An approach that uses evolutionary algorithms for the automatic generation of parametric descriptions of optimal hypercirculant networks is developed. The synthesis of optimal hypercirculant networks is based on optimal circulant networks with a smaller degree of nodes: a template circulant network is taken from the known optimal families of circulant networks with the desired number of nodes and a smaller node degree. Thus, a generating set of the circulant network is used as a generating subset of the hypercirculant network, and the missing generators are synthesized by means of the evolutionary algorithm, which minimizes the diameter (average diameter) of the networks. A comparative analysis of the structural characteristics of hypercirculant, toroidal, and circulant networks is conducted. The advantage of hypercirculant networks in such structural characteristics as diameter, average diameter, and bisection width, at comparable costs in the number of nodes and the number of connections, is demonstrated. Notable is the advantage of hypercirculant networks of dimension three over higher-dimensional tori: optimizing hypercirculant networks of dimension three is more efficient than introducing an additional dimension in the corresponding toroidal structures. The paper also notes the better structural parameters of hypercirculant networks in comparison with iBT-networks previously
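The structural comparison above rests on computing graph diameters. Circulant-type graphs are vertex-transitive, so a single breadth-first search from one node suffices. A minimal sketch for plain circulant graphs (the paper's hypercirculant construction is more general):

```python
from collections import deque

def circulant_diameter(n, generators):
    """Diameter of the circulant graph C(n; generators), computed by
    BFS from node 0 (vertex-transitivity makes one source enough)."""
    steps = {g % n for g in generators} | {(-g) % n for g in generators}
    dist = [-1] * n
    dist[0] = 0
    q = deque([0])
    while q:
        v = q.popleft()
        for s in steps:
            u = (v + s) % n
            if dist[u] < 0:
                dist[u] = dist[v] + 1
                q.append(u)
    return max(dist)

# The 8-cycle C(8; {1}) has diameter 4; adding a chord generator shrinks it.
d_cycle = circulant_diameter(8, {1})
d_chord = circulant_diameter(8, {1, 3})
```

Adding generators is exactly the lever the evolutionary synthesis pulls: each extra generator adds chords that can cut the diameter.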
Quantification of fetal heart rate regularity using symbolic dynamics
van Leeuwen, P.; Cysarz, D.; Lange, S.; Geue, D.; Groenemeyer, D.
2007-03-01
Fetal heart rate complexity was examined on the basis of RR interval time series obtained in the second and third trimester of pregnancy. In each fetal RR interval time series, short term beat-to-beat heart rate changes were coded as 8-bit binary sequences. Redundancies of the 2^8 different binary patterns were reduced by two different procedures. The complexity of these sequences was quantified using the approximate entropy (ApEn), resulting in discrete ApEn values which were used for classifying the sequences into 17 pattern sets. Also, the sequences were grouped into 20 pattern classes with respect to identity after rotation or inversion of the binary value. There was a specific, nonuniform distribution of the sequences in the pattern sets and this differed from the distribution found in surrogate data. In the course of gestation, the number of sequences increased in seven pattern sets, decreased in four and remained unchanged in six. Sequences that occurred less often over time, both regular and irregular, were characterized by patterns reflecting frequent beat-to-beat reversals in heart rate. They were also predominant in the surrogate data, suggesting that these patterns are associated with stochastic heart beat trains. Sequences that occurred more frequently over time were relatively rare in the surrogate data. Some of these sequences had a high degree of regularity and corresponded to prolonged heart rate accelerations or decelerations which may be associated with directed fetal activity or movement or baroreflex activity. Application of the pattern classes revealed that those sequences with a high degree of irregularity correspond to heart rate patterns resulting from complex physiological activity such as fetal breathing movements. The results suggest that the development of the autonomic nervous system and the emergence of fetal behavioral states lead to increases in not only irregular but also regular heart rate patterns. Using symbolic dynamics to
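The binary coding step can be sketched as follows (an illustrative reconstruction, not the authors' exact preprocessing): each beat-to-beat change becomes a bit (1 = RR interval increased), and overlapping 8-bit words are collected. A smooth, regular series produces far fewer distinct patterns than an irregular one.

```python
import numpy as np

def binary_words(rr, wordlen=8):
    """Code beat-to-beat RR changes as bits (1 = increase) and collect
    the overlapping 8-bit words, as in symbolic-dynamics analyses."""
    bits = (np.diff(rr) > 0).astype(int)
    return [
        int("".join(map(str, bits[i:i + wordlen])), 2)
        for i in range(len(bits) - wordlen + 1)
    ]

rng = np.random.default_rng(3)
t = np.arange(600)
# A regular series: slow sinusoidal accelerations/decelerations (ms).
rr_regular = 500 + 40 * np.sin(2 * np.pi * t / 60)
# An irregular series: white noise around the same mean.
rr_random = 500 + 40 * rng.standard_normal(600)

n_reg = len(set(binary_words(rr_regular)))
n_rnd = len(set(binary_words(rr_random)))
```

The regular series only visits the handful of words that occur inside and across its long runs of ones and zeros, whereas the noisy series covers most of the 256 possible patterns.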
Group-regularized individual prediction: theory and application to pain.
Lindquist, Martin A; Krishnan, Anjali; López-Solà, Marina; Jepma, Marieke; Woo, Choong-Wan; Koban, Leonie; Roy, Mathieu; Atlas, Lauren Y; Schmidt, Liane; Chang, Luke J; Reynolds Losin, Elizabeth A; Eisenbarth, Hedwig; Ashar, Yoni K; Delk, Elizabeth; Wager, Tor D
2017-01-15
Multivariate pattern analysis (MVPA) has become an important tool for identifying brain representations of psychological processes and clinical outcomes using fMRI and related methods. Such methods can be used to predict or 'decode' psychological states in individual subjects. Single-subject MVPA approaches, however, are limited by the amount and quality of individual-subject data. In spite of higher spatial resolution, predictive accuracy from single-subject data often does not exceed what can be accomplished using coarser, group-level maps, because single-subject patterns are trained on limited amounts of often-noisy data. Here, we present a method that combines population-level priors, in the form of biomarker patterns developed on prior samples, with single-subject MVPA maps to improve single-subject prediction. Theoretical results and simulations motivate a weighting based on the relative variances of biomarker-based prediction (based on population-level predictive maps from prior groups) and individual-subject, cross-validated prediction. Empirical results predicting pain using brain activity on a trial-by-trial basis (single-trial prediction) across 6 studies (N=180 participants) confirm the theoretical predictions. Regularization based on a population-level biomarker (in this case, the Neurologic Pain Signature (NPS)) improved single-subject prediction accuracy compared with idiographic maps based on the individuals' data alone. The regularization scheme that we propose, which we term group-regularized individual prediction (GRIP), can be applied broadly to within-person MVPA-based prediction. We also show how GRIP can be used to evaluate data quality and provide benchmarks for the appropriateness of population-level maps like the NPS for a given individual or study. Copyright © 2015 Elsevier Inc. All rights reserved.
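The variance-based weighting behind GRIP can be illustrated with a toy precision-weighted combination of a stable but biased group-level prediction and an unbiased but noisy individual prediction (all numbers are synthetic; the real method estimates these variances from data):

```python
import numpy as np

def grip_combine(pred_group, pred_indiv, var_group, var_indiv):
    """Precision-weighted average of a population-level (biomarker)
    prediction and an individual cross-validated prediction; this is
    the inverse-variance weighting idea, shown here illustratively."""
    w = (1.0 / var_group) / (1.0 / var_group + 1.0 / var_indiv)
    return w * pred_group + (1.0 - w) * pred_indiv

rng = np.random.default_rng(7)
truth = rng.standard_normal(500)
# Simulated predictions: the group map is biased but stable; the
# individual map is unbiased but noisy.
pred_g = truth + 0.3 + 0.2 * rng.standard_normal(500)
pred_i = truth + 1.0 * rng.standard_normal(500)
combined = grip_combine(pred_g, pred_i,
                        var_group=0.3**2 + 0.2**2, var_indiv=1.0)

mse = lambda p: np.mean((p - truth) ** 2)
```

Because the two error sources are independent, the weighted average has lower error than the noisy individual prediction alone, which is the regularization benefit the paper reports.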
Regular paths in SparQL: querying the NCI Thesaurus.
Detwiler, Landon T; Suciu, Dan; Brinkley, James F
2008-11-06
OWL, the Web Ontology Language, provides syntax and semantics for representing knowledge for the semantic web. Many of the constructs of OWL have a basis in the field of description logics. While the formal underpinnings of description logics have led to a highly computable language, it has come at a cognitive cost. OWL ontologies are often unintuitive to readers lacking a strong logic background. In this work we describe GLEEN, a regular path expression library, which extends the RDF query language SparQL to support complex path expressions over OWL and other RDF-based ontologies. We illustrate the utility of GLEEN by showing how it can be used in a query-based approach to defining simpler, more intuitive views of OWL ontologies. In particular we show how relatively simple GLEEN-enhanced SparQL queries can create views of the OWL version of the NCI Thesaurus that match the views generated by the web-based NCI browser.
q-Space Upsampling Using x-q Space Regularization.
Chen, Geng; Dong, Bin; Zhang, Yong; Shen, Dinggang; Yap, Pew-Thian
2017-09-01
Acquisition time in diffusion MRI increases with the number of diffusion-weighted images that need to be acquired. Particularly in clinical settings, scan time is limited and only a sparse coverage of the vast q-space is possible. In this paper, we show how non-local self-similar information in the x-q space of diffusion MRI data can be harnessed for q-space upsampling. More specifically, we establish the relationships between signal measurements in x-q space using a patch matching mechanism that caters to unstructured data. We then encode these relationships in a graph and use it to regularize an inverse problem associated with recovering a high q-space resolution dataset from its low-resolution counterpart. Experimental results indicate that the high-resolution datasets reconstructed using the proposed method exhibit greater quality, both quantitatively and qualitatively, than those obtained using conventional methods, such as interpolation using spherical radial basis functions (SRBFs).
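The graph-regularization idea can be sketched on a toy one-dimensional analogue (not the paper's x-q patch graph): recover a smooth signal from sparse samples by penalizing the graph-Laplacian quadratic form, which couples each sample to its neighbors in the graph.

```python
import numpy as np

# Toy analogue of graph-regularized recovery: a chain graph over 100
# points, a smooth signal, and only a random subset of samples observed.
n = 100
t = np.linspace(0, 1, n)
signal = np.sin(2 * np.pi * t)

# Similarity graph: connect each point to its immediate neighbors.
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W        # combinatorial graph Laplacian

rng = np.random.default_rng(4)
mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, size=40, replace=False)] = True   # 40% sampled
M = np.diag(mask.astype(float))

# Minimize ||M x - y||^2 + lam * x^T L x  =>  (M + lam L) x = M y.
lam = 0.05
x_rec = np.linalg.solve(M + lam * L, mask * signal)
rmse = float(np.sqrt(np.mean((x_rec - signal) ** 2)))
```

In the paper, the graph edges come from patch similarity in x-q space rather than spatial adjacency, but the regularized inverse problem has this same structure.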
Entanglement in coined quantum walks on regular graphs
International Nuclear Information System (INIS)
Carneiro, Ivens; Loo, Meng; Xu, Xibai; Girerd, Mathieu; Kendon, Viv; Knight, Peter L
2005-01-01
Quantum walks, both discrete (coined) and continuous time, form the basis of several recent quantum algorithms. Here we use numerical simulations to study the properties of discrete, coined quantum walks. We investigate the variation in the entanglement between the coin and the position of the particle by calculating the entropy of the reduced density matrix of the coin. We consider both dynamical evolution and asymptotic limits for coins of dimensions from two to eight on regular graphs. For low coin dimensions, quantum walks which spread faster (as measured by the mean square deviation of their distribution from uniform) also exhibit faster convergence towards the asymptotic value of the entanglement between the coin and particle's position. For high-dimensional coins, the DFT coin operator is more efficient at spreading than the Grover coin. We study the entanglement of the coin on regular finite graphs such as cycles, and also show that on complete bipartite graphs, a quantum walk with a Grover coin is always periodic with period four. We generalize the 'glued trees' graph used by Childs et al (2003 Proc. STOC, pp 59-68) to higher branching rate (fan out) and verify that the scaling with branching rate and with tree depth is polynomial
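A minimal simulation of a discrete coined quantum walk on a cycle, computing the coin-position entanglement entropy from the reduced coin density matrix (a generic Hadamard-coin sketch; the paper studies a range of coins, dimensions and graphs):

```python
import numpy as np

def coined_walk_entropy(n_nodes=16, steps=40):
    """Evolve a discrete-time coined quantum walk on a cycle and return
    the von Neumann entropy (in bits) of the reduced coin state."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin
    # Joint state indexed as psi[coin, position]; walker starts at node 0
    # with a balanced complex coin state.
    psi = np.zeros((2, n_nodes), dtype=complex)
    psi[0, 0] = 1 / np.sqrt(2)
    psi[1, 0] = 1j / np.sqrt(2)
    for _ in range(steps):
        psi = np.tensordot(H, psi, axes=(1, 0))    # coin flip
        psi[0] = np.roll(psi[0], 1)                # coin 0 moves right
        psi[1] = np.roll(psi[1], -1)               # coin 1 moves left
    rho_coin = psi @ psi.conj().T                  # trace out position
    evals = np.linalg.eigvalsh(rho_coin)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

S = coined_walk_entropy()
```

Since the coin is two-dimensional, the entropy lies between 0 (product state) and 1 bit (maximal coin-position entanglement); tracking S over time reproduces the kind of convergence behavior the abstract describes.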
Energy Technology Data Exchange (ETDEWEB)
Gurney, Kevin R. [Arizona Univ., Mesa, AZ (United States)
2015-01-12
This document constitutes the final report under DOE grant DE-FG-08ER64649. The organization of this document is as follows: first, I will review the original scope of the proposed research. Second, I will present the current draft of a paper nearing submission to Nature Climate Change on the initial results of this funded effort. Finally, I will present the last phase of the research under this grant, which has supported a Ph.D. student. To that end, I will present the graduate student’s proposed research, a portion of which is completed and reflected in the paper nearing submission. This final work phase will be completed in the next 12 months and will likely result in 1-2 additional publications; we consider the results (as exemplified by the current paper) high quality. The continuing results will acknowledge the funding provided by DOE grant DE-FG-08ER64649.
Energy Technology Data Exchange (ETDEWEB)
DeTar, Carleton [P.I.
2012-12-10
This document constitutes the Final Report for award DE-FC02-06ER41446 as required by the Office of Science. It summarizes accomplishments and provides copies of scientific publications with significant contribution from this award.
International Nuclear Information System (INIS)
Verhaeghe, Jeroen; D'Asseler, Yves; Vandenberghe, Stefaan; Staelens, Steven; Lemahieu, Ignace
2007-01-01
The use of a temporal B-spline basis for the reconstruction of dynamic positron emission tomography data was investigated. Maximum likelihood (ML) reconstructions using an expectation maximization framework and maximum a posteriori (MAP) reconstructions using the generalized expectation maximization framework were evaluated. Different parameters of the B-spline basis, such as order, number of basis functions and knot placement, were investigated in a reconstruction task using simulated dynamic list-mode data. We found that a higher order basis reduced both the bias and variance. Using a higher number of basis functions in the modeling of the time activity curves (TACs) allowed the algorithm to model faster changes of the TACs; however, the TACs became noisier. We compared ML, Gaussian postsmoothed ML and MAP reconstructions. The noise level in the ML reconstructions was controlled by varying the number of basis functions. The MAP algorithm penalized the integrated squared curvature of the reconstructed TAC. The postsmoothed ML was always outperformed in terms of bias and variance properties by the MAP and ML reconstructions. A simple adaptive knot-placement strategy was also developed and evaluated, based on an arc length redistribution scheme during the reconstruction. The free knot reconstruction allowed a more accurate reconstruction while reducing the noise level, especially for fast-changing TACs such as blood input functions. Limiting the number of temporal basis functions combined with the adaptive knot-placement strategy is in this case advantageous for regularization purposes when compared to the other regularization techniques.
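The temporal basis idea can be sketched outside the tomographic setting: fit a noisy time-activity curve with a cubic B-spline basis built by Cox-de Boor recursion. The basis size, knot vector, and synthetic TAC below are illustrative choices, not the paper's settings.

```python
import numpy as np

def bspline_basis(t, knots, order):
    """Cox-de Boor evaluation of all B-spline basis functions of the
    given order (degree + 1) on the knot vector, at times t."""
    t = np.asarray(t, dtype=float)
    n = len(knots) - order
    B = np.zeros((len(t), len(knots) - 1))
    for i in range(len(knots) - 1):
        B[:, i] = (knots[i] <= t) & (t < knots[i + 1])   # order-1 indicators
    for k in range(2, order + 1):
        Bk = np.zeros((len(t), len(knots) - k))
        for i in range(len(knots) - k):
            d1 = knots[i + k - 1] - knots[i]
            d2 = knots[i + k] - knots[i + 1]
            left = (t - knots[i]) / d1 * B[:, i] if d1 > 0 else 0.0
            right = (knots[i + k] - t) / d2 * B[:, i + 1] if d2 > 0 else 0.0
            Bk[:, i] = left + right
        B = Bk
    return B[:, :n]

# Fit a noisy synthetic time-activity curve with a cubic (order-4) basis.
rng = np.random.default_rng(2)
t = np.linspace(0, 60, 120)
tac_true = (1 - np.exp(-t / 5.0)) * np.exp(-t / 80.0)
tac_noisy = tac_true + 0.02 * rng.standard_normal(t.size)

order, inner = 4, 12
knots = np.concatenate([[0.0] * (order - 1),
                        np.linspace(0, 60.0001, inner),
                        [60.0001] * (order - 1)])     # clamped knot vector
B = bspline_basis(t, knots, order)
coef, *_ = np.linalg.lstsq(B, tac_noisy, rcond=None)
tac_fit = B @ coef
rmse = float(np.sqrt(np.mean((tac_fit - tac_true) ** 2)))
```

The number of coefficients (here 14) rather than the number of time samples (120) controls the smoothness, which is the regularizing effect the abstract describes; adaptive knot placement would move the knots toward the fast early rise.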
Save, H.; Bettadpur, S. V.
2013-12-01
It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
Basis Document for Sludge Stabilization
Risenmay, H R
2001-01-01
DOE-RL recently issued Safety Evaluation Report (SER) amendments to the PFP Final Safety Analysis Report, HNF-SD-CP-SAR-021 Rev. 2. The Justification for Continued Operations for 2736-ZB and plutonium oxides in BTCs Safety Basis change (letter DOE-RL ABD-074) was approved by one of the SERs. Also approved by SER was the revised accident analysis for Magnesium Hydroxide Precipitation Process (MHPP) gloveboxes HC-230C-3 and HC-230C-5 containing increased glovebox inventories and corresponding increases in seismic release consequence. Numerous implementing documents require revision and issuance to implement the SER approvals. The SER for plutonium oxides into BTCs specifically limited the SER scope to "pure or clean oxides, i.e., 85 wt% or greater Pu, in this feed change" (SER Section 3.0 Base Information paragraph 4 [page 11]). Comprehensive USQ Evaluation PFP-2001-12 addressed the packaging of Pu alloy metals into BTCs, and the packaging of Pu alloy oxides (powders) into food pack cans and determined that the ac...
A Regularization SAA Scheme for a Stochastic Mathematical Program with Complementarity Constraints
Directory of Open Access Journals (Sweden)
Yu-xin Li
2014-01-01
Full Text Available To reflect uncertain data in practical problems, stochastic versions of the mathematical program with complementarity constraints (MPCC) have drawn much attention in the recent literature. Our concern is the detailed analysis of convergence properties of a regularization sample average approximation (SAA) method for solving a stochastic mathematical program with complementarity constraints (SMPCC). The analysis of this regularization method is carried out in three steps: First, the almost sure convergence of optimal solutions of the regularized SAA problem to that of the true problem is established by the notion of epiconvergence in variational analysis. Second, under MPCC-MFCQ, which is weaker than MPCC-LICQ, we show that any accumulation point of Karush-Kuhn-Tucker points of the regularized SAA problem is almost surely a kind of stationary point of SMPCC as the sample size tends to infinity. Finally, some numerical results are reported to show the efficiency of the method proposed.
Extreme values, regular variation and point processes
Resnick, Sidney I
1987-01-01
Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...
Stream Processing Using Grammars and Regular Expressions
DEFF Research Database (Denmark)
Rasmussen, Ulrik Terp
disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs...... as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as is required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present...... Kleenex, a language for expressing high-performance streaming string processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle...
Describing chaotic attractors: Regular and perpetual points
Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz
2018-03-01
We study the concepts of regular and perpetual points for describing the behavior of chaotic attractors in dynamical systems. The idea of these points, which have been recently introduced to theoretical investigations, is thoroughly discussed and extended into new types of models. We analyze the correlation between regular and perpetual points, as well as their relation with phase space, showing the potential usefulness of both types of points in the qualitative description of co-existing states. The ability of perpetual points to find attractors is indicated, along with its potential cause. The location of chaotic trajectories and sets of considered points is investigated and a study on the stability of systems is shown. A statistical analysis of observing the desired states is performed. We focus on various types of dynamical systems, i.e., chaotic flows with self-excited and hidden attractors, forced mechanical models, and semiconductor superlattices, exhibiting the universality of appearance of the observed patterns and relations.
Chaos regularization of quantum tunneling rates
International Nuclear Information System (INIS)
Pecora, Louis M.; Wu Dongho; Lee, Hoshik; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward
2011-01-01
Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics, the tunneling rates fluctuate greatly with the eigenenergies of the states, sometimes by over two orders of magnitude. Contrarily, shapes that lead to completely chaotic trajectories lead to tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency component of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of basic RKHSs. For sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we tradeoff the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
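The two-scale construction above can be sketched in a few lines. This is an illustrative simplification, not the paper's algorithm: it relies on the fact that the sum of two Gaussian kernels is itself a reproducing kernel, so ordinary kernel ridge regression with the summed Gram matrix fits a function in the sum space; all parameter values and data are made up for the example.

```python
import numpy as np

def gauss_kernel(X, Y, sigma):
    # Gaussian (RBF) kernel matrix between 1-D point sets X and Y.
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_sum_space(x, y, sigmas=(2.0, 0.2), lam=1e-3):
    # Kernel ridge regression with the sum kernel K = K_large + K_small:
    # the large-scale kernel captures the low-frequency component, the
    # small-scale kernel the high-frequency one.
    K = sum(gauss_kernel(x, x, s) for s in sigmas)
    alpha = np.linalg.solve(K + lam * len(x) * np.eye(len(x)), y)
    return lambda t: sum(gauss_kernel(t, x, s) for s in sigmas) @ alpha

# A nonflat target: a smooth trend plus a high-frequency ripple.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.3 * np.sin(8 * x) + 0.05 * rng.standard_normal(200)
f = fit_sum_space(x, y)
print(float(np.mean((f(x) - y) ** 2)))  # small training error
```

A single-scale kernel with either width alone would have to trade off the trend against the ripple; the sum space fits both.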
Contour Propagation With Riemannian Elasticity Regularization
DEFF Research Database (Denmark)
Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.
2011-01-01
Purpose/Objective(s): Adaptive techniques allow for correction of spatial changes during the time course of the fractionated radiotherapy. Spatial changes include tumor shrinkage and weight loss, causing tissue deformation and residual positional errors even after translational and rotational image...... the planning CT onto the rescans and correcting to reflect actual anatomical changes. For deformable registration, a free-form, multi-level, B-spline deformation model with Riemannian elasticity, penalizing non-rigid local deformations and volumetric changes, was used. Regularization parameters were defined...... on the original delineation and tissue deformation in the time course between scans form a better starting point than rigid propagation. There was no significant difference between locally and globally defined regularization. The method used in the present study suggests that deformed contours need to be reviewed...
Thin accretion disk around regular black hole
Directory of Open Access Journals (Sweden)
QIU Tianqi
2014-08-01
Penrose's cosmic censorship conjecture says that naked singularities do not exist in nature. So, it seems reasonable to further conjecture that not even a singularity exists in nature. In this paper, a regular black hole without a singularity is studied in detail, especially its thin accretion disk, energy flux, radiation temperature and accretion efficiency. It is found that the interaction of the regular black hole is stronger than that of the Schwarzschild black hole. Furthermore, the thin accretion disk loses energy more efficiently as the mass of the black hole decreases. These particular properties may be used to distinguish regular black holes from Schwarzschild black holes.
A short proof of increased parabolic regularity
Directory of Open Access Journals (Sweden)
Stephen Pankavich
2015-08-01
We present a short proof of the increased regularity obtained by solutions to uniformly parabolic partial differential equations. Though this setting is fairly introductory, our new method of proof, which uses a priori estimates and an inductive method, can be extended to prove analogous results for problems with time-dependent coefficients, advection-diffusion or reaction-diffusion equations, and nonlinear PDEs even when other tools, such as semigroup methods or the use of explicit fundamental solutions, are unavailable.
Regular black hole in three dimensions
Myung, Yun Soo; Yoon, Myungseok
2008-01-01
We find a new black hole in three-dimensional anti-de Sitter space by introducing an anisotropic perfect fluid inspired by the noncommutative black hole. This is a regular black hole with two horizons. We compare the thermodynamics of this black hole with that of the non-rotating BTZ black hole. The first law of thermodynamics is not compatible with the Bekenstein-Hawking entropy.
Analytic stochastic regularization and gauge theories
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1987-04-01
We prove that analytic stochastic regularization breaks gauge invariance. This is done by an explicit one-loop calculation of the two-, three- and four-point vertex functions of the gluon field in scalar chromodynamics, which turn out not to be gauge invariant. We analyse the counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization. (author) [pt
Preconditioners for regularized saddle point matrices
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe
2011-01-01
Roč. 19, č. 2 (2011), s. 91-112 ISSN 1570-2820 Institutional research plan: CEZ:AV0Z30860518 Keywords: saddle point matrices * preconditioning * regularization * eigenvalue clustering Subject RIV: BA - General Mathematics Impact factor: 0.533, year: 2011 http://www.degruyter.com/view/j/jnma.2011.19.issue-2/jnum.2011.005/jnum.2011.005.xml
Analytic stochastic regularization: gauge and supersymmetry theories
International Nuclear Information System (INIS)
Abdalla, M.C.B.
1988-01-01
Analytic stochastic regularization for gauge and supersymmetric theories is considered. Gauge invariance in spinor and scalar QCD is verified to break down by an explicit one-loop computation of the two-, three- and four-point vertex functions of the gluon field. As a result, non-gauge-invariant counterterms must be added. However, in the supersymmetric multiplets there is a cancellation rendering the counterterms gauge invariant. The calculation is considered at one-loop order. (author) [pt
Regularized forecasting of chaotic dynamical systems
International Nuclear Information System (INIS)
Bollt, Erik M.
2017-01-01
While local models of dynamical systems have been highly successful in using extensive data sets, observing even a chaotic dynamical system, to produce useful forecasts, a typical problem arises. With the k-nearest neighbors (kNN) method, local observations occur due to recurrences in a chaotic system, and this allows local models to be built by regression to low-dimensional polynomial approximations of the underlying system, estimating a Taylor series. This has been a popular approach, particularly in the context of scalar data observations represented by time-delay embedding methods. However, such local models generally allow for spatial discontinuities of forecasts when considered globally, meaning jumps in predictions, because the set of near neighbors varies discontinuously with respect to the position of the sample point, and so therefore does the model built from the near neighbors. It is possible to utilize local information inferred from near neighbors as usual while at the same time imposing a degree of regularity on a global scale. We present here a new global perspective extending the general local modeling concept. This perspective allows us to impose presumed prior regularity on the model by invoking Tikhonov regularity theory, since this classic perspective of optimization in ill-posed problems naturally balances fitting an objective against some prior assumed form of the result, such as continuity or derivative regularity. This all reduces to matrix manipulations, which we demonstrate on a simple data set, with the implication that it may find much broader context.
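A minimal illustration of the underlying idea, not the paper's method: a Tikhonov-regularized (ridge) global model fitted to data from a chaotic map gives a single smooth forecast function with no neighbor-set discontinuities. The logistic map and the quadratic feature set are standard textbook choices; the parameter values are arbitrary.

```python
import numpy as np

def tikhonov_fit(X, y, lam=1e-8):
    # Tikhonov/ridge solution w = (X^T X + lam I)^{-1} X^T y:
    # fits the data while penalizing large coefficients, yielding one
    # globally defined model instead of per-neighborhood local fits.
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

# Toy chaotic data: the logistic map x_{t+1} = 4 x_t (1 - x_t).
x = np.empty(500)
x[0] = 0.2
for t in range(499):
    x[t + 1] = 4 * x[t] * (1 - x[t])

# Quadratic features of the current state predict the next state.
X = np.column_stack([np.ones(499), x[:-1], x[:-1] ** 2])
w = tikhonov_fit(X, x[1:])
print(np.round(w, 3))  # ≈ [0, 4, -4], recovering the map globally
```

Because the data exactly satisfy a quadratic relation, the regularized global regression recovers the map's coefficients; on noisy real data a larger `lam` would trade fit for smoothness.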
Minimal length uncertainty relation and ultraviolet regularization
Kempf, Achim; Mangano, Gianpiero
1997-06-01
Studies in string theory and quantum gravity suggest the existence of a finite lower limit Δx0 to the possible resolution of distances, at the latest on the scale of the Planck length of 10-35 m. Within the framework of the Euclidean path integral we explicitly show ultraviolet regularization in field theory through this short distance structure. Both rotation and translation invariance can be preserved. An example is studied in detail.
Performance Measures for Public Participation Methods : Final Report
2018-01-01
Public engagement is an important part of transportation project development, but measuring its effectiveness is typically piecemeal. Performance measurement, described by the Urban Institute as the measurement on a regular basis of the results (o...
Solution path for manifold regularized semisupervised classification.
Wang, Gang; Wang, Fei; Chen, Tao; Yeung, Dit-Yan; Lochovsky, Frederick H
2012-04-01
Traditional learning algorithms use only labeled data for training. However, labeled examples are often difficult or time consuming to obtain since they require substantial human labeling efforts. On the other hand, unlabeled data are often relatively easy to collect. Semisupervised learning addresses this problem by using large quantities of unlabeled data with labeled data to build better learning algorithms. In this paper, we use the manifold regularization approach to formulate the semisupervised learning problem where a regularization framework which balances a tradeoff between loss and penalty is established. We investigate different implementations of the loss function and identify the methods which have the least computational expense. The regularization hyperparameter, which determines the balance between loss and penalty, is crucial to model selection. Accordingly, we derive an algorithm that can fit the entire path of solutions for every value of the hyperparameter. Its computational complexity after preprocessing is quadratic only in the number of labeled examples rather than the total number of labeled and unlabeled examples.
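A hedged sketch of the manifold-regularization framework described above, using the standard Laplacian-regularized least squares (LapRLS) closed-form solution rather than the authors' solution-path algorithm; the kernel, the simplified normal equations, the toy two-cluster data, and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def lap_rls(X, y_labeled, n_labeled, gamma_a=1e-2, gamma_i=1e-2, sigma=1.0):
    # Manifold-regularized least squares (LapRLS-style sketch):
    # squared loss on the labeled points, an ambient RKHS norm penalty,
    # and a graph-Laplacian smoothness penalty built from ALL points.
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))          # Gaussian kernel, all points
    W = K.copy()
    np.fill_diagonal(W, 0.0)                    # graph adjacency weights
    L = np.diag(W.sum(1)) - W                   # graph Laplacian
    J = np.zeros((n, n))
    J[:n_labeled, :n_labeled] = np.eye(n_labeled)
    y = np.zeros(n)
    y[:n_labeled] = y_labeled
    l = n_labeled
    # Representer-theorem system: (J K + gamma_a l I + gamma_i L K) a = y
    A = J @ K + gamma_a * l * np.eye(n) + gamma_i * L @ K
    alpha = np.linalg.solve(A, y)
    return K @ alpha                            # scores on all points

# Two well-separated clusters, one labeled point per cluster.
rng = np.random.default_rng(1)
X0 = rng.normal([0.0, 0.0], 0.3, (20, 2))
X1 = rng.normal([2.0, 2.0], 0.3, (20, 2))
X = np.vstack([X0[:1], X1[:1], X0[1:], X1[1:]])  # rows 0,1 are labeled
scores = lap_rls(X, np.array([-1.0, 1.0]), n_labeled=2)
print(scores[2:21].mean(), scores[21:].mean())   # labels spread along clusters
```

The Laplacian term is what lets two labeled points classify forty: unlabeled points inherit the label of their cluster through the smoothness penalty.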
The biological basis of radiotherapy
International Nuclear Information System (INIS)
Steel, G.G.; Adams, G.E.; Horwich, A.
1989-01-01
The focus of this book is the biological basis of radiotherapy. The papers presented include: Temporal stages of radiation action:free radical processes; The molecular basis of radiosensitivity; and Radiation damage to early-reacting normal tissue
Directory of Open Access Journals (Sweden)
Armine Kotin Mortimer
1981-01-01
The closural device of narration as salvation represents the lack of finality in three novels. In De Beauvoir's Tous les hommes sont mortels an immortal character turns his story to account, but the novel makes a mockery of the historical sense by which men define themselves. In the closing pages of Butor's La Modification, the hero plans to write a book to save himself. Through the thrice-considered portrayal of the Paris-Rome relationship, the ending shows the reader how to bring about closure, but this collective critique written by readers will always be a future book. Simon's La Bataille de Pharsale, the most radical attempt to destroy finality, is an infinite text. No new text can be written. This extreme of perversion guarantees bliss (jouissance). If the ending of De Beauvoir's novel transfers the burden of a non-final world onto a new victim, Butor's non-finality lies in the deferral to a future writing, while Simon's writer is stuck in a writing loop, in which writing has become its own end and hence can have no end. The deconstructive and tragic form of contemporary novels proclaims the loss of belief in a finality inherent in the written text, to the profit of writing itself.
DEVELOPMENT OF INNOVATION MANAGEMENT THEORY BASED ON SYSTEM-WIDE REGULARITIES
Directory of Open Access Journals (Sweden)
Violetta N. Volkova
2013-01-01
The problem of comprehending innovation management theory, and the possibility of developing it on the basis of systems theory, is set up. The authors consider features of the management of socio-economic systems as open, self-organising systems with active components, and give a classification of the systems' regularities illustrating these features. The need to take into account the regularities of emergence, hierarchical order, equifinality, Ashby's law of requisite variety, historicity and self-organization is shown.
Regularities of structure formation on different stages of WC-Co hard alloys fabrication
Energy Technology Data Exchange (ETDEWEB)
Chernyavskij, K S
1987-03-01
Some regularities of structural transformations in powder products of hard alloy fabrication have been formulated on the basis of the results of the author's works and those of other domestic and foreign researchers. New data are given confirming the influence of the technological prehistory of a carbide powder on the mechanism of its particle grinding, as well as the influence of the structural-energy state of the WC powder on the course of the WC-Co alloy structure formation processes. Some possibilities for applying the studied regularities in practice are considered.
Sparsity regularization for parameter identification problems
International Nuclear Information System (INIS)
Jin, Bangti; Maass, Peter
2012-01-01
The investigation of regularization schemes with sparsity promoting penalty terms has been one of the dominant topics in the field of inverse problems over the last years, and Tikhonov functionals with ℓ p -penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓ p sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some
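The basic iterated soft shrinkage approach mentioned above can be sketched for the linear case. This is the textbook ISTA scheme for the ℓ1-penalized Tikhonov functional, not code from the review; the problem sizes and penalty weight are arbitrary.

```python
import numpy as np

def soft_shrink(x, t):
    # Soft-thresholding: the proximal map of the l1 penalty.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=2000):
    # Iterated soft shrinkage (ISTA) for
    #   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.
    # Gradient step on the quadratic term, then shrinkage on the penalty.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_shrink(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

# Sparse recovery toy problem: 3 active coefficients out of 100,
# observed through a 40 x 100 random linear operator.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = ista(A, y)
print(np.flatnonzero(np.abs(x_hat) > 0.5))  # indices of the recovered support
```

The shrinkage step is exactly what promotes sparsity: coefficients whose gradient update stays below `lam / L` are set to zero rather than merely damped.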
Properties of regular polygons of coupled microring resonators.
Chremmos, Ioannis; Uzunoglu, Nikolaos
2007-11-01
The resonant properties of a closed and symmetric cyclic array of N coupled microring resonators (a coupled-microring resonator regular N-gon) are for the first time determined analytically by applying the transfer matrix approach and the Floquet theorem for periodic propagation in cylindrically symmetric structures. By solving the corresponding eigenvalue problem with the field amplitudes in the rings as eigenvectors, it is shown that, for even or odd N, this photonic molecule possesses 1 + N/2 or (1 + N)/2 resonant frequencies, respectively. The condition for resonances is found to be identical to the familiar dispersion equation of the infinite coupled-microring resonator waveguide with a discrete wave vector. This result reveals the so far latent connection between the two optical structures and is based on the fact that, for a regular polygon, the field transfer matrix over two successive rings is independent of the polygon vertex angle. The properties of the resonant modes are discussed in detail using the illustration of Brillouin band diagrams. Finally, the practical application of a channel-dropping filter based on polygons with an even number of rings is also analyzed.
On Some General Regularities of Formation of the Planetary Systems
Directory of Open Access Journals (Sweden)
Belyakov A. V.
2014-01-01
J. Wheeler's geometrodynamic concept has been used, in which the space continuum is considered as a topologically non-unitary coherent surface admitting the existence of transitions of the input-output kind between distant regions of the space in an additional dimension. This model assumes the existence of closed structures (micro- and macro-contours) formed due to the balance between the main interactions: gravitational, electric, magnetic, and inertial forces. It is such macrocontours that have been demonstrated to form, independently of their material basis, the essential structure of objects at various levels of organization of matter. On the basis of this concept, basic regularities acting during the formation of planetary systems have been obtained in this paper. The existence of two sharply different types of planetary systems has been determined. The dependencies linking the masses of the planets, the diameters of the planets, the orbital radii of the planets, and the mass of the central body have been deduced. The possibility of the formation of Earth-like planets near brown dwarfs has been substantiated. The minimum mass of a planet that may arise in a planetary system has been defined.
Learning Sparse Visual Representations with Leaky Capped Norm Regularizers
Wangni, Jianqiao; Lin, Dahua
2017-01-01
Sparsity-inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper we investigate the usage of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly than those above, and therefore imposes strong sparsity and...
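The paper's exact penalty is not reproduced here; the following is one plausible, explicitly hypothetical form of a "leaky capped" penalty, written only to illustrate the stated idea that weights below a threshold are regularized at full strength while larger weights feel only a small "leak" slope.

```python
import numpy as np

def leaky_capped_penalty(w, theta=0.5, lam=1.0, leak=0.1):
    # Hypothetical leaky capped-norm penalty (illustrative, not the
    # paper's definition): full l1 slope below the cap theta, and a
    # small residual slope `leak` above it, so small weights are pushed
    # hard toward zero while large weights are barely shrunk.
    a = np.abs(w)
    return lam * np.where(a < theta, a, theta + leak * (a - theta))

w = np.array([0.05, 0.3, 0.8, 3.0])
print(leaky_capped_penalty(w))  # [0.05 0.3  0.53 0.75]
```

Note how the marginal cost flattens past the cap: growing a weight from 0.8 to 3.0 adds only 0.22 of penalty, while the same growth below the cap would add the full amount, which is what biases the fit toward a few large, unshrunk weights.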
Regularity theory for mean-field game systems
Gomes, Diogo A; Voskanyan, Vardan
2016-01-01
Beginning with a concise introduction to the theory of mean-field games (MFGs), this book presents the key elements of the regularity theory for MFGs. It then introduces a series of techniques for well-posedness in the context of mean-field problems, including stationary and time-dependent MFGs, subquadratic and superquadratic MFG formulations, and distinct classes of mean-field couplings. It also explores stationary and time-dependent MFGs through a series of a-priori estimates for solutions of the Hamilton-Jacobi and Fokker-Planck equation. It shows sophisticated a-priori systems derived using a range of analytical techniques, and builds on previous results to explain classical solutions. The final chapter discusses the potential applications, models and natural extensions of MFGs. As MFGs connect common problems in pure mathematics, engineering, economics and data management, this book is a valuable resource for researchers and graduate students in these fields.
Harmonic R-matrices for scattering amplitudes and spectral regularization
Energy Technology Data Exchange (ETDEWEB)
Ferro, Livia; Plefka, Jan [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Lukowski, Tomasz [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Humboldt-Univ. Berlin (Germany). IRIS Adlershof; Meneghelli, Carlo [Hamburg Univ. (Germany). Fachbereich 11 - Mathematik; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Staudacher, Matthias [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut), Potsdam (Germany)
2012-12-15
Planar N=4 super Yang-Mills appears to be integrable. While this allows one to find the theory's exact spectrum, integrability has hitherto been of no direct use for scattering amplitudes. To remedy this, we deform all scattering amplitudes by a spectral parameter. The deformed tree-level four-point function turns out to be essentially the one-loop R-matrix of the integrable N=4 spin chain satisfying the Yang-Baxter equation. Deformed on-shell three-point functions yield novel three-leg R-matrices satisfying bootstrap equations. Finally, we supply initial evidence that the spectral parameter might find its use as a novel symmetry-respecting regulator replacing dimensional regularization. Its physical meaning is a local deformation of particle helicity, a fact which might be useful for a much larger class of non-integrable four-dimensional field theories.
A Priori Regularity of Parabolic Partial Differential Equations
Berkemeier, Francisco
2018-05-13
In this thesis, we consider parabolic partial differential equations such as the heat equation, the Fokker-Planck equation, and the porous media equation. Our aim is to develop methods that provide a priori estimates for solutions with singular initial data. These estimates are obtained by understanding the time decay of norms of solutions. First, we derive regularity results for the heat equation by estimating the decay of Lebesgue norms. Then, we apply similar methods to the Fokker-Planck equation with suitable assumptions on the advection and diffusion. Finally, we conclude by extending our techniques to the porous media equation. The sharpness of our results is confirmed by examining known solutions of these equations. The main contribution of this thesis is the use of functional inequalities to express decay of norms as differential inequalities. These are then combined with ODE methods to deduce estimates for the norms of solutions and their derivatives.
Directory of Open Access Journals (Sweden)
Dustin Kai Yan Lau
2014-03-01
Background: Unlike alphabetic languages, Chinese uses a logographic script. However, in many characters the phonetic radical has the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions, resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of the phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method: Participants: thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli: sixty regular, 60 irregular, and 60 pseudo-characters (with at least 75% name agreement) in Chinese were matched by initial phoneme, number of strokes and family size. Additionally, regular and irregular characters were matched by frequency (low) and consistency. Procedure: each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimulus presentation was randomized. Data analysis: ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular, irregular and pseudo-character) as repeated measures (F1 or between subject
DEFF Research Database (Denmark)
Heiselberg, Per; Brohus, Henrik; Nielsen, Peter V.
This final report for the Hybrid Ventilation Centre at Aalborg University describes the activities and research achievements in the project period from August 2001 to August 2006. The report summarises the work performed and the results achieved, with reference to articles and reports published...
GRACE L1b inversion through a self-consistent modified radial basis function approach
Yang, Fan; Kusche, Juergen; Rietbroek, Roelof; Eicker, Annette
2016-04-01
Implementing a regional geopotential representation such as mascons or, more generally, RBFs (radial basis functions) has been widely accepted as an efficient and flexible approach to recover the gravity field from GRACE (Gravity Recovery and Climate Experiment), especially in higher-latitude regions like Greenland, since RBFs allow for regionally specific regularizations over areas which have sufficient and dense GRACE observations. Although existing RBF solutions show a better resolution than classical spherical harmonic solutions, the applied regularizations cause spatial leakage which should be carefully dealt with. It has been shown that leakage is a main error source leading to an evident underestimation of the yearly trend of ice melting over Greenland. Unlike popular post-processing techniques that mitigate leakage signals, this study, for the first time, attempts to reduce the leakage directly in the GRACE L1b inversion by constructing an innovative modified RBF (MRBF) basis in place of the standard RBFs to retrieve a more realistic temporal gravity signal along the coastline. Our point of departure is that the surface mass loading associated with a standard RBF is smooth but disregards the physical consistency between continental mass and the passive ocean response. In this contribution, based on earlier work by Clarke et al. (2007), a physically self-consistent MRBF representation is constructed from standard RBFs with the help of the sea level equation: for a given standard RBF basis, the corresponding MRBF basis is first obtained by keeping the surface load over the continent unchanged, but imposing global mass conservation and an equilibrium response of the oceans. Then, the updated set of MRBFs as well as the standard RBFs are individually employed as basis functions to determine the temporal gravity field from GRACE L1b data. In this way, in the MRBF GRACE solution, the passive (e.g. ice melting and land hydrology response) sea level is automatically
Convergence and fluctuations of Regularized Tyler estimators
Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim
2015-01-01
This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate and, second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem posed by the use of RTEs in practice is the question of setting the regularization parameter ρ. While a high value of ρ is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. The first asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results exist concerning the regime of n going to infinity with N fixed, even though the investigation of this assumption has usually predated the analysis of the most difficult case of N and n large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the parameter ρ.
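The RTE fixed-point iteration can be sketched as follows. This is the standard convex combination of the Tyler data term with the identity matrix; the shrinkage value, iteration count, toy covariance, and the omission of any trace normalization are illustrative choices, not the paper's exact estimator.

```python
import numpy as np

def regularized_tyler(X, rho=0.3, n_iter=50):
    # Fixed-point iteration for a regularized Tyler estimator (RTE):
    # a convex combination of the data-driven Tyler term and the
    # identity, which keeps the estimate well conditioned even when
    # samples are few or heavy-tailed.
    n, N = X.shape                              # n observations of size N
    Sigma = np.eye(N)
    for _ in range(n_iter):
        Sinv = np.linalg.inv(Sigma)
        q = np.einsum('ij,jk,ik->i', X, Sinv, X)  # x_i^T Sigma^{-1} x_i
        T = (X.T / q) @ X * (N / n)               # Tyler data term
        Sigma = (1 - rho) * T + rho * np.eye(N)
    return Sigma

# Gaussian samples with an anisotropic, correlated covariance.
rng = np.random.default_rng(0)
C = np.array([[2.0, 0.8], [0.8, 1.0]])
X = rng.standard_normal((500, 2)) @ np.linalg.cholesky(C).T
S = regularized_tyler(X)
print(np.round(S, 2))  # well conditioned; anisotropy of C preserved
```

Because each sample is normalized by its own Mahalanobis norm, the data term depends only on the directions of the samples, which is the source of the estimator's robustness to outliers; `rho` then trades that robustness against conditioning.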
Energy Technology Data Exchange (ETDEWEB)
2011-05-15
In 2003, the Danish Parliament, in resolution No. B 48 on the dismantling of the nuclear facilities at Risoe, gave consent to the government to begin preparing a decision basis for a Danish final repository for low- and intermediate-level waste. As a result, a working group under the Ministry of Health and Prevention in 2008 prepared the report 'Decision basis for a Danish final repository for low and medium level radioactive waste'. This report recommended three parallel preliminary studies: one on repository concepts, aimed at obtaining the decision-making basis needed to select which concepts to analyze in the process of establishing a final repository; one on transportation of radioactive waste to the repository; and one on regional mapping, aimed at characterizing areas as suitable or unsuitable for locating a repository. The present report contains the main conclusions of each of the three parallel studies in relation to the further localization process. The preliminary studies suggest 22 areas, of which it is recommended to proceed with six in the selection process. The preliminary studies also show that all investigated repository concepts would be possible solutions from a safety standpoint. However, greater risks are associated with repositories near the surface, because they are more exposed to intentional or accidental intrusion. Overall, a medium-deep repository would be the most appropriate solution, but it is also more expensive than a near-surface repository. Both the near-surface and the deeper repositories may be reversible, but reversibility is estimated to increase overall costs and may increase the risk of accidents. The preliminary studies establish a set of conclusions and recommendations concerning future studies of repository concepts and safety analyses, including in relation to the specific geology of the selected locations. The transportation studies show that radio
The use of regularization in inferential measurements
International Nuclear Information System (INIS)
Hines, J. Wesley; Gribok, Andrei V.; Attieh, Ibrahim; Uhrig, Robert E.
1999-01-01
Inferential sensing is the prediction of a plant variable through the use of correlated plant variables. A correct prediction of the variable can be used to monitor sensors for drift or other failures, making periodic instrument calibrations unnecessary. This move from periodic to condition-based maintenance can reduce costs and increase the reliability of the instrument. Having accurate, reliable measurements is important for signals that may impact safety or profitability. This paper investigates how collinearity adversely affects inferential sensing by making the results inconsistent and unrepeatable, and presents regularization as a potential solution (author)
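Ridge (Tikhonov) regression is the standard regularization remedy for collinearity; the abstract does not specify the authors' exact method, so the sketch below merely illustrates the effect on two nearly collinear "sensor" channels, where ordinary least squares spreads weight wildly between the redundant predictors while the ridge solution stays bounded:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge (Tikhonov-regularized) least squares:
    solves (X^T X + lam*I) w = X^T y, stabilizing collinear predictors."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Two nearly identical channels measuring the same underlying signal.
rng = np.random.default_rng(0)
x1 = rng.standard_normal(100)
X = np.column_stack([x1, x1 + 1e-6 * rng.standard_normal(100)])
y = x1 + 0.01 * rng.standard_normal(100)

w_ols = np.linalg.lstsq(X, y, rcond=None)[0]   # unstable under collinearity
w_ridge = ridge_fit(X, y, lam=1e-3)            # bounded, repeatable
```

Re-running the fit with a different noise realization changes `w_ols` drastically but leaves `w_ridge` essentially unchanged, which is exactly the inconsistency the abstract describes.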
Regularization ambiguities in loop quantum gravity
International Nuclear Information System (INIS)
Perez, Alejandro
2006-01-01
One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation, which is free of ultraviolet divergences. However, ambiguities associated with the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of a UV problem--the existence of well-behaved regularizations of the constraints--is intimately linked with the ambiguities arising in the quantum theory. Among these ambiguities is the one associated with the SU(2) unitary representation used in the diffeomorphism covariant 'point-splitting' regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and is referred to here as the m ambiguity. The aim of this paper is to investigate the important implications of this ambiguity. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory and conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity, exhibiting the existence of spurious solutions for higher representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions - due to the difficulties associated with the definition of the physical inner product - it provides evidence supporting the definition of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibility. If the gauge group is SO(3) we find
Effort variation regularization in sound field reproduction
DEFF Research Database (Denmark)
Stefanakis, Nick; Jacobsen, Finn; Sarris, Ioannis
2010-01-01
In this paper, active control is used in order to reproduce a given sound field in an extended spatial region. A method is proposed which minimizes the reproduction error at a number of control positions with the reproduction sources holding a certain relation within their complex strengths......), and adaptive wave field synthesis (AWFS), both under free-field conditions and in reverberant rooms. It is shown that effort variation regularization overcomes the problems associated with small spaces and with a low ratio of direct to reverberant energy, improving thus the reproduction accuracy...
New regularities in mass spectra of hadrons
International Nuclear Information System (INIS)
Kajdalov, A.B.
1989-01-01
The properties of bosonic and baryonic Regge trajectories for hadrons composed of light quarks are considered. Experimental data agree with the existence of daughter trajectories consistent with string models. It is pointed out that the parity doubling for baryonic trajectories, observed experimentally, is not understood in the existing quark models. The mass spectrum of bosons and baryons indicates an approximate supersymmetry in the mass region M>1 GeV. These regularities indicate a high degree of symmetry for the dynamics in the confinement region. 8 refs.; 5 figs
Total-variation regularization with bound constraints
International Nuclear Information System (INIS)
Chartrand, Rick; Wohlberg, Brendt
2009-01-01
We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
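The splitting idea, decoupling the TV term from the data term so that each subproblem is easy, can be illustrated in one dimension. The following ADMM-style sketch is a simplified illustration (without the paper's bound constraints, and not the authors' exact solver): the u-step is a linear solve and the split variable d is updated by soft-thresholding.

```python
import numpy as np

def tv_denoise_1d(f, lam, mu=1.0, n_iter=200):
    """1-D total-variation denoising by variable splitting (ADMM-style).

    Solves min_u 0.5*||u - f||^2 + lam*||D u||_1, with D the forward
    difference operator, by alternating a linear solve in u with a
    soft-thresholding (shrinkage) step in the split variable d.
    """
    n = len(f)
    D = np.diff(np.eye(n), axis=0)        # (n-1, n) forward differences
    A = np.eye(n) + mu * D.T @ D          # normal matrix of the u-step
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)                   # scaled dual variable
    u = f.copy()
    shrink = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))
        Du = D @ u
        d = shrink(Du + b, lam / mu)      # the only place TV enters
        b = b + Du - d
    return u
```

Because the constraint-handling is isolated in the d and b updates, the u-step solver could be swapped for any existing quadratic solver, which is the flexibility the abstract emphasizes.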
Bayesian regularization of diffusion tensor images
DEFF Research Database (Denmark)
Frandsen, Jesper; Hobolth, Asger; Østergaard, Leif
2007-01-01
Diffusion tensor imaging (DTI) is a powerful tool in the study of the course of nerve fibre bundles in the human brain. Using DTI, the local fibre orientation in each image voxel can be described by a diffusion tensor which is constructed from local measurements of diffusion coefficients along...... several directions. The measured diffusion coefficients and thereby the diffusion tensors are subject to noise, leading to possibly flawed representations of the three dimensional fibre bundles. In this paper we develop a Bayesian procedure for regularizing the diffusion tensor field, fully utilizing...
Indefinite metric and regularization of electrodynamics
International Nuclear Information System (INIS)
Gaudin, M.
1984-06-01
The invariant regularization of Pauli and Villars in quantum electrodynamics can be considered as deriving from a local and causal Lagrangian theory for spin-1/2 bosons, by introducing an indefinite metric and a condition on the allowed states similar to the Lorentz condition. A consequence is the asymptotic freedom of the photon's propagator. We present a calculation of the effective charge to fourth order in the coupling as a function of the auxiliary masses, the theory avoiding all mass divergences to this order [fr
Strategies for regular segmented reductions on GPU
DEFF Research Database (Denmark)
Larsen, Rasmus Wriedt; Henriksen, Troels
2017-01-01
We present and evaluate an implementation technique for regular segmented reductions on GPUs. Existing techniques tend to be either consistent in performance but relatively inefficient in absolute terms, or optimised for specific workloads and thereby exhibiting bad performance for certain input...... is in the context of the Futhark compiler, the implementation technique is applicable to any library or language that has a need for segmented reductions. We evaluate the technique on four microbenchmarks, two of which we also compare to implementations in the CUB library for GPU programming, as well as on two...
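The defining property of a regular segmented reduction, namely that all segments have equal length, is what lets an implementation reshape the flat value array and reduce along a single axis; this is the same regularity a GPU kernel exploits for coalesced access. A NumPy sketch of the idea (illustrative only, not the Futhark implementation):

```python
import numpy as np

def segmented_sum_regular(values, seg_len):
    """Regular segmented sum: every segment has the same length, so the
    flat array can simply be reshaped to (num_segments, seg_len) and
    reduced along the last axis."""
    return values.reshape(-1, seg_len).sum(axis=1)

def segmented_sum_irregular(values, offsets):
    """General segmented sum for comparison: reduce between start offsets."""
    return np.add.reduceat(values, offsets)
```

For example, `segmented_sum_regular(np.arange(12.0), 4)` yields `[6., 22., 38.]`; the irregular variant with offsets `[0, 4, 8]` gives the same result but cannot assume a fixed stride.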
Energy Technology Data Exchange (ETDEWEB)
Hollingsworth, Jeff
2014-04-04
The goal of this project was to create a community tool infrastructure for program development tools targeting petascale-class machines and beyond. This includes performance analysis, debugging, and correctness tools, as well as tuning and optimization frameworks. The infrastructure provides a comprehensive and extensible set of individual tool-building components. Within this project we developed the basis for this infrastructure as well as a set of core modules that allow comprehensive performance analysis at scale. Further, we developed a methodology and workflow that allow others to add or replace modules, to integrate parts into their own tools, or to customize existing solutions.
Cold Vacuum Drying (CVD) Facility Design Basis Accident Analysis Documentation
Energy Technology Data Exchange (ETDEWEB)
PIEPHO, M.G.
1999-10-20
This document provides the detailed accident analysis to support HNF-3553, Annex B, Spent Nuclear Fuel Project Final Safety Analysis Report, "Cold Vacuum Drying Facility Final Safety Analysis Report (FSAR)." All assumptions, parameters and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the FSAR.
Cold Vacuum Drying Facility Design Basis Accident Analysis Documentation
International Nuclear Information System (INIS)
PIEPHO, M.G.
1999-01-01
This document provides the detailed accident analysis to support HNF-3553, Annex B, Spent Nuclear Fuel Project Final Safety Analysis Report, "Cold Vacuum Drying Facility Final Safety Analysis Report (FSAR)." All assumptions, parameters and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the FSAR.
Energy Technology Data Exchange (ETDEWEB)
Stinis, Panos [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2016-08-07
This is the final report for the work conducted at the University of Minnesota (during the period 12/01/12-09/18/14) by PI Panos Stinis as part of the "Collaboratory on Mathematics for Mesoscopic Modeling of Materials" (CM4). CM4 is a multi-institution DOE-funded project whose aim is to conduct basic and applied research in the emerging field of mesoscopic modeling of materials.
Manifold-splitting regularization, self-linking, twisting, writhing numbers of space-time ribbons
International Nuclear Information System (INIS)
Tze, C.H.
1988-01-01
The authors present an alternative formulation of Polyakov's regularization of Gauss' integral formula for a single closed Feynman path. A key element in his proof of the D = 3 fermi-bose transmutations induced by topological gauge fields, this regularization is linked here with the existence and properties of a nontrivial topological invariant for a closed space ribbon. This self-linking coefficient, an integer, is the sum of two differential characteristics of the ribbon, its twisting and writhing numbers. These invariants form the basis for a physical interpretation of our regularization. Their connection to Polyakov's spinorization is discussed. The authors further generalize their construction to the self-linking, twisting and writhing of higher dimensional d = η (odd) submanifolds in D = (2η + 1) space-time
Emotion regulation deficits in regular marijuana users.
Zimmermann, Kaeli; Walz, Christina; Derckx, Raissa T; Kendrick, Keith M; Weber, Bernd; Dore, Bruce; Ochsner, Kevin N; Hurlemann, René; Becker, Benjamin
2017-08-01
Effective regulation of negative affective states has been associated with mental health. Impaired regulation of negative affect represents a risk factor for dysfunctional coping mechanisms such as drug use and thus could contribute to the initiation and development of problematic substance use. This study investigated behavioral and neural indices of emotion regulation in regular marijuana users (n = 23) and demographically matched nonusing controls (n = 20) by means of an fMRI cognitive emotion regulation (reappraisal) paradigm. Relative to nonusing controls, marijuana users demonstrated increased neural activity in a bilateral frontal network comprising precentral, middle cingulate, and supplementary motor regions during reappraisal of negative affect (P marijuana users relative to controls. Together, the present findings could reflect an unsuccessful attempt of compensatory recruitment of additional neural resources in the context of disrupted amygdala-prefrontal interaction during volitional emotion regulation in marijuana users. As such, impaired volitional regulation of negative affect might represent a consequence of, or risk factor for, regular marijuana use. Hum Brain Mapp 38:4270-4279, 2017. © 2017 Wiley Periodicals, Inc.
Efficient multidimensional regularization for Volterra series estimation
Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan
2018-05-01
This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need of long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid the excessive memory needs in case of long measurements or large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
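For the first-order (FIR) slice of a Volterra series, the regularization approach amounts to a Bayesian regularized least-squares estimate with a prior encoding exponential decay and smoothness of the impulse response. The sketch below uses the TC ("tuned/correlated") kernel from the linear-impulse-response literature the abstract builds on; parameter names and values are illustrative, not those of the paper:

```python
import numpy as np

def fir_estimate_regularized(u, y, n_taps, lam=0.9, c=1.0, sigma2=0.1):
    """Regularized FIR (first-order Volterra) estimate with a TC prior
    P[i, j] = c * lam**max(i, j), encoding exponential decay and
    smoothness of the impulse response.

    u, y   : input and output records of equal length N.
    sigma2 : assumed noise variance (acts as the regularization weight).
    """
    N = len(u)
    # Toeplitz-like regressor matrix of delayed inputs
    Phi = np.zeros((N, n_taps))
    for k in range(n_taps):
        Phi[k:, k] = u[:N - k]
    i = np.arange(n_taps)
    P = c * lam ** np.maximum.outer(i, i)        # TC prior covariance
    # MAP estimate: g = P Phi^T (Phi P Phi^T + sigma2 I)^{-1} y
    return P @ Phi.T @ np.linalg.solve(Phi @ P @ Phi.T + sigma2 * np.eye(N), y)
```

Higher-order Volterra kernels extend this by building multidimensional regressors and multidimensional decay/smoothness priors, which is where the paper's memory-efficient gradient-based estimation becomes necessary.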
Supporting Regularized Logistic Regression Privately and Efficiently
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
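Setting the cryptographic machinery aside (the privacy layer is the paper's contribution and is not reproduced here), the underlying statistical model is ordinary L2-regularized logistic regression; a minimal sketch fitted by plain gradient descent:

```python
import numpy as np

def logreg_l2(X, y, lam=1.0, lr=0.1, n_iter=500):
    """L2-regularized logistic regression fitted by gradient descent.

    X   : (n, d) feature matrix.
    y   : (n,) labels in {0, 1}.
    lam : ridge penalty weight on the coefficients.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted probabilities
        grad = X.T @ (p - y) / n + lam * w   # penalized log-loss gradient
        w -= lr * grad
    return w
```

The gradient's simple sum-over-institutions structure is what makes the model amenable to the distributed, cryptographically protected evaluation the paper proposes.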
Multiple graph regularized nonnegative matrix factorization
Wang, Jim Jing-Yan
2013-10-01
Non-negative matrix factorization (NMF) has been widely used as a data representation method based on components. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph regularized NMF (GrNMF) has been proposed by Cai et al. by constructing an affinity graph and searching for a matrix factorization that respects graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a GrNMF variant, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters, inspired by ensemble manifold regularization. The factorization matrices and the linear combination coefficients of the graphs are determined simultaneously within a unified objective function. They are alternately optimized in an iterative algorithm, thus resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.
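The single-graph GrNMF update, the building block that MultiGrNMF combines over several graphs, can be sketched with the standard multiplicative rules. This is a simplified illustration: the paper's MultiGrNMF additionally optimizes the graph combination weights, which is omitted here.

```python
import numpy as np

def gnmf(X, W, rank, lam=1.0, n_iter=200, eps=1e-9):
    """Graph-regularized NMF (single graph) with multiplicative updates.

    X : (m, n) nonnegative data matrix (features x samples).
    W : (n, n) symmetric nonnegative affinity graph on the samples.
    Minimizes ||X - U V^T||_F^2 + lam * tr(V^T L V), with L = D - W.
    """
    rng = np.random.default_rng(0)
    m, n = X.shape
    U = rng.random((m, rank))
    V = rng.random((n, rank))
    D = np.diag(W.sum(axis=1))               # degree matrix
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V
```

The multiplicative form preserves nonnegativity automatically, at the cost of the slow convergence that motivates faster optimizers elsewhere in this literature.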
Accelerating Large Data Analysis By Exploiting Regularities
Moran, Patrick J.; Ellsworth, David
2003-01-01
We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical of Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
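The simplest regularity described, an axis-aligned rectilinear zone, can be detected by checking that each coordinate array of a curvilinear mesh varies along only one logical axis; a small NumPy sketch of that test (illustrative only, not the paper's implementation):

```python
import numpy as np

def is_rectilinear(Xc, Yc, tol=1e-12):
    """Return True if a 2-D curvilinear mesh (coordinate arrays Xc, Yc of
    shape (nj, ni)) is actually axis-aligned rectilinear, i.e. Xc varies
    only along the i axis and Yc only along the j axis. Such a zone can
    then be stored as two 1-D coordinate vectors instead of full 2-D
    arrays, the kind of substitution that accelerates large-data access."""
    x_const_along_j = np.abs(Xc - Xc[0:1, :]).max() < tol
    y_const_along_i = np.abs(Yc - Yc[:, 0:1]).max() < tol
    return bool(x_const_along_j and y_const_along_i)
```

A visualization layer can run such checks once per zone at load time and transparently swap in the cheaper representation when the test passes.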
Multiview Hessian regularization for image annotation.
Liu, Weifeng; Tao, Dacheng
2013-07-01
The rapid development of computer hardware and Internet technology makes large-scale data-dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) has therefore received intensive attention in recent years and has been successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smooths the conditional distribution for classification along the manifold encoded in the graph Laplacian. However, it is observed that LR biases the classification function toward a constant function, which possibly results in poor generalization. In addition, LR is developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address the above two problems in LR-based image annotation. In particular, mHR optimally combines multiple HR terms, each of which is obtained from a particular view of the instances, and steers the classification function so that it varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR.
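The Laplacian-regularization baseline that mHR builds on can be sketched in a linear primal form: penalize the data fit, the norm of the weights, and the variation of the predictions over the affinity graph. This is an illustrative sketch, not the authors' kernel formulation:

```python
import numpy as np

def laplacian_rls(X, W, labeled, y, gamma_a=1e-2, gamma_i=1e-1):
    """Laplacian-regularized least squares in a linear primal form.

    X       : (n, d) all instances, labeled and unlabeled.
    W       : (n, n) symmetric affinity graph over all instances.
    labeled : indices of the labeled rows; y: their targets.
    Minimizes ||X_l w - y||^2 + gamma_a*||w||^2 + gamma_i*w^T X^T L X w,
    where L = D - W is the graph Laplacian smoothing the predictions.
    """
    L = np.diag(W.sum(axis=1)) - W
    Xl = X[labeled]
    d = X.shape[1]
    A = Xl.T @ Xl + gamma_a * np.eye(d) + gamma_i * X.T @ L @ X
    return np.linalg.solve(A, Xl.T @ y)
```

Hessian regularization replaces the Laplacian smoothness term with a second-order penalty, which is what removes the bias toward constant functions that the abstract criticizes in LR.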
EIT image reconstruction with four dimensional regularization.
Dai, Tao; Soleimani, Manuchehr; Adler, Andy
2008-09-01
Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on the body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although the data are actually highly correlated, especially in high-speed EIT systems. This paper proposes a 4-D EIT image reconstruction method for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and the 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector, which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on the temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance in comparison to simpler image models.
Regularities of changes of metal melting entropy
International Nuclear Information System (INIS)
Kats, S.A.; Chekhovskoj, V.Ya.
1980-01-01
The most trustworthy data on the temperatures, heats, and entropies of fusion of metals have been used as a basis to elucidate the laws governing variations of the entropy of fusion of metals. The elaborated procedure is used to predict the entropies of fusion of metals whose thermodynamic properties at high temperatures have not yet been investigated
Accretion onto some well-known regular black holes
International Nuclear Information System (INIS)
Jawad, Abdul; Shahzad, M.U.
2016-01-01
In this work, we discuss accretion onto static spherically symmetric regular black holes for specific choices of the equation-of-state parameter. The underlying regular black holes are charged regular black holes obtained using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, as well as the Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of the radial velocity, the energy density, and the rate of change of the mass for each of the regular black holes. (orig.)
Accretion onto some well-known regular black holes
Energy Technology Data Exchange (ETDEWEB)
Jawad, Abdul; Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan)
2016-03-15
In this work, we discuss accretion onto static spherically symmetric regular black holes for specific choices of the equation-of-state parameter. The underlying regular black holes are charged regular black holes obtained using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, as well as the Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of the radial velocity, the energy density, and the rate of change of the mass for each of the regular black holes. (orig.)
Manifold regularized discriminative nonnegative matrix factorization with fast gradient descent.
Guan, Naiyang; Tao, Dacheng; Luo, Zhigang; Yuan, Bo
2011-07-01
Nonnegative matrix factorization (NMF) has become a popular data-representation method and has been widely used in image processing and pattern-recognition problems. This is because the learned bases can be interpreted as a natural parts-based representation of data, and this interpretation is consistent with the psychological intuition of combining parts to form a whole. For practical classification tasks, however, NMF ignores both the local geometry of data and the discriminative information of different classes. In addition, existing research results show that the learned basis is not necessarily parts-based, because there is neither an explicit nor an implicit constraint to ensure that the representation is parts-based. In this paper, we introduce manifold regularization and margin maximization to NMF and obtain the manifold regularized discriminative NMF (MD-NMF) to overcome the aforementioned problems. The multiplicative update rule (MUR) can be applied to optimizing MD-NMF, but it converges slowly. In this paper, we propose a fast gradient descent (FGD) to optimize MD-NMF. FGD contains a Newton method that searches for the optimal step length, and thus FGD converges much faster than MUR. In addition, FGD includes MUR as a special case and can be applied to optimizing NMF and its variants. For a problem with 165 samples in R^1600, FGD converges in 28 s, while MUR requires 282 s. We also apply FGD in a variant of MD-NMF, and experimental results confirm its efficiency. Experimental results on several face image datasets suggest the effectiveness of MD-NMF.
Canister Storage Building (CSB) Design Basis Accident Analysis Documentation
Energy Technology Data Exchange (ETDEWEB)
CROWE, R.D.
1999-09-09
This document provides the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, "Canister Storage Building Final Safety Analysis Report." All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.
Directory of Open Access Journals (Sweden)
Patrick W. Keeley
2014-10-01
Full Text Available Retinal neurons are often arranged as non-random distributions called mosaics, as their somata minimize proximity to neighboring cells of the same type. The horizontal cells serve as an example of such a mosaic, but little is known about the developmental mechanisms that underlie their patterning. To identify genes involved in this process, we have used three different spatial statistics to assess the patterning of the horizontal cell mosaic across a panel of genetically distinct recombinant inbred strains. To avoid the confounding effect of cell density, which varies two-fold across these different strains, we computed the real/random regularity ratio, expressing the regularity of a mosaic relative to a randomly distributed simulation of similarly sized cells. To test whether this latter statistic better reflects the variation in biological processes that contribute to horizontal cell spacing, we subsequently compared the genetic linkage for each of these two traits, the regularity index and the real/random regularity ratio, each computed from the distribution of nearest neighbor (NN) distances and from the Voronoi domain (VD) areas. Finally, we compared each of these analyses with another index of patterning, the packing factor. Variation in the regularity indexes, as well as their real/random regularity ratios, and the packing factor, mapped quantitative trait loci (QTL) to the distal ends of Chromosomes 1 and 14. For the NN and VD analyses, we found that the degree of linkage was greater when using the real/random regularity ratio rather than the respective regularity index. Using informatic resources, we narrow the list of prospective genes positioned at these two intervals to a small collection of six genes that warrant further investigation to determine their potential role in shaping the patterning of the horizontal cell mosaic.
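The two key statistics, the NN regularity index and its real/random ratio, are straightforward to compute; a sketch follows (simplified: the random simulations here ignore soma size, whereas the study's simulations use similarly sized cells, and the Voronoi-domain variant is omitted):

```python
import numpy as np

def nn_regularity_index(points):
    """Nearest-neighbor regularity index: mean NN distance divided by the
    standard deviation of the NN distances (higher = more regular)."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)           # ignore self-distances
    nn = dist.min(axis=1)
    return nn.mean() / (nn.std() + 1e-12)

def real_random_ratio(points, extent, n_sim=200, seed=0):
    """Regularity index of the real mosaic divided by the mean index of
    density-matched random simulations, removing the density confound."""
    rng = np.random.default_rng(seed)
    n = len(points)
    sims = [nn_regularity_index(rng.uniform(0, extent, size=(n, 2)))
            for _ in range(n_sim)]
    return nn_regularity_index(points) / np.mean(sims)
```

A Poisson-random field scores a ratio near 1 by construction, so values well above 1 indicate spacing beyond what density alone produces, which is the point of using the ratio as the mapped trait.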
Zeta Function Regularization in Casimir Effect Calculations and J. S. Dowker's Contribution
Elizalde, Emilio
2012-07-01
A summary of relevant contributions, ordered in time, to the subject of operator zeta functions and their application to physical issues is provided. The description ends with the seminal contributions of Stephen Hawking and Stuart Dowker and collaborators, considered by many authors as the actual starting point of the introduction of zeta function regularization methods in theoretical physics, in particular, for quantum vacuum fluctuation and Casimir effect calculations. After recalling a number of the strengths of this powerful and elegant method, some of its limitations are discussed. Finally, recent results of the so-called operator regularization procedure are presented.
Regularization methods for ill-posed problems in multiple Hilbert scales
International Nuclear Information System (INIS)
Mazzieri, Gisela L; Spies, Ruben D
2012-01-01
Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)
Directory of Open Access Journals (Sweden)
Fairouz Zouyed
2015-01-01
Full Text Available This paper discusses the inverse problem of determining an unknown source in a second order differential equation from measured final data. This problem is ill-posed; that is, the solution (if it exists) does not depend continuously on the data. In order to solve the considered problem, an iterative method is proposed. Using this method a regularized solution is constructed and an a priori error estimate between the exact solution and its regularized approximation is obtained. Moreover, numerical results are presented to illustrate the accuracy and efficiency of this method.
Laplacian embedded regression for scalable manifold regularization.
Chen, Lin; Tsang, Ivor W; Xu, Dong
2012-06-01
Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small-scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using the ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large-scale SSL problems. Extensive experiments on both toy and real
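The "original kernel + graph kernel" structure of the transformed kernel can be illustrated in a small sketch. Here an RBF kernel and a toy graph kernel (I + μL)⁻¹ built from a kNN graph Laplacian stand in for the exact construction derived in the paper:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def graph_laplacian(X, k=5):
    """Unnormalized Laplacian L = D - W of a symmetrized kNN graph (0/1 weights)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    W = np.zeros_like(d)
    for i, js in enumerate(np.argsort(d, axis=1)[:, :k]):
        W[i, js] = 1.0
    W = np.maximum(W, W.T)               # symmetrize
    return np.diag(W.sum(axis=1)) - W

def transformed_kernel(X, gamma=1.0, mu=1.0, k=5):
    """Original kernel plus a graph kernel capturing the manifold structure.
    The graph kernel here is (I + mu*L)^{-1}; the exact form used by
    LapESVR/LapERLS follows from their optimization problems."""
    n = len(X)
    K_graph = np.linalg.inv(np.eye(n) + mu * graph_laplacian(X, k))
    return rbf_kernel(X, gamma) + K_graph
```

Because L is sparse for a kNN graph, the inverse in the graph-kernel term is the only expensive step, which is exactly the computational advantage the abstract points out.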
Cui, Yujun; Li, Yanjun; Yan, Yanfeng; Yang, Ruifu
2008-11-01
CRISPRs (Clustered Regularly Interspaced Short Palindromic Repeats), the basis of spoligotyping technology, can provide prokaryotes with heritable adaptive immunity against phage invasion. Studies on CRISPR loci and their associated elements, including various CAS (CRISPR-associated) proteins and leader sequences, are still in their infancy. We introduce the brief history, structure, function, bioinformatics research and applications of this amazing immunity system in prokaryotic organisms, to inspire more scientists to take an interest in this developing topic.
Regular Functions with Values in Ternary Number System on the Complex Clifford Analysis
Directory of Open Access Journals (Sweden)
Ji Eun Kim
2013-01-01
Full Text Available We define a new modified basis î, which is an association of the two bases e₁ and e₂. We give an expression of the form z = x₀ + î z̄₀, where x₀ is a real number and z̄₀ is a complex number on the three-dimensional real skew field. We then study the properties of regular functions with values in the ternary field and in the reduced quaternions by Clifford analysis.
REFORMASI SISTEM AKUNTANSI CASH BASIS MENUJU SISTEM AKUNTANSI ACCRUAL BASIS
Directory of Open Access Journals (Sweden)
Yuri Rahayu
2016-03-01
Full Text Available Abstract – The accounting reform movement was born with the aim of structuring the direction of improvement. It is marked by the enactment of the 2003 Act and Act No. 1 of 2004, which became the basis of Government Regulation No. 24 of 2005 on Government Accounting Standards (SAP). In general, accounting records are based on two systems: the cash basis and the accrual basis. In practice, students are still confused by the differences between the two methods, which results in a lack of understanding of their recording treatments. This research is intended as a reference for students learning basic accounting, providing information and a more meaningful understanding of cash-basis and accrual-basis accounting. It was conducted through a normative, document-based approach, combining trusted reference sources from books and the internet, processed on the foundation of the author's knowledge and experience. The conclusion is that, to understand the difference between the cash-basis and accrual-basis systems, students require an understanding of the treatment under both methods, which in turn requires reading practice and reference sources. Keywords: reform, cash basis, accrual basis
Accelerating GW calculations with optimal polarizability basis
Energy Technology Data Exchange (ETDEWEB)
Umari, P.; Stenuit, G. [CNR-IOM DEMOCRITOS Theory Elettra Group, Basovizza (Trieste) (Italy); Qian, X.; Marzari, N. [Department of Materials Science and Engineering, MIT, Cambridge, MA (United States); Giacomazzi, L.; Baroni, S. [CNR-IOM DEMOCRITOS Theory Elettra Group, Basovizza (Trieste) (Italy); SISSA - Scuola Internazionale Superiore di Studi Avanzati, Trieste (Italy)
2011-03-15
We present a method for accelerating GW quasi-particle (QP) calculations. This is achieved through the introduction of optimal basis sets for representing polarizability matrices. First the real-space products of Wannier-like orbitals are constructed and then optimal basis sets are obtained through singular value decomposition. Our method is validated by calculating the vertical ionization energies of the benzene molecule and the band structure of crystalline silicon. Its potentialities are illustrated by calculating the QP spectrum of a model structure of vitreous silica. Finally, we apply our method for studying the electronic structure properties of a model of quasi-stoichiometric amorphous silicon nitride and of its point defects. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
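The basis-construction step — form real-space products of localized orbitals, then extract an optimal reduced basis by singular value decomposition — can be sketched with random toy "orbitals" on a small grid (all sizes are hypothetical; real calculations work with Wannier-like orbitals on dense grids):

```python
import numpy as np

def optimal_basis(products, n_keep):
    """Keep the leading left singular vectors of the (grid_points x n_products)
    matrix as a reduced basis for representing the polarizability."""
    U, s, _ = np.linalg.svd(products, full_matrices=False)
    return U[:, :n_keep], s

rng = np.random.default_rng(0)
orbs = rng.normal(size=(200, 6))                   # toy "orbitals" on a 200-point grid
prods = np.stack([orbs[:, i] * orbs[:, j]          # real-space pair products
                  for i in range(6) for j in range(i, 6)], axis=1)
B, s = optimal_basis(prods, 10)
resid = prods - B @ (B.T @ prods)                  # what the reduced basis misses
```

The singular-value spectrum tells you how many basis vectors are needed: truncating where the spectrum decays controls the error of the reduced representation.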
Constrained least squares regularization in PET
International Nuclear Information System (INIS)
Choudhury, K.R.; O'Sullivan, F.O.
1996-01-01
Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood, at a fraction of the computational effort
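The flavor of a non-iterative constrained least squares estimate — a single direct solve, with no projection or backprojection loop — can be sketched as follows. Clipping negatives after a ridge solve is only a crude stand-in for the paper's constraint handling:

```python
import numpy as np

def ridge_nonneg(A, b, lam=1e-6):
    """Regularized least squares by one direct solve of the normal equations
    (A^T A + lam*I) x = A^T b, followed by clipping negative values -- a
    crude, non-iterative sketch of a constrained estimate, in contrast to
    iterative ART/EM-style reconstruction."""
    n = A.shape[1]
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    return np.clip(x, 0.0, None)
```

The whole estimate costs one factorization of an n×n matrix, which is the kind of computational saving over iterative ML methods that the abstract emphasizes.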
Regularities of radiorace formation in yeasts
International Nuclear Information System (INIS)
Korogodin, V.I.; Bliznik, K.M.; Kapul'tsevich, Yu.G.; Petin, V.G.; Akademiya Meditsinskikh Nauk SSSR, Obninsk. Nauchno-Issledovatel'skij Inst. Meditsinskoj Radiologii)
1977-01-01
Two strains of diploid yeast, Saccharomyces ellipsoides Megri 139-B, isolated under natural conditions, and Saccharomyces cerevisiae 5a x 3Bα, heterozygous at genes ade 1 and ade 2, were exposed to ⁶⁰Co γ-quanta. The content of saltant cells forming colonies with changed morphology, of nonviable cells, of respiration-mutant cells, and of cells recombinant at genes ade 1 and ade 2 was determined. A certain regularity was revealed in the distribution of these four cell types among the colonies: the higher the content of cells of any one type, the higher that of cells bearing other hereditary changes.
Regularization destriping of remote sensing imagery
Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle
2017-07-01
We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method, assigning spatial weights to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method on a benchmark data set which represents the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
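A minimal version of the variational idea — minimize a data-fidelity term plus a penalty on variation across the stripe direction, by explicit gradient descent on the Euler-Lagrange equations — can be sketched for vertical stripes. This omits the spatial weighting and inpainting that the paper adds:

```python
import numpy as np

def destripe(img, lam=10.0, n_iter=2000, step=None):
    """Toy variational destriping of vertical (column) stripes: minimize
    ||u - img||^2 + lam * ||D_x u||^2 by explicit gradient descent on the
    Euler-Lagrange equations, with reflective (Neumann) boundaries."""
    u = np.array(img, dtype=float)
    if step is None:
        step = 1.0 / (2.0 + 8.0 * lam)   # stable explicit step size
    for _ in range(n_iter):
        dx2 = np.empty_like(u)           # -(D_x^T D_x u)
        dx2[:, 1:-1] = u[:, 2:] - 2.0 * u[:, 1:-1] + u[:, :-2]
        dx2[:, 0] = u[:, 1] - u[:, 0]
        dx2[:, -1] = u[:, -2] - u[:, -1]
        u -= step * (2.0 * (u - img) - 2.0 * lam * dx2)
    return u
```

High-frequency column-to-column offsets (the stripes) are strongly damped by the penalty, while structure that varies slowly across columns survives thanks to the fidelity term.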
The Regularity of Optimal Irrigation Patterns
Morel, Jean-Michel; Santambrogio, Filippo
2010-02-01
A branched structure is observable in draining and irrigation systems, in electric power supply systems, and in natural objects like blood vessels, river basins or trees. Recent approaches to these networks derive their branched structure from an energy functional whose essential feature is to favor wide routes. Given a flow s in a river, a road, a tube or a wire, the transportation cost per unit length is supposed in these models to be proportional to s^α with 0 < α < 1. We consider the case where the irrigated measure is the Lebesgue density on a smooth open set and the irrigating measure is a single source. In that case we prove that all branches of optimal irrigation trees satisfy an elliptic equation and that their curvature is a bounded measure. In consequence all branching points in the network have a tangent cone made of a finite number of segments, and all other points have a tangent. An explicit counterexample disproves these regularity properties for non-Lebesgue irrigated measures.
Singular tachyon kinks from regular profiles
International Nuclear Information System (INIS)
Copeland, E.J.; Saffin, P.M.; Steer, D.A.
2003-01-01
We demonstrate how Sen's singular kink solution of the Born-Infeld tachyon action can be constructed by taking the appropriate limit of initially regular profiles. It is shown that the order in which different limits are taken plays an important role in determining whether or not such a solution is obtained for a wide class of potentials. Indeed, by introducing a small parameter into the action, we are able circumvent the results of a recent paper which derived two conditions on the asymptotic tachyon potential such that the singular kink could be recovered in the large amplitude limit of periodic solutions. We show that this is explained by the non-commuting nature of two limits, and that Sen's solution is recovered if the order of the limits is chosen appropriately
Two-pass greedy regular expression parsing
DEFF Research Database (Denmark)
Grathwohl, Niels Bjørn Bugge; Henglein, Fritz; Nielsen, Lasse
2013-01-01
We present new algorithms for producing greedy parses for regular expressions (REs) in a semi-streaming fashion. Our lean-log algorithm executes in time O(mn) for REs of size m and input strings of size n and outputs a compact bit-coded parse tree representation. It improves on previous algorithms...... by: operating in only 2 passes; using only O(m) words of random-access memory (independent of n); requiring only kn bits of sequentially written and read log storage, where k ... and not requiring it to be stored at all. Previous RE parsing algorithms do not scale linearly with input size, or require substantially more log storage and employ 3 passes where the first consists of reversing the input, or do not, or are not known to, produce a greedy parse. The performance of our unoptimized C...
Regularization of Instantaneous Frequency Attribute Computations
Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.
2014-12-01
We compare two different methods of computation of a temporally local frequency: 1) a stabilized instantaneous frequency using the theory of the analytic signal; 2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al (1979), as modified by Fomel (2007), and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes. References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. Time-Frequency Analysis: Theory and Applications. USA: Prentice Hall, 1995. Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
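The stabilized division in the analytic-signal method can be sketched as follows. The analytic signal is built via the FFT, and the phase-derivative quotient is regularized by a small ε, a crude stand-in for the roughness-penalized estimate with L-curve/GCV parameter selection described in the abstract:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (the same construction scipy.signal.hilbert uses)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, dt, eps=1e-3):
    """Stabilized instantaneous frequency f = Im(conj(z) z') / (2*pi*|z|^2),
    with the division regularized by a small eps relative to the peak power."""
    z = analytic_signal(x)
    dz = np.gradient(z, dt)              # derivative of the analytic signal
    den = np.abs(z) ** 2
    return np.imag(np.conj(z) * dz) / (2.0 * np.pi * (den + eps * den.max()))
```

Without the eps term the division blows up wherever the envelope |z| is small, which is exactly the instability that motivates regularizing the diagonal.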
Regularized Regression and Density Estimation based on Optimal Transport
Burger, M.; Franek, M.; Schonlieb, C.-B.
2012-01-01
for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations
Incremental projection approach of regularization for inverse problems
Energy Technology Data Exchange (ETDEWEB)
Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)
2016-10-15
This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in the place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method over regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
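The core idea — replace the penalty term by a projection of each iterate onto a convex set of regularized candidates — can be sketched with a toy convex set: a Euclidean ball, whose projection is a simple rescaling. The paper instead projects onto a subspace built from the regularization information:

```python
import numpy as np

def projected_gradient(A, b, radius, n_iter=500):
    """Minimize ||Ax - b||^2 subject to x in a convex set, by projecting
    each gradient iterate onto the set instead of adding a penalty term.
    The set here is the ball ||x|| <= radius (projection = rescaling)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # step below the stability limit
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - b))    # gradient step on the data term
        nrm = np.linalg.norm(x)
        if nrm > radius:                      # projection onto the convex set
            x *= radius / nrm
    return x
```

Every iterate is feasible by construction, so no regularization weight needs to be tuned against the data term, which is one way to read the reported speed advantage.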
Dimensional regularization and analytical continuation at finite temperature
International Nuclear Information System (INIS)
Chen Xiangjun; Liu Lianshou
1998-01-01
The relationship between dimensional regularization and analytical continuation of infrared divergent integrals at finite temperature is discussed and a method of regularization of infrared divergent integrals and infrared divergent sums is given
Bounded Perturbation Regularization for Linear Least Squares Estimation
Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.
2017-01-01
This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded
Regular Generalized Star Star closed sets in Bitopological Spaces
K. Kannan; D. Narasimhan; K. Chandrasekhara Rao; R. Ravikumar
2011-01-01
The aim of this paper is to introduce the concepts of τ1τ2-regular generalized star star closed sets and τ1τ2-regular generalized star star open sets and to study their basic properties in bitopological spaces.
Exclusion of children with intellectual disabilities from regular ...
African Journals Online (AJOL)
The study investigated why teachers exclude children with intellectual disability (ID) from regular classrooms in Nigeria. Participants were 169 regular teachers randomly selected from Oyo and Ogun states. A questionnaire was used to collect data. Results revealed that 57.4% of regular teachers could not cope with children with ID ...
39 CFR 6.1 - Regular meetings, annual meeting.
2010-07-01
... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...
Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis
Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.
2007-01-01
Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…
5 CFR 551.421 - Regular working hours.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Regular working hours. 551.421 Section... Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...
20 CFR 226.35 - Deductions from regular annuity rate.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Deductions from regular annuity rate. 226.35... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...
20 CFR 226.34 - Divorced spouse regular annuity rate.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Divorced spouse regular annuity rate. 226.34... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.34 Divorced spouse regular annuity rate. The regular annuity rate of a divorced spouse is equal to...
20 CFR 226.14 - Employee regular annuity rate.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...
Energy Technology Data Exchange (ETDEWEB)
Jarillo-Herrero, Pablo [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)
2017-02-07
This is the final report of our research program on electronic transport experiments on Topological Insulator (TI) devices, funded by the DOE Office of Basic Energy Sciences. TI-based electronic devices are attractive as platforms for spintronic applications, and for detection of emergent properties such as Majorana excitations, electron-hole condensates, and the topological magneto-electric effect. Most theoretical proposals envision geometries consisting of a planar TI device integrated with materials of distinctly different physical phases (such as ferromagnets and superconductors). Experimental realization of physics tied to the surface states is a challenge due to the ubiquitous presence of bulk carriers in most TI compounds as well as degradation during device fabrication.
Accreting fluids onto regular black holes via Hamiltonian approach
Energy Technology Data Exchange (ETDEWEB)
Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)
2017-08-15
We investigate the accretion of test fluids onto regular black holes such as Kehagias-Sfetsos black holes and regular black holes with a Dagum distribution function. We analyze the accretion process when different test fluids fall onto these regular black holes. The accreting fluid is classified through the equation of state according to the features of the regular black holes. The behavior of the fluid flow and the existence of sonic points are checked for these regular black holes. It is noted that the three-velocity depends on the critical points and on the equation-of-state parameter in phase space. (orig.)
On the regularized fermionic projector of the vacuum
Finster, Felix
2008-03-01
We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.
On the regularized fermionic projector of the vacuum
International Nuclear Information System (INIS)
Finster, Felix
2008-01-01
We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed
MRI reconstruction with joint global regularization and transform learning.
Tanc, A Korhan; Eksioglu, Ender M
2016-10-01
Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to algorithms which use either the patchwise transform learning or the global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Cheong, K; Lee, M; Kang, S; Yoon, J; Park, S; Hwang, T; Kim, H; Kim, K; Han, T; Bae, H
2014-01-01
Purpose: Despite the importance of accurately estimating the respiration regularity of a patient in motion-compensation treatment, an effective and simply applicable method has rarely been reported. The authors propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. Methods: In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a power-of-cosine form with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: the sample standard deviation of the respiration period, the sample standard deviation of the amplitude, and the results of a simple regression of the baseline drift (slope and standard deviation of residuals) of a respiration signal. Overall irregularity (δ) was defined as the Euclidean norm of a newly derived variable obtained using principal component analysis (PCA) of the four fluctuation parameters. Finally, the proposed respiration regularity index was defined as ρ = ln(1 + (1/δ))/2, a higher ρ indicating a more regular breathing pattern. Subsequently, we applied it to simulated and clinical respiration signals from real-time position management (RPM; Varian Medical Systems, Palo Alto, CA) and investigated respiration regularity. Moreover, correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Results: The respiration regularity was determined based on ρ; a ρ of about 0.7 or higher was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in breathing cycle and amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Conclusions: Respiration regularity could be objectively determined using a respiration regularity index, ρ. Such single-index testing of
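A simplified version of the index can be sketched directly from the formula ρ = ln(1 + 1/δ)/2. Here the four fluctuation parameters are stacked into δ by a plain Euclidean norm, omitting the PCA step the abstract describes:

```python
import numpy as np

def respiration_regularity(periods, amplitudes, baseline):
    """rho = ln(1 + 1/delta)/2 from the four fluctuation parameters: SD of
    period, SD of amplitude, and slope / SD of residuals of a linear fit to
    the baseline drift. The paper combines them through PCA; stacking them
    directly into the Euclidean norm delta is a simplification."""
    t = np.arange(len(baseline), dtype=float)
    slope, intercept = np.polyfit(t, baseline, 1)   # baseline-drift regression
    resid_sd = np.std(baseline - (slope * t + intercept))
    delta = np.linalg.norm([np.std(periods), np.std(amplitudes),
                            abs(slope), resid_sd])
    return np.log(1.0 + 1.0 / delta) / 2.0
```

A steady breather (small spread in period and amplitude, flat baseline) gives a small δ and hence a large ρ; erratic breathing drives δ up and ρ down, matching the interpretation in the abstract.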
Energy Technology Data Exchange (ETDEWEB)
Webb, Robert C. [Texas A&M University; Kamon, Teruki [Texas A&M University; Toback, David [Texas A&M University; Safonov, Alexei [Texas A&M University; Dutta, Bhaskar [Texas A&M University; Dimitri, Nanopoulos [Texas A&M University; Pope, Christopher [Texas A&M University; White, James [Texas A&M University
2013-11-18
Overview The High Energy Physics Group at Texas A&M University is submitting this final report for our grant number DE-FG02-95ER40917. This grant has supported our wide range of research activities for over a decade. The reports contained here summarize the latest work done by our research team. Task A (Collider Physics Program): CMS & CDF Profs. T. Kamon, A. Safonov, and D. Toback co-lead the Texas A&M (TAMU) collider program focusing on CDF and CMS experiments. Task D: Particle Physics Theory Our particle physics theory task is the combined effort of Profs. B. Dutta, D. Nanopoulos, and C. Pope. Task E (Underground Physics): LUX & NEXT Profs. R. Webb and J. White (deceased) lead the Xenon-based underground research program consisting of two main thrusts: the first, participation in the LUX two-phase xenon dark matter search experiment and the second, detector R&D primarily aimed at developing future detectors for underground physics (e.g. NEXT and LZ).
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples for alleviating the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
Regularization of the Coulomb scattering problem
International Nuclear Information System (INIS)
Baryshevskii, V.G.; Feranchuk, I.D.; Kats, P.B.
2004-01-01
The exact solution of the Schroedinger equation for the Coulomb potential is used within the scope of both stationary and time-dependent scattering theories in order to find the parameters which determine the regularization of the Rutherford cross section when the scattering angle tends to zero but the distance r from the center remains finite. The angular distribution of the particles scattered in the Coulomb field is studied at a rather large but finite distance r from the center. It is shown that the standard asymptotic representation of the wave functions is inapplicable when small scattering angles are considered. The unitary property of the scattering matrix is analyzed and the 'optical' theorem for this case is discussed. The total and transport cross sections for scattering of a particle by the Coulomb center prove to be finite and are calculated in analytical form. It is shown that the effects under consideration can be important for the observed characteristics of transport processes in semiconductors which are determined by electron and hole scattering by the field of charged impurity centers.
Color correction optimization with hue regularization
Zhang, Heng; Liu, Huaping; Quan, Shuxue
2011-01-01
Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.
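The basic least-squares matrix fit described above can be sketched in a few lines. This is an illustrative example with made-up patch values, not the paper's hue-regularized optimizer (which would add a penalty term protecting memory-color hues):

```python
import numpy as np

# Fit a 3x3 color correction matrix M mapping device-dependent RGB to a
# target color space by minimizing Euclidean color error:
#   M = argmin || device @ M - target ||^2   (ordinary least squares).
# Patch values below are invented for illustration.
device = np.array([[0.20, 0.10, 0.05],
                   [0.60, 0.50, 0.30],
                   [0.10, 0.40, 0.70],
                   [0.90, 0.85, 0.80]])   # measured device RGB patches
target = np.array([[0.25, 0.12, 0.04],
                   [0.65, 0.52, 0.28],
                   [0.08, 0.45, 0.75],
                   [0.95, 0.88, 0.82]])   # corresponding target-space values

M, *_ = np.linalg.lstsq(device, target, rcond=None)
corrected = device @ M
```

A hue-regularized variant would augment the objective with a term penalizing hue shifts in selected memory-color regions, trading a slightly larger Euclidean error for preferred color reproduction.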
Wave dynamics of regular and chaotic rays
International Nuclear Information System (INIS)
McDonald, S.W.
1983-09-01
In order to investigate general relationships between waves and rays in chaotic systems, I study the eigenfunctions and spectrum of a simple model, the two-dimensional Helmholtz equation in a stadium boundary, for which the rays are ergodic. Statistical measurements are performed so that the apparent randomness of the stadium modes can be quantitatively contrasted with the familiar regularities observed for the modes in a circular boundary (with integrable rays). The local spatial autocorrelation of the eigenfunctions is constructed in order to indirectly test theoretical predictions for the nature of the Wigner distribution corresponding to chaotic waves. A portion of the large-eigenvalue spectrum is computed and reported in an appendix; the probability distribution of successive level spacings is analyzed and compared with theoretical predictions. The two principal conclusions are: 1) waves associated with chaotic rays may exhibit randomly situated localized regions of high intensity; 2) the Wigner function for these waves may depart significantly from being uniformly distributed over the surface of constant frequency in the ray phase space.
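The level-spacing analysis mentioned above can be illustrated in a few lines: compute nearest-neighbor spacings of a spectrum, normalize them to unit mean, and compare with the Wigner surmise P(s) = (π/2) s exp(−πs²/4) expected for chaotic spectra, versus the Poisson law exp(−s) for integrable ones. The example below uses eigenvalues of a random symmetric matrix as a stand-in for the stadium spectrum:

```python
import numpy as np

def spacing_distribution(levels):
    """Nearest-neighbor spacings of a sorted spectrum, normalized to unit mean."""
    s = np.diff(np.sort(levels))
    return s / s.mean()

# Theoretical spacing densities: Wigner surmise (chaotic) vs Poisson (integrable).
wigner  = lambda s: (np.pi / 2) * s * np.exp(-np.pi * s**2 / 4)
poisson = lambda s: np.exp(-s)

# Synthetic spectrum (not stadium eigenvalues): a random symmetric matrix,
# whose central spacings are expected to follow the Wigner surmise.
rng = np.random.default_rng(0)
A = rng.normal(size=(500, 500))
levels = np.linalg.eigvalsh((A + A.T) / 2)
s = spacing_distribution(levels[200:300])  # central part of the spectrum
print(s.mean())  # exactly 1 by construction
```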
Regularities and irregularities in order flow data
Theissen, Martin; Krause, Sebastian M.; Guhr, Thomas
2017-11-01
We identify and analyze statistical regularities and irregularities in the recent order flow of different NASDAQ stocks, focusing on the positions where orders are placed in the order book. This includes limit orders being placed outside of the spread, inside the spread and (effective) market orders. Based on the pairwise comparison of the order flow of different stocks, we perform a clustering of stocks into groups with similar behavior. This is useful to assess systemic aspects of stock price dynamics. We find that limit order placement inside the spread is strongly determined by the dynamics of the spread size. Most orders, however, arrive outside of the spread. While for some stocks order placement on or next to the quotes is dominating, deeper price levels are more important for other stocks. As market orders are usually adjusted to the quote volume, the impact of market orders depends on the order book structure, which we find to be quite diverse among the analyzed stocks as a result of the way limit order placement takes place.
Library search with regular reflectance IR spectra
International Nuclear Information System (INIS)
Staat, H.; Korte, E.H.; Lampen, P.
1989-01-01
Characterisation in situ for coatings and other surface layers is generally favourable, but a prerequisite for precious items such as art objects. In infrared spectroscopy only reflection techniques are applicable here. However, for attenuated total reflection (ATR) it is difficult to obtain the necessary optical contact of the crystal with the sample when the latter is not perfectly plane or flexible. The measurement of diffuse reflectance demands a scattering sample, and usually the reflectance is very poor. Therefore in most cases one is left with regular reflectance. Such spectra consist of dispersion-like features instead of bands, impeding their interpretation in the way the analyst is used to. Furthermore, for computer search in common spectral libraries compiled from transmittance or absorbance spectra, a transformation of the reflectance spectra is needed. The correct conversion is based on the Kramers-Kronig transformation. This somewhat time-consuming procedure can be speeded up by using appropriate approximations. A coarser conversion may be obtained from the first derivative of the reflectance spectrum, which resembles the second derivative of a transmittance spectrum. The resulting distorted spectra can still be used successfully for the search in peak table libraries. Experiences with both transformations are presented. (author)
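A numerical Kramers-Kronig conversion of the kind discussed above can be sketched with a discrete Hilbert transform. This is an illustrative normal-incidence version only: sign conventions vary between texts, and real spectra need extrapolation wings beyond the measured range:

```python
import numpy as np

def hilbert_transform(x):
    """Discrete Hilbert transform via FFT (convention such that H[cos] = sin)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.imag(np.fft.ifft(np.fft.fft(x) * h))

def nk_from_reflectance(reflectance):
    """Refractive index n and absorption index k from a specular reflectance
    spectrum, using a Kramers-Kronig phase and the normal-incidence Fresnel
    inversion. Assumes an evenly spaced spectral axis."""
    r = np.sqrt(reflectance)                          # amplitude reflectance
    phi = -hilbert_transform(0.5 * np.log(reflectance))  # KK phase estimate
    denom = 1 + reflectance - 2 * r * np.cos(phi)
    n = (1 - reflectance) / denom
    k = 2 * r * np.sin(phi) / denom
    return n, k
```

For a flat, absorption-free spectrum the phase vanishes and the Fresnel inversion recovers n with k = 0, which is a useful sanity check before applying the routine to measured spectra.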
Regularities of praseodymium oxide dissolution in acids
International Nuclear Information System (INIS)
Savin, V.D.; Elyutin, A.V.; Mikhajlova, N.P.; Eremenko, Z.V.; Opolchenova, N.L.
1989-01-01
The regularities of Pr 2 O 3 , Pr 2 O 5 and Pr(OH) 3 interaction with inorganic acids are studied. The pH of the solution and the oxidation-reduction potential, registered at 20±1 deg C, are the working parameters of the studies. It is found that the amount of each oxide dissolved increases in the series of acids nitric, hydrochloric and sulfuric; for hydrochloric and sulfuric acid it also increases in the series of oxides Pr 2 O 3 , Pr 2 O 5 and Pr(OH) 3 . It is noted that Pr 2 O 5 has a high positive value of the oxidation-reduction potential over the whole dissolution range. A low positive value of the redox potential during dissolution belongs to Pr(OH) 3 , and in the case of Pr 2 O 3 dissolution the redox potential is negative. Schemes of the dissolution processes, which do not agree with classical assumptions, are presented
Regular expressions compiler and some applications
International Nuclear Information System (INIS)
Saldana A, H.
1978-01-01
We deal with the high level programming language of a Regular Expressions Compiler (REC). The first chapter is an introduction in which the history of the REC development and the problems related to its numerous applications are described. The syntactic and semantic rules as well as the language features are discussed just after the introduction. Concerning the applications, as examples an adaptation is given in order to solve numerical problems and another for data manipulation. The last chapter is an exposition of ideas and techniques about the compiler construction. Examples of the adaptation to numerical problems show applications to education, vector analysis, quantum mechanics, physics, mathematics and other sciences. The rudiments of an operating system for a minicomputer are the examples of the adaptation to symbolic data manipulation. REC is a programming language that could be applied to solve problems in almost any human activity. Handling of computer graphics, control equipment, research on languages, microprocessors and general research are some of the fields in which this programming language can be applied and developed. (author)
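REC's actual syntax is not reproduced in the abstract, but the core idea its compiler implements, matching text against regular expressions, can be illustrated with a classic minimal matcher supporting literals, '.', '*', '^' and '$' (a Pike/Kernighan-style sketch, not REC itself):

```python
# Minimal regular-expression matcher: literals, '.' (any char),
# 'c*' (zero or more of c), '^' (start anchor), '$' (end anchor).
def match(pattern, text):
    """Return True if pattern is found anywhere in text."""
    if pattern.startswith('^'):
        return match_here(pattern[1:], text)
    while True:                       # try the pattern at each suffix of text
        if match_here(pattern, text):
            return True
        if not text:
            return False
        text = text[1:]

def match_here(pattern, text):
    """Match pattern at the beginning of text."""
    if not pattern:
        return True
    if len(pattern) >= 2 and pattern[1] == '*':
        return match_star(pattern[0], pattern[2:], text)
    if pattern == '$':
        return not text
    if text and pattern[0] in ('.', text[0]):
        return match_here(pattern[1:], text[1:])
    return False

def match_star(c, pattern, text):
    """Match c* followed by pattern at the beginning of text."""
    while True:
        if match_here(pattern, text):
            return True
        if not (text and c in ('.', text[0])):
            return False
        text = text[1:]

print(match('a*b$', 'aaab'))  # True
```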
Quantum implications of a scale invariant regularization
Ghilencea, D. M.
2018-04-01
We study scale invariance at the quantum level in a perturbative approach. For a scale-invariant classical theory, the scalar potential is computed at three-loop level while keeping this symmetry manifest. Spontaneous scale symmetry breaking is transmitted at the quantum level to the visible sector (of ϕ) by the associated Goldstone mode (dilaton σ), which enables a scale-invariant regularization and whose vacuum expectation value ⟨σ⟩ generates the subtraction scale (μ). While the hidden (σ) and visible (ϕ) sectors are classically decoupled in d = 4 due to an enhanced Poincaré symmetry, they interact through (a series of) evanescent couplings ∝ ɛ, dictated by the scale invariance of the action in d = 4 - 2ɛ. At the quantum level, these couplings generate new corrections to the potential, as scale-invariant nonpolynomial effective operators ϕ^(2n+4)/σ^(2n). These are comparable in size to "standard" loop corrections and are important for values of ϕ close to ⟨σ⟩. For n = 1, 2, the beta functions of their coefficients are computed at three loops. In the IR limit, dilaton fluctuations decouple, the effective operators are suppressed by large ⟨σ⟩, and the effective potential becomes that of a renormalizable theory with explicit scale symmetry breaking by the DR scheme (of μ = constant).
Regularities development of entrepreneurial structures in regions
Directory of Open Access Journals (Sweden)
Julia Semenovna Pinkovetskaya
2012-12-01
Full Text Available The paper considers regularities and tendencies for three types of entrepreneurial structures: small enterprises, medium enterprises and individual entrepreneurs. The aim of the research was to confirm that indicators of aggregate entrepreneurial structures can be described with normal-law distribution functions. The author's proposed methodological approach is presented, together with the resulting density distribution functions for the main indicators across various objects: the Russian Federation, its regions, and aggregates of entrepreneurial structures specialized in certain forms of economic activity. All the developed functions, as shown by logical and statistical analysis, are of high quality and approximate the original data well. In general, the proposed methodological approach is versatile and can be used in further studies of aggregates of entrepreneurial structures. The results can be applied to a wide range of problems, such as justifying the personnel and financial resources needed at the federal, regional and municipal levels, as well as forming plans and forecasts for the development of entrepreneurship and the improvement of this sector of the economy.
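The density-fitting step described above amounts to estimating the parameters of a normal law from aggregate indicator data; a minimal method-of-moments sketch with invented indicator values (not the paper's data):

```python
import math

# Hypothetical aggregate indicator values (e.g. employees per small
# enterprise, by region); fit a normal density by the method of moments.
data = [12.1, 9.8, 11.4, 10.7, 13.0, 10.2, 11.9, 9.5, 12.6, 10.8]

mu = sum(data) / len(data)                                  # sample mean
var = sum((x - mu) ** 2 for x in data) / (len(data) - 1)    # sample variance
sigma = math.sqrt(var)

def density(x):
    """Fitted normal-law density evaluated at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / (sigma * math.sqrt(2 * math.pi))
```

The quality of such a fit would then be checked by the logical and statistical analysis the abstract mentions (e.g. goodness-of-fit tests against the original data).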
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.
Dazard, Jean-Eudes; Rao, J Sunil
2012-07-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, regular common value-shrinkage estimators, or estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
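The intuition behind regularized variance estimation can be sketched with a simple shrinkage of per-variable variances toward their pooled value. This is a generic illustration of shrinkage in a p >> n setting, not the MVR package's clustering-based procedure:

```python
import numpy as np

# In a p >> n setting, per-variable sample variances are noisy; shrinking
# each one toward the pooled variance trades a little bias for a large
# reduction in estimator variability.
def shrink_variances(X, weight=0.5):
    """X: n samples x p variables; return shrunken per-variable variances."""
    s2 = X.var(axis=0, ddof=1)   # raw per-variable sample variances
    pooled = s2.mean()           # common shrinkage target
    return weight * pooled + (1 - weight) * s2

rng = np.random.default_rng(1)
X = rng.normal(scale=2.0, size=(10, 500))   # n=10 samples, p=500 variables
shrunk = shrink_variances(X)
print(shrunk.std() < X.var(axis=0, ddof=1).std())  # spread of estimates is reduced
```

MVR goes further by clustering similar variables and regularizing mean and variance jointly, but the bias-variance trade-off shown here is the underlying mechanism.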
Form factors and scattering amplitudes in N=4 SYM in dimensional and massive regularizations
Energy Technology Data Exchange (ETDEWEB)
Henn, Johannes M. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; California Univ., Santa Barbara, CA (United States). Kavli Inst. for Theoretical Physics; Moch, Sven [California Univ., Santa Barbara, CA (United States). Kavli Inst. for Theoretical Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Naculich, Stephen G. [California Univ., Santa Barbara, CA (United States). Kavli Inst. for Theoretical Physics; Bowdoin College, Brunswick, ME (United States). Dept. of Physics
2011-09-15
The IR-divergent scattering amplitudes of N=4 supersymmetric Yang-Mills theory can be regulated in a variety of ways, including dimensional regularization and massive (or Higgs) regularization. The IR-finite part of an amplitude in different regularizations generally differs by an additive constant at each loop order, due to the ambiguity in separating finite and divergent contributions. We give a prescription for defining an unambiguous, regulator-independent finite part of the amplitude by factoring off a product of IR-divergent ''wedge'' functions. For the cases of dimensional regularization and the common-mass Higgs regulator, we define the wedge function in terms of a form factor, and demonstrate the regularization independence of the n-point amplitude through two loops. We also deduce the form of the wedge function for the more general differential-mass Higgs regulator, although we lack an explicit operator definition in this case. Finally, using extended dual conformal symmetry, we demonstrate the link between the differential-mass wedge function and the anomalous dual conformal Ward identity for the finite part of the scattering amplitude. (orig.)
The effects of perceived and actual financial knowledge on regular personal savings: Case of Vietnam
Directory of Open Access Journals (Sweden)
Thi Anh Nhu Nguyen
2017-06-01
Full Text Available The paper examines the factors which affect decision-making on regular personal saving behaviour in the context of an emerging market in Vietnam. Focusing on financial literacy, the paper uses a combined measure of actual financial knowledge and a self-assessment of overall financial knowledge. The sample of the study consists of 240 commercial bank customers selected in 12 branches of four banks in Ho Chi Minh City. The questionnaire covers: (1) actual financial knowledge; (2) self-rating of financial knowledge; (3) financial risk tolerance; and (4) demographic characteristics of the respondents. The results of a logistic regression analysis show that perceived and actual financial literacy have separate effects on regular personal saving. In particular, actual financial knowledge has a statistically significant positive relationship with regular personal saving, with an odds ratio higher than 6.5. However, perceived financial knowledge and the financial risk tolerance factor are not statistically significantly related to regular personal saving. Finally, the paper offers evidence that the interaction variable, which combines education level with major of study, has a statistically significant relationship with regular personal saving.
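The reported odds ratio comes from exponentiating a logistic-regression coefficient. A self-contained sketch on synthetic data (not the study's survey data; variable names and effect sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 240
knowledge = rng.normal(size=n)                      # standardized knowledge score
p_true = 1 / (1 + np.exp(-(-0.5 + 1.9 * knowledge)))
saves = (rng.random(n) < p_true).astype(float)      # 1 = saves regularly

# Fit logistic regression by plain gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), knowledge])
beta = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 1.0 * X.T @ (saves - p) / n

# Odds ratio: multiplicative change in the odds of saving per unit of score.
odds_ratio = np.exp(beta[1])
```

An odds ratio above 1 indicates that higher actual financial knowledge is associated with greater odds of regular saving, which is how the paper's "higher than 6.5" figure is read.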
Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications
Chaki, Sagar; Gurfinkel, Arie
2010-01-01
We develop a learning-based automated Assume-Guarantee (AG) reasoning framework for verifying omega-regular properties of concurrent systems. We study the applicability of non-circular (AG-NC) and circular (AG-C) AG proof rules in the context of systems with infinite behaviors. In particular, we show that AG-NC is incomplete when assumptions are restricted to strictly infinite behaviors, while AG-C remains complete. We present a general formalization, called LAG, of the learning-based automated AG paradigm. We show how existing approaches for automated AG reasoning are special instances of LAG. We develop two learning algorithms for a class of systems, called infinite regular systems, that combine finite and infinite behaviors. We show that for infinite regular systems, both AG-NC and AG-C are sound and complete. Finally, we show how to instantiate LAG to do automated AG reasoning for infinite regular, and omega-regular, systems using both AG-NC and AG-C as proof rules.
TRANSIENT LUNAR PHENOMENA: REGULARITY AND REALITY
International Nuclear Information System (INIS)
Crotts, Arlin P. S.
2009-01-01
Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables which should produce differing results if determining factors involve humans, and not reflecting phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ∼50% of reports originate from near Aristarchus, ∼16% from Plato, ∼6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a 'feature' as defined). TLP count consistency for these features indicates that ∼80% of these may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi. These reports begin almost exclusively after 1955, when TLPs became widely known and many more (and inexperienced) observers searched for TLPs. In a companion paper, we compare the spatial distribution of robust TLP sites to transient outgassing (seen by Apollo and Lunar Prospector instruments). To a high confidence, robust TLP sites and those of lunar outgassing correlate strongly, further arguing for the reality of TLPs.
Elementary Particle Spectroscopy in Regular Solid Rewrite
International Nuclear Information System (INIS)
Trell, Erik
2008-01-01
The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it ''is the likely keystone of a fundamental computational foundation'' also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)xO(5) of Lie algebra SU(3). The cubical eigen-space and eigen-elements are the parental state and frame, and the other solids are a range of transition matrix elements and portions adapting to the spherical root vector symmetries and so reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated octahedron nano-composition of the Electron which piecemeal enter into molecular structures or compressed to each
Oware, E. K.; Moysey, S. M.
2016-12-01
Regularization stabilizes the geophysical imaging problem resulting from sparse and noisy measurements that render solutions unstable and non-unique. Conventional regularization constraints are, however, independent of the physics of the underlying process and often produce smoothed-out tomograms with mass underestimation. Cascaded time-lapse (CTL) is a widely used reconstruction technique for monitoring wherein a tomogram obtained from the background dataset is employed as starting model for the inversion of subsequent time-lapse datasets. In contrast, a proper orthogonal decomposition (POD)-constrained inversion framework enforces physics-based regularization based upon prior understanding of the expected evolution of state variables. The physics-based constraints are represented in the form of POD basis vectors. The basis vectors are constructed from numerically generated training images (TIs) that mimic the desired process. The target can be reconstructed from a small number of selected basis vectors, hence, there is a reduction in the number of inversion parameters compared to the full dimensional space. The inversion involves finding the optimal combination of the selected basis vectors conditioned on the geophysical measurements. We apply the algorithm to 2-D lab-scale saline transport experiments with electrical resistivity (ER) monitoring. We consider two transport scenarios with one and two mass injection points evolving into unimodal and bimodal plume morphologies, respectively. The unimodal plume is consistent with the assumptions underlying the generation of the TIs, whereas bimodality in plume morphology was not conceptualized. We compare difference tomograms retrieved from POD with those obtained from CTL. Qualitative comparisons of the difference tomograms with images of their corresponding dye plumes suggest that POD recovered more compact plumes in contrast to those of CTL. While mass recovery generally deteriorated with increasing number of time
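The POD basis construction described above reduces to a singular value decomposition of the snapshot matrix of training images. A minimal 1-D sketch with hypothetical Gaussian-plume TIs (the actual study inverts ER data in 2-D; here we only illustrate the basis-and-coefficients parameterization):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 64)

# Hypothetical training images: Gaussian plumes with varying center and width,
# stacked as columns of the snapshot matrix.
snapshots = np.column_stack([
    np.exp(-(x - c) ** 2 / (2 * w ** 2))
    for c, w in zip(rng.uniform(0.3, 0.7, 50), rng.uniform(0.05, 0.15, 50))
])

# POD basis = left singular vectors; a few modes capture most of the energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :5]

# A target plume is represented by a handful of basis coefficients,
# drastically reducing the number of inversion parameters.
target = np.exp(-(x - 0.5) ** 2 / (2 * 0.1 ** 2))
coeffs = basis.T @ target          # least-squares optimal (basis is orthonormal)
recon = basis @ coeffs
```

In the inversion itself, the coefficients would be conditioned on the geophysical measurements rather than on the target field, but the reduced parameterization is the same.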
Drug-Target Interaction Prediction with Graph Regularized Matrix Factorization.
Ezzat, Ali; Zhao, Peilin; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong
2017-01-01
Experimental determination of drug-target interactions is expensive and time-consuming. Therefore, there is a continuous demand for more accurate predictions of interactions using computational techniques. Algorithms have been devised to infer novel interactions on a global scale where the input to these algorithms is a drug-target network (i.e., a bipartite graph where edges connect pairs of drugs and targets that are known to interact). However, these algorithms had difficulty predicting interactions involving new drugs or targets for which there are no known interactions (i.e., "orphan" nodes in the network). Since data usually lie on or near low-dimensional non-linear manifolds, we propose two matrix factorization methods that use graph regularization in order to learn such manifolds. In addition, considering that many of the non-occurring edges in the network are actually unknown or missing cases, we developed a preprocessing step to enhance predictions in the "new drug" and "new target" cases by adding edges with intermediate interaction likelihood scores. In our cross validation experiments, our methods achieved better results than three other state-of-the-art methods in most cases. Finally, we simulated some "new drug" and "new target" cases and found that GRMF predicted the left-out interactions reasonably well.
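A bare-bones version of graph-regularized matrix factorization can be sketched as follows. This is a generic gradient-descent illustration under assumed names (grmf, S_drug), not the paper's GRMF algorithm, which uses alternating least squares and additional machinery such as the preprocessing step:

```python
import numpy as np

def grmf(Y, S_drug, k=5, lam=0.1, beta=0.1, lr=0.01, iters=2000, seed=0):
    """Factor an interaction matrix Y (drugs x targets) as A @ B.T.

    The graph Laplacian of the drug similarity matrix S_drug pulls similar
    drugs toward similar latent vectors (the manifold assumption).
    """
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    A = 0.1 * rng.normal(size=(n, k))
    B = 0.1 * rng.normal(size=(m, k))
    L = np.diag(S_drug.sum(axis=1)) - S_drug   # graph Laplacian
    for _ in range(iters):
        R = A @ B.T - Y                        # reconstruction residual
        A -= lr * (R @ B + lam * A + beta * L @ A)
        B -= lr * (R.T @ A + lam * B)
    return A, B
```

A symmetric Laplacian term over target similarities would normally be added as well; it is omitted here to keep the sketch short.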
Can static regular black holes form from gravitational collapse?
International Nuclear Information System (INIS)
Zhang, Yiyang; Zhu, Yiwei; Modesto, Leonardo; Bambi, Cosimo
2015-01-01
Starting from the Oppenheimer-Snyder model, we know how in classical general relativity the gravitational collapse of matter forms a black hole with a central spacetime singularity. It is widely believed that the singularity must be removed by quantum-gravity effects. Some static quantum-inspired singularity-free black hole solutions have been proposed in the literature, but when one considers simple examples of gravitational collapse the classical singularity is replaced by a bounce, after which the collapsing matter expands for ever. We may expect three possible explanations: (i) the static regular black hole solutions are not physical, in the sense that they cannot be realized in Nature, (ii) the final product of the collapse is not unique, but it depends on the initial conditions, or (iii) boundary effects play an important role and our simple models miss important physics. In the latter case, after proper adjustment, the bouncing solution would approach the static one. We argue that the ''correct answer'' may be related to the appearance of a ghost state in de Sitter spacetimes with super Planckian mass. Our black holes have indeed a de Sitter core and the ghost would make these configurations unstable. Therefore we believe that these black hole static solutions represent the transient phase of a gravitational collapse but never survive as asymptotic states. (orig.)
Beamforming Through Regularized Inverse Problems in Ultrasound Medical Imaging.
Szasz, Teodora; Basarab, Adrian; Kouame, Denis
2016-12-01
Beamforming (BF) in ultrasound (US) imaging has significant impact on the quality of the final image, controlling its resolution and contrast. Despite its low spatial resolution and contrast, delay-and-sum (DAS) is still extensively used nowadays in clinical applications, due to its real-time capabilities. The most common alternatives are minimum variance (MV) method and its variants, which overcome the drawbacks of DAS, at the cost of higher computational complexity that limits its utilization in real-time applications. In this paper, we propose to perform BF in US imaging through a regularized inverse problem based on a linear model relating the reflected echoes to the signal to be recovered. Our approach presents two major advantages: 1) its flexibility in the choice of statistical assumptions on the signal to be beamformed (Laplacian and Gaussian statistics are tested herein) and 2) its robustness to a reduced number of pulse emissions. The proposed framework is flexible and allows for choosing the right tradeoff between noise suppression and sharpness of the resulted image. We illustrate the performance of our approach on both simulated and experimental data, with in vivo examples of carotid and thyroid. Compared with DAS, MV, and two other recently published BF techniques, our method offers better spatial resolution and contrast when using the Laplacian and Gaussian priors, respectively.
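Under the Gaussian prior mentioned above, the regularized inverse problem reduces to ridge regression: with the linear model y = Hx + noise, the estimate is x = (HᵀH + μI)⁻¹Hᵀy. A minimal sketch with a made-up measurement operator H (a real beamformer builds H from the array geometry and propagation delays; a Laplacian prior would instead give an l1 problem solved iteratively):

```python
import numpy as np

def beamform_l2(H, y, mu=0.1):
    """Ridge-regularized solution of y = H x + noise (Gaussian prior)."""
    k = H.shape[1]
    return np.linalg.solve(H.T @ H + mu * np.eye(k), H.T @ y)

rng = np.random.default_rng(0)
H = rng.normal(size=(40, 60))        # underdetermined: fewer echoes than pixels
x_true = np.zeros(60)
x_true[[10, 30, 45]] = [1.0, -0.5, 0.8]   # a few reflectors
y = H @ x_true + 0.01 * rng.normal(size=40)
x_hat = beamform_l2(H, y)
```

The regularization weight mu sets the tradeoff the abstract mentions: larger values suppress noise at the cost of a smoother, less sharp image.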
Supersymmetric Regularization Two-Loop QCD Amplitudes and Coupling Shifts
International Nuclear Information System (INIS)
Dixon, Lance
2002-01-01
We present a definition of the four-dimensional helicity (FDH) regularization scheme valid for two or more loops. This scheme was previously defined and utilized at one loop. It amounts to a variation on the standard 't Hooft-Veltman scheme and is designed to be compatible with the use of helicity states for ''observed'' particles. It is similar to dimensional reduction in that it maintains an equal number of bosonic and fermionic states, as required for preserving supersymmetry. Supersymmetry Ward identities relate different helicity amplitudes in supersymmetric theories. As a check that the FDH scheme preserves supersymmetry, at least through two loops, we explicitly verify a number of these identities for gluon-gluon scattering (gg → gg) in supersymmetric QCD. These results also cross-check recent non-trivial two-loop calculations in ordinary QCD. Finally, we compute the two-loop shift between the FDH coupling and the standard MS coupling, α s . The FDH shift is identical to the one for dimensional reduction. The two-loop coupling shifts are then used to obtain the three-loop QCD β function in the FDH and dimensional reduction schemes
Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.
Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong
2015-11-01
In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function and thereby obtain a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
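The workhorse of trace (nuclear) norm minimization is singular value thresholding, the proximal operator of the trace norm. The sketch below illustrates that building block only, not the TNCP algorithm itself (which applies the relaxation to factor matrices inside an ADMM loop); the matrix sizes and threshold are illustrative.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * (trace norm).
    Soft-thresholds the singular values, yielding a low-rank matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)           # shrink each singular value
    return U @ np.diag(s_thr) @ Vt

# A rank-1 matrix plus small noise: thresholding wipes out the noise
# directions and recovers a rank-1 estimate.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(20), rng.standard_normal(15)
M = np.outer(u, v) + 0.01 * rng.standard_normal((20, 15))
M_low = svt(M, tau=0.5)
```

Here the dominant singular value (about 17) survives the threshold while the noise singular values (well below 0.5) are zeroed, so `M_low` has rank 1.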
Thermodynamic basis for effective energy utilization
Energy Technology Data Exchange (ETDEWEB)
Rogers, J. T.
1977-10-15
A major difficulty in a quantitative assessment of effective energy utilization is that energy is always conserved (the First Law of Thermodynamics). However, the Second Law of Thermodynamics shows that, although energy cannot be destroyed, it can be degraded to a state in which it is of no further use for performing tasks. Thus, in considering the present world energy crisis, we are not really concerned with the conservation of energy but with the conservation of its ability to perform useful tasks. A measure of this ability is thermodynamic availability or, a less familiar term, exergy. In a real sense, we are concerned with an entropy crisis, rather than an energy crisis. Analysis of energy processes on an exergy basis provides significantly different insights into the processes than those obtained from a conventional energy analysis. For example, process steam generation in an industrial boiler may appear quite efficient on the basis of a conventional analysis, but is shown to have very low effective use of energy when analyzed on an exergy basis. Applications of exergy analysis to other systems, such as large fossil and nuclear power stations, are discussed, and the benefits of extraction combined-purpose plants are demonstrated. Other examples of the application of the exergy concept in the industrial and residential energy sectors are also given. The concept is readily adaptable to economic optimization. Examples are given of economic optimization on an availability basis of an industrial heat exchanger and of a combined-purpose nuclear power and heavy-water production plant. Finally, the utility of the concept of exergy in assessing the energy requirements of an industrial society is discussed.
International Nuclear Information System (INIS)
Malykhin, V.M.; Ivanova, N.I.
1981-01-01
It is shown that when assessing the necessary periodicity of internal irradiation monitoring, it is required to take account of the nature (rhythm) of radionuclide intake to the organism during the monitoring period, the effective period of radionuclide biological half-life, its activity in the organism, the sensitivity of the technique applied and the labour-consuming character of the monitoring method [ru
Near-field acoustic holography using sparse regularization and compressive sampling principles.
Chardon, Gilles; Daudet, Laurent; Peillot, Antoine; Ollivier, François; Bertin, Nancy; Gribonval, Rémi
2012-09-01
Regularization of the inverse problem is a complex issue when using near-field acoustic holography (NAH) techniques to identify the vibrating sources. This paper shows that, for convex homogeneous plates with arbitrary boundary conditions, alternative regularization schemes can be developed based on the sparsity of the normal velocity of the plate in a well-designed basis, i.e., the possibility to approximate it as a weighted sum of few elementary basis functions. In particular, these techniques can handle discontinuities of the velocity field at the boundaries, which can be problematic with standard techniques. This comes at the cost of a higher computational complexity to solve the associated optimization problem, though it remains easily tractable with out-of-the-box software. Furthermore, this sparsity framework allows us to take advantage of the concept of compressive sampling; under some conditions on the sampling process (here, the design of a random array, which can be numerically and experimentally validated), it is possible to reconstruct the sparse signals with significantly less measurements (i.e., microphones) than classically required. After introducing the different concepts, this paper presents numerical and experimental results of NAH with two plate geometries, and compares the advantages and limitations of these sparsity-based techniques over standard Tikhonov regularization.
Regular breakfast consumption is associated with increased IQ in kindergarten children.
Liu, Jianghong; Hwang, Wei-Ting; Dickerman, Barbra; Compher, Charlene
2013-04-01
Studies have documented a positive relationship between regular breakfast consumption and cognitive outcomes in youth. However, most of these studies have emphasized specific measures of cognition rather than cognitive performance as a broad construct (e.g., IQ test scores) and have been limited to Western samples of school-age children and adolescents. This study aims to extend the literature on breakfast consumption and cognition by examining these constructs in a sample of Chinese kindergarten-age children. This cross-sectional study consisted of a sample of 1269 children (697 boys and 572 girls) aged 6 years from the Chinese city of Jintan. Cognition was assessed with the Chinese version of the Wechsler preschool and primary scale of intelligence-revised. Breakfast habits were assessed through parental questionnaire. Analyses of variance and linear regression models were used to analyze the association between breakfast habits and IQ. Socioeconomic and parental psychosocial variables related to intelligence were controlled for. Findings showed that children who regularly ate breakfast on a near-daily basis had significantly higher full scale, verbal, and performance IQ test scores than children who ate breakfast less regularly. This relationship persisted for VIQ (verbal IQ) and FIQ (full scale IQ) even after adjusting for gender, current living location, parental education, parental occupation, and primary child caregiver. Findings may reflect nutritional as well as social benefits of regular breakfast consumption on cognition, and regular breakfast consumption should be encouraged among young children. Copyright © 2013 Elsevier Ltd. All rights reserved.
La Marca, Antonio; Grisendi, Valentina; Dondi, Giulia; Sighinolfi, Giovanna; Cianci, Antonio
2015-01-01
Polycystic ovary syndrome is characterized by irregular cycles, hyperandrogenism, polycystic ovaries at ultrasound and insulin resistance. The effectiveness of D-chiro-inositol (DCI) treatment in improving insulin resistance in PCOS patients has been confirmed in several reports. The objective of this study was to retrospectively analyze the effect of DCI on menstrual cycle regularity in PCOS women. This was a retrospective study of patients with irregular cycles who were treated with DCI. Of all PCOS women admitted to our centre, 47 were treated with DCI and had complete medical charts. The percentage of women reporting regular menstrual cycles significantly increased with increasing duration of DCI treatment (24% and 51.6% at a mean of 6 and 15 months of treatment, respectively). Serum AMH levels and indexes of insulin resistance significantly decreased during the treatment. Low AMH levels, a high HOMA index, and the presence of oligomenorrhea at the first visit were the independent predictors of obtaining a regular menstrual cycle with DCI. In conclusion, the use of DCI is associated with clinical benefits for many women affected by PCOS, including improvement in insulin resistance and menstrual cycle regularity. Responders to the treatment may be identified on the basis of menstrual irregularity and hormonal or metabolic markers.
Using the laws and the regularities of public administration in the state strategic planning
Directory of Open Access Journals (Sweden)
O. L. Yevmieshkina
2016-03-01
Full Text Available The article examines the use of the laws of public administration in state strategic planning and defines a methodological basis for such planning. State strategic planning, as a function of public administration, operates in accordance with its laws and regularities. The author identifies the relevant laws of public administration as: the unity of the socio-economic system, requisite variety, system integrity, and the unity of techniques and basic functions of social management at all levels of public administration: central, sectoral and regional. At each of these levels the laws are applied in the drafting and implementation of state strategies and of state, regional and sectoral programmes directed at improving political, economic and social processes. A law is understood here as an objective, substantive, necessary and sustainable relationship between events; its most essential feature is that it reflects an objective state of affairs, objective relations between things, items and phenomena. Another characteristic of a law is necessity, a relation that is inevitably revealed in the process of development of different things. The category of law is related to the category of regularity, regularity being the wider of the two. State strategic planning is thus an integrated, systematic process that operates through, and makes use of, the laws and regularities of public administration, improving the efficiency of public administration.
Regularization of plurisubharmonic functions with a net of good points
Li, Long
2017-01-01
The purpose of this article is to present a new regularization technique for quasi-plurisubharmonic functions on a compact Kaehler manifold. The idea is to regularize the function on local coordinate balls first, and then glue the pieces together. Therefore, all the higher order terms in the complex Hessian of this regularization vanish at the center of each coordinate ball, and the centers eventually build a delta-net of the manifold.
International Nuclear Information System (INIS)
Keller, Kai Johannes
2010-04-01
The present work contains a consistent formulation of the methods of dimensional regularization (DimReg) and minimal subtraction (MS) in Minkowski position space. The methods are implemented into the framework of perturbative Algebraic Quantum Field Theory (pAQFT). The developed methods are used to solve the Epstein-Glaser recursion for the construction of time-ordered products in all orders of causal perturbation theory. A solution is given in terms of a forest formula in the sense of Zimmermann. A relation to the alternative approach to renormalization theory using Hopf algebras is established. (orig.)
Higher order total variation regularization for EIT reconstruction.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut
2018-01-08
Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: conductivity changes reconstructed along selected left and right vertical lines are plotted for each reconstructed image and for the ground truth (GT) image, comparing the total variation (TV) and total generalized variation (TGV) methods together with reconstructions from the GREIT algorithm.
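The staircase behaviour of TV can be seen in one dimension already. A minimal gradient-descent sketch of smoothed-TV denoising (a stand-in for the TV term in the EIT functional, not the paper's FEM reconstruction; signal, weights and smoothing parameter are made up):

```python
import numpy as np

def tv_denoise(f, lam=1.0, eps=1e-2, n_iter=5000, step=0.02):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps)."""
    u = f.copy()
    for _ in range(n_iter):
        du = np.diff(u)
        w = du / np.sqrt(du ** 2 + eps)                  # derivative of smoothed |du|
        div = np.concatenate(([w[0]], np.diff(w), [-w[-1]]))
        u -= step * ((u - f) - lam * div)                # data term + TV term
    return u

# A noisy step function: TV keeps the jump but flattens everything else
# into piecewise-constant plateaus -- the "staircase" effect that TGV
# was designed to mitigate.
rng = np.random.default_rng(2)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
u = tv_denoise(f)
```

TGV replaces the first-difference penalty with a penalty that also tolerates linear ramps, which is why it produces smoother, more realistic conductivity profiles.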
Genetic basis of chronic pancreatitis
Jansen, JBMJ; Morsche, RT; van Goor, Harry; Drenth, JPH
2002-01-01
Background: Pancreatitis has a proven genetic basis in a minority of patients. Methods: Review of the literature on genetics of pancreatitis. Results: Ever since the discovery that in most patients with hereditary pancreatitis a mutation in the gene encoding for cationic trypsinogen (R122H) was
Ellipsoidal basis for isotropic oscillator
International Nuclear Information System (INIS)
Kallies, W.; Lukac, I.; Pogosyan, G.S.; Sisakyan, A.N.
1994-01-01
The solutions of the Schroedinger equation are derived for the isotropic oscillator potential in the ellipsoidal coordinate system. An explicit expression is obtained for the ellipsoidal integrals of motion in terms of the components of the orbital angular momentum and Demkov's tensor. The explicit form of the ellipsoidal basis is given for the lowest quantum numbers. 10 refs.; 1 tab. (author)
Molecular basis of familial hypercholesterolemia
Bruikman, Caroline S.; Hovingh, Gerard K.; Kastelein, John J. P.
2017-01-01
Purpose of review To provide an overview about the molecular basis of familial hypercholesterolemia. Recent findings Familial hypercholesterolemia is a common hereditary cause of premature coronary heart disease. It has been estimated that 1 in every 250 individuals has heterozygous familial
Basis reduction for layered lattices
E.L. Torreão Dassen (Erwin)
2011-01-01
We develop the theory of layered Euclidean spaces and layered lattices. With this new theory certain problems that usually are solved by using classical lattices with a "weighting" gain a new, more natural form. Using the layered lattice basis reduction algorithms introduced here these
Mixtures of truncated basis functions
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2012-01-01
In this paper we propose a framework, called mixtures of truncated basis functions (MoTBFs), for representing general hybrid Bayesian networks. The proposed framework generalizes both the mixture of truncated exponentials (MTEs) framework and the mixture of polynomials (MoPs) framework. Similar t...
Regularized spherical polar fourier diffusion MRI with optimal dictionary learning.
Cheng, Jian; Jiang, Tianzi; Deriche, Rachid; Shen, Dinggang; Yap, Pew-Thian
2013-01-01
Compressed Sensing (CS) takes advantage of signal sparsity or compressibility and allows superb signal reconstruction from relatively few measurements. Based on CS theory, a suitable dictionary for sparse representation of the signal is required. In diffusion MRI (dMRI), CS methods proposed for reconstruction of diffusion-weighted signal and the Ensemble Average Propagator (EAP) utilize two kinds of Dictionary Learning (DL) methods: 1) Discrete Representation DL (DR-DL), and 2) Continuous Representation DL (CR-DL). DR-DL is susceptible to numerical inaccuracy owing to interpolation and regridding errors in a discretized q-space. In this paper, we propose a novel CR-DL approach, called Dictionary Learning - Spherical Polar Fourier Imaging (DL-SPFI) for effective compressed-sensing reconstruction of the q-space diffusion-weighted signal and the EAP. In DL-SPFI, a dictionary that sparsifies the signal is learned from the space of continuous Gaussian diffusion signals. The learned dictionary is then adaptively applied to different voxels using a weighted LASSO framework for robust signal reconstruction. Compared with the state-of-the-art CR-DL and DR-DL methods proposed by Merlet et al. and Bilgic et al., respectively, our work offers the following advantages. First, the learned dictionary is proved to be optimal for Gaussian diffusion signals. Second, to our knowledge, this is the first work to learn a voxel-adaptive dictionary. The importance of the adaptive dictionary in EAP reconstruction will be demonstrated theoretically and empirically. Third, optimization in DL-SPFI is only performed in a small subspace resided by the SPF coefficients, as opposed to the q-space approach utilized by Merlet et al. We experimentally evaluated DL-SPFI with respect to L1-norm regularized SPFI (L1-SPFI), which uses the original SPF basis, and the DR-DL method proposed by Bilgic et al. The experiment results on synthetic and real data indicate that the learned dictionary produces
Regularities of synthesis and mechanism of polycondensation of aromatic amines
International Nuclear Information System (INIS)
Matnishyan, Hagob
2002-01-01
Full text. Aniline polymers and their derivatives are widely used in modern electronics, electrical engineering and the manufacture of various appliances. They are used in the production of electrical power sources, probes and sensors, composite materials absorbing high-frequency radiation, anticorrosion coatings, and nonlinear optical devices such as lasers, cathode ray tubes and photodiodes. Such wide usage of aromatic amine polymers brings new demands on their structure and properties, which depend on the conditions of synthesis and formation of the solid phase. This article describes regularities and mechanisms of the oxidative polycondensation of aromatic amines. Several types of polymers have been synthesized by chemical and electrochemical oxidation of aniline and its chloro-, bromo-, iodo-, nitro- and p-substituted derivatives, as well as diphenylamine, benzidine and phenylenediamines, in non-aqueous media. On the basis of kinetic and electrochemical studies and analysis of the literature, we suggest a mechanism for the polycondensation of aromatic amines. According to it, oxidation of amines starts with electron transfer and cation-radical formation in the first stage; the cation radical is stabilized in acidic media by complex formation with the initial amine. Dimer formation and further chain growth take place upon another electron transfer from the formed complex, resulting in the formation of macromolecules. We also suggest a scheme for the formation of structural defects in media that assist deprotonation of the cation radicals and the formation of arylamine radical centres. These processes lead to the formation of azo- and diphenyl fragments in the main chain of the polymer and predetermine the possibility of chain scission. We also considered reactions leading to the formation of branched polymers and cyclic structures, phenazine in particular. The peculiarity of the electrochemical process lies in the regulation of the concentration of active centres on the positive electrode surface
Huang, De-tian; Huang, Wei-qin; Huang, Hui; Zheng, Li-xin
2017-11-01
To make more effective use of prior knowledge of the image and to restore more detail of the edges and structures, a novel sparse coding objective function is proposed by applying the principles of non-local similarity and manifold learning to a super-resolution algorithm based on sparse representation. First, a non-local similarity regularization term is constructed from similar image patches to preserve edge information. Then, a manifold learning regularization term is constructed using the locally linear embedding approach to enhance structural information. The experimental results validate that the proposed algorithm achieves a significant improvement over several super-resolution algorithms in terms of both subjective visual effect and objective evaluation indices.
Regular Breakfast and Blood Lead Levels among Preschool Children
Directory of Open Access Journals (Sweden)
Needleman Herbert
2011-04-01
Full Text Available Abstract Background Previous studies have shown that fasting increases lead absorption in the gastrointestinal tract of adults. Regular meals/snacks are recommended as a nutritional intervention for lead poisoning in children, but epidemiological evidence of links between fasting and blood lead levels (B-Pb is rare. The purpose of this study was to examine the association between eating a regular breakfast and B-Pb among children using data from the China Jintan Child Cohort Study. Methods Parents completed a questionnaire regarding children's breakfast-eating habit (regular or not, demographics, and food frequency. Whole blood samples were collected from 1,344 children for the measurements of B-Pb and micronutrients (iron, copper, zinc, calcium, and magnesium. B-Pb and other measures were compared between children with and without regular breakfast. Linear regression modeling was used to evaluate the association between regular breakfast and log-transformed B-Pb. The association between regular breakfast and risk of lead poisoning (B-Pb≥10 μg/dL was examined using logistic regression modeling. Results Median B-Pb among children who ate breakfast regularly and those who did not eat breakfast regularly were 6.1 μg/dL and 7.2 μg/dL, respectively. Eating breakfast was also associated with greater zinc blood levels. Adjusting for other relevant factors, the linear regression model revealed that eating breakfast regularly was significantly associated with lower B-Pb (beta = -0.10 units of log-transformed B-Pb compared with children who did not eat breakfast regularly, p = 0.02. Conclusion The present study provides some initial human data supporting the notion that eating a regular breakfast might reduce B-Pb in young children. To our knowledge, this is the first human study exploring the association between breakfast frequency and B-Pb in young children.
Basis UST leak detection systems
International Nuclear Information System (INIS)
Silveria, V.
1992-01-01
This paper reports that gasoline and other petroleum products are leaking from underground storage tanks (USTs) at an alarming rate, seeping into soil and groundwater. Buried pipes are an even greater culprit, accounting for most suspected and detected leaks according to Environmental Protection Agency (EPA) estimates. In response to this problem, the EPA issued regulations setting standards for preventing, detecting, reporting, and cleaning up leaks, as well as for financial responsibility. However, federal regulations are only a minimum; some states have cracked down even harder. Plant managers and engineers have a big job ahead of them. The EPA estimates that there are more than 75,000 fuel USTs at US industrial facilities. When considering leak detection systems, the person responsible for making the decision has five primary choices: inventory reconciliation combined with regular precision tightness tests; automatic tank gauging; groundwater monitoring; interstitial monitoring of double containment systems; and vapor monitoring
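The first of those options, inventory reconciliation, amounts to comparing book inventory (opening stock plus deliveries minus metered sales) with daily gauged stock and flagging a sustained shortfall. A toy sketch of the idea; the volumes and the 15-gallon threshold are illustrative assumptions, not EPA figures:

```python
def reconcile(opening, deliveries, sales, gauged, threshold=15.0):
    """Return the days on which gauged stock falls short of book stock
    by more than `threshold` gallons (a possible leak indication)."""
    book = opening
    flagged = []
    for day, (d, s, g) in enumerate(zip(deliveries, sales, gauged)):
        book += d - s                 # book inventory: deliveries in, sales out
        if book - g > threshold:      # tank holds less than the books say
            flagged.append(day)
    return flagged

deliveries = [0, 500, 0, 0, 0]
sales      = [120, 130, 125, 118, 122]
gauged     = [1880, 2248, 2100, 1950, 1795]   # daily stick readings

print(reconcile(2000, deliveries, sales, gauged))   # → [2, 3, 4]
```

The growing gap on days 2-4 (25, 57 and 90 gallons) is the signature a reconciliation program looks for, which is why regulations pair it with periodic precision tightness tests.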
On the necessary conditions of the regular minimum of the scale factor of the co-moving space
International Nuclear Information System (INIS)
Agakov, V.G.
1980-01-01
Within the framework of a homogeneous cosmological model, the behaviour of the volume of a co-moving space element filled with a barotropic medium free of energy fluxes is studied. The necessary conditions under which a regular finite minimum of the scale factor of the co-moving space may occur are presented. It is found that for this minimum to occur at values of the cosmological constant Λ <= 0, two of the three anisotropy factors must be present, one of which must be the anisotropy of space deformation. In the case of Λ <= 0 the regular minimum is also possible if all three anisotropy factors are equal to zero. However, if none of the anisotropy factors F_i, A_ik is equal to zero, the presence of space deformation anisotropy is necessary for a finite regular minimum to appear
Chimeric mitochondrial peptides from contiguous regular and swinger RNA.
Seligmann, Hervé
2016-01-01
Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one of 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying DNA's protein-coding potential by 24. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcription detect chimeric peptides, encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than the previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in results. Chimeric peptides are 200× rarer than swinger peptides (3/100,000 versus 6/1000). Among 186 peptides with > 8 residues in each of the regular and swinger parts, the regular parts of eleven chimeric peptides correspond to six of the thirteen recognized mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. The present results strengthen the hypothesis that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.
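The transformation rules themselves are simple character substitutions. A toy sketch applying one symmetric exchange and one asymmetric cycle, plus a "chimeric" string that is regular in its first half and swinger-transformed in its second; the example sequence is made up:

```python
def swinger(seq, mapping):
    """Apply a bijective nucleotide exchange rule to a sequence."""
    return "".join(mapping.get(b, b) for b in seq)

sym_AC   = {"A": "C", "C": "A"}             # one of the 9 symmetric rules, A <-> C
asym_ACG = {"A": "C", "C": "G", "G": "A"}   # one of the 14 asymmetric rules, A -> C -> G -> A

seq = "ATGCCA"
print(swinger(seq, sym_AC))     # CTGAAC
print(swinger(seq, asym_ACG))   # CTAGGC

# A chimeric RNA: regular polymerization switches to swinger mid-sequence
chimera = seq[:3] + swinger(seq[3:], sym_AC)   # "ATGAAC"
```

Each rule is a bijection on {A, C, G, T}, so together with the identity they give the 24 coding variants of a single DNA sequence mentioned above.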
Turán type inequalities for regular Coulomb wave functions
Baricz, Árpád
2015-01-01
Turán, Mitrinović-Adamović and Wilker type inequalities are deduced for regular Coulomb wave functions. The proofs are based on a Mittag-Leffler expansion for the regular Coulomb wave function, which may be of independent interest. Moreover, some complete monotonicity results concerning the Coulomb zeta functions and some interlacing properties of the zeros of Coulomb wave functions are given.
Regularization and Complexity Control in Feed-forward Networks
Bishop, C. M.
1995-01-01
In this paper we consider four alternative approaches to complexity control in feed-forward networks based respectively on architecture selection, regularization, early stopping, and training with noise. We show that there are close similarities between these approaches and we argue that, for most practical applications, the technique of regularization should be the method of choice.
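The simplest of the four techniques, regularization by weight decay, adds an L2 penalty on the weights to the training objective. A minimal sketch on a linear "network" (not Bishop's experiments; data and penalty weight are made up):

```python
import numpy as np

# Synthetic regression data with a sparse true weight vector
rng = np.random.default_rng(3)
X = rng.standard_normal((30, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + 0.1 * rng.standard_normal(30)

def fit(lam, n_iter=2000, step=0.01):
    """Gradient descent on mean squared error + lam/2 * ||w||^2."""
    w = np.zeros(10)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / len(y) + lam * w   # data gradient + weight decay
        w -= step * grad
    return w

w_plain = fit(lam=0.0)
w_decay = fit(lam=1.0)
```

The decay term shrinks the fitted weights toward zero, reducing the effective complexity of the model; the paper's point is that early stopping and training with noise have closely related effects.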
Optimal Embeddings of Distance Regular Graphs into Euclidean Spaces
F. Vallentin (Frank)
2008-01-01
In this paper we give a lower bound for the least distortion embedding of a distance regular graph into Euclidean space. We use the lower bound for finding the least distortion for Hamming graphs, Johnson graphs, and all strongly regular graphs. Our technique involves semidefinite
Degree-regular triangulations of torus and Klein bottle
Indian Academy of Sciences (India)
A triangulation of a connected closed surface is called degree-regular if each of its vertices has the same degree. ... In [5], Datta and Nilakantan have classified all the degree-regular triangulations of closed surfaces on at most 11 vertices.
Adaptive Regularization of Neural Networks Using Conjugate Gradient
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
1998-01-01
Andersen et al. (1997) and Larsen et al. (1996, 1997) suggested a regularization scheme which iteratively adapts regularization parameters by minimizing validation error using simple gradient descent. In this contribution we present an improved algorithm based on the conjugate gradient technique. Numerical experiments with feedforward neural networks successfully demonstrate improved generalization ability and lower computational cost.
Strictly-regular number system and data structures
DEFF Research Database (Denmark)
Elmasry, Amr Ahmed Abd Elmoneim; Jensen, Claus; Katajainen, Jyrki
2010-01-01
We introduce a new number system that we call the strictly-regular system, which efficiently supports the operations: digit-increment, digit-decrement, cut, concatenate, and add. Compared to other number systems, the strictly-regular system has distinguishable properties. It is superior to the re...
Inclusion Professional Development Model and Regular Middle School Educators
Royster, Otelia; Reglin, Gary L.; Losike-Sedimo, Nonofo
2014-01-01
The purpose of this study was to determine the impact of a professional development model on regular education middle school teachers' knowledge of best practices for teaching inclusive classes and attitudes toward teaching these classes. There were 19 regular education teachers who taught the core subjects. Findings for Research Question 1…
The equivalence problem for LL- and LR-regular grammars
Nijholt, Antinus; Gecsec, F.
It will be shown that the equivalence problem for LL-regular grammars is decidable. Apart from extending the known result for LL(k) grammar equivalence to LL-regular grammar equivalence, we obtain an alternative proof of the decidability of LL(k) equivalence. The equivalence problem for LL-regular
The Effects of Regular Exercise on the Physical Fitness Levels
Kirandi, Ozlem
2016-01-01
The purpose of the present research is to investigate the effects of regular exercise on physical fitness levels among sedentary individuals. A total of 65 sedentary male individuals between the ages of 19-45, who had never exercised regularly in their lives, participated in the present research. Of these participants, 35 wanted to be…
Regular perturbations in a vector space with indefinite metric
International Nuclear Information System (INIS)
Chiang, C.C.
1975-08-01
The Klein space is discussed in connection with practical applications. Some lemmas are presented which are to be used for the discussion of regular self-adjoint operators. The criteria for the regularity of perturbed operators are given. (U.S.)
Pairing renormalization and regularization within the local density approximation
International Nuclear Information System (INIS)
Borycki, P.J.; Dobaczewski, J.; Nazarewicz, W.; Stoitsov, M.V.
2006-01-01
We discuss methods used in mean-field theories to treat pairing correlations within the local density approximation. Pairing renormalization and regularization procedures are compared in spherical and deformed nuclei. Both prescriptions give fairly similar results, although the theoretical motivation, simplicity, and stability of the regularization procedure make it a method of choice for future applications
Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears
Chen, Sau-Chin; Hu, Jon-Fan
2015-01-01
Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested to be part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…
Regularity conditions of the field on a toroidal magnetic surface
International Nuclear Information System (INIS)
Bouligand, M.
1985-06-01
We show that a field B vector which is derived from an analytic canonical potential on an ordinary toroidal surface is regular on this surface when the potential satisfies an elliptic equation (owing to the conservative field) subject to certain conditions of regularity of its coefficients [fr
47 CFR 76.614 - Cable television system regular monitoring.
2010-10-01
...-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by... in these bands of 20 uV/m or greater at a distance of 3 meters. During regular monitoring, any leakage source which produces a field strength of 20 uV/m or greater at a distance of 3 meters in the...
Analysis of regularized Navier-Stokes equations, 2
Ou, Yuh-Roung; Sritharan, S. S.
1989-01-01
A practically important regularization of the Navier-Stokes equations was analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutions was studied. Local as well as global invariant manifolds were found. Regularity properties of these manifolds were analyzed.
Authorization basis requirements comparison report
Energy Technology Data Exchange (ETDEWEB)
Brantley, W.M.
1997-08-18
The TWRS Authorization Basis (AB) consists of a set of documents identified by TWRS management with the concurrence of DOE-RL. Upon implementation of the TWRS Basis for Interim Operation (BIO) and Technical Safety Requirements (TSRs), the AB list will be revised to include the BIO and TSRs. Some documents that currently form part of the AB will be removed from the list. This SD identifies each requirement from those documents, and recommends a disposition for each to ensure that necessary requirements are retained when the AB is revised to incorporate the BIO and TSRs. This SD also identifies documents that will remain part of the AB after the BIO and TSRs are implemented. This document does not change the AB, but provides guidance for the preparation of change documentation.
Authorization basis requirements comparison report
International Nuclear Information System (INIS)
Brantley, W.M.
1997-01-01
The TWRS Authorization Basis (AB) consists of a set of documents identified by TWRS management with the concurrence of DOE-RL. Upon implementation of the TWRS Basis for Interim Operation (BIO) and Technical Safety Requirements (TSRs), the AB list will be revised to include the BIO and TSRs. Some documents that currently form part of the AB will be removed from the list. This SD identifies each requirement from those documents, and recommends a disposition for each to ensure that necessary requirements are retained when the AB is revised to incorporate the BIO and TSRs. This SD also identifies documents that will remain part of the AB after the BIO and TSRs are implemented. This document does not change the AB, but provides guidance for the preparation of change documentation.
Application of Littlewood-Paley decomposition to the regularity of Boltzmann type kinetic equations
International Nuclear Information System (INIS)
EL Safadi, M.
2007-03-01
We study the regularity of kinetic equations of Boltzmann type. We essentially use the Littlewood-Paley method from harmonic analysis, which consists mainly in working with dyadic annuli. We are mainly concerned with the homogeneous case, where the solution f(t,x,v) depends only on the time t and on the velocities v, while working with realistic and singular cross-sections (non cutoff). In the first part, we study the particular case of Maxwellian molecules. Under this hypothesis, the Boltzmann operator and its Fourier transform take a simple form. We show global C∞ regularity. Then, we deal with the case of general cross-sections with 'hard potential'. We are interested in the Landau equation, which is the limit of the Boltzmann equation when grazing collisions are taken into account. We prove that any weak solution belongs to the Schwartz space S. We also demonstrate a similar regularity for the Boltzmann equation itself. Let us note that our method applies directly in all dimensions, and the proofs are often simpler than previous ones. Finally, we finish with the Boltzmann-Dirac equation. In particular, we adapt the regularity result obtained in the work of Alexandre, Desvillettes, Wennberg and Villani, using the dissipation rate connected with the Boltzmann-Dirac equation. (author)
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin
2017-08-01
Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining it with moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Secondly, the reasonable signal feature of the DFS-SAV is quantified and introduced to improve the penalty function (||x||₂²) defined in the classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed to assess the accuracy and feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with strong robustness. Some related issues, such as the selection of the moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
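The classical Tikhonov penalty ||x||₂² that the moving average method improves upon can be sketched in a few lines. The matrix, data and regularization parameter below are illustrative stand-ins, not the bridge-vehicle model from the paper:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Classical Tikhonov solution of min ||Ax - b||^2 + lam * ||x||^2,
    via the normal equations (A^T A + lam * I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-conditioned toy problem (a Hilbert matrix) with noisy data.
rng = np.random.default_rng(0)
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true + 1e-6 * rng.standard_normal(n)

err_naive = np.linalg.norm(np.linalg.solve(A, b) - x_true)  # noise blown up
err_reg = np.linalg.norm(tikhonov(A, b, 1e-8) - x_true)     # noise damped
```

Even at this noise level the unregularized solve is dominated by amplified noise, while the penalized solution stays close to the true vector.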
On the regularity of the covariance matrix of a discretized scalar field on the sphere
Energy Technology Data Exchange (ETDEWEB)
Bilbao-Ahedo, J.D. [Departamento de Física Moderna, Universidad de Cantabria, Av. los Castros s/n, 39005 Santander (Spain); Barreiro, R.B.; Herranz, D.; Vielva, P.; Martínez-González, E., E-mail: bilbao@ifca.unican.es, E-mail: barreiro@ifca.unican.es, E-mail: herranz@ifca.unican.es, E-mail: vielva@ifca.unican.es, E-mail: martinez@ifca.unican.es [Instituto de Física de Cantabria (CSIC-UC), Av. los Castros s/n, 39005 Santander (Spain)
2017-02-01
We present a comprehensive study of the regularity of the covariance matrix of a discretized field on the sphere. In a particular situation, the rank of the matrix depends on the number of pixels, the number of spherical harmonics, the symmetries of the pixelization scheme and the presence of a mask. Taking into account the above mentioned components, we provide analytical expressions that constrain the rank of the matrix. They are obtained by expanding the determinant of the covariance matrix as a sum of determinants of matrices made up of spherical harmonics. We investigate these constraints for five different pixelizations that have been used in the context of Cosmic Microwave Background (CMB) data analysis: Cube, Icosahedron, Igloo, GLESP and HEALPix, finding that, at least in the considered cases, the HEALPix pixelization tends to provide a covariance matrix with a rank closer to the maximum expected theoretical value than the other pixelizations. The effect of the propagation of numerical errors in the regularity of the covariance matrix is also studied for different computational precisions, as well as the effect of adding a certain level of noise in order to regularize the matrix. In addition, we investigate the application of the previous results to a particular example that requires the inversion of the covariance matrix: the estimation of the CMB temperature power spectrum through the Quadratic Maximum Likelihood algorithm. Finally, some general considerations in order to achieve a regular covariance matrix are also presented.
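The basic rank constraint described above can be illustrated with a toy band-limited covariance. The random matrix below merely stands in for the spherical-harmonic basis evaluated at the pixels; it is not any of the pixelizations studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
npix, nlm = 50, 12   # more pixels than harmonic modes

Y = rng.standard_normal((npix, nlm))   # stand-in for Y_lm sampled at pixels
cl = rng.uniform(0.5, 2.0, nlm)        # stand-in power spectrum
C = Y @ np.diag(cl) @ Y.T              # band-limited covariance matrix

rank = np.linalg.matrix_rank(C)        # capped by the number of harmonics

# Adding a white-noise term regularizes the matrix to full rank.
C_reg = C + 1e-3 * np.eye(npix)
rank_reg = np.linalg.matrix_rank(C_reg)
```

With fewer harmonic modes than pixels the covariance is singular by construction, which is why a noise term (or more modes) is needed before inverting it, e.g. in a Quadratic Maximum Likelihood estimator.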
Hanford Generic Interim Safety Basis
Energy Technology Data Exchange (ETDEWEB)
Lavender, J.C.
1994-09-09
The purpose of this document is to identify WHC programs and requirements that are an integral part of the authorization basis for nuclear facilities that are generic to all WHC-managed facilities. The purpose of these programs is to implement the DOE Orders, as WHC becomes contractually obligated to implement them. The Hanford Generic ISB focuses on the institutional controls and safety requirements identified in DOE Order 5480.23, Nuclear Safety Analysis Reports.
Hanford Generic Interim Safety Basis
International Nuclear Information System (INIS)
Lavender, J.C.
1994-01-01
The purpose of this document is to identify WHC programs and requirements that are an integral part of the authorization basis for nuclear facilities that are generic to all WHC-managed facilities. The purpose of these programs is to implement the DOE Orders, as WHC becomes contractually obligated to implement them. The Hanford Generic ISB focuses on the institutional controls and safety requirements identified in DOE Order 5480.23, Nuclear Safety Analysis Reports
Intracellular recovery - basis of hyperfractionation
International Nuclear Information System (INIS)
Hagen, U.; Guttenberger, R.; Kummermehr, J.
1988-01-01
The radiobiological basis of hyperfractionated radiation therapy versus conventional fractionation with respect to therapeutic gain, i.e., improved normal tissue sparing for the same level of tumour cell inactivation, will be presented. Data on the recovery potential of various tissues as well as the kinetics of repair will be given. The problem of incomplete repair with short irradiation intervals will be discussed. (orig.) [de
Genetic basis of atrial fibrillation
Directory of Open Access Journals (Sweden)
Oscar Campuzano
2016-12-01
Full Text Available Atrial fibrillation is the most common sustained arrhythmia and remains one of the main challenges in current clinical practice. The disease may be induced secondary to other diseases such as hypertension, valvular heart disease, and heart failure, conferring an increased risk of stroke and sudden death. Epidemiological studies have provided evidence that genetic factors play an important role, and up to 30% of clinically diagnosed patients may have a family history of atrial fibrillation. To date, several rare variants have been identified in a wide range of genes associated with ionic channels, calcium-handling proteins, fibrosis, conduction and inflammation. Important advances in the clinical, genetic and molecular basis have been made over the last decade, improving diagnosis and treatment. However, the genetics of atrial fibrillation is complex and its pathophysiology is still being unraveled. A better understanding of the genetic basis will enable accurate risk stratification and personalized clinical treatment. In this review, we focus on the current genetic basis of atrial fibrillation.
Consistent Partial Least Squares Path Modeling via Regularization.
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
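The ridge-type remedy can be illustrated outside the PLSc machinery with a plain regression sketch: under near-collinear predictors, a small ridge penalty sharply reduces the variability of the estimated coefficients. The data and penalty value below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
# Two nearly collinear predictors, mimicking the multicollinearity PLSc faces.
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)   # correlation close to 1
X = np.column_stack([x1, x2])
beta_true = np.array([1.0, 1.0])

def fit(X, y, lam):
    """Ridge estimate; lam = 0 gives ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Re-fit under many noise draws and compare coefficient variability.
ols, ridge = [], []
for _ in range(200):
    y = X @ beta_true + 0.1 * rng.standard_normal(n)
    ols.append(fit(X, y, 0.0))
    ridge.append(fit(X, y, 1.0))

sd_ols = np.std(ols, axis=0).max()     # unstable under collinearity
sd_ridge = np.std(ridge, axis=0).max() # stabilized by the penalty
```

The trade-off is a small bias in exchange for a large drop in variance, which is the rationale for incorporating ridge regularization into PLSc.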
Optimal behaviour can violate the principle of regularity.
Trimmer, Pete C
2013-07-22
Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled as irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory--based on axioms, including transitivity, regularity and the independence of irrelevant alternatives--is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision.
Regularized Regression and Density Estimation based on Optimal Transport
Burger, M.
2012-03-11
The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).
Laplacian manifold regularization method for fluorescence molecular tomography
He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei
2017-04-01
Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods allow for utilizing the sparsity nature of the target distribution. However, in addition to sparsity, the spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and in vivo experiment demonstrate that the proposed Gradient projection-resolved Laplacian manifold regularization method for the joint model performed better than the comparative algorithm for ℓ1 minimization method in both spatial aggregation and location accuracy.
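A minimal proximal-gradient sketch of a joint ℓ1 + Laplacian objective follows. This is not the authors' Gradient projection-resolved algorithm; the problem sizes, weights and the chain-graph Laplacian are assumptions chosen only to make the two penalty terms concrete:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def joint_l1_laplacian(A, b, L, lam, mu, steps=500):
    """Proximal gradient for
       min_x 0.5*||Ax - b||^2 + 0.5*mu*x^T L x + lam*||x||_1,
    a generic stand-in for a joint l1 + Laplacian manifold model."""
    n = A.shape[1]
    x = np.zeros(n)
    # Step size from a bound on the Lipschitz constant of the smooth part.
    eta = 1.0 / (np.linalg.norm(A, 2) ** 2 + mu * np.linalg.norm(L, 2))
    for _ in range(steps):
        grad = A.T @ (A @ x - b) + mu * (L @ x)
        x = soft(x - eta * grad, eta * lam)
    return x

# Toy problem: a sparse, spatially contiguous target on a 1-D chain graph.
rng = np.random.default_rng(0)
n, m = 40, 25
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[10:14] = 1.0                       # a contiguous "fluorescent" blob
b = A @ x_true + 0.01 * rng.standard_normal(m)
# Chain-graph Laplacian encodes the spatial-structure prior.
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1

x_hat = joint_l1_laplacian(A, b, L, lam=0.01, mu=0.1)
err = np.linalg.norm(x_hat - x_true)
```

The ℓ1 term promotes sparsity while the quadratic Laplacian term favors spatially coherent reconstructions, mirroring the combination the abstract describes.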
Munushian, Jack
In 1972, the University of Southern California School of Engineering established a 4-channel interactive instructional television network. It was designed to allow employees of participating industries to take regular university science and engineering courses and special continuing education courses at or near their work locations. Final progress…
Basis for calculations in the topological expansion
International Nuclear Information System (INIS)
Levinson, M.A.
1982-12-01
Investigations aimed at putting the topological theory of particles on a more quantitative basis are described. First, the incorporation of spin into the topological structure is discussed and shown to successfully reproduce the observed lowest mass hadron spectrum. The absence of parity-doubled states represents a significant improvement over previous efforts in similar directions. This theory is applied to the lowest order calculation of elementary hadron coupling constant ratios. SU(6)/sub W/ symmetry is maintained and extended via the notions of topological supersymmetry and universality. Finally, efforts to discover a perturbative basis for the topological expansion are described. This has led to the formulation of off-shell Feynman-like rules which provide a calculational scheme for the strong interaction components of the topological expansion once the zero-entropy connected parts are known. These rules are shown to imply a topological asymptotic freedom. Even though the nonlinear zero-entropy problem cannot itself be treated perturbatively, plausible general assumptions about zero-entropy amplitudes allow immediate qualitative inferences concerning physical hadrons. In particular, scenarios for mass splittings beyond the supersymmetric level are described
340 Waste Handling Facility interim safety basis
International Nuclear Information System (INIS)
Bendixsen, R.B.
1995-01-01
This document establishes the interim safety basis (ISB) for the 340 Waste Handling Facility (340 Facility). An ISB is a documented safety basis that provides a justification for the continued operation of the facility until an upgraded final safety analysis report is prepared that complies with US Department of Energy (DOE) Order 5480.23, Nuclear Safety Analysis Reports. The ISB for the 340 Facility documents the current design and operation of the facility. The 340 Facility ISB (ISB-003) is based on a facility walkdown and review of the design and operation of the facility, as described in the existing safety documentation. The safety documents reviewed, to develop ISB-003, include the following: OSD-SW-153-0001, Operating Specification Document for the 340 Waste Handling Facility (WHC 1990); OSR-SW-152-00003, Operating Limits for the 340 Waste Handling Facility (WHC 1989); SD-RE-SAP-013, Safety Analysis Report for Packaging, Railroad Liquid Waste Tank Cars (Mercado 1993); SD-WM-TM-001, Safety Assessment Document for the 340 Waste Handling Facility (Berneski 1994a); SD-WM-SEL-016, 340 Facility Safety Equipment List (Berneski 1992); and 340 Complex Fire Hazard Analysis, Draft (Hughes Assoc. Inc. 1994)
Tikhonov regularization method for the numerical inversion of Mellin transforms using splines
International Nuclear Information System (INIS)
Iqbal, M.
2005-01-01
Inversion of the Mellin transform is an ill-posed problem. Such problems arise in many branches of science and engineering. In the typical situation one is interested in recovering the original function, given a finite number of noisy measurements of data. In this paper, we convert the Mellin transform to a Laplace transform and then to an integral equation of the first kind of convolution type. We solve the integral equation using Tikhonov regularization with splines as basis functions. The method is applied to various test examples from the literature and results are shown in the table.
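The Mellin-to-Laplace conversion rests on the standard substitution x = e^{-t}, which turns the Mellin integral into a two-sided Laplace transform of g(t) = f(e^{-t}):

```latex
\mathcal{M}f(s) \;=\; \int_0^\infty x^{\,s-1} f(x)\,\mathrm{d}x
\;\overset{x = e^{-t}}{=}\;
\int_{-\infty}^{\infty} e^{-st} f\!\left(e^{-t}\right)\mathrm{d}t .
```

The ill-posedness is inherited by the Laplace inversion, which is why a regularized solver with a spline basis is then applied.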
Directory of Open Access Journals (Sweden)
Danilo Bojanić
2016-02-01
Full Text Available The aim of the research is to determine the level of quantitative changes in the motor abilities of pupils with special needs under the influence of regular physical education teaching. The survey was conducted on students of the Centre for children and youth with special needs in Mostar, the city of Los Rosales in Mostar and day care facilities for children with special needs in Niksic. The sample was composed of 46 boys, who were involved in regular physical education for a period of one school year. The level of quantitative and qualitative changes in motor skills under the influence of kinesiology operators within regular school physical education classes was estimated by applying appropriate tests of motor skills, selected in accordance with the degree of mental ability and biological age. The manifest variables applied in this experiment were processed using standard descriptive methods in order to determine their distribution functions and basic parameters. Comparing the central and dispersion parameters of the initial and final measurements, it is evident that the applied program of physical education and sport contributed to changing these distributions, and that the distribution of the final measurement is closer to the normal distribution.
Energy Technology Data Exchange (ETDEWEB)
Kemski, J.; Klingel, R.; Siehl, A.; Neznal, M.; Matolin, M.
2012-03-15
The final report on ground air radon measurements includes the following chapters: Scope of the research program; Concept of the research project; Development of a passive method for ground air measurement; Sampling and measuring methods; Measured areas; Field measurements; Results of geophysical investigations: Castle Lede, Messdorfer Field; Lounovice; Procedural method; Results of ground air radon concentration measurements, meteorological and geophysical parameters; Evaluation, discussion and conclusions.
Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.
Sun, Shiliang; Xie, Xijiong
2016-09-01
Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while large numbers of unlabeled examples are available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
Thermodynamic basis for cluster kinetics
DEFF Research Database (Denmark)
Hu, Lina; Bian, Xiufang; Qin, Xubo
2006-01-01
Due to the inaccessibility of the supercooled region of marginal metallic glasses (MMGs) within the experimental time window, we study the cluster kinetics above the liquidus temperature, Tl, to acquire information on the fragility of the MMG systems. The thermodynamic basis for the stability of locally ordered structure in the MMG liquids is discussed in terms of the two-order-parameter model. It is found that the Arrhenius activation energy of clusters, h, is proportional to the chemical mixing enthalpy of alloys, Hchem. Fragility of the MMG forming liquids can be described by the ratio…
OSR encapsulation basis -- 100-KW
International Nuclear Information System (INIS)
Meichle, R.H.
1995-01-01
The purpose of this report is to provide the basis for a change in the Operations Safety Requirement (OSR) encapsulated fuel storage requirements in the 105 KW fuel storage basin which will permit the handling and storing of encapsulated fuel in canisters which no longer have a water-free space in the top of the canister. The scope of this report is limited to providing the change from the perspective of the safety envelope (bases) of the Safety Analysis Report (SAR) and Operations Safety Requirements (OSR). It does not change the encapsulation process itself
The physical basis of chemistry
Warren, Warren S
2000-01-01
If the text you're using for general chemistry seems to lack sufficient mathematics and physics in its presentation of classical mechanics, molecular structure, and statistics, this complementary science series title may be just what you're looking for. Written for the advanced lower-division undergraduate chemistry course, The Physical Basis of Chemistry, Second Edition, offers students an opportunity to understand and enrich the understanding of physical chemistry with some quantum mechanics, the Boltzmann distribution, and spectroscopy. Posed and answered are questions concerning eve
Propagation of spiking regularity and double coherence resonance in feedforward networks.
Men, Cong; Wang, Jiang; Qin, Ying-Mei; Deng, Bin; Tsang, Kai-Ming; Chan, Wai-Lok
2012-03-01
We systematically investigate the propagation of spiking regularity in noisy feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. It is found that noise can modulate the transmission of firing rate and spiking regularity. Noise-induced synchronization and synfire-enhanced coherence resonance are also observed when signals propagate in noisy multilayer networks. It is interesting that double coherence resonance (DCR) with respect to the combination of synaptic input correlation and noise intensity is finally attained after processing layer by layer in FFNs. Furthermore, inhibitory connections also play essential roles in shaping DCR phenomena. Several properties of the neuronal network, such as noise intensity, correlation of synaptic inputs, and inhibitory connections, can serve as control parameters in modulating both rate coding and the order of temporal coding.
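A minimal FitzHugh-Nagumo integration shows where the spike trains and their regularity measure come from. This is a single deterministic neuron rather than the paper's noisy multilayer network, and the parameter values are conventional textbook choices, not the authors':

```python
import numpy as np

def fhn(I=0.5, dt=0.01, T=300.0, eps=0.08, a=0.7, b=0.8):
    """Euler integration of the FitzHugh-Nagumo model:
       v' = v - v^3/3 - w + I,   w' = eps * (v + a - b*w)."""
    steps = int(T / dt)
    v, w = -1.0, 1.0
    vs = np.empty(steps)
    for i in range(steps):
        v += dt * (v - v ** 3 / 3 - w + I)
        w += dt * eps * (v + a - b * w)
        vs[i] = v
    return vs

vs = fhn()
# Spike times: upward crossings of a voltage threshold.
crossings = np.flatnonzero((vs[:-1] < 1.0) & (vs[1:] >= 1.0))
isi = np.diff(crossings) * 0.01        # inter-spike intervals
cv = isi.std() / isi.mean()            # coefficient of variation:
                                       # a standard spiking-regularity measure
```

With a constant drive the neuron fires tonically and the CV is near zero; adding noise to the input would raise the CV, which is the quantity whose propagation through layers the paper studies.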
Regularization in Hilbert space under unbounded operators and general source conditions
International Nuclear Information System (INIS)
Hofmann, Bernd; Mathé, Peter; Von Weizsäcker, Heinrich
2009-01-01
The authors study ill-posed equations with unbounded operators in Hilbert space. This setup has important applications, but only a few theoretical studies are available. First, the question is addressed and answered whether every element satisfies some general source condition with respect to a given self-adjoint unbounded operator. This generalizes a previous result from Mathé and Hofmann (2008 Inverse Problems 24 015009). The analysis then proceeds to error bounds for regularization, emphasizing some specific points for regularization under unbounded operators. The study finally reviews two examples within the light of the present study, as these are fractional differentiation and some Cauchy problems for the Helmholtz equation, both studied previously and in more detail by U Tautenhahn and co-authors
International Nuclear Information System (INIS)
Zhao Gang-Ling; Chen Li-Qun; Fu Jing-Li; Hong Fang-Yu
2013-01-01
In this paper, Noether symmetry and Mei symmetry of discrete nonholonomic dynamical systems with regular and irregular lattices are investigated. Firstly, the equations of motion of discrete nonholonomic systems are introduced for regular and irregular lattices. Secondly, for the cases of the two lattices, based on the invariance of the Hamiltonian functional under the infinitesimal transformation of time and generalized coordinates, we present the quasi-extremal equation, the discrete analogues of the Noether identity, Noether theorems, and the Noether conservation laws of the systems. Thirdly, in the cases of the two lattices, we study the Mei symmetry, for which we give the discrete analogues of the criterion, the theorem, and the conservation laws of Mei symmetry for the systems. Finally, an example is discussed for the application of the results
Surface interpolation with radial basis functions for medical imaging
International Nuclear Information System (INIS)
Carr, J.C.; Beatson, R.K.; Fright, W.R.
1997-01-01
Radial basis functions are presented as a practical solution to the problem of interpolating incomplete surfaces derived from three-dimensional (3-D) medical graphics. The specific application considered is the design of cranial implants for the repair of defects, usually holes, in the skull. Radial basis functions impose few restrictions on the geometry of the interpolation centers and are suited to problems where interpolation centers do not form a regular grid. However, their high computational requirements have previously limited their use to problems where the number of interpolation centers is small (<300). Recently developed fast evaluation techniques have overcome these limitations and made radial basis interpolation a practical approach for larger data sets. In this paper radial basis functions are fitted to depth-maps of the skull's surface, obtained from X-ray computed tomography (CT) data using ray-tracing techniques. They are used to smoothly interpolate the surface of the skull across defect regions. The resulting mathematical description of the skull's surface can be evaluated at any desired resolution to be rendered on a graphics workstation or to generate instructions for operating a computer numerically controlled (CNC) mill
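The fitting-and-evaluation scheme described above can be sketched with a direct solve. This is the naive O(N³) formulation, not the fast evaluation techniques the paper relies on, and the Gaussian basis, shape parameter and test surface are illustrative assumptions:

```python
import numpy as np

def rbf_interpolate(centers, values, query, eps=3.0):
    """Interpolate scattered data with Gaussian radial basis functions.
    Solves Phi w = values with Phi_ij = phi(||c_i - c_j||), then evaluates
    s(x) = sum_j w_j * phi(||x - c_j||) at the query points."""
    phi = lambda r: np.exp(-(eps * r) ** 2)
    dist = lambda P, Q: np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    w = np.linalg.solve(phi(dist(centers, centers)), values)
    return phi(dist(query, centers)) @ w

# Scattered (non-gridded) centers sampling a smooth "surface".
rng = np.random.default_rng(0)
centers = rng.uniform(-1, 1, size=(50, 2))
f = lambda P: np.sin(P[:, 0]) * np.cos(P[:, 1])
values = f(centers)

# The interpolant reproduces the data at the centers...
at_centers = rbf_interpolate(centers, values, centers)
# ...and fills in the surface between them, as across a defect region.
query = rng.uniform(-0.8, 0.8, size=(20, 2))
approx = rbf_interpolate(centers, values, query)
```

Note that the centers form no regular grid, which is exactly the setting where radial basis functions are preferable to grid-based interpolation.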
Closedness type regularity conditions in convex optimization and beyond
Directory of Open Access Journals (Sweden)
Sorin-Mihai Grad
2016-09-01
Full Text Available The closedness type regularity conditions have proven during the last decade to be viable alternatives to their more restrictive interiority type counterparts, in both convex optimization and different areas where it was successfully applied. In this review article we de- and reconstruct some closedness type regularity conditions formulated by means of epigraphs and subdifferentials, respectively, for general optimization problems in order to stress that they arise naturally when dealing with such problems. The results are then specialized for constrained and unconstrained convex optimization problems. We also hint towards other classes of optimization problems where closedness type regularity conditions were successfully employed and discuss other possible applications of them.
Capped Lp approximations for the composite L0 regularization problem
Li, Qia; Zhang, Na
2017-01-01
The composite L0 function serves as a sparse regularizer in many applications. The algorithmic difficulty caused by the composite L0 regularization (the L0 norm composed with a linear mapping) is usually bypassed through approximating the L0 norm. We consider in this paper capped Lp approximations with $p>0$ for the composite L0 regularization problem. For each $p>0$, the capped Lp function converges to the L0 norm pointwisely as the approximation parameter tends to infinity. We point out tha...
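One common capped-Lp form (an assumption for illustration, not necessarily the paper's exact definition) is min(α·|t|^p, 1), which tends pointwise to the L0 indicator 1_{t ≠ 0} as the approximation parameter α tends to infinity:

```python
def capped_lp(t, p=0.5, alpha=10.0):
    """Capped Lp penalty on a scalar: min(alpha * |t|^p, 1).
    For t = 0 the value is 0; for t != 0 it tends to 1 as alpha grows."""
    return min(alpha * abs(t) ** p, 1.0)

def capped_lp_norm(x, p=0.5, alpha=10.0):
    """Separable capped-Lp surrogate for the L0 norm of a vector."""
    return sum(capped_lp(t, p, alpha) for t in x)

x = [0.0, 0.5, -2.0, 0.0, 1e-3]
l0 = sum(1 for t in x if t != 0)            # number of nonzeros
surrogate = capped_lp_norm(x, alpha=1e12)   # approaches l0 for large alpha
```

Unlike |t|^p alone, the cap keeps the penalty bounded, so large coefficients are not over-penalized while the L0 counting behavior is recovered in the limit.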
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. The previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of the regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
Fluctuations of quantum fields via zeta function regularization
International Nuclear Information System (INIS)
Cognola, Guido; Zerbini, Sergio; Elizalde, Emilio
2002-01-01
Explicit expressions for the expectation values and the variances of some observables, which are bilinear quantities in the quantum fields on a D-dimensional manifold, are derived making use of zeta function regularization. It is found that the variance, related to the second functional variation of the effective action, requires a further regularization and that the relative regularized variance turns out to be 2/N, where N is the number of the fields, thus being independent of the dimension D. Some illustrative examples are worked through. The issue of the stress tensor is also briefly addressed.
Low-Rank Matrix Factorization With Adaptive Graph Regularizer.
Lu, Gui-Fu; Wang, Yong; Zou, Jian
2016-05-01
In this paper, we present a novel low-rank matrix factorization algorithm with an adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix factorization with manifold regularization (MMF) method with an adaptive regularizer. Unlike MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek the graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on some data sets demonstrate that the proposed algorithm outperforms state-of-the-art low-rank matrix factorization methods.
Regularization theory for ill-posed problems selected topics
Lu, Shuai
2013-01-01
This monograph is a valuable contribution to the highly topical and extremely productive field of regularisation methods for inverse and ill-posed problems. The author is an internationally outstanding and accepted mathematician in this field. In his book he offers a well-balanced mixture of basic and innovative aspects. He demonstrates new, differentiated viewpoints and important examples for applications. The book demonstrates the current developments in the field of regularization theory, such as multiparameter regularization and regularization in learning theory. The book is written for graduate and PhD students.
Stull, Mamie C; Strilka, Richard J; Clemens, Michael S; Armen, Scott B
2015-06-30
Optimal management of non-critically ill patients with diabetes maintained on continuous enteral feeding (CEN) is poorly defined. Subcutaneous (SQ) lispro and SQ regular insulin were compared in a simulated type 1 and type 2 diabetic patient receiving CEN. A glucose-insulin feedback mathematical model was employed to simulate type 1 and type 2 diabetic patients on CEN. Each patient received 25 SQ injections of regular insulin or insulin lispro, ranging from 0-6 U. Primary endpoints were the change in mean glucose concentration (MGC) and change in glucose variability (GV); hypoglycemic episodes were also reported. The model was first validated against patient data. Both SQ insulin preparations linearly decreased MGC; however, SQ regular insulin decreased GV whereas SQ lispro tended to increase GV. Hourly glucose concentration measurements were needed to capture the increase in GV. In the type 2 diabetic patient, "rebound hyperglycemia" occurred after SQ lispro was rapidly metabolized. Although neither SQ insulin preparation caused hypoglycemia, SQ lispro significantly lowered MGC compared to SQ regular insulin. Thus, it may be more likely to cause hypoglycemia. Analyses of the detailed glucose concentration versus time data suggest that the inferior performance of lispro resulted from its shorter duration of action. Finally, the effects of both insulin preparations persisted beyond their duration of actions in the type 2 diabetic patient. Subcutaneous regular insulin may be the short-acting insulin preparation of choice for this subset of diabetic patients. A clinical trial is required before a definitive recommendation can be made. © 2015 Diabetes Technology Society.
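The study's feedback model is far more detailed; purely as an illustrative toy (every equation and parameter below is invented, not the authors' model), a faster SQ absorption rate produces a higher plasma insulin peak and a deeper glucose nadir:

```python
import numpy as np

def simulate(k_abs, dose=4.0, t_end=24.0, dt=0.01):
    """Toy SQ-depot -> plasma-insulin -> glucose chain (illustrative only).

    depot' = -k_abs * depot          (SQ absorption)
    ins'   =  k_abs * depot - c*ins  (plasma clearance)
    glu'   =  feed - s * ins * glu   (CEN feed vs insulin-driven uptake)
    """
    c, s, feed = 1.0, 0.02, 2.0
    depot, ins, glu = dose, 0.0, 120.0
    ins_hist, glu_hist = [], []
    for _ in range(int(t_end / dt)):           # forward Euler integration
        depot += -k_abs * depot * dt
        ins += (k_abs * depot - c * ins) * dt
        glu += (feed - s * ins * glu) * dt
        ins_hist.append(ins)
        glu_hist.append(glu)
    return np.array(ins_hist), np.array(glu_hist)

ins_fast, glu_fast = simulate(k_abs=2.0)   # lispro-like: rapid absorption
ins_slow, glu_slow = simulate(k_abs=0.3)   # regular-like: slow absorption
# Same dose, but faster absorption concentrates the insulin exposure,
# giving a higher peak and a lower glucose minimum.
```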
Quadratic Hedging of Basis Risk
Directory of Open Access Journals (Sweden)
Hardy Hulley
2015-02-01
Full Text Available This paper examines a simple basis risk model based on correlated geometric Brownian motions. We apply quadratic criteria to minimize basis risk and hedge in an optimal manner. Initially, we derive the Föllmer–Schweizer decomposition for a European claim. This allows pricing and hedging under the minimal martingale measure, corresponding to the local risk-minimizing strategy. Furthermore, since the mean-variance tradeoff process is deterministic in our setup, the minimal martingale- and variance-optimal martingale measures coincide. Consequently, the mean-variance optimal strategy is easily constructed. Simple pricing and hedging formulae for put and call options are derived in terms of the Black–Scholes formula. Due to market incompleteness, these formulae depend on the drift parameters of the processes. By making a further equilibrium assumption, we derive an approximate hedging formula, which does not require knowledge of these parameters. The hedging strategies are tested using Monte Carlo experiments, and are compared with results achieved using a utility maximization approach.
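The put and call hedging formulae in the paper are expressed through the Black–Scholes formula; a minimal implementation of that standard building block (not of the basis-risk hedges themselves) is:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes(S, K, T, r, sigma, kind="call"):
    """Black-Scholes price of a European call or put."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    call = S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    if kind == "call":
        return call
    return call - S + K * exp(-r * T)      # put via put-call parity

c = black_scholes(100, 100, 1.0, 0.05, 0.2, "call")
p = black_scholes(100, 100, 1.0, 0.05, 0.2, "put")
print(c, p)
```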
Energy Technology Data Exchange (ETDEWEB)
Cheong, K; Lee, M; Kang, S; Yoon, J; Park, S; Hwang, T; Kim, H; Kim, K; Han, T; Bae, H [Hallym University College of Medicine, Anyang (Korea, Republic of)
2014-06-01
Purpose: Despite the importance of accurately estimating the respiration regularity of a patient in motion compensation treatment, an effective and simply applicable method has rarely been reported. The authors propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. Methods: In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a power of cosine form with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: sample standard deviation of respiration period, sample standard deviation of amplitude, and the results of a simple regression of the baseline drift (slope and standard deviation of residuals) of a respiration signal. Overall irregularity (δ) was defined as the Euclidean norm of a new variable derived using principal component analysis (PCA) from the four fluctuation parameters. Finally, the proposed respiration regularity index was defined as ρ=ln(1+(1/δ))/2, a higher ρ indicating a more regular breathing pattern. Subsequently, we applied it to simulated and clinical respiration signals from real-time position management (RPM; Varian Medical Systems, Palo Alto, CA) and investigated respiration regularity. Moreover, correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Results: The respiration regularity was determined based on ρ; patients with ρ<0.3 showed worse regularity than the others, whereas ρ>0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in breathing cycle and amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Conclusions: Respiration regularity could be objectively determined
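The index itself is easy to compute once δ is known; in the sketch below the PCA loadings and the fluctuation-parameter values are placeholders (assumptions for illustration), while ρ=ln(1+1/δ)/2 follows the abstract directly:

```python
import numpy as np
from math import log

def regularity_index(delta):
    """rho = ln(1 + 1/delta)/2; higher rho means more regular breathing."""
    return log(1.0 + 1.0 / delta) / 2.0

def overall_irregularity(params, components):
    """Euclidean norm of the fluctuation parameters projected onto
    principal components (the PCA weighting here is assumed)."""
    return float(np.linalg.norm(components @ np.asarray(params)))

# Four fluctuation parameters: sd(period), sd(amplitude),
# baseline slope, sd(baseline residuals) -- illustrative values.
components = np.eye(4)            # placeholder for the PCA loadings
steady = overall_irregularity([0.05, 0.02, 0.0, 0.01], components)
erratic = overall_irregularity([0.8, 0.5, 0.1, 0.4], components)
print(regularity_index(steady), regularity_index(erratic))
```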
Babic, U; Opric, D; Perovic, M; Dmitrovic, A; Mihailovic, S; Kocijancic, D; Radakovic, J; Dugalic, M Gojnic
2015-01-01
To investigate how the regularity of checkups in pregnancy influences maternal behavior regarding habits in prevention of urinary tract infection (UTI), the level of information, and finally the prevalence of asymptomatic bacteriuria (AB). This study included 223 women with regular and 220 women with irregular checkups in pregnancy, who were given a questionnaire on the following issues: frequency of sexual intercourse during pregnancy, the regularity of bathing and changing of underwear, the direction of washing the genital region after urinating, the regularity of antenatal visits to the gynecologist, and the subjective experience concerning the quality of the information received from the healthcare provider. AB was present significantly more frequently in the group of participants with irregular checkups during pregnancy compared to the group with regular checkups. The prevalence of AB was higher in those women who had irregular prenatal checkups. Maternal behaviors associated with the risk of urinary infections are more frequent among women with irregular prenatal care. Results of the present study emphasize the importance of regular prenatal care in AB prevention.
Gauchard, Gérome C; Gangloff, Pierre; Jeandel, Claude; Perrin, Philippe P
2003-09-01
Balance disorders increase considerably with age due to a decrease in posture regulation quality, and are accompanied by a higher risk of falling. Conversely, physical activities have been shown to improve the quality of postural control in elderly individuals and decrease the number of falls. The aim of this study was to evaluate the impact of two types of exercise on the visual afferent and on the different parameters of static balance regulation. Static postural control was evaluated in 44 healthy women aged over 60 years. Among them, 15 regularly practiced proprioceptive physical activities (Group I), 12 regularly practiced bioenergetic physical activities (Group II), and 18 controls walked on a regular basis (Group III). Group I participants displayed lower sway path and area values, whereas Group III participants displayed the highest, both in eyes-open and eyes-closed conditions. Group II participants displayed intermediate values, close to those of Group I in the eyes-open condition and those of Group III in the eyes-closed condition. Visual afferent contribution was more pronounced for Group II and III participants than for Group I participants. Proprioceptive exercise appears to have the best impact on balance regulation and precision. Besides, even if bioenergetic activity improves postural control in simple postural tasks, more difficult postural tasks show that this type of activity does not develop a neurosensorial proprioceptive input threshold as well, probably on account of the higher contribution of visual afferent.
A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.
Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong
2015-12-01
Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when the graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which leads to the increase of data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on the cost function, a class of novel graph regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that in the learning, the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are utilized to present the efficiency of the new algorithms. Last, the clustering accuracies of different algorithms are also investigated, which shows that the proposed algorithms can achieve state-of-the-art performance in applications of image clustering.
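A hedged sketch of the classical graph-regularized NMF multiplicative updates (the paper's improved cost function and its update rules differ; the graph construction below is also an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.abs(rng.normal(size=(30, 20)))      # nonnegative data matrix
k, lam, eps = 4, 0.1, 1e-12

# Simple heat-kernel affinity graph on the data columns.
d2 = ((X.T[:, None, :] - X.T[None, :, :]) ** 2).sum(-1)
A = np.exp(-d2 / d2.mean())
np.fill_diagonal(A, 0.0)
D = np.diag(A.sum(1))                      # degree matrix (L = D - A)

U = np.abs(rng.normal(size=(30, k)))
V = np.abs(rng.normal(size=(20, k)))
err0 = np.linalg.norm(X - U @ V.T)
for _ in range(200):
    # Multiplicative updates keep U, V nonnegative by construction.
    U *= (X @ V) / (U @ (V.T @ V) + eps)
    V *= (X.T @ U + lam * A @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
err = np.linalg.norm(X - U @ V.T)
print(err0, err)
```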
On the theory of drainage area for regular and non-regular points
Bonetti, S.; Bragg, A. D.; Porporato, A.
2018-03-01
The drainage area is an important, non-local property of a landscape, which controls surface and subsurface hydrological fluxes. Its role in numerous ecohydrological and geomorphological applications has given rise to several numerical methods for its computation. However, its theoretical analysis has lagged behind. Only recently, an analytical definition for the specific catchment area was proposed (Gallant & Hutchinson. 2011 Water Resour. Res. 47, W05535. (doi:10.1029/2009WR008540)), with the derivation of a differential equation whose validity is limited to regular points of the watershed. Here, we show that such a differential equation can be derived from a continuity equation (Chen et al. 2014 Geomorphology 219, 68-86. (doi:10.1016/j.geomorph.2014.04.037)) and extend the theory to critical and singular points both by applying Gauss's theorem and by means of a dynamical systems approach to define basins of attraction of local surface minima. Simple analytical examples as well as applications to more complex topographic surfaces are examined. The theoretical description of topographic features and properties, such as the drainage area, channel lines and watershed divides, can be broadly adopted to develop and test the numerical algorithms currently used in digital terrain analysis for the computation of the drainage area, as well as for the theoretical analysis of landscape evolution and stability.
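The numerical methods mentioned above typically accumulate contributing area cell by cell; a minimal steepest-descent (D8) sketch, which handles only regular points and leaves flats and pits as sinks, is:

```python
import numpy as np

def d8_drainage_area(z):
    """Drainage area on a gridded DEM via steepest-descent (D8) routing.

    Cells are visited from highest to lowest elevation and each passes
    its accumulated area to its lowest neighbour; flats and pits (the
    non-regular points discussed above) are simply left as sinks here.
    """
    n, m = z.shape
    area = np.ones_like(z, dtype=float)    # unit contributing area per cell
    for idx in np.argsort(-z, axis=None):
        i, j = divmod(idx, m)
        nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di or dj) and 0 <= i + di < n and 0 <= j + dj < m]
        lo = min(nbrs, key=lambda p: z[p])
        if z[lo] < z[i, j]:                # a regular (downslope) point
            area[lo] += area[i, j]
    return area

z = np.arange(5, 0, -1, dtype=float)[:, None]   # a one-column slope
print(d8_drainage_area(z).ravel())              # -> [1. 2. 3. 4. 5.]
```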
Automatic Constraint Detection for 2D Layout Regularization.
Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter
2016-08-01
In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
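A tiny special case of this detect-then-enforce idea: group near-equal edge coordinates (detected alignment constraints) and snap each group to its mean, which is the least-squares solution of the corresponding equality-constrained quadratic program. The tolerance and data below are invented for illustration:

```python
import numpy as np

def snap_coords(xs, tol=5.0):
    """Detect near-equal coordinates and enforce alignment by snapping
    each detected group to its mean (the least-squares answer)."""
    xs = np.asarray(xs, dtype=float)
    order = np.argsort(xs)
    out = xs.copy()
    group = [order[0]]
    for a, b in zip(order, order[1:]):
        if xs[b] - xs[a] <= tol:           # same alignment constraint
            group.append(b)
        else:
            out[group] = xs[group].mean()
            group = [b]
    out[group] = xs[group].mean()
    return out

left_edges = [10.0, 12.0, 11.0, 50.0, 52.0, 100.0]
print(snap_coords(left_edges))   # snaps to [11, 11, 11, 51, 51, 100]
```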
Regularized multivariate regression models with skew-t error distributions
Chen, Lianfu; Pourahmadi, Mohsen; Maadooliat, Mehdi
2014-01-01
We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both
A Regularized Algorithm for the Proximal Split Feasibility Problem
Directory of Open Access Journals (Sweden)
Zhangsong Yao
2014-01-01
Full Text Available The proximal split feasibility problem is studied. A regularized method is presented for solving it, and a strong convergence theorem is given.
Anaemia in Patients with Diabetes Mellitus attending regular ...
African Journals Online (AJOL)
Anaemia in Patients with Diabetes Mellitus attending regular Diabetic ... Nigerian Journal of Health and Biomedical Sciences ... some patients may omit important food items in their daily diet for fear of increasing their blood sugar level.
Automatic Constraint Detection for 2D Layout Regularization
Jiang, Haiyong
2015-09-18
In this paper, we address the problem of constraint detection for layout regularization. As layout we consider a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate the layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm to automatically detect constraints. In our results, we evaluate the proposed framework on a variety of input layouts from different applications, which demonstrates our method has superior performance to the state of the art.
Body composition, disordered eating and menstrual regularity in a ...
African Journals Online (AJOL)
Body composition, disordered eating and menstrual regularity in a group of South African ... e between body composition and disordered eating in irregular vs normal menstruating athletes. ... measured by air displacement plethysmography.
A new approach to nonlinear constrained Tikhonov regularization
Ito, Kazufumi; Jin, Bangti
2011-01-01
operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a
Supporting primary school teachers in differentiating in the regular classroom
Eysink, Tessa H.S.; Hulsbeek, Manon; Gijlers, Hannie
Many primary school teachers experience difficulties in effectively differentiating in the regular classroom. This study investigated the effect of the STIP-approach on teachers' differentiation activities and self-efficacy, and children's learning outcomes and instructional value. Teachers using
Lavrentiev regularization method for nonlinear ill-posed problems
International Nuclear Information System (INIS)
Kinh, Nguyen Van
2002-10-01
In this paper we shall be concerned with the Lavrentiev regularization method to reconstruct solutions x_0 of nonlinear ill-posed problems F(x) = y_0, where instead of y_0 noisy data y_δ ∈ X with ||y_δ - y_0|| ≤ δ are given and F: X → X is an accretive nonlinear operator from a real reflexive Banach space X into itself. In this regularization method, regularized solutions x_α^δ are obtained by solving the singularly perturbed nonlinear operator equation F(x) + α(x - x*) = y_δ with some initial guess x*. Assuming certain conditions concerning the operator F and the smoothness of the element x* - x_0, we derive stability estimates which show that the accuracy of the regularized solutions is order optimal provided that the regularization parameter α has been chosen properly. (author)
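A one-dimensional toy version of the perturbed equation F(x) + α(x - x*) = y_δ, with F(x) = x³ (monotone on the real line) solved by Newton's method and α tied to the noise level, illustrates the scheme; the operator and parameter choice are assumptions for this sketch only:

```python
def lavrentiev(F, dF, y_delta, alpha, x_star=0.0, iters=50):
    """Solve F(x) + alpha*(x - x_star) = y_delta by Newton's method."""
    x = x_star
    for _ in range(iters):
        g = F(x) + alpha * (x - x_star) - y_delta
        # dF >= 0 and alpha > 0, so the Newton denominator never vanishes.
        x -= g / (dF(x) + alpha)
    return x

F, dF = lambda x: x ** 3, lambda x: 3 * x ** 2   # monotone toy operator
x0 = 2.0                                          # exact solution of F(x) = 8
for delta in (1e-1, 1e-2, 1e-4):
    y_delta = F(x0) + delta                       # noisy data
    x_reg = lavrentiev(F, dF, y_delta, alpha=delta ** 0.5)
    print(delta, abs(x_reg - x0))                 # error shrinks with delta
```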
Regularized plane-wave least-squares Kirchhoff migration
Wang, Xin; Dai, Wei; Schuster, Gerard T.
2013-01-01
A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity
Automatic Constraint Detection for 2D Layout Regularization
Jiang, Haiyong; Nan, Liangliang; Yan, Dongming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter
2015-01-01
plans or images, such as floor plans and facade images, and for the improvement of user created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing
10 CFR 830.202 - Safety basis.
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Safety basis. 830.202 Section 830.202 Energy DEPARTMENT OF ENERGY NUCLEAR SAFETY MANAGEMENT Safety Basis Requirements § 830.202 Safety basis. (a) The contractor responsible for a hazard category 1, 2, or 3 DOE nuclear facility must establish and maintain the safety basis...
Regularization method for solving the inverse scattering problem
International Nuclear Information System (INIS)
Denisov, A.M.; Krylov, A.S.
1985-01-01
The inverse scattering problem for the Schroedinger radial equation consisting in determining the potential according to the scattering phase is considered. The problem of potential restoration according to the phase specified with fixed error in a finite range is solved by the regularization method based on minimization of the Tikhonov's smoothing functional. The regularization method is used for solving the problem of neutron-proton potential restoration according to the scattering phases. The determined potentials are given in the table
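Minimization of the Tikhonov smoothing functional in the simplest linear, discretized setting reduces to a regularized normal equation; the toy forward operator below (a Hilbert matrix, not the scattering problem itself) is an assumption chosen only because it is badly ill-conditioned:

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Minimizer of ||A x - y||^2 + alpha * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Ill-conditioned toy problem: a Hilbert matrix as forward operator.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(0)
y = A @ x_true + 1e-6 * rng.normal(size=n)       # data with a fixed error

x_naive = np.linalg.solve(A, y)                  # noise gets amplified
x_reg = tikhonov(A, y, alpha=1e-8)               # regularized reconstruction
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```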
Viscous Regularization of the Euler Equations and Entropy Principles
Guermond, Jean-Luc
2014-03-11
This paper investigates a general class of viscous regularizations of the compressible Euler equations. A unique regularization is identified that is compatible with all the generalized entropies, à la [Harten et al., SIAM J. Numer. Anal., 35 (1998), pp. 2117-2127], and satisfies the minimum entropy principle. A connection with a recently proposed phenomenological model by [H. Brenner, Phys. A, 370 (2006), pp. 190-224] is made. © 2014 Society for Industrial and Applied Mathematics.
Dimensional versus lattice regularization within Luescher's Yang Mills theory
International Nuclear Information System (INIS)
Diekmann, B.; Langer, M.; Schuette, D.
1993-01-01
It is pointed out that the coefficients of Luescher's effective model space Hamiltonian, which is based upon dimensional regularization techniques, can be reproduced by applying folded diagram perturbation theory to the Kogut-Susskind Hamiltonian and by performing a lattice continuum limit (keeping the volume fixed). Alternative cutoff regularizations of the Hamiltonian are in general inconsistent, the critical point being the correct prediction for Luescher's tadpole coefficient, which is formally quadratically divergent and which has to become a well defined (negative) number. (orig.)
Left regular bands of groups of left quotients
International Nuclear Information System (INIS)
El-Qallali, A.
1988-10-01
A semigroup S which has a left regular band of groups as a semigroup of left quotients is shown to be the semigroup which is a left regular band of right reversible cancellative semigroups. An alternative characterization is provided by using spinned products. These results are applied to the case where S is a superabundant whose set of idempotents forms a left normal band. (author). 13 refs
Human visual system automatically encodes sequential regularities of discrete events.
Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki
2010-06-01
For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential
Estimation of the global regularity of a multifractional Brownian motion
DEFF Research Database (Denmark)
Lebovits, Joachim; Podolskij, Mark
This paper presents a new estimator of the global regularity index of a multifractional Brownian motion. Our estimation method is based upon a ratio statistic, which compares the realized global quadratic variation of a multifractional Brownian motion at two different frequencies. We show that a logarithmic transformation of this statistic converges in probability to the minimum of the Hurst functional parameter, which is, under weak assumptions, identical to the global regularity index of the path.
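The two-frequency quadratic-variation idea can be sketched in the constant-Hurst case: since E[(X_{t+h} - X_t)²] ~ h^{2H}, the lag-2 quadratic variation over the lag-1 one behaves like 2^{2H-1} (a factor 1/2 comes from having half as many increments), so H = 0.5·log2(2·QV2/QV1). This is a simplified illustration, not the paper's estimator:

```python
import numpy as np

def global_regularity(path):
    """Estimate the regularity (Hurst) index from the ratio of realized
    quadratic variations at two sampling frequencies."""
    inc1 = np.diff(path)                   # full-frequency increments
    inc2 = path[2::2] - path[:-2:2]        # half-frequency increments
    qv1 = (inc1 ** 2).sum()
    qv2 = (inc2 ** 2).sum()
    return 0.5 * np.log2(2.0 * qv2 / qv1)

# Sanity check on ordinary Brownian motion, whose regularity index is 1/2.
rng = np.random.default_rng(2)
bm = np.concatenate([[0.0], rng.normal(0.0, 0.01, 200_000).cumsum()])
print(global_regularity(bm))               # close to 0.5
```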
Regularization of the quantum field theory of charges and monopoles
International Nuclear Information System (INIS)
Panagiotakopoulos, C.
1981-09-01
A gauge invariant regularization procedure for quantum field theories of electric and magnetic charges based on Zwanziger's local formulation is proposed. The bare regularized full Green's functions of gauge invariant operators are shown to be Lorentz invariant. This would have as a consequence the Lorentz invariance of the finite Green's functions that might result after any reasonable subtraction if such a subtraction can be found. (author)
Borderline personality disorder and regularly drinking alcohol before sex.
Thompson, Ronald G; Eaton, Nicholas R; Hu, Mei-Chen; Hasin, Deborah S
2017-07-01
Drinking alcohol before sex increases the likelihood of engaging in unprotected intercourse, having multiple sexual partners and becoming infected with sexually transmitted infections. Borderline personality disorder (BPD), a complex psychiatric disorder characterised by pervasive instability in emotional regulation, self-image, interpersonal relationships and impulse control, is associated with substance use disorders and sexual risk behaviours. However, no study has examined the relationship between BPD and drinking alcohol before sex in the USA. This study examined the association between BPD and regularly drinking before sex in a nationally representative adult sample. Participants were 17 491 sexually active drinkers from Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions. Logistic regression models estimated effects of BPD diagnosis, specific borderline diagnostic criteria and BPD criterion count on the likelihood of regularly (mostly or always) drinking alcohol before sex, adjusted for controls. Borderline personality disorder diagnosis doubled the odds of regularly drinking before sex [adjusted odds ratio (AOR) = 2.26; confidence interval (CI) = 1.63, 3.14]. Of nine diagnostic criteria, impulsivity in areas that are self-damaging remained a significant predictor of regularly drinking before sex (AOR = 1.82; CI = 1.42, 2.35). The odds of regularly drinking before sex increased by 20% for each endorsed criterion (AOR = 1.20; CI = 1.14, 1.27). DISCUSSION AND CONCLUSIONS: This is the first study to examine the relationship between BPD and regularly drinking alcohol before sex in the USA. Substance misuse treatment should assess regularly drinking before sex, particularly among patients with BPD, and BPD treatment should assess risk at the intersection of impulsivity, sexual behaviour and substance use. [Thompson Jr RG, Eaton NR, Hu M-C, Hasin DS Borderline personality disorder and regularly drinking alcohol
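The reported AORs come from adjusted logistic regression; as a minimal illustration of what such a ratio means, an unadjusted odds ratio with a Wald 95% CI can be computed from a 2x2 table (the counts below are hypothetical, not the study's data):

```python
from math import log, exp, sqrt

def odds_ratio_ci(a, b, c, d):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
         exposed cases a, exposed non-cases b,
       unexposed cases c, unexposed non-cases d."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)     # SE of log odds ratio
    lo = exp(log(or_) - 1.96 * se)
    hi = exp(log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: regularly drinking before sex by BPD diagnosis.
print(odds_ratio_ci(120, 380, 900, 6400))
```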
The Impact of Computerization on Regular Employment (Japanese)
SUNADA Mitsuru; HIGUCHI Yoshio; ABE Masahiro
2004-01-01
This paper uses micro data from the Basic Survey of Japanese Business Structure and Activity to analyze the effects of companies' introduction of information and telecommunications technology on employment structures, especially regular versus non-regular employment. Firstly, examination of trends in the ratio of part-time workers recorded in the Basic Survey shows that part-time worker ratios in manufacturing firms are rising slightly, but that companies with a high proportion of part-timers...
Analytic regularization of the Yukawa model at finite temperature
International Nuclear Information System (INIS)
Malbouisson, A.P.C.; Svaiter, N.F.; Svaiter, B.F.
1996-07-01
The one-loop fermionic contribution to the scalar effective potential in the temperature-dependent Yukawa model is analysed. In order to regularize the model, a mix of dimensional and analytic regularization procedures is used. A general expression for the fermionic contribution in arbitrary spacetime dimension is found. It is also found that in D = 3 this contribution is finite. (author). 19 refs
Internal dosimetry technical basis manual
Energy Technology Data Exchange (ETDEWEB)
1990-12-20
The internal dosimetry program at the Savannah River Site (SRS) consists of radiation protection programs and activities used to detect and evaluate intakes of radioactive material by radiation workers. Examples of such programs are: air monitoring; surface contamination monitoring; personal contamination surveys; radiobioassay; and dose assessment. The objectives of the internal dosimetry program are to demonstrate that the workplace is under control and that workers are not being exposed to radioactive material, and to detect and assess inadvertent intakes in the workplace. The Savannah River Site Internal Dosimetry Technical Basis Manual (TBM) is intended to provide a technical and philosophical discussion of the radiobioassay and dose assessment aspects of the internal dosimetry program. Detailed information on air, surface, and personal contamination surveillance programs is not given in this manual except for how these programs interface with routine and special bioassay programs.
FLAMMABLE GAS TECHNICAL BASIS DOCUMENT
Energy Technology Data Exchange (ETDEWEB)
KRIPPS, L.J.
2005-02-18
This document describes the qualitative evaluation of frequency and consequences for double shell tank (DST) and single shell tank (SST) representative flammable gas accidents and associated hazardous conditions without controls. The evaluation indicated that safety-significant SSCs and/or TSRs were required to prevent or mitigate flammable gas accidents. Discussion of the resulting control decisions is included. This technical basis document was developed to support the Tank Farms Documented Safety Analysis (DSA) and describes the risk binning process for the flammable gas representative accidents and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous condition based on an evaluation of the event frequency and consequence.
FLAMMABLE GAS TECHNICAL BASIS DOCUMENT
Energy Technology Data Exchange (ETDEWEB)
KRIPPS, L.J.
2005-03-03
This document describes the qualitative evaluation of frequency and consequences for DST and SST representative flammable gas accidents and associated hazardous conditions without controls. The evaluation indicated that safety-significant structures, systems and components (SSCs) and/or technical safety requirements (TSRs) were required to prevent or mitigate flammable gas accidents. Discussion on the resulting control decisions is included. This technical basis document was developed to support WP-13033, Tank Farms Documented Safety Analysis (DSA), and describes the risk binning process for the flammable gas representative accidents and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous condition based on an evaluation of the event frequency and consequence.
Internal dosimetry technical basis manual
International Nuclear Information System (INIS)
1990-01-01
The internal dosimetry program at the Savannah River Site (SRS) consists of radiation protection programs and activities used to detect and evaluate intakes of radioactive material by radiation workers. Examples of such programs are: air monitoring; surface contamination monitoring; personal contamination surveys; radiobioassay; and dose assessment. The objectives of the internal dosimetry program are to demonstrate that the workplace is under control and that workers are not being exposed to radioactive material, and to detect and assess inadvertent intakes in the workplace. The Savannah River Site Internal Dosimetry Technical Basis Manual (TBM) is intended to provide a technical and philosophical discussion of the radiobioassay and dose assessment aspects of the internal dosimetry program. Detailed information on air, surface, and personal contamination surveillance programs is not given in this manual except for how these programs interface with routine and special bioassay programs.
The neurological basis of occupation.
Gutman, Sharon A; Schindler, Victoria P
2007-01-01
The purpose of the present paper was to survey the literature about the neurological basis of human activity and its relationship to occupation and health. Activities related to neurological function were organized into three categories: those that activate the brain's reward system; those that promote the relaxation response; and those that preserve cognitive function into old age. The results from the literature review correlating neurological evidence and activities showed that purposeful and meaningful activities could counter the effects of stress-related diseases and reduce the risk for dementia. Specifically, it was found that music, drawing, meditation, reading, arts and crafts, and home repairs, for example, can stimulate the neurological system and enhance health and well-being. Prospective research studies are needed to examine the effects of purposeful activities on reducing stress and slowing the rate of cognitive decline.
Infective basis in childhood leukaemia
International Nuclear Information System (INIS)
Kinlen, Leo
1995-01-01
Leukaemia in children has often been suspected of having an infective basis (as specifically identified in certain animal species) but, until recently, formal studies had gone no further than to show that it does not markedly cluster in time and space. Many infective illnesses, however, are uncommon responses to infections that are mainly spread by the majority of infected individuals who are not ill. These include, for example, glandular fever and certain types of meningitis. Such illnesses are not contagious and do not normally cluster. The possibilities that childhood leukaemia might belong to this class of infective illnesses and be subject to increases in incidence as a result of epidemics of an underlying infection were suggested by the well-known excesses near Sellafield and Dounreay. (author)
Molecular basis for mitochondrial signaling
2017-01-01
This book covers recent advances in the study of structure, function, and regulation of metabolite, protein and ion translocating channels, and transporters in mitochondria. A wide array of cutting-edge methods are covered, ranging from electrophysiology and cell biology to bioinformatics, as well as structural, systems, and computational biology. At last, the molecular identity of two important channels in the mitochondrial inner membrane, the mitochondrial calcium uniporter and the mitochondrial permeability transition pore have been established. After years of work on the physiology and structure of VDAC channels in the mitochondrial outer membrane, there have been multiple discoveries on VDAC permeation and regulation by cytosolic proteins. Recent breakthroughs in structural studies of the mitochondrial cholesterol translocator reveal a set of novel unexpected features and provide essential clues for defining therapeutic strategies. Molecular Basis for Mitochondrial Signaling covers these and many more re...
Image degradation characteristics and restoration based on regularization for diffractive imaging
Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun
2017-11-01
Diffractive membrane optical imaging systems are an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based diffractive imaging degradation characteristics and the corresponding image restoration methods remain little studied. In this paper, the model of image quality degradation for the diffractive imaging system is first deduced mathematically from diffraction theory, and the degradation characteristics are then analyzed. On this basis, a novel regularization model of image restoration containing multiple prior constraints is established. An approach is then presented for solving the resulting equation, in which multiple norms coexist and multiple regularization (prior) parameters appear. Subsequently, a space-variant PSF image restoration method for large-aperture diffractive imaging systems is proposed, combined with a blockwise treatment of isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and detail preservation, producing satisfactory visual quality. This provides a scientific basis for applications and shows promise for future space applications of diffractive membrane imaging technology.
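The frequency-domain intuition behind such regularized restoration can be sketched with a one-dimensional Tikhonov deblurring example. The Gaussian kernel, signal, sizes, and noise level below are illustrative stand-ins, not the paper's space-variant multi-prior model; a single quadratic prior takes the place of the multiple constraints described above:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
# Hypothetical 1-D scene and Gaussian blur kernel (stand-ins for the
# 2-D diffractive PSF discussed in the abstract)
x = np.zeros(n); x[20:28] = 1.0; x[40] = 2.0
t = np.arange(n) - n // 2
psf = np.exp(-(t / 3.0) ** 2); psf /= psf.sum()

# Circular convolution via the FFT, plus additive noise
H = np.fft.fft(np.fft.ifftshift(psf))
y = np.real(np.fft.ifft(H * np.fft.fft(x))) + 1e-3 * rng.normal(size=n)

# Tikhonov-regularized inverse filter: small |H| frequencies are damped
# instead of amplified; lam is an illustrative fixed parameter
lam = 1e-3
Y = np.fft.fft(y)
x_hat = np.real(np.fft.ifft(np.conj(H) * Y / (np.abs(H) ** 2 + lam)))

naive = np.real(np.fft.ifft(Y / H))  # unregularized inverse: noise blow-up
print(np.linalg.norm(x_hat - x), np.linalg.norm(naive - x))
```

The unregularized inverse divides the noise by near-zero transfer-function values at high frequencies, which is exactly the instability a regularization term suppresses.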
The relationship between lifestyle regularity and subjective sleep quality
Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.
2003-01-01
In previous work we have developed a diary instrument-the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity-and a questionnaire instrument--the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed. There was a significant negative correlation (rho = -0.4): subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, where the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.
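The kind of rank-order analysis reported above can be illustrated with a Spearman correlation on synthetic data. The scores, sample size, and effect size below are simulated stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# Hypothetical SRM-17 lifestyle-regularity scores (higher = more regular)
srm = rng.uniform(2.0, 6.0, size=n)
# Simulated PSQI scores constructed to worsen (rise) as regularity falls
psqi = 10.0 - 1.2 * srm + rng.normal(0.0, 1.5, size=n)

def spearman_rho(a, b):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    ra = np.argsort(np.argsort(a))  # ranks (no ties for continuous data)
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rho = spearman_rho(srm, psqi)
print(f"rho = {rho:.2f}")  # negative: more regularity, fewer sleep problems
```

Because PSQI scores increase with worse sleep, a negative rho corresponds to the reported finding that more regular subjects sleep better.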
Regularity criteria for incompressible magnetohydrodynamics equations in three dimensions
International Nuclear Information System (INIS)
Lin, Hongxia; Du, Lili
2013-01-01
In this paper, we give some new global regularity criteria for three-dimensional incompressible magnetohydrodynamics (MHD) equations. More precisely, we provide some sufficient conditions in terms of the derivatives of the velocity or pressure, for the global regularity of strong solutions to 3D incompressible MHD equations in the whole space, as well as for periodic boundary conditions. Moreover, the regularity criterion involving three of the nine components of the velocity gradient tensor is also obtained. The main results generalize the recent work by Cao and Wu (2010 Two regularity criteria for the 3D MHD equations J. Diff. Eqns 248 2263–74) and the analysis in part is based on the works by Cao C and Titi E (2008 Regularity criteria for the three-dimensional Navier–Stokes equations Indiana Univ. Math. J. 57 2643–61; 2011 Global regularity criterion for the 3D Navier–Stokes equations involving one entry of the velocity gradient tensor Arch. Rational Mech. Anal. 202 919–32) for 3D incompressible Navier–Stokes equations. (paper)
Geostatistical regularization operators for geophysical inverse problems on irregular meshes
Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA
2018-05-01
Irregular meshes make it possible to include complicated subsurface structures in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are only defined using the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D surface synthetic electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results compared to the anisotropic smoothness constraints.
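A minimal sketch of the covariance-eigendecomposition idea, assuming an exponential correlation model on hypothetical cell centroids; the mesh, correlation length, and matrix sizes are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical centroids of an irregular 2-D mesh (one row per cell)
centroids = rng.uniform(0.0, 100.0, size=(50, 2))

# Exponential correlation model; the correlation length is illustrative
corr_length = 20.0
d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
C = np.exp(-d / corr_length)  # a priori covariance (unit variance)

# Eigendecomposition gives a symmetric square root, from which a
# regularization operator W = C^(-1/2) can be formed so that
# ||W m||^2 = m^T C^(-1) m penalizes geostatistically implausible models.
eigval, eigvec = np.linalg.eigh(C)
eigval = np.clip(eigval, 1e-10, None)  # guard against round-off
W = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T

# Sanity check: W^T W applied to C should reproduce the identity
err = np.linalg.norm(W.T @ W @ C - np.eye(50))
print(f"reconstruction error: {err:.2e}")
```

Because the covariance couples every pair of cells within the correlation length, the resulting operator is dense, unlike nearest-neighbour smoothness stencils.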
Bounded Perturbation Regularization for Linear Least Squares Estimation
Ballal, Tarig
2017-10-18
This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2-regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
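For context, the ℓ2-regularized least-squares problem that the BPR solution converges to can be sketched as follows. The fixed regularizer used here is an illustrative placeholder; choosing it (e.g. via the MSE criterion) is precisely what the paper's algorithm addresses:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 60, 30
A = rng.normal(size=(m, n))
# Worsen the singular-value structure to mimic an ill-conditioned problem
U, s, Vt = np.linalg.svd(A, full_matrices=False)
s *= np.logspace(0, -6, n)                  # rapidly decaying spectrum
A = U @ np.diag(s) @ Vt

x_true = rng.normal(size=n)
y = A @ x_true + 1e-3 * rng.normal(size=m)  # noisy observations

def ridge(A, y, gamma):
    """Solve min_x ||A x - y||^2 + gamma ||x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + gamma * np.eye(A.shape[1]), A.T @ y)

x_ls = ridge(A, y, 0.0)    # plain least squares: noise hits small singular values
x_reg = ridge(A, y, 1e-4)  # regularized: small singular values are damped
print(np.linalg.norm(x_ls - x_true), np.linalg.norm(x_reg - x_true))
```

Even this crude fixed choice of the regularizer dramatically reduces the estimation error relative to the unregularized solution, which is what makes the parameter-selection problem worth solving carefully.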
Near-Regular Structure Discovery Using Linear Programming
Huang, Qixing
2014-06-02
Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements, as well as in the structured arrangement of the elements. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.
Regularity in an environment produces an internal torque pattern for biped balance control.
Ito, Satoshi; Kawasaki, Haruhisa
2005-04-01
In this paper, we present a control method for achieving biped static balance under periodic external forces of which only the periods are known. In order to maintain static balance adaptively in an uncertain environment, it is essential to have information on the ground reaction forces. However, when the biped is exposed to a steady environment that provides an external force periodically, uncertain factors in the regularity of that environment are gradually clarified through a learning process, and finally a torque pattern for the balancing motion is acquired. Consequently, static balance is maintained without feedback from ground reaction forces and achieved in a feedforward manner.
A study on regularization parameter choice in near-field acoustical holography
DEFF Research Database (Denmark)
Gomes, Jesper; Hansen, Per Christian
2008-01-01
a regularization parameter. These parameter choice methods (PCMs) are attractive, since they require no a priori knowledge about the noise. However, there seems to be no clear understanding of when one PCM is better than the other. This paper presents comparisons of three PCMs: GCV, L-curve and Normalized......), and the Equivalent Source Method (ESM). All combinations of the PCMs and the NAH methods are investigated using simulated measurements with different types of noise added to the input. Finally, the comparisons are carried out for a practical experiment. The aim of this work is to create a better understanding...... of which mechanisms affect the performance of the different PCMs....
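A sketch of one of the compared PCMs, GCV, for a standard-form Tikhonov problem evaluated in the SVD basis; the test problem, spectrum, and noise level are synthetic stand-ins, not the NAH configurations studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 60, 40
# Synthetic ill-posed problem with a rapidly decaying singular spectrum
U, _ = np.linalg.qr(rng.normal(size=(m, n)), mode="reduced")
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.logspace(0, -5, n)
A = U @ np.diag(s) @ V.T

x_true = V @ np.exp(-np.arange(n) / 5.0)     # smooth "true" model
b = A @ x_true + 1e-4 * rng.normal(size=m)   # noisy data

beta = U.T @ b                               # data in the SVD basis
rho0 = np.sum((b - U @ beta) ** 2)           # residual outside the range of A

def gcv(lam):
    """GCV function for standard-form Tikhonov regularization."""
    f = s**2 / (s**2 + lam**2)               # Tikhonov filter factors
    resid = np.sum(((1.0 - f) * beta) ** 2) + rho0
    return resid / (m - np.sum(f)) ** 2

lams = np.logspace(-7, 0, 200)
g = np.array([gcv(l) for l in lams])
lam_gcv = lams[np.argmin(g)]
print(f"GCV-selected regularization parameter: {lam_gcv:.2e}")
```

GCV needs no noise-level estimate, which is the attraction noted above; the L-curve and NCP methods discussed in the paper trade this property against different failure modes.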
A Domain-Specific Language for Regular Sets of Strings and Trees
DEFF Research Database (Denmark)
Schwartzbach, Michael Ignatieff; Klarlund, Nils
1999-01-01
We propose a new high-level programming notation, called FIDO, that we have designed to concisely express regular sets of strings or trees. In particular, it can be viewed as a domain-specific language for the expression of finite-state automata on large alphabets (of sometimes astronomical size......, called the Monadic Second-order Logic (M2L) on trees. FIDO is translated first into pure M2L via suitable encodings, and finally into finite-state automata through the MONA tool....
Fibonacci-regularization method for solving Cauchy integral equations of the first kind
Directory of Open Access Journals (Sweden)
Mohammad Ali Fariborzi Araghi
2017-09-01
Full Text Available In this paper, a novel scheme is proposed to solve the first kind Cauchy integral equation over a finite interval. For this purpose, the regularization method is considered. Then, the collocation method with Fibonacci basis functions is applied to solve the obtained second kind singular integral equation. Also, the error estimate of the proposed scheme is discussed. Finally, some sample Cauchy integral equations stemming from the theory of airfoils in fluid mechanics are presented and solved to illustrate the importance and applicability of the given algorithm. The tables in the examples show the efficiency of the method.