WorldWideScience

Sample records for bright point model

  1. Bright point study

    International Nuclear Information System (INIS)

    Tang, F.; Harvey, K.; Bruner, M.; Kent, B.; Antonucci, E.

    1982-01-01

    Transition region and coronal observations of bright points by instruments aboard the Solar Maximum Mission and high-resolution photospheric magnetograph observations on September 11, 1980 are presented. A total of 31 bipolar ephemeral regions were followed from birth in the photosphere during 9.3 hours of combined magnetograph observations from three observatories. Two of the three ephemeral regions present in the field of view of the Ultraviolet Spectrometer-Polarimeter were observed in the C IV 1548 Å line. The unobserved ephemeral region was determined to be the shortest-lived (2.5 hr) and lowest in magnetic flux density (13 G) of the three regions. The Flat Crystal Spectrometer observed only low-level signals in the O VIII 18.969 Å line, which were not statistically significant enough to be positively identified with any of the 16 ephemeral regions detected in the photosphere. In addition, the data indicate that at any given time there was no one-to-one correspondence between observable bright points and photospheric ephemeral regions, while more ephemeral regions were observed than their counterparts in the transition region and the corona.

  2. Proxy magnetometry of the photosphere: why are G-band bright points so bright?

    NARCIS (Netherlands)

    Rutten, R.J.; Kiselman, Dan; Voort, Luc Rouppe van der; Plez, Bertrand

    2000-01-01

    We discuss the formation of G-band bright points in terms of standard flux-tube modeling, in particular the 1D LTE models constructed by Solanki and coworkers. Combined with LTE spectral synthesis they explain observed G-band bright point contrasts quite well. The G-band contrast increase over the

  3. Bright point study. [of solar corona]

    Science.gov (United States)

    Tang, F.; Harvey, K.; Bruner, M.; Kent, B.; Antonucci, E.

    1982-01-01

    Transition region and coronal observations of bright points by instruments aboard the Solar Maximum Mission and high-resolution photospheric magnetograph observations on September 11, 1980 are presented. A total of 31 bipolar ephemeral regions were followed from birth in the photosphere during 9.3 hours of combined magnetograph observations from three observatories. Two of the three ephemeral regions present in the field of view of the Ultraviolet Spectrometer-Polarimeter were observed in the C IV 1548 Å line. The unobserved ephemeral region was determined to be the shortest-lived (2.5 hr) and lowest in magnetic flux density (13 G) of the three regions. The Flat Crystal Spectrometer observed only low-level signals in the O VIII 18.969 Å line, which were not statistically significant enough to be positively identified with any of the 16 ephemeral regions detected in the photosphere. In addition, the data indicate that at any given time there was no one-to-one correspondence between observable bright points and photospheric ephemeral regions, while more ephemeral regions were observed than their counterparts in the transition region and the corona.

  4. High-brightness injector modeling

    International Nuclear Information System (INIS)

    Lewellen, J.W.

    2004-01-01

    There are many aspects to the successful conception, design, fabrication, and operation of high-brightness electron beam sources. Accurate and efficient modeling of the injector is critical to all phases of the process, from evaluating initial ideas to successful diagnosis of problems during routine operation. The basic modeling tasks will vary from design to design, according to the basic nature of the injector (dc, rf, hybrid, etc.), the type of cathode used (thermionic, photo, field emitter, etc.), and 'macro' factors such as average beam current and duty factor, as well as the usual list of desired beam properties. The injector designer must be at least aware of, if not proficient at addressing, the multitude of issues that arise from these considerations; and, as high-brightness injectors continue to move out of the laboratory, the number of such issues will continue to expand.

  5. Differential Rotation via Tracking of Coronal Bright Points.

    Science.gov (United States)

    McAteer, James; Boucheron, Laura E.; Osorno, Marcy

    2016-05-01

    The accurate computation of solar differential rotation is important both as a constraint for, and evidence in support of, models of the solar dynamo. As such, the use of X-ray and extreme-ultraviolet bright points to elucidate differential rotation has been studied in recent years. In this work, we propose the automated detection and tracking of coronal bright points (CBPs) in a large set of SDO data for re-evaluation of solar differential rotation and comparison to other results. The big-data aspects, and high cadence, of SDO data mitigate a few issues common to detection and tracking of objects in image sequences and allow us to focus on the use of CBPs to determine differential rotation. The high cadence of the data allows us to disambiguate individual CBPs between subsequent images by allowing for significant spatial overlap, i.e., by the fact that the CBPs will rotate a short distance relative to their size. The significant spatial overlap minimizes the effects of incorrectly detected CBPs by reducing the occurrence of outlier values of differential rotation. The big-data aspects of the data allow us to be more conservative in our detection of CBPs (i.e., to err on the side of missing CBPs rather than detecting extraneous CBPs) while still maintaining statistically large populations over which to study characteristics. The ability to compute solar differential rotation through the automated detection and tracking of a large population of CBPs will allow for further analyses such as the N-S asymmetry of differential rotation, variation of differential rotation over the solar cycle, and a detailed study of the magnetic flux underlying the CBPs.
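
    As an illustration of the overlap-based disambiguation described above, here is a minimal sketch that associates CBP detections between two consecutive frames; the binary detection masks, 8-connectivity labelling, and function name are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np
from scipy import ndimage

def match_by_overlap(mask_prev, mask_next):
    """Associate coronal bright point detections in two consecutive frames
    by spatial overlap: two detections are taken to be the same CBP if their
    pixel footprints intersect (high cadence keeps displacements small).

    mask_prev, mask_next : 2D boolean arrays of detected CBP pixels.
    Returns sorted (label_prev, label_next) pairs of overlapping regions.
    """
    struct = np.ones((3, 3))                       # 8-connectivity
    lab_prev, _ = ndimage.label(mask_prev, structure=struct)
    lab_next, _ = ndimage.label(mask_next, structure=struct)
    both = (lab_prev > 0) & (lab_next > 0)         # pixels detected in both frames
    pairs = {(int(p), int(n)) for p, n in zip(lab_prev[both], lab_next[both])}
    return sorted(pairs)
```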

  6. Characterizing the Motion of Solar Magnetic Bright Points at High Resolution

    Science.gov (United States)

    Van Kooten, Samuel J.; Cranmer, Steven R.

    2017-11-01

    Magnetic bright points in the solar photosphere, visible in both continuum and G-band images, indicate footpoints of kilogauss magnetic flux tubes extending to the corona. The power spectrum of bright-point motion is thus also the power spectrum of Alfvén wave excitation, transporting energy up flux tubes into the corona. This spectrum is a key input in coronal and heliospheric models. We produce a power spectrum of bright-point motion using radiative magnetohydrodynamic simulations, exploiting spatial resolution higher than can be obtained in present-day observations, while using automated tracking to produce large data quantities. We find slightly higher amounts of power at all frequencies compared to observation-based spectra, while confirming the spectrum shape of recent observations. This also provides a prediction for observations of bright points with DKIST, which will achieve similar resolution and high sensitivity. We also find a granule size distribution in support of an observed two-population distribution, and we present results from tracking passive tracers, which show a similar power spectrum to that of bright points. Finally, we introduce a simplified, laminar model of granulation, with which we explore the roles of turbulence and of the properties of the granulation pattern in determining bright-point motion.
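
    To make the measured quantity concrete, a minimal sketch of turning one tracked bright-point trajectory into a power spectrum of its motion is shown below; the constant cadence, finite-difference velocity estimate, and function name are illustrative assumptions, not the paper's actual analysis code.

```python
import numpy as np
from scipy.signal import periodogram

def motion_power_spectrum(x, y, dt=5.0):
    """Power spectrum of bright-point motion from one tracked trajectory.

    x, y : position samples (e.g. in km) at constant cadence dt (seconds).
    Returns frequencies (Hz) and the summed power of the two velocity
    components, a proxy for the Alfven-wave driving spectrum.
    """
    vx = np.gradient(x, dt)                 # finite-difference velocities
    vy = np.gradient(y, dt)
    freq, pxx = periodogram(vx, fs=1.0 / dt)
    _, pyy = periodogram(vy, fs=1.0 / dt)
    return freq, pxx + pyy
```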

  7. Micro Coronal Bright Points Observed in the Quiet Magnetic Network by SOHO/EIT

    Science.gov (United States)

    Falconer, D. A.; Moore, R. L.; Porter, J. G.

    1997-01-01

    When one looks at SOHO/EIT Fe XII images of quiet regions, one can see the conventional coronal bright points (> 10 arcsec in diameter), but one will also notice many smaller faint enhancements in brightness (Figure 1). Do these micro coronal bright points belong to the same family as the conventional bright points? To investigate this question we compared SOHO/EIT Fe XII images with Kitt Peak magnetograms to determine whether the micro bright points are in the magnetic network and mark magnetic bipoles within the network. To identify the coronal bright points, we applied a picture frame filter to the Fe XII images; this brings out the Fe XII network and bright points (Figure 2) and allows us to study the bright points down to the resolution limit of the SOHO/EIT instrument. This picture frame filter is a square smoothing function (half a network cell wide) with a central square (a quarter of a network cell wide) removed, so that a bright point's intensity does not affect its own background. This smoothing function is applied to the full disk image. Then we divide the original image by the smoothed image to obtain our filtered image. A bright point is defined as any contiguous set of pixels (including diagonally) which have enhancements of 30% or more above the background; a micro bright point is any bright point 16 pixels or smaller in size. We then analyzed the bright points that were fully within quiet regions (0.6 x 0.6 solar radii) centered on disk center on six different days.
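
    The picture-frame filter and the detection criteria described above can be summarized in a short sketch; the box sizes in pixels are placeholders for "half" and "a quarter of a network cell", and the function names are illustrative.

```python
import numpy as np
from scipy import ndimage

def picture_frame_filter(img, outer=32, inner=16):
    """Divide the image by a 'picture frame' background: the mean over an
    outer square box with a central inner box excluded, so a bright point
    does not affect its own background estimate."""
    img = img.astype(float)
    sum_outer = ndimage.uniform_filter(img, outer) * outer**2
    sum_inner = ndimage.uniform_filter(img, inner) * inner**2
    background = (sum_outer - sum_inner) / (outer**2 - inner**2)
    return img / background

def find_bright_points(filtered, enhancement=1.3, micro_max=16):
    """Label contiguous pixels (diagonal neighbours included) at 30% or more
    above the background; regions of <= micro_max pixels are 'micro' BPs."""
    mask = filtered >= enhancement
    labels, n = ndimage.label(mask, structure=np.ones((3, 3)))
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    micro_labels = [i + 1 for i, s in enumerate(sizes) if s <= micro_max]
    return labels, micro_labels
```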

  8. On the Relation Between Facular Bright Points and the Magnetic Field

    Science.gov (United States)

    Berger, Thomas; Shine, Richard; Tarbell, Theodore; Title, Alan; Scharmer, Goran

    1994-12-01

    Multi-spectral images of magnetic structures in the solar photosphere are presented. The images were obtained in the summers of 1993 and 1994 at the Swedish Solar Telescope on La Palma using the tunable birefringent Solar Optical Universal Polarimeter (SOUP filter), a 10 Angstrom wide interference filter tuned to 4304 Angstroms in the band head of the CH radical (the Fraunhofer G-band), and a 3 Angstrom wide interference filter centered on the Ca II K absorption line. Three large-format CCD cameras with shuttered exposures on the order of 10 msec and frame rates of up to 7 frames per second were used to create time series of both quiet and active region evolution. The full field of view is 60 x 80 arcseconds (44 x 58 Mm). With the best seeing, structures as small as 0.22 arcseconds (160 km) in diameter are clearly resolved. Post-processing of the images results in rigid coalignment of the image sets to an accuracy comparable to the spatial resolution. Facular bright points with mean diameters of 0.35 arcseconds (250 km) and elongated filaments with lengths on the order of arcseconds (10^3 km) are imaged with contrast values of up to 60% by the G-band filter. Overlay of these images on cotemporal Fe I 6302 Angstrom magnetograms and Ca II K images reveals that the bright points occur, without exception, on sites of magnetic flux through the photosphere. However, instances of concentrated and diffuse magnetic flux and Ca II K emission without associated bright points are common, leading to the conclusion that the presence of magnetic flux is a necessary but not sufficient condition for the occurrence of resolvable facular bright points. Comparison of the G-band and continuum images shows a complex relation between structures in the two bandwidths: bright points exceeding 350 km in extent correspond to distinct bright structures in the continuum; smaller bright points show no clear relation to continuum structures. Size and contrast statistical cross

  9. Size dependence of upconversion photoluminescence in MPA capped CdTe quantum dots: Existence of upconversion bright point

    International Nuclear Information System (INIS)

    Ananthakumar, S.; Jayabalan, J.; Singh, Asha; Khan, Salahuddin; Babu, S. Moorthy; Chari, Rama

    2016-01-01

    The photoluminescence (PL) from semiconductor quantum dots can show a “PL bright point”, that is, the PL from as-prepared quantum dots is maximum at a particular size. In this work we show that, for CdTe quantum dots, upconversion photoluminescence (UCPL) originating from nonlinear absorption shows a similar “UCPL bright point”. The PL and UCPL bright points occur at nearly the same size. The existence of a UCPL bright point has important implications for upconversion microscopy applications. - Highlights: • The size dependence of the upconversion photoluminescence (UCPL) spectrum of CdTe quantum dots has been reported. • We show that the UCPL from the CdTe quantum dots is highest at a particular size. • Thus the occurrence of a "UCPL bright point" in CdTe quantum dots has been demonstrated. • It has been shown that the UCPL bright point occurs at nearly the same size as the normal PL bright point.

  10. Orientation of coronal bright points and small-scale magnetic bipoles

    International Nuclear Information System (INIS)

    MINENKO, E.P.; SHERDANOV, CH.T.; SATTAROV, I.; KARACHIK, N.V.

    2014-01-01

    Using observations from the Extreme-Ultraviolet Imaging Telescope (EIT) on board SOHO and longitudinal full-disk magnetograms (vector spectromagnetograph - VSM) from the Synoptic Optical Long-Term Investigations of the Sun (SOLIS), we explore the orientation of, and relationship between, coronal bright points at 195 Å (hereafter CBPs) and magnetic bipoles (only for the central zone of the solar disk). The magnetic bipoles are identified as a pair of streams of positive and negative polarity with the shortest distance between them. This paper presents a study of the structure and orientation angles of magnetic bipoles relative to the solar equator for two types of CBPs: 'dim' CBPs in the quiet regions of the Sun and 'bright' CBPs associated with active regions. For the magnetic bipoles associated with 'bright' CBPs, we find that their orientation angles are distributed randomly with respect to the equator. (authors)

  11. Bright points and ejections observed on the sun by the KORONAS-FOTON instrument TESIS

    Science.gov (United States)

    Ulyanov, A. S.; Bogachev, S. A.; Kuzin, S. V.

    2010-10-01

    Five-second-cadence observations of the solar corona carried out in the Fe IX 171 Å line by the KORONAS-FOTON instrument TESIS are used to study the dynamics of small-scale coronal structures emitting in and around coronal bright points. The small-scale structures of the lower corona display complex dynamics similar to those of magnetic loops located at higher levels of the solar corona. Numerous detected oscillating structures with sizes below 10 000 km display oscillation periods from 50 to 350 s. The period distributions of these structures are different for P < 150 s and P > 150 s, which implies that different oscillation modes are excited at different periods. The small-scale structures generate numerous flare-like events with energies of 10^24-10^26 erg (nanoflares) and with a spatial density of one event per arcsecond or more observed over an area of 4 × 10^11 km^2. Nanoflares are not associated with coronal bright points, and almost uniformly cover the solar disk in the observation region. The ejections of solar material from the coronal bright points demonstrate velocities of 80-110 km/s.

  12. Synchronized observations of bright points from the solar photosphere to the corona

    Science.gov (United States)

    Tavabi, Ehsan

    2018-05-01

    One of the most important features in the solar atmosphere is the magnetic network and its relationship to the transition region (TR) and coronal brightness. It is important to understand how energy is transported into the corona and how it travels along the magnetic field lines between the deep photosphere and chromosphere through the TR and corona. An excellent proxy for this transport is provided by Interface Region Imaging Spectrograph (IRIS) raster scans and imaging observations in near-ultraviolet (NUV) and far-ultraviolet (FUV) emission channels, which have high temporal, spectral and spatial resolution. In this study, we focus on the quiet Sun as observed with IRIS. The data with a high signal-to-noise ratio in the Si IV, C II and Mg II k lines and with strong emission intensities show a high correlation with TR bright network points. The results of the IRIS intensity maps and Dopplergrams are compared with those of the Atmospheric Imaging Assembly (AIA) and Helioseismic and Magnetic Imager (HMI) instruments onboard the Solar Dynamics Observatory (SDO). The average network intensity profiles show a strong correlation with AIA coronal channels. Furthermore, we used simultaneous observations of the magnetic network from HMI and found a strong relationship between the network bright points at all levels of the solar atmosphere. These features in the network elements exhibited regions of high Doppler velocity and strong magnetic signatures. The abundance of coronal bright point emission, accompanied by magnetic origins in the photosphere, suggests that magnetic field concentrations in the network rosettes could help to couple the inner and outer solar atmosphere.

  13. Size dependence of upconversion photoluminescence in MPA capped CdTe quantum dots: Existence of upconversion bright point

    Energy Technology Data Exchange (ETDEWEB)

    Ananthakumar, S. [Crystal Growth Centre, Anna University, Chennai 600025 (India); Jayabalan, J., E-mail: jjaya@rrcat.gov.in [Laser Physics Applications Section, Raja Ramanna Centre for Advanced Technology, Indore 452013 (India); Singh, Asha; Khan, Salahuddin [Laser Physics Applications Section, Raja Ramanna Centre for Advanced Technology, Indore 452013 (India); Babu, S. Moorthy [Crystal Growth Centre, Anna University, Chennai 600025 (India); Chari, Rama [Laser Physics Applications Section, Raja Ramanna Centre for Advanced Technology, Indore 452013 (India)

    2016-01-15

    The photoluminescence (PL) from semiconductor quantum dots can show a “PL bright point”, that is, the PL from as-prepared quantum dots is maximum at a particular size. In this work we show that, for CdTe quantum dots, upconversion photoluminescence (UCPL) originating from nonlinear absorption shows a similar “UCPL bright point”. The PL and UCPL bright points occur at nearly the same size. The existence of a UCPL bright point has important implications for upconversion microscopy applications. - Highlights: • The size dependence of the upconversion photoluminescence (UCPL) spectrum of CdTe quantum dots has been reported. • We show that the UCPL from the CdTe quantum dots is highest at a particular size. • Thus the occurrence of a 'UCPL bright point' in CdTe quantum dots has been demonstrated. • It has been shown that the UCPL bright point occurs at nearly the same size as the normal PL bright point.

  14. The formation and disintegration of magnetic bright points observed by sunrise/IMaX

    Energy Technology Data Exchange (ETDEWEB)

    Utz, D.; Del Toro Iniesta, J. C.; Bellot Rubio, L. R. [Instituto de Astrofísica de Andalucía (CSIC), Apdo. de Correos 3004, E-18080 Granada (Spain); Jurčák, J. [Astronomical Institute, Academy of Sciences of the Czech Republic, 251 65 Ondřejov (Czech Republic); Martínez Pillet, V. [Instituto de Astrofísica de Canarias, Vía Láctea, s/n, E-38200 La Laguna (Spain); Solanki, S. K. [Max-Planck Institut für Sonnensystemforschung, Max-Planck-Strasse, 2, D-37191 (Germany); Schmidt, W., E-mail: utz@iaa.es, E-mail: dominik.utz@uni-graz.at [Kiepenheuer-Institut für Sonnenphysik, Schöneckstrasse 6, D-79104 Freiburg (Germany)

    2014-12-01

    The evolution of the physical parameters of magnetic bright points (MBPs) located in the quiet Sun (mainly in the internetwork) during their lifetime is studied. First, we concentrate on the detailed description of the magnetic field evolution of three MBPs. This reveals that individual features follow different, generally complex, and rather dynamic scenarios of evolution. Next, we apply statistical methods to roughly 200 observed MBP evolutionary tracks. MBPs are found to be formed by the strengthening of an equipartition field patch, which initially exhibits a moderate downflow. During the evolution, strong downdrafts with an average velocity of 2.4 km s^-1 set in. These flows, taken together with the concurrent strengthening of the field, suggest that we are witnessing the occurrence of convective collapses in these features, although only 30% of them reach kG field strengths. This fraction might turn out to be larger when the new 4 m class solar telescopes are operational, as observations of MBPs with current state-of-the-art instrumentation could still be suffering from resolution limitations. Finally, when the bright point disappears (although the magnetic field often continues to exist) the magnetic field strength has dropped to the equipartition level and is generally somewhat weaker than at the beginning of the MBP's evolution. Also, only relatively weak downflows are found on average at this stage of the evolution. Only 16% of the features display upflows at the time that the field weakens, or the MBP disappears. This speaks either for a very fast evolving dynamic process at the end of the lifetime, which could not be temporally resolved, or against strong upflows as the cause of the weakening of the field of these magnetic elements, as has been proposed based on simulation results. It is noteworthy that in about 10% of the cases, we observe in the vicinity of the downflows small-scale strong (exceeding 2 km s^-1) intergranular upflows

  15. Using the Chandra Source-Finding Algorithm to Automatically Identify Solar X-ray Bright Points

    Science.gov (United States)

    Adams, Mitzi L.; Tennant, A.; Cirtain, J. M.

    2009-01-01

    This poster details a technique of bright point identification that is used to find sources in Chandra X-ray data. The algorithm, part of a program called LEXTRCT, searches for regions of a given size that are above a minimum signal-to-noise ratio. The algorithm allows selected pixels to be excluded from the source-finding, thus allowing exclusion of saturated pixels (from flares and/or active regions). For Chandra data the noise is determined by photon-counting statistics, whereas solar telescopes typically integrate a flux. Thus the calculated signal-to-noise ratio is incorrect, but we find we can scale the number to get reasonable results. For example, Nakakubo and Hara (1998) find 297 bright points in a September 11, 1996 Yohkoh image; with judicious selection of the signal-to-noise ratio, our algorithm finds 300 sources. To further assess the efficacy of the algorithm, we analyze a SOHO/EIT image (195 Angstroms) and compare results with those published in the literature (McIntosh and Gurman, 2005). Finally, we analyze three sets of data from Hinode, representing different parts of the decline to minimum of the solar cycle.
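
    A rough sketch of signal-to-noise-thresholded source finding with an exclusion mask, in the spirit of the description above; the MAD-based noise estimate, threshold value, and function name are assumptions, not the LEXTRCT implementation.

```python
import numpy as np
from scipy import ndimage

def find_sources(img, exclude=None, snr_min=5.0, min_pixels=4):
    """Label contiguous regions rising more than snr_min times a robust
    noise estimate above the background, ignoring excluded pixels
    (e.g. saturated pixels in flares or active regions)."""
    img = img.astype(float)
    good = np.isfinite(img)
    if exclude is not None:
        good &= ~exclude
    background = np.median(img[good])
    noise = 1.4826 * np.median(np.abs(img[good] - background))  # MAD-based sigma
    candidate = good & (img > background + snr_min * noise)
    labels, n = ndimage.label(candidate)
    sizes = ndimage.sum(candidate, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    return labels, keep
```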

  16. Point kinetics modeling

    International Nuclear Information System (INIS)

    Kimpland, R.H.

    1996-01-01

    A normalized form of the point kinetics equations, a prompt jump approximation, and the Nordheim-Fuchs model are used to model nuclear systems. Reactivity feedback mechanisms considered include volumetric expansion, thermal neutron temperature effect, Doppler effect and void formation. A sample problem of an excursion occurring in a plutonium solution accidentally formed in a glovebox is presented
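
    For reference, the (un-normalized) point kinetics equations underlying such models can be written as follows; this is the standard textbook form, not the report's specific normalization.

```latex
\begin{align*}
\frac{dn}{dt}   &= \frac{\rho(t) - \beta}{\Lambda}\, n(t) + \sum_{i=1}^{6} \lambda_i C_i(t), \\
\frac{dC_i}{dt} &= \frac{\beta_i}{\Lambda}\, n(t) - \lambda_i C_i(t), \qquad i = 1,\dots,6,
\end{align*}
```

    Here n is the neutron population, ρ(t) the reactivity, β = Σ βᵢ the delayed-neutron fraction, Λ the neutron generation time, and λᵢ, Cᵢ the precursor decay constants and concentrations. The Nordheim-Fuchs model keeps only the prompt term and couples ρ to the released energy through a feedback coefficient, which yields the classic closed-form excursion (burst) solution.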

  17. SMOS brightness temperature assimilation into the Community Land Model

    Directory of Open Access Journals (Sweden)

    D. Rains

    2017-11-01

    SMOS (Soil Moisture and Ocean Salinity) mission brightness temperatures at a single incident angle are assimilated into the Community Land Model (CLM) across Australia to improve soil moisture simulations. To this end, the data assimilation system DasPy is coupled to the local ensemble transform Kalman filter (LETKF) as well as to the Community Microwave Emission Model (CMEM). Brightness temperature climatologies are precomputed to enable the assimilation of brightness temperature anomalies, making use of 6 years of SMOS data (2010–2015). Mean correlation R with in situ measurements increases moderately from 0.61 to 0.68 (11%) for upper soil layers if the root zone is included in the updates. A reduced improvement of 5% is achieved if the assimilation is restricted to the upper soil layers. Root-zone simulations improve by 7% when updating both the top layers and root zone, and by 4% when only updating the top layers. Mean increments and increment standard deviations are compared for the experiments. The long-term assimilation impact is analysed by looking at a set of quantiles computed for soil moisture at each grid cell. Within hydrological monitoring systems, extreme dry or wet conditions are often defined via their relative occurrence, adding great importance to assimilation-induced quantile changes. Although still limited at present, longer L-band radiometer time series will become available and will make model output that is improved by assimilating such data more usable for extreme event statistics.
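
    A minimal sketch of the precomputed-climatology step that enables anomaly assimilation; the data layout, day-of-year window, and function names are assumptions, not DasPy's actual interface.

```python
import numpy as np

def doy_climatology(tb, doy, window=15):
    """Day-of-year brightness temperature climatology.

    tb  : (n_obs, n_cells) brightness temperatures spanning several years.
    doy : (n_obs,) integer day of year (1..366) of each observation.
    Returns a (366, n_cells) array of windowed climatological means.
    """
    clim = np.full((366, tb.shape[1]), np.nan)
    for d in range(1, 367):
        dist = np.minimum(np.abs(doy - d), 366 - np.abs(doy - d))  # circular distance
        sel = dist <= window
        if np.any(sel):
            clim[d - 1] = np.nanmean(tb[sel], axis=0)
    return clim

def tb_anomalies(tb, doy, clim):
    """Brightness temperature anomalies to be assimilated."""
    return tb - clim[doy - 1]
```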

  18. Modeling laser brightness from cross Porro prism resonators

    Science.gov (United States)

    Forbes, Andrew; Burger, Liesl; Litvin, Igor Anatolievich

    2006-08-01

    Laser brightness is a parameter often used to compare high-power laser beam delivery from various sources; it incorporates both the power contained in the particular mode and the propagation of that mode, through the beam quality factor M2. In this study a cross Porro prism resonator is considered; crossed Porro prism resonators have been known for some time, but until recently have not been modeled as a complete physical optics system that allows the modal output to be determined as a function of the rotation angle of the prisms. In this paper we consider the diffraction losses as a function of the prism rotation angle relative to one another, and combine this with the propagation of the specific modes to determine the laser output brightness as a function of the prism orientation.

  19. The First ALMA Observation of a Solar Plasmoid Ejection from an X-Ray Bright Point

    Science.gov (United States)

    Shimojo, M.; Hudson, H. S.; White, S. M.; Bastian, T.; Iwai, K.

    2017-12-01

    Eruptive phenomena are important features of energy release events, such as solar flares, and have the potential to improve our understanding of the dynamics of the solar atmosphere. The 304 Å EUV line of helium, formed at around 10^5 K, is found to be a reliable tracer of such phenomena, but the determination of physical parameters from such observations is not straightforward. We have observed a plasmoid ejection from an X-ray bright point simultaneously with ALMA, SDO/AIA, and Hinode/XRT. This paper reports the physical parameters of the plasmoid obtained by combining the radio, EUV, and X-ray data. As a result, we conclude that the plasmoid can consist either of (approximately) isothermal ~10^5 K plasma that is optically thin at 100 GHz, or a ~10^4 K core with a hot envelope. The analysis demonstrates the value of the additional temperature and density constraints that ALMA provides, and future science observations with ALMA will be able to match the spatial resolution of space-borne and other high-resolution telescopes.

  20. The First ALMA Observation of a Solar Plasmoid Ejection from an X-Ray Bright Point

    Energy Technology Data Exchange (ETDEWEB)

    Shimojo, Masumi [National Astronomical Observatory of Japan, Tokyo, 181-8588 (Japan); Hudson, Hugh S. [School of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ (United Kingdom); White, Stephen M. [Space Vehicles Directorate, Air Force Research Laboratory, Kirtland AFB, NM 87117-5776 (United States); Bastian, Timothy S. [National Radio Astronomy Observatory, Charlottesville, VA 22903 (United States); Iwai, Kazumasa, E-mail: masumi.shimojo@nao.ac.jp [Institute for Space-Earth Environmental Research, Nagoya University, Nagoya, 464-8601 (Japan)

    2017-05-20

    Eruptive phenomena such as plasmoid ejections or jets are important features of solar activity and have the potential to improve our understanding of the dynamics of the solar atmosphere. Such ejections are often thought to be signatures of the outflows expected in regions of fast magnetic reconnection. The 304 Å EUV line of helium, formed at around 10^5 K, is found to be a reliable tracer of such phenomena, but the determination of physical parameters from such observations is not straightforward. We have observed a plasmoid ejection from an X-ray bright point simultaneously at millimeter wavelengths with ALMA, at EUV wavelengths with SDO/AIA, and in soft X-rays with Hinode/XRT. This paper reports the physical parameters of the plasmoid obtained by combining the radio, EUV, and X-ray data. As a result, we conclude that the plasmoid can consist either of (approximately) isothermal ~10^5 K plasma that is optically thin at 100 GHz, or a ~10^4 K core with a hot envelope. The analysis demonstrates the value of the additional temperature and density constraints that ALMA provides, and future science observations with ALMA will be able to match the spatial resolution of space-borne and other high-resolution telescopes.

  1. Comparison of solar photospheric bright points between Sunrise observations and MHD simulations

    Science.gov (United States)

    Riethmüller, T. L.; Solanki, S. K.; Berdyugina, S. V.; Schüssler, M.; Martínez Pillet, V.; Feller, A.; Gandorfer, A.; Hirzberger, J.

    2014-08-01

    Bright points (BPs) in the solar photosphere are thought to be the radiative signatures (small-scale brightness enhancements) of magnetic elements described by slender flux tubes or sheets located in the darker intergranular lanes. They contribute to the ultraviolet (UV) flux variations over the solar cycle and hence may play a role in influencing the Earth's climate. Here we aim to obtain a better insight into their properties by combining high-resolution UV and spectro-polarimetric observations of BPs by the Sunrise Observatory with 3D compressible radiation magnetohydrodynamical (MHD) simulations. To this end, full spectral line syntheses are performed with the MHD data and a careful degradation is applied to take into account all relevant instrumental effects of the observations. In a first step it is demonstrated that the selected MHD simulations reproduce the measured distributions of intensity at multiple wavelengths, line-of-sight velocity, spectral line width, and polarization degree rather well. The simulated line width also displays the correct mean, but a scatter that is too small. In the second step, the properties of observed BPs are compared with synthetic ones. Again, these are found to match relatively well, except that the observations display a tail of large BPs with strong polarization signals (most likely network elements) not found in the simulations, possibly due to the small size of the simulation box. The higher spatial resolution of the simulations has a significant effect, leading to smaller and more numerous BPs. The observation that most BPs are weakly polarized is explained mainly by the spatial degradation, the stray light contamination, and the temperature sensitivity of the Fe I line at 5250.2 Å. Finally, given that the MHD simulations are highly consistent with the observations, we used the simulations to explore the properties of BPs further. The Stokes V asymmetries increase with the distance to the

  2. Subarcsecond bright points and quasi-periodic upflows below a quiescent filament observed by IRIS

    Science.gov (United States)

    Li, T.; Zhang, J.

    2016-05-01

    Context. The new Interface Region Imaging Spectrograph (IRIS) mission provides high-resolution observations of UV spectra and slit-jaw images (SJIs). These data have become available for investigating the dynamic features in the transition region (TR) below on-disk filaments. Aims: The driver of "counter-streaming" flows along the filament spine is still unknown. The magnetic structures and the upflows at the footpoints of filaments, and their relations with the filament main body, are not yet well understood. We study the dynamic evolution at the footpoints of filaments in order to find some clues for answering these questions. Methods: Using UV spectra and SJIs from IRIS, along with coronal images and magnetograms from the Solar Dynamics Observatory (SDO), we present new features in a quiescent filament channel: subarcsecond bright points (BPs) and quasi-periodic upflows. Results: The BPs in the TR have a spatial scale of about 350-580 km and lifetimes of more than several tens of minutes. They are located at stronger magnetic structures in the filament channel, with a magnetic flux of about 10^17-10^18 Mx. Quasi-periodic brightenings and upflows are observed in the BPs, and the period is about 4-5 min. The BP and the associated jet-like upflow comprise a "tadpole-shaped" structure. The upflows move along bright filament threads, and their directions are almost parallel to the spine of the filament. The upflows initiated from the BPs with opposite-polarity magnetic fields have opposite directions. The velocity of the upflows in the plane of sky is about 5-50 km s^-1. The emission line of Si IV 1402.77 Å at the locations of upflows exhibits obvious blueshifts of about 5-30 km s^-1, and the line profile is broadened with a width of more than 20 km s^-1. Conclusions: The BPs seem to be the bases of filament threads, and the upflows are able to convey mass for the dynamic balance of the filament. The "counter-streaming" flows in previous observations

  3. Dynamics of isolated magnetic bright points derived from Hinode/SOT G-band observations

    Science.gov (United States)

    Utz, D.; Hanslmeier, A.; Muller, R.; Veronig, A.; Rybák, J.; Muthsam, H.

    2010-02-01

    Context. Small-scale magnetic fields in the solar photosphere can be identified in high-resolution magnetograms or in the G-band as magnetic bright points (MBPs). Rapid motions of these fields can cause magneto-hydrodynamical waves and can also lead to nanoflares by magnetic field braiding and twisting. The MBP velocity distribution is a crucial parameter for estimating the amplitudes of those waves and the amount of energy they can contribute to coronal heating. Aims: The velocity and lifetime distributions of MBPs are derived from solar G-band images of a quiet-Sun region acquired by the Hinode/SOT instrument with different temporal and spatial sampling rates. Methods: We developed an automatic segmentation, identification and tracking algorithm to analyse G-band image sequences to obtain the lifetime and velocity distributions of MBPs. The influence of temporal/spatial sampling rates on these distributions is studied and used to correct the obtained lifetime and velocity distributions for these digitisation effects. Results: After correcting for algorithm effects, we obtained a mean MBP lifetime of (2.50 ± 0.05) min and mean MBP velocities, depending on smoothing processes, in the range of (1-2) km s^-1. Corrected for temporal sampling effects, we obtained for the effective velocity distribution a Rayleigh function with a coefficient of (1.62 ± 0.05) km s^-1. The x- and y-components of the velocity distributions are Gaussians. The lifetime distribution can be fitted by an exponential function.
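
    A small sketch of how the reported distribution fits can be reproduced from tracked MBP properties, assuming arrays of speeds (km/s) and lifetimes (min) are already available; the function name and the synthetic example values are illustrative only.

```python
import numpy as np
from scipy import stats

def fit_mbp_distributions(speeds, lifetimes):
    """Fit a Rayleigh distribution (location fixed at zero) to MBP speeds
    and an exponential distribution to MBP lifetimes."""
    _, sigma = stats.rayleigh.fit(speeds, floc=0)   # Rayleigh coefficient
    _, tau = stats.expon.fit(lifetimes, floc=0)     # mean lifetime
    return sigma, tau

# Synthetic check, roughly matching the reported 1.62 km/s and 2.5 min values
speeds = stats.rayleigh.rvs(scale=1.62, size=2000, random_state=0)
lifetimes = stats.expon.rvs(scale=2.5, size=2000, random_state=0)
print(fit_mbp_distributions(speeds, lifetimes))
```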

  4. Advanced prior modeling for 3D bright field electron tomography

    Science.gov (United States)

    Sreehari, Suhas; Venkatakrishnan, S. V.; Drummy, Lawrence F.; Simmons, Jeffrey P.; Bouman, Charles A.

    2015-03-01

    Many important imaging problems in material science involve reconstruction of images containing repetitive non-local structures. Model-based iterative reconstruction (MBIR) could in principle exploit such redundancies through the selection of a log prior probability term. However, in practice, determining such a log prior term that accounts for the similarity between distant structures in the image is quite challenging. Much progress has been made in the development of denoising algorithms like non-local means and BM3D, and these are known to successfully capture non-local redundancies in images. But the fact that these denoising operations are not explicitly formulated as cost functions makes it unclear as to how to incorporate them in the MBIR framework. In this paper, we formulate a solution to bright field electron tomography by augmenting the existing bright field MBIR method to incorporate any non-local denoising operator as a prior model. We accomplish this using a framework we call plug-and-play priors that decouples the log likelihood and the log prior probability terms in the MBIR cost function. We specifically use 3D non-local means (NLM) as the prior model in the plug-and-play framework, and showcase high quality tomographic reconstructions of a simulated aluminum spheres dataset, and two real datasets of aluminum spheres and ferritin structures. We observe that streak and smear artifacts are visibly suppressed, and that edges are preserved. Also, we report lower RMSE values compared to the conventional MBIR reconstruction using qGGMRF as the prior model.
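
    The plug-and-play decoupling described above can be sketched as an ADMM loop in which an arbitrary denoiser stands in for the prior; the quadratic toy likelihood, Gaussian-filter "denoiser", and parameter values below are placeholders, not the authors' MBIR/NLM implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def plug_and_play_admm(x0, data_fit_prox, denoise, rho=1.0, n_iter=50):
    """Plug-and-play ADMM: the prior enters only through `denoise`.

    data_fit_prox(z, rho): returns argmin_x f(x) + (rho/2)*||x - z||^2,
                           with f the (negative log) data likelihood.
    denoise(z):            any off-the-shelf denoiser (e.g. NLM, BM3D).
    """
    x, v, u = x0.copy(), x0.copy(), np.zeros_like(x0)
    for _ in range(n_iter):
        x = data_fit_prox(v - u, rho)   # likelihood (forward-model) update
        v = denoise(x + u)              # prior update via the denoiser
        u = u + x - v                   # dual update
    return x

# Toy example: quadratic likelihood ||x - y||^2 with Gaussian smoothing as prior
y = np.random.default_rng(1).normal(size=(64, 64))
prox = lambda z, rho: (2 * y + rho * z) / (2 + rho)   # closed-form minimizer
x_hat = plug_and_play_admm(np.zeros_like(y), prox, lambda z: gaussian_filter(z, 1.0))
```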

  5. Properties of bright solitons in averaged and unaveraged models for SDG fibres

    Science.gov (United States)

    Kumar, Ajit; Kumar, Atul

    1996-04-01

    Using the slowly varying envelope approximation and averaging over the fibre cross-section the evolution equation for optical pulses in semiconductor-doped glass (SDG) fibres is derived from the nonlinear wave equation. Bright soliton solutions of this equation are obtained numerically and their properties are studied and compared with those of the bright solitons in the unaveraged model.

  6. Model plant Key Measurement Points

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1984-01-01

    For IAEA safeguards a Key Measurement Point is defined as the location where nuclear material appears in such a form that it may be measured to determine material flow or inventory. This presentation describes in an introductory manner the key measurement points and associated measurements for the model plant used in this training course

  7. Arctic sea ice signatures: L-band brightness temperature sensitivity comparison using two radiation transfer models

    Directory of Open Access Journals (Sweden)

    F. Richter

    2018-03-01

    Sea ice is a crucial component for short-, medium- and long-term numerical weather predictions. Most importantly, changes of sea ice coverage and areas covered by thin sea ice have a large impact on heat fluxes between the ocean and the atmosphere. L-band brightness temperatures from ESA's Earth Explorer SMOS (Soil Moisture and Ocean Salinity) have been proven to be a valuable tool to derive thin sea ice thickness. These retrieved estimates were already successfully assimilated in forecasting models to constrain the ice analysis, leading to more accurate initial conditions and subsequently more accurate forecasts. However, the brightness temperature measurements can potentially be assimilated directly in forecasting systems, reducing the data latency and providing a more consistent first guess. As a first step towards such a data assimilation system we studied the forward operator that translates geophysical parameters provided by a model into brightness temperatures. We use two different radiative transfer models to generate top-of-atmosphere brightness temperatures based on ORAP5 model output for the 2012/2013 winter season. The simulations are then compared against actual SMOS measurements. The results indicate that both models are able to capture the general variability of measured brightness temperatures over sea ice. The simulated brightness temperatures are dominated by sea ice coverage, and thickness changes are most pronounced in the marginal ice zone where new sea ice is formed. There we observe the largest differences of more than 20 K over sea ice between simulated and observed brightness temperatures. We conclude that the assimilation of SMOS brightness temperatures yields high potential for forecasting models to correct for uncertainties in thin sea ice areas and suggest that information on sea ice fractional coverage from higher-frequency brightness temperatures should be used simultaneously.

  8. Arctic sea ice signatures: L-band brightness temperature sensitivity comparison using two radiation transfer models

    Science.gov (United States)

    Richter, Friedrich; Drusch, Matthias; Kaleschke, Lars; Maaß, Nina; Tian-Kunze, Xiangshan; Mecklenburg, Susanne

    2018-03-01

    Sea ice is a crucial component for short-, medium- and long-term numerical weather predictions. Most importantly, changes of sea ice coverage and areas covered by thin sea ice have a large impact on heat fluxes between the ocean and the atmosphere. L-band brightness temperatures from ESA's Earth Explorer SMOS (Soil Moisture and Ocean Salinity) have been proven to be a valuable tool to derive thin sea ice thickness. These retrieved estimates were already successfully assimilated in forecasting models to constrain the ice analysis, leading to more accurate initial conditions and subsequently more accurate forecasts. However, the brightness temperature measurements can potentially be assimilated directly in forecasting systems, reducing the data latency and providing a more consistent first guess. As a first step towards such a data assimilation system we studied the forward operator that translates geophysical parameters provided by a model into brightness temperatures. We use two different radiative transfer models to generate top of atmosphere brightness temperatures based on ORAP5 model output for the 2012/2013 winter season. The simulations are then compared against actual SMOS measurements. The results indicate that both models are able to capture the general variability of measured brightness temperatures over sea ice. The simulated brightness temperatures are dominated by sea ice coverage and thickness changes are most pronounced in the marginal ice zone where new sea ice is formed. There we observe the largest differences of more than 20 K over sea ice between simulated and observed brightness temperatures. We conclude that the assimilation of SMOS brightness temperatures yields high potential for forecasting models to correct for uncertainties in thin sea ice areas and suggest that information on sea ice fractional coverage from higher-frequency brightness temperatures should be used simultaneously.

  9. Smooth random change point models.

    Science.gov (United States)

    van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E

    2011-03-15

    Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. The Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
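
    To make the distinction concrete, a broken-stick trajectory and one common smoothed alternative (a logistic transition; not necessarily the paper's exact parameterization) can be written as:

```latex
\begin{align*}
\text{broken stick:}\quad
  E[y_{ij}] &= \beta_{0i} + \beta_{1i}\,t_{ij} + \beta_{2i}\,(t_{ij} - \tau_i)_{+},
  \qquad (u)_{+} = \max(u, 0), \\
\text{smooth version:}\quad
  E[y_{ij}] &= \beta_{0i} + \beta_{1i}\,t_{ij}
  + \beta_{2i}\,(t_{ij} - \tau_i)
    \left[1 + \exp\!\left(-\frac{t_{ij}-\tau_i}{\gamma}\right)\right]^{-1},
\end{align*}
```

    where the β's and the change point τᵢ carry subject-level random effects and γ controls how gradually the slope changes; as γ → 0 the smooth version reduces to the broken stick.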

  10. Injector modeling and achievement/maintenance of high brightness

    International Nuclear Information System (INIS)

    Boyd, J.K.

    1985-10-01

    Viewgraphs for the workshop presentation are given. The presentation has three fundamental parts. In part one the need for numerical calculations is justified and the available computer codes are enumerated. The capabilities and features of the DPC computer code are the focal point in this section. In part two the injector design issues are discussed. These issues include such things as the beam optics and magnetic field profile. In part three the experimental results of two injector designs are compared with DPC predictions. 8 figs

  11. Evaluation of brightness temperature from a forward model of ...

    Indian Academy of Sciences (India)

    profile the temperature and humidity at high temporal and vertical resolution in the lower troposphere. The process of ... structure of the atmosphere in numerical weather prediction models. ... frequency channels that can be used in building.

  12. Diviner lunar radiometer gridded brightness temperatures from geodesic binning of modeled fields of view

    Science.gov (United States)

    Sefton-Nash, E.; Williams, J.-P.; Greenhagen, B. T.; Aye, K.-M.; Paige, D. A.

    2017-12-01

    An approach is presented to efficiently produce high quality gridded data records from the large, global point-based dataset returned by the Diviner Lunar Radiometer Experiment aboard NASA's Lunar Reconnaissance Orbiter. The need to minimize data volume and processing time in production of science-ready map products is increasingly important with the growth in data volume of planetary datasets. Diviner makes on average >1400 observations per second of radiance that is reflected and emitted from the lunar surface, using 189 detectors divided into 9 spectral channels. Data management and processing bottlenecks are amplified by modeling every observation as a probability distribution function over the field of view, which can increase the required processing time by 2-3 orders of magnitude. Geometric corrections, such as projection of data points onto a digital elevation model, are numerically intensive and therefore it is desirable to perform them only once. Our approach reduces bottlenecks through parallel binning and efficient storage of a pre-processed database of observations. Database construction is via subdivision of a geodesic icosahedral grid, with a spatial resolution that can be tailored to suit the field of view of the observing instrument. Global geodesic grids with high spatial resolution are normally impractically memory intensive. We therefore demonstrate a minimum storage and highly parallel method to bin very large numbers of data points onto such a grid. A database of the pre-processed and binned points is then used for production of mapped data products that is significantly faster than if unprocessed points were used. We explore quality controls in the production of gridded data records by conditional interpolation, allowed only where data density is sufficient. The resultant effects on the spatial continuity and uncertainty in maps of lunar brightness temperatures is illustrated. We identify four binning regimes based on trades between the
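
    The core binning step can be sketched as a nearest-vertex assignment onto a precomputed geodesic grid; the grid construction, the simple unweighted mean, and the function name are assumptions that omit the field-of-view weighting described above.

```python
import numpy as np
from scipy.spatial import cKDTree

def bin_to_geodesic_grid(grid_xyz, obs_xyz, obs_values):
    """Accumulate point observations onto the nearest geodesic grid vertices.

    grid_xyz   : (n_vertices, 3) unit vectors of the grid vertices.
    obs_xyz    : (n_obs, 3) unit vectors of observation footprint centers.
    obs_values : (n_obs,) values, e.g. brightness temperatures.
    Returns per-vertex means (NaN where no observation falls).
    """
    tree = cKDTree(grid_xyz)
    _, idx = tree.query(obs_xyz)           # nearest grid vertex for each point
    sums = np.zeros(len(grid_xyz))
    counts = np.zeros(len(grid_xyz))
    np.add.at(sums, idx, obs_values)       # unbuffered add handles repeated indices
    np.add.at(counts, idx, 1.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(counts > 0, sums / counts, np.nan)
```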

  13. Modelling point patterns with linear structures

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    2009-01-01

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line-shaped structures. We ... consider simulations of this model and compare with real data ...

  14. Modelling point patterns with linear structures

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line-shaped structures. We ... consider simulations of this model and compare with real data ...

  15. A model for atmospheric brightness temperatures observed by the special sensor microwave imager (SSM/I)

    Science.gov (United States)

    Petty, Grant W.; Katsaros, Kristina B.

    1989-01-01

    A closed-form mathematical model for the atmospheric contribution to the microwave absorption and emission at the SSM/I frequencies is developed in order to improve quantitative interpretation of microwave imagery from the Special Sensor Microwave Imager (SSM/I). The model is intended to accurately predict upwelling and downwelling atmospheric brightness temperatures at SSM/I frequencies as functions of eight input parameters: the zenith (nadir) angle, the integrated water vapor and vapor scale height, the integrated cloud water and cloud height, the effective surface temperature, the atmospheric lapse rate, and the surface pressure. It is shown that the model accurately reproduces clear-sky brightness temperatures computed by explicit integration of a large number of radiosonde soundings representing all maritime climate zones and seasons.
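
    The backbone of such a closed-form model is the standard non-scattering radiative-transfer expression for the top-of-atmosphere brightness temperature over the ocean (a textbook form; the model described above supplies closed-form expressions for the atmospheric terms as functions of the eight listed inputs):

```latex
T_B(\theta) = T_{\uparrow}
  + e^{-\tau \sec\theta}\Bigl[\varepsilon\,T_s
  + (1-\varepsilon)\bigl(T_{\downarrow} + T_{\mathrm{cos}}\,e^{-\tau \sec\theta}\bigr)\Bigr],
```

    where τ is the zenith optical depth from water vapor, oxygen and cloud liquid, T↑ and T↓ are the upwelling and downwelling atmospheric brightness temperatures, ε and T_s the sea-surface emissivity and temperature, T_cos the cosmic background, and θ the incidence angle.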

  16. Modeling of Diamond Field-Emitter-Arrays for high brightness photocathode applications

    Science.gov (United States)

    Kwan, Thomas; Huang, Chengkun; Piryatinski, Andrei; Lewellen, John; Nichols, Kimberly; Choi, Bo; Pavlenko, Vitaly; Shchegolkov, Dmitry; Nguyen, Dinh; Andrews, Heather; Simakov, Evgenya

    2017-10-01

    We propose to employ Diamond Field-Emitter-Arrays (DFEAs) as high-current-density, ultra-low-emittance photocathodes for compact laser-driven dielectric accelerators capable of generating ultra-high brightness electron beams for advanced applications. We develop a semi-classical Monte Carlo photoemission model for DFEAs that includes carriers' transport to the emitter surface and tunneling through the surface under external fields. The model accounts for the electronic structure size quantization affecting the transport and tunneling process within the sharp diamond tips. We compare this first-principles model with other field emission models, such as the Child-Langmuir and Murphy-Good models. By further including effects of carrier photoexcitation, we perform simulations of the DFEAs' photoemission quantum yield and the emitted electron beam. Details of the theoretical model and validation against preliminary experimental data will be presented. Work supported by the LDRD program at LANL.

  17. SIMULTANEOUS MULTI-BAND DETECTION OF LOW SURFACE BRIGHTNESS GALAXIES WITH MARKOVIAN MODELING

    International Nuclear Information System (INIS)

    Vollmer, B.; Bonnarel, F.; Louys, M.; Perret, B.; Petremand, M.; Lavigne, F.; Collet, Ch.; Van Driel, W.; Sabatini, S.; MacArthur, L. A.

    2013-01-01

    We present to the astronomical community an algorithm for the detection of low surface brightness (LSB) galaxies in images, called MARSIAA (MARkovian Software for Image Analysis in Astronomy), which is based on multi-scale Markovian modeling. MARSIAA can be applied simultaneously to different bands. It segments an image into a user-defined number of classes, according to their surface brightness and surroundings; typically, one or two classes contain the LSB structures. We have developed an algorithm, called DetectLSB, which allows the efficient identification of LSB galaxies from among the candidate sources selected by MARSIAA. The application of the method to two and three bands simultaneously was tested on simulated images. Based on our tests, we are confident that we can detect LSB galaxies down to a central surface brightness level of only 1.5 times the standard deviation from the mean pixel value in the image background. To assess the robustness of our method, it was applied to a set of 18 B- and I-band images (covering 1.3 deg^2 in total) of the Virgo Cluster to which Sabatini et al. previously applied a matched-filter dwarf LSB galaxy search algorithm. We have detected all 20 objects from the Sabatini et al. catalog which we could classify by eye as bona fide LSB galaxies. Our method has also detected four additional Virgo Cluster LSB galaxy candidates undetected by Sabatini et al. To further assess the completeness of the results of our method, MARSIAA, SExtractor, and DetectLSB were all applied to search for (1) mock Virgo LSB galaxies inserted into a set of deep Next Generation Virgo Survey (NGVS) gri-band subimages and (2) Virgo LSB galaxies identified by eye in a full set of NGVS square-degree gri images. MARSIAA/DetectLSB recovered ~20% more mock LSB galaxies and ~40% more LSB galaxies identified by eye than SExtractor/DetectLSB. With a 90% fraction of false positives from an entirely unsupervised pipeline, a completeness of 90% is

  18. THE STABILITY OF LOW SURFACE BRIGHTNESS DISKS BASED ON MULTI-WAVELENGTH MODELING

    International Nuclear Information System (INIS)

    MacLachlan, J. M.; Wood, K.; Matthews, L. D.; Gallagher, J. S.

    2011-01-01

    To investigate the structure and composition of the dusty interstellar medium (ISM) of low surface brightness (LSB) disk galaxies, we have used multi-wavelength photometry to construct spectral energy distributions for three low-mass, edge-on LSB galaxies (V_rot = 88-105 km s^-1). We use Monte Carlo radiation transfer codes that include the effects of transiently heated small grains and polycyclic aromatic hydrocarbon molecules to model and interpret the data. We find that, unlike the high surface brightness galaxies previously modeled, the dust disks appear to have scale heights equal to or exceeding their stellar scale heights. This result supports the findings of previous studies that low-mass disk galaxies have dust scale heights comparable to their stellar scale heights and suggests that the cold ISM of low-mass, LSB disk galaxies may be stable against fragmentation and gravitational collapse. This may help to explain the lack of observed dust lanes in edge-on LSB galaxies and their low current star formation rates. Dust masses are found in the range (1.16-2.38) x 10^6 M_sun, corresponding to face-on (edge-on) V-band optical depths in the range 0.034 ≲ τ_V ≲ 1.99.

  19. MODEL FOR SEMANTICALLY RICH POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    F. Poux

    2017-10-01

    This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The enhanced Smart Point Cloud model developed here brings intelligence to point clouds via 3 connected meta-models, while linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype is implemented in Python and a PostgreSQL database and allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.
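
    A minimal illustration of the kind of hybrid semantic-plus-spatial query such a model enables from Python against a PostGIS-enabled PostgreSQL database; the table and column names are hypothetical, not the prototype's actual schema.

```python
import psycopg2

# Hypothetical schema: points(geom geometry(PointZ), class_label text, scan_id int)
QUERY = """
    SELECT scan_id, COUNT(*) AS n_points
    FROM points
    WHERE class_label = %s                                   -- semantic filter
      AND ST_3DDWithin(geom, ST_MakePoint(%s, %s, %s), %s)   -- spatial filter
    GROUP BY scan_id;
"""

with psycopg2.connect(dbname="pointclouds") as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY, ("chair", 1.0, 2.0, 0.5, 3.0))    # class, x, y, z, radius
        for scan_id, n_points in cur.fetchall():
            print(scan_id, n_points)
```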

  20. Model for Semantically Rich Point Cloud Data

    Science.gov (United States)

    Poux, F.; Neuville, R.; Hallot, P.; Billen, R.

    2017-10-01

    This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The enhanced Smart Point Cloud model developed here brings intelligence to point clouds via 3 connected meta-models, while linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype is implemented in Python and a PostgreSQL database and allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.

  1. An Updated Geophysical Model for AMSR-E and SSMIS Brightness Temperature Simulations over Oceans

    Directory of Open Access Journals (Sweden)

    Elizaveta Zabolotskikh

    2014-03-01

    Full Text Available In this study, we considered the geophysical model for microwave brightness temperature (BT) simulation for the Atmosphere-Ocean System under non-precipitating conditions. The model is presented as a combination of atmospheric absorption and ocean emission models. We validated this model for two satellite instruments: the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) onboard the Aqua satellite, and the Special Sensor Microwave Imager/Sounder (SSMIS) onboard the F16 satellite of the Defense Meteorological Satellite Program (DMSP) series. We compared simulated BT values with satellite BT measurements for different combinations of various water vapor and oxygen absorption models and wind-induced ocean emission models. A dataset of clear-sky atmospheric and oceanic parameters, collocated in time and space with satellite measurements, was used for the comparison. We identified the model combination providing the smallest root-mean-square error between calculations and measurements. A single combination of models ensured the best results for all considered radiometric channels. We also obtained adjustments to the simulated BT values, as averaged differences between the model simulations and satellite measurements. These adjustments can be used in any research based on modeling data for removing model/calibration inconsistencies. We demonstrated an application of the model by developing a new algorithm for sea surface wind speed retrieval from AMSR-E data.

  2. SIMULTANEOUS MULTI-BAND DETECTION OF LOW SURFACE BRIGHTNESS GALAXIES WITH MARKOVIAN MODELING

    Energy Technology Data Exchange (ETDEWEB)

    Vollmer, B.; Bonnarel, F.; Louys, M. [CDS, Observatoire Astronomique, UMR 7550, 11 rue de l' universite, F-67000 Strasbourg (France); Perret, B.; Petremand, M.; Lavigne, F.; Collet, Ch. [LSIIT, Universite de Strasbourg, 7, Rue Rene Descartes, F-67084 Strasbourg (France); Van Driel, W. [GEPI, Observatoire de Paris, CNRS, Universite Paris Diderot, 5 place Jules Janssen, F-92190 Meudon (France); Sabatini, S. [INAF/IASF-Roma, via Fosso de Cavaliere 100, I-00133 Roma (Italy); MacArthur, L. A., E-mail: Bernd.Vollmer@astro.unistra.fr [Herzberg Institute of Astrophysics, National Research Council of Canada, Victoria, BC V9E 2E7 (Canada)

    2013-02-01

    We present to the astronomical community an algorithm for the detection of low surface brightness (LSB) galaxies in images, called MARSIAA (MARkovian Software for Image Analysis in Astronomy), which is based on multi-scale Markovian modeling. MARSIAA can be applied simultaneously to different bands. It segments an image into a user-defined number of classes, according to their surface brightness and surroundings; typically, one or two classes contain the LSB structures. We have developed an algorithm, called DetectLSB, which allows the efficient identification of LSB galaxies from among the candidate sources selected by MARSIAA. The application of the method to two and three bands simultaneously was tested on simulated images. Based on our tests, we are confident that we can detect LSB galaxies down to a central surface brightness level of only 1.5 times the standard deviation from the mean pixel value in the image background. To assess the robustness of our method, the method was applied to a set of 18 B- and I-band images (covering 1.3 deg^2 in total) of the Virgo Cluster to which Sabatini et al. previously applied a matched-filter dwarf LSB galaxy search algorithm. We have detected all 20 objects from the Sabatini et al. catalog which we could classify by eye as bona fide LSB galaxies. Our method has also detected four additional Virgo Cluster LSB galaxy candidates undetected by Sabatini et al. To further assess the completeness of the results of our method, MARSIAA, SExtractor, and DetectLSB were all applied to search for (1) mock Virgo LSB galaxies inserted into a set of deep Next Generation Virgo Survey (NGVS) gri-band subimages and (2) Virgo LSB galaxies identified by eye in a full set of NGVS square degree gri images. MARSIAA/DetectLSB recovered ~20% more mock LSB galaxies and ~40% more LSB galaxies identified by eye than SExtractor/DetectLSB. With a 90% fraction of false positives from an entirely unsupervised pipeline, a completeness of

  3. Physical Models of Layered Polar Firn Brightness Temperatures from 0.5 to 2 GHz

    Science.gov (United States)

    Tan, Shurun; Aksoy, Mustafa; Brogioni, Marco; Macelloni, Giovanni; Durand, Michael; Jezek, Kenneth C.; Wang, Tian-Lin; Tsang, Leung; Johnson, Joel T.; Drinkwater, Mark R.

    2015-01-01

    We investigate physical effects influencing 0.5-2 GHz brightness temperatures of layered polar firn to support the Ultra Wide Band Software Defined Radiometer (UWBRAD) experiment to be conducted in Greenland and in Antarctica. We find that because ice particle grain sizes are very small compared to the 0.5-2 GHz wavelengths, volume scattering effects are small. Variations in firn density over cm- to m-length scales, however, cause significant effects. Both incoherent and coherent models are used to examine these effects. Incoherent models include a 'cloud model' that neglects any reflections internal to the ice sheet, and the DMRT-ML and MEMLS radiative transfer codes that are publicly available. The coherent model is based on the layered medium implementation of the fluctuation dissipation theorem for thermal microwave radiation from a medium having a nonuniform temperature. Density profiles are modeled using a stochastic approach, and model predictions are averaged over a large number of realizations to take into account an averaging over the radiometer footprint. Density profiles are described by combining a smooth average density profile with a spatially correlated random process to model density fluctuations. It is shown that coherent model results after ensemble averaging depend on the correlation lengths of the vertical density fluctuations. If the correlation length is moderate or long compared with the wavelength (approximately 0.6x longer or greater for Gaussian correlation function without regard for layer thinning due to compaction), coherent and incoherent model results are similar (within approximately 1 K). However, when the correlation length is short compared to the wavelength, coherent model results are significantly different from the incoherent model by several tens of kelvins. For a 10-cm correlation length, the differences are significant between 0.5 and 1.1 GHz, and less for 1.1-2 GHz. Model results are shown to be able to match the v

  4. Two-point model for divertor transport

    International Nuclear Information System (INIS)

    Galambos, J.D.; Peng, Y.K.M.

    1984-04-01

    Plasma transport along divertor field lines was investigated using a two-point model. This treatment requires considerably less effort to find solutions to the transport equations than previously used one-dimensional (1-D) models and is useful for studying general trends. It can also be a valuable tool for benchmarking more sophisticated models. The model was used to investigate the possibility of operating in the so-called high-density, low-temperature regime.
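
    As a concrete illustration of what a two-point treatment involves, the sketch below solves the generic textbook two-point scrape-off-layer relations (pressure balance, parallel heat conduction, sheath heat transmission) in Python. It is not the formulation of Galambos and Peng; the coefficients and plasma parameters are illustrative assumptions only.

        # Standard two-point model: solve for upstream and target temperatures.
        import numpy as np
        from scipy.optimize import fsolve

        e, m_i = 1.602e-19, 3.344e-27          # elementary charge [C], deuteron mass [kg]
        kappa0, gamma_sh = 2000.0, 7.0         # Spitzer conduction coefficient, sheath factor
        n_u, q_par, L = 3e19, 1e8, 50.0        # upstream density [m^-3], parallel heat flux [W/m^2], connection length [m]

        def residuals(x):
            T_u, T_t = x                                   # temperatures in eV
            n_t = n_u * T_u / (2.0 * T_t)                  # pressure (momentum) balance
            c_s = np.sqrt(2.0 * e * T_t / m_i)             # target sound speed
            f1 = T_u**3.5 - (T_t**3.5 + 3.5 * q_par * L / kappa0)    # parallel heat conduction
            f2 = q_par - gamma_sh * n_t * e * T_t * c_s              # sheath heat transmission
            return [f1, f2]

        T_u, T_t = fsolve(residuals, x0=[100.0, 20.0])
        print(f"upstream T_u = {T_u:.1f} eV, target T_t = {T_t:.1f} eV")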

  5. Modelling occupants’ heating set-point preferences

    DEFF Research Database (Denmark)

    Andersen, Rune Vinther; Olesen, Bjarne W.; Toftum, Jørn

    2011-01-01

    consumption. Simultaneous measurement of the set-points of thermostatic radiator valves (TRVs) and of indoor and outdoor environment characteristics was carried out in 15 dwellings in Denmark in 2008. Linear regression was used to infer a model of occupants’ interactions with TRVs. This model could easily be implemented in most simulation software packages to increase the validity of the simulation outcomes.

  6. Su Lyncis, a Hard X-Ray Bright M Giant: Clues Point to a Large Hidden Population of Symbiotic Stars

    Science.gov (United States)

    Mukai, K.; Luna, G. J. M.; Cusumano, G.; Segreto, A.; Munari, U.; Sokoloski, J. L.; Lucy, A. B.; Nelson, T.; Nunez, N. E.

    2016-01-01

    Symbiotic star surveys have traditionally relied almost exclusively on low resolution optical spectroscopy. However, we can obtain a more reliable estimate of their total Galactic population by using all available signatures of the symbiotic phenomenon. Here we report the discovery of a hard X-ray source, 4PBC J0642.9+5528, in the Swift hard X-ray all-sky survey, and identify it with a poorly studied red giant, SU Lyn, using pointed Swift observations and ground-based optical spectroscopy. The X-ray spectrum, the optical to UV spectrum, and the rapid UV variability of SU Lyn are all consistent with our interpretation that it is a symbiotic star containing an accreting white dwarf. The symbiotic nature of SU Lyn went unnoticed until now, because it does not exhibit emission lines strong enough to be obvious in low resolution spectra. We argue that symbiotic stars without shell-burning have weak emission lines, and that the current lists of symbiotic stars are biased in favor of shell-burning systems. We conclude that the true population of symbiotic stars has been underestimated, potentially by a large factor.

  7. Absolute brightness modeling for improved measurement of electron temperature from soft x-rays on MST

    Science.gov (United States)

    Reusch, L. M.; Franz, P.; Goetz, J. A.; den Hartog, D. J.; Nornberg, M. D.; van Meter, P.

    2017-10-01

    The two-color soft x-ray tomography (SXT) diagnostic on MST is now capable of Te measurement down to 500 eV. The previous lower limit was 1 keV, due to the presence of SXR emission lines from Al sputtered from the MST wall. The two-color technique uses two filters of different thickness to form a coarse spectrometer to estimate the slope of the continuum x-ray spectrum, which depends on Te. The 1.6 - 2.0 keV Al emission lines were previously filtered out by using thick Be filters (400 µm and 800 µm), thus restricting the range of the SXT diagnostic to Te >= 1 keV. Absolute brightness modeling explicitly includes several sources of radiation in the analysis model, enabling the use of thinner filters and measurement of much lower Te. Models based on the atomic database and analysis structure (ADAS) agree very well with our experimental SXR measurements. We used ADAS to assess the effect of bremsstrahlung, recombination, dielectronic recombination, and line emission on the inferred Te. This assessment informed the choice of the optimum filter pair to extend the Te range of the SXT diagnostic. This material is based upon work supported by the U.S. Department of Energy Office of Science, Office of Fusion Energy Sciences program under Award Numbers DE-FC02-05ER54814 and DE-SC0015474.

  8. The Effect of Brightness of Lamps Teaching Based on the 5E Model on Students' Academic Achievement and Attitudes

    Science.gov (United States)

    Guzel, Hatice

    2016-01-01

    The purpose of this research was to examine and compare the effects on student achievement and attitude of teaching the brightness of lamps, a grade 11 physics topic, according to the 5E model of constructivist learning theory and according to the traditional teaching method. The research was conducted on 62 11th-grade students…

  9. CONSTRAINING THE NFW POTENTIAL WITH OBSERVATIONS AND MODELING OF LOW SURFACE BRIGHTNESS GALAXY VELOCITY FIELDS

    International Nuclear Information System (INIS)

    Kuzio de Naray, Rachel; McGaugh, Stacy S.; Mihos, J. Christopher

    2009-01-01

    We model the Navarro-Frenk-White (NFW) potential to determine if, and under what conditions, the NFW halo appears consistent with the observed velocity fields of low surface brightness (LSB) galaxies. We present mock DensePak Integral Field Unit (IFU) velocity fields and rotation curves of axisymmetric and nonaxisymmetric potentials that are well matched to the spatial resolution and velocity range of our sample galaxies. We find that the DensePak IFU can accurately reconstruct the velocity field produced by an axisymmetric NFW potential and that a tilted-ring fitting program can successfully recover the corresponding NFW rotation curve. We also find that nonaxisymmetric potentials with fixed axis ratios change only the normalization of the mock velocity fields and rotation curves and not their shape. The shape of the modeled NFW rotation curves does not reproduce the data: these potentials are unable to simultaneously bring the mock data at both small and large radii into agreement with observations. Indeed, to match the slow rise of LSB galaxy rotation curves, a specific viewing angle of the nonaxisymmetric potential is required. For each of the simulated LSB galaxies, the observer's line of sight must be along the minor axis of the potential, an arrangement that is inconsistent with a random distribution of halo orientations on the sky.
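
    For readers unfamiliar with the halo model being tested, the short Python sketch below evaluates the standard NFW circular-velocity curve from which such mock velocity fields are built. The concentration, V200 and radii are illustrative values, not the parameters fitted to the sample galaxies.

        # Standard NFW circular-velocity profile.
        import numpy as np

        def v_nfw(r, v200=100.0, r200=100.0, c=8.0):
            """NFW circular velocity [km/s] at radius r [kpc]."""
            x = r / r200
            num = np.log(1.0 + c * x) - c * x / (1.0 + c * x)
            den = x * (np.log(1.0 + c) - c / (1.0 + c))
            return v200 * np.sqrt(num / den)

        radii = np.linspace(0.5, 30.0, 10)   # inner radii, where LSB data constrain the rise
        for r, v in zip(radii, v_nfw(radii)):
            print(f"r = {r:5.1f} kpc   V_c = {v:5.1f} km/s")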

  10. Flat Knitting Loop Deformation Simulation Based on Interlacing Point Model

    Directory of Open Access Journals (Sweden)

    Jiang Gaoming

    2017-12-01

    Full Text Available In order to create realistic loop primitives suitable for faster CAD of flat-knitted fabric, we have studied the model of the loop as well as the variation of the loop surface. This paper proposes an interlacing-point-based model for the loop centre curve and uses cubic Bezier curves to fit the central curves of the regular loop, elongated loop, transfer loop, and irregular deformed loop, yielding a general model for the central curve of the deformed loop. The obtained model is then used for texture mapping, texture interpolation, and brightness processing, simulating a clearly structured and lifelike deformed loop. A computer program, LOOP, was developed using the algorithm. The deformed loop is simulated with different yarns and applied to the design of a cable stitch, demonstrating the feasibility of the proposed algorithm. This paper provides a loop primitive simulation method characterized by lifelikeness, yarn material variability, and deformation flexibility, and facilitates loop-based fast computer-aided design (CAD) of knitted fabric.
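
    A minimal Python sketch of the curve primitive the model is based on: sampling a cubic Bezier curve that could represent a loop centre line. The four control points are arbitrary placeholders, not values from the interlacing-point model.

        # Evaluate a cubic Bezier curve at n parameter values.
        import numpy as np

        def cubic_bezier(p0, p1, p2, p3, n=50):
            """Sample n points on the cubic Bezier curve defined by control points p0..p3."""
            t = np.linspace(0.0, 1.0, n)[:, None]
            return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                    + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

        p0, p1, p2, p3 = map(np.array, ([0.0, 0.0], [0.2, 1.0], [0.8, 1.0], [1.0, 0.0]))
        curve = cubic_bezier(p0, p1, p2, p3)
        print(curve[:3])   # first few points of the loop centre line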

  11. Comparison of sparse point distribution models

    DEFF Research Database (Denmark)

    Erbou, Søren Gylling Hemmingsen; Vester-Christensen, Martin; Larsen, Rasmus

    2010-01-01

    This paper compares several methods for obtaining sparse and compact point distribution models suited for data sets containing many variables. These are evaluated on a database consisting of 3D surfaces of a section of the pelvic bone obtained from CT scans of 33 porcine carcasses. The superior m...

  12. Determinantal point process models on the sphere

    DEFF Research Database (Denmark)

    Møller, Jesper; Nielsen, Morten; Porcu, Emilio

    We consider determinantal point processes (DPPs) on the d-dimensional unit sphere Sd. These are finite point processes exhibiting repulsiveness and with moment properties determined by a certain determinant whose entries are specified by a so-called kernel, which we assume is a complex covariance function defined on Sd × Sd. We review the appealing properties of such processes, including their specific moment properties, density expressions and simulation procedures. In particular, we characterize and construct isotropic DPP models on Sd, where it becomes essential to specify the eigenvalues and eigenfunctions in a spectral representation for the kernel, and we figure out how repulsive isotropic DPPs can be. Moreover, we discuss the shortcomings of adapting existing models for isotropic covariance functions and consider strategies for developing new models, including a useful spectral approach.

  13. Assimilation of SMOS Brightness Temperatures or Soil Moisture Retrievals into a Land Surface Model

    Science.gov (United States)

    De Lannoy, Gabrielle J. M.; Reichle, Rolf H.

    2016-01-01

    Three different data products from the Soil Moisture Ocean Salinity (SMOS) mission are assimilated separately into the Goddard Earth Observing System Model, version 5 (GEOS-5) to improve estimates of surface and root-zone soil moisture. The first product consists of multi-angle, dual-polarization brightness temperature (Tb) observations at the bottom of the atmosphere extracted from Level 1 data. The second product is a derived SMOS Tb product that mimics the data at a 40 degree incidence angle from the Soil Moisture Active Passive (SMAP) mission. The third product is the operational SMOS Level 2 surface soil moisture (SM) retrieval product. The assimilation system uses a spatially distributed ensemble Kalman filter (EnKF) with seasonally varying climatological bias mitigation for Tb assimilation, whereas a time-invariant cumulative density function matching is used for SM retrieval assimilation. All assimilation experiments improve the soil moisture estimates compared to model-only simulations in terms of unbiased root-mean-square differences and anomaly correlations during the period from 1 July 2010 to 1 May 2015 and for 187 sites across the US. Especially in areas where the satellite data are most sensitive to surface soil moisture, large skill improvements (e.g., an increase in the anomaly correlation by 0.1) are found in the surface soil moisture. The domain-average surface and root-zone skill metrics are similar among the various assimilation experiments, but large differences in skill are found locally. The observation-minus-forecast residuals and analysis increments reveal large differences in how the observations add value in the Tb and SM retrieval assimilation systems. The distinct patterns of these diagnostics in the two systems reflect observation and model errors patterns that are not well captured in the assigned EnKF error parameters. Consequently, a localized optimization of the EnKF error parameters is needed to further improve Tb or SM retrieval
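
    To make the assimilation step concrete, the Python sketch below performs one stochastic ensemble Kalman filter (EnKF) analysis for a toy two-component soil-moisture state observed through a single brightness-temperature-like observable. The state dimensions, the linear observation operator H and all error statistics are invented for illustration; the GEOS-5 system is far more elaborate.

        # One stochastic EnKF analysis step with perturbed observations.
        import numpy as np

        rng = np.random.default_rng(1)
        n_state, n_ens = 2, 32                       # [surface, root-zone] moisture; ensemble size
        X = 0.25 + 0.05 * rng.standard_normal((n_state, n_ens))   # prior ensemble
        H = np.array([[0.8, 0.2]])                   # maps state to a Tb-like observable
        R = np.array([[0.02**2]])                    # observation error covariance
        y = np.array([0.30])                         # the (synthetic) observation

        A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
        P = A @ A.T / (n_ens - 1)                    # sample covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R) # Kalman gain
        Y_pert = y[:, None] + np.sqrt(R) @ rng.standard_normal((1, n_ens))
        X_a = X + K @ (Y_pert - H @ X)               # analysis ensemble

        print("prior mean   :", X.mean(axis=1))
        print("analysis mean:", X_a.mean(axis=1))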

  14. Zero-point energy in bag models

    International Nuclear Information System (INIS)

    Milton, K.A.

    1979-01-01

    The zero-point (Casimir) energy of free vector (gluon) fields confined to a spherical cavity (bag) is computed. With a suitable renormalization the result for eight gluons is E = + 0.51/a. This result is substantially larger than that for a spherical shell (where both interior and exterior modes are present), and so affects Johnson's model of the QCD vacuum. It is also smaller than, and of opposite sign to, the value used in bag model phenomenology, so it will have important implications there. 1 figure

  15. Decoupling Stimulus Duration from Brightness in Metacontrast Masking: Data and Models

    Science.gov (United States)

    Di Lollo, Vincent; Muhlenen, Adrian von; Enns, James T.; Bridgeman, Bruce

    2004-01-01

    A brief target that is visible when displayed alone can be rendered invisible by a trailing stimulus (metacontrast masking). It has been difficult to determine the temporal dynamics of masking to date because increments in stimulus duration have been invariably confounded with apparent brightness (Bloch's law). In the research reported here,…

  16. Spatial Stochastic Point Models for Reservoir Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Syversveen, Anne Randi

    1997-12-31

    The main part of this thesis discusses stochastic modelling of geology in petroleum reservoirs. A marked point model is defined for objects against a background in a two-dimensional vertical cross section of the reservoir. The model handles conditioning on observations from more than one well for each object and contains interaction between objects, and the objects have the correct length distribution when penetrated by wells. The model is developed in a Bayesian setting. The model and the simulation algorithm are demonstrated by means of an example with simulated data. The thesis also deals with object recognition in image analysis, in a Bayesian framework, and with a special type of spatial Cox processes called log-Gaussian Cox processes. In these processes, the logarithm of the intensity function is a Gaussian process. The class of log-Gaussian Cox processes provides flexible models for clustering. The distribution of such a process is completely characterized by the intensity and the pair correlation function of the Cox process. 170 refs., 37 figs., 5 tabs.
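
    The log-Gaussian Cox process mentioned above is easy to illustrate: a Gaussian random field supplies the log-intensity, and Poisson points are drawn given that intensity. The one-dimensional Python sketch below uses an exponential covariance with illustrative parameters; it is not tied to the reservoir application.

        # Simulate a 1-D log-Gaussian Cox process on [0, 1].
        import numpy as np

        rng = np.random.default_rng(2)
        m = 200                                        # grid cells on [0, 1]
        x = (np.arange(m) + 0.5) / m
        mu, sigma2, ell = 3.0, 1.0, 0.1                # log-intensity mean, variance, correlation length
        C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # exponential covariance
        field = mu + np.linalg.cholesky(C + 1e-10 * np.eye(m)) @ rng.standard_normal(m)
        intensity = np.exp(field)                      # intensity function of the Cox process

        counts = rng.poisson(intensity / m)            # Poisson counts per cell
        points = np.repeat(x, counts) + rng.uniform(-0.5 / m, 0.5 / m, counts.sum())
        print(f"simulated {points.size} points; conditional expectation ~{intensity.mean():.1f}")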

  17. MATHEMATICAL MODELING OF AC ELECTRIC POINT MOTOR

    Directory of Open Access Journals (Sweden)

    S. YU. Buryak

    2014-03-01

    Full Text Available Purpose. In order to ensure reliability, security and, most importantly, the continuity of the transportation process, it is necessary to develop, implement and then improve automated methods for diagnosing the mechanisms, devices and systems of rail transport. Only systems that operate in real time and transmit data on the instantaneous state of the controlled objects can detect faults in a timely manner and thus provide additional time for their correction by railway employees. Turnouts are among the most important and safety-critical components and therefore require the development and implementation of such a diagnostic system. Methodology. Monitoring and controlling railway automation objects in real time is possible only with an automated process for diagnosing the state of those objects. For this we need to know the diagnostic features of a controlled object, which determine its state at any given time. The most rational approach to remote diagnostics is analysis of the shape and spectrum of the current flowing in the power circuits of the railway automatics. Turnouts include electric motors, which are powered by these circuits, and the shape of the current curve depends both on the condition of the electric motor and on the maintenance condition of the turnout. Findings. For the research and analysis of the AC electric point motor, its mathematical model was developed. The parameters and interdependencies between the main factors affecting the operation of the asynchronous machine were calculated. The results of the model operation were obtained in the form of time dependences of the current waveform on the load on the motor shaft. Originality. A simulation model of the AC electric point motor that satisfies the conditions of adequacy was built. Practical value. On the basis of the constructed model, the AC motor can be studied in various modes of operation, and the current curve can be recorded and analysed as a response to various changes
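
    As a simple point of reference for the kind of dependence the paper models, the Python sketch below evaluates the steady-state per-phase equivalent circuit of an asynchronous motor and prints the stator current magnitude at several slips (i.e. loads). The circuit parameters are illustrative assumptions; the paper's model is a dynamic time-domain simulation of the current waveform, which this sketch does not reproduce.

        # Steady-state per-phase equivalent circuit of an induction (asynchronous) motor.
        import numpy as np

        V = 230.0                                   # per-phase supply voltage [V]
        R1, X1 = 2.0, 3.0                           # stator resistance / leakage reactance [ohm]
        R2, X2 = 2.5, 3.0                           # rotor quantities referred to the stator [ohm]
        Xm = 60.0                                   # magnetizing reactance [ohm]

        def stator_current(slip):
            z_rotor = R2 / slip + 1j * X2
            z_mag = 1j * Xm
            z_total = R1 + 1j * X1 + (z_mag * z_rotor) / (z_mag + z_rotor)
            return V / z_total

        for s in (0.02, 0.05, 0.1, 0.3, 1.0):       # from light load to locked rotor
            i1 = stator_current(s)
            print(f"slip = {s:4.2f}  |I1| = {abs(i1):5.1f} A")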

  18. A multi-scale and model approach to estimate future tidal high water statistics in the southern German Bight

    Science.gov (United States)

    Hein, H.; Mai, S.; Mayer, B.; Pohlmann, T.; Barjenbruch, U.

    2012-04-01

    The interactions of tides, external surges, storm surges and waves, with an additional role played by the coastal bathymetry, define the probability of extreme water levels at the coast. Both probabilistic analysis and process-based numerical models allow the estimation of future states. From the physical point of view, deterministic processes and stochastic residuals together form the fundamentals of high water statistics. This study uses a so-called model chain to reproduce historic statistics of tidal high water levels (Thw) as well as to predict future high water level statistics. The results of the numerical models are post-processed by a stochastic analysis. Recent studies show that nonstationary parametric approaches are required for future extrapolation of extreme Thw. With the presented methods a better prediction of time-dependent parameter sets seems possible. The investigation region of this study is the southern German Bight. The model chain represents a downscaling process which starts with an emissions scenario. Regional atmospheric and ocean models refine the results of global climate models; the concept of downscaling was chosen to resolve the coastal topography sufficiently. The North Sea and its estuaries are modeled with the three-dimensional HAMburg Shelf Ocean Model. The simulation period spans 150 years (1950-2100). Results of four different hindcast runs and of one future prediction run are validated. Based on multi-scale analysis and the theory of entropy, we analyze whether any significant periodicities are represented numerically. Results show that even hindcasting the Thw climate of the last 60 years with a model chain is a challenging task. For example, an additional modeling activity must be the inclusion of tides in regional climate ocean models. It is found that the statistics of climate variables derived from model results differ from the statistics derived from measurements. E.g. there are considerable shifts in

  19. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as in the related parameter estimation, but few studies give consideration to the issue of optimal sampling-time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of the parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulties of selecting good initial values and becoming stuck in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
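
    The idea of choosing sampling times to minimize parameter variance can be illustrated with a much simpler search than the quantum-inspired evolutionary algorithm of the paper. The Python sketch below brute-forces a D-optimal pair of sampling times for a toy exponential-decay model; the model, noise level and candidate grid are assumptions made purely for illustration.

        # D-optimal selection of two sampling times for y(t) = A * exp(-k * t).
        import itertools
        import numpy as np

        A, k, sigma = 2.0, 0.5, 0.1                     # "true" parameters and measurement noise
        candidates = np.linspace(0.2, 10.0, 50)         # admissible sampling times

        def fisher_info(times):
            J = np.column_stack([np.exp(-k * times),               # dy/dA
                                 -A * times * np.exp(-k * times)]) # dy/dk
            return J.T @ J / sigma**2

        best = max(itertools.combinations(candidates, 2),
                   key=lambda ts: np.linalg.det(fisher_info(np.array(ts))))
        print("D-optimal pair of sampling times:", np.round(best, 2))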

  20. Quasi-integrable non-linear Schrödinger models, infinite towers of exactly conserved charges and bright solitons

    Science.gov (United States)

    Blas, H.; do Bonfim, A. C. R.; Vilela, A. M.

    2017-05-01

    Deformations of the focusing non-linear Schrödinger model (NLS) are considered in the context of the quasi-integrability concept. We strengthen the results of JHEP 09 (2012) 103 for bright soliton collisions. We addressed the focusing NLS as a complement to the one in JHEP 03 (2016) 005, in which the modified defocusing NLS models with dark solitons were shown to exhibit an infinite tower of exactly conserved charges. We show, by means of analytical and numerical methods, that for certain two-bright-soliton solutions, in which the modulus and phase of the complex modified NLS field exhibit even parities under a space-reflection symmetry, the first four and the sequence of even order charges are exactly conserved during the scattering process of the solitons. We perform extensive numerical simulations and consider the bright solitons with deformed potential V = [2η/(2+ε)] (|ψ|^2)^(2+ε), with ε ∈ ℝ and η < 0. However, for two-soliton field components without definite parity we also show numerically the vanishing of the first non-trivial anomaly and the exact conservation of the relevant charge. So, the parity symmetry seems to be a sufficient but not a necessary condition for the existence of the infinite tower of conserved charges. The model supports elastic scattering of solitons for a wide range of values of the amplitudes and velocities and of the set {η, ε}. Since the NLS equation is ubiquitous, our results may find potential applications in several areas of non-linear science.
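
    A hedged numerical illustration of the kind of check the paper performs: the Python sketch below integrates the undeformed focusing NLS (the ε = 0 member of the family) with a split-step Fourier scheme and monitors the lowest conserved charge, the norm N = ∫ |ψ|^2 dx, during propagation of a single bright soliton. Grid, time step and initial condition are illustrative; the anomaly calculations for the deformed potential are not reproduced here.

        # Split-step Fourier integration of i psi_t + (1/2) psi_xx + |psi|^2 psi = 0.
        import numpy as np

        L_box, n, dt, n_steps = 40.0, 512, 1e-3, 2000
        x = np.linspace(-L_box / 2, L_box / 2, n, endpoint=False)
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L_box / n)
        psi = (1.0 / np.cosh(x)).astype(complex)     # one-soliton initial condition

        def norm(psi):
            return np.sum(np.abs(psi) ** 2) * (L_box / n)

        N0 = norm(psi)
        for _ in range(n_steps):                     # Strang splitting
            psi *= np.exp(0.5j * dt * np.abs(psi) ** 2)                        # half nonlinear step
            psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))   # full linear step
            psi *= np.exp(0.5j * dt * np.abs(psi) ** 2)                        # half nonlinear step

        print(f"relative drift of the conserved norm: {abs(norm(psi) - N0) / N0:.2e}")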

  1. Exact 2-point function in Hermitian matrix model

    International Nuclear Information System (INIS)

    Morozov, A.; Shakirov, Sh.

    2009-01-01

    J. Harer and D. Zagier have found a strikingly simple generating function [1,2] for exact (all-genera) 1-point correlators in the Gaussian Hermitian matrix model. In this paper we generalize their result to 2-point correlators, using Toda integrability of the model. Remarkably, this exact 2-point correlation function turns out to be an elementary function, the arctangent. The relation to the standard 2-point resolvents is pointed out. Some attempts at generalization to 3-point and higher functions are described.

  2. Increasing the brightness of light sources

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Ling

    2006-11-16

    In this work the principle of light recycling is applied to artificial light sources in order to achieve brightness enhancement. Firstly, the feasibility of increasing the brightness of light sources via light recycling is examined theoretically, based on the fundamental laws of thermodynamics including Kirchhoff's law on radiation, Planck's law, Lambert-Beer's law, etendue conservation and the brightness theorem. From an experimental viewpoint, the radiation properties of three different kinds of light sources (short-arc lamps, incandescent lamps and LEDs), characterized by their light-generating mechanisms, are investigated. These three types of sources are used in light recycling experiments for the purpose of (1) validating the intrinsic light recycling effect in light sources, e.g. the intrinsic light recycling effect in incandescent lamps stemming from the coiled filament structure; (2) acquiring the required parameters for establishing physical models, e.g. the emissivity/absorptivity of the short-arc lamps, and the intrinsic reflectivity and external quantum efficiency of LEDs; (3) laying the foundations for designing optics aimed at brightness enhancement according to the characteristics of the sources and applications. Based on the fundamental laws and experiments, two physical models for simulating the radiance distribution of light sources are established, one for thermal filament lamps, the other for luminescent sources (LEDs). As validation of the theoretical and experimental investigation of the light recycling effect, an optical device, the Carambola, is designed to achieve deterministic and multiple light recycling. The Carambola has the function of a concentrator. In order to achieve the maximum possible brightness enhancement with the Carambola, several combinations of sources and Carambolas are modelled in ray-tracing simulations. Sources with different light-emitting mechanisms and different radiation properties

  3. Modeling fixation locations using spatial point processes.

    Science.gov (United States)

    Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix

    2013-10-01

    Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they fixated. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.

  4. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.; Dalguer, L. A.; Mai, Paul Martin

    2013-01-01

    statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point

  5. A chromaticity-brightness model for color images denoising in a Meyer’s “u + v” framework

    KAUST Repository

    Ferreira, Rita; Fonseca, Irene; Mascarenhas, M. Luísa

    2017-01-01

    A variational model for imaging segmentation and denoising color images is proposed. The model combines Meyer’s “u+v” decomposition with a chromaticity-brightness framework and is expressed by a minimization of energy integral functionals depending on a small parameter ε>0. The asymptotic behavior as ε→0+ is characterized, and convergence of infima, almost minimizers, and energies are established. In particular, an integral representation of the lower semicontinuous envelope, with respect to the L1-norm, of functionals with linear growth and defined for maps taking values on a certain compact manifold is provided. This study escapes the realm of previous results since the underlying manifold has boundary, and the integrand and its recession function fail to satisfy hypotheses commonly assumed in the literature. The main tools are Γ-convergence and relaxation techniques.
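
    The decomposition that gives the model its name is simple to state in code: the Python sketch below splits a toy RGB image into a scalar brightness (the Euclidean norm of the colour vector) and a unit-norm chromaticity map, and checks that the two factors reconstruct the image. The random image and the norm-based brightness are illustrative conventions, not the precise functional setting of the paper.

        # Chromaticity-brightness decomposition of an RGB image.
        import numpy as np

        rng = np.random.default_rng(5)
        image = rng.uniform(0.0, 1.0, size=(4, 4, 3))            # toy RGB image in [0, 1]

        brightness = np.linalg.norm(image, axis=-1)              # scalar part
        chromaticity = image / np.maximum(brightness[..., None], 1e-12)   # unit-norm colour direction

        reconstructed = brightness[..., None] * chromaticity
        print("max reconstruction error:", np.abs(reconstructed - image).max())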

  6. A chromaticity-brightness model for color images denoising in a Meyer’s “u + v” framework

    KAUST Repository

    Ferreira, Rita

    2017-09-11

    A variational model for imaging segmentation and denoising color images is proposed. The model combines Meyer’s “u+v” decomposition with a chromaticity-brightness framework and is expressed by a minimization of energy integral functionals depending on a small parameter ε>0. The asymptotic behavior as ε→0+ is characterized, and convergence of infima, almost minimizers, and energies are established. In particular, an integral representation of the lower semicontinuous envelope, with respect to the L1-norm, of functionals with linear growth and defined for maps taking values on a certain compact manifold is provided. This study escapes the realm of previous results since the underlying manifold has boundary, and the integrand and its recession function fail to satisfy hypotheses commonly assumed in the literature. The main tools are Γ-convergence and relaxation techniques.

  7. Quantitative Brightness Analysis of Fluorescence Intensity Fluctuations in E. Coli.

    Directory of Open Access Journals (Sweden)

    Kwang-Ho Hur

    Full Text Available The brightness measured by fluorescence fluctuation spectroscopy specifies the average stoichiometry of a labeled protein in a sample. Here we extended brightness analysis, which has been mainly applied in eukaryotic cells, to prokaryotic cells with E. coli serving as a model system. The small size of the E. coli cell introduces unique challenges for applying brightness analysis that are addressed in this work. Photobleaching leads to a depletion of fluorophores and a reduction of the brightness of protein complexes. In addition, the E. coli cell and the point spread function of the instrument only partially overlap, which influences intensity fluctuations. To address these challenges we developed MSQ analysis, which is based on the mean Q-value of segmented photon count data, and combined it with the analysis of axial scans through the E. coli cell. The MSQ method recovers brightness, concentration, and diffusion time of soluble proteins in E. coli. We applied MSQ to measure the brightness of EGFP in E. coli and compared it to solution measurements. We further used MSQ analysis to determine the oligomeric state of nuclear transport factor 2 labeled with EGFP expressed in E. coli cells. The results obtained demonstrate the feasibility of quantifying the stoichiometry of proteins by brightness analysis in a prokaryotic cell.

  8. The brightness of colour.

    Directory of Open Access Journals (Sweden)

    David Corney

    Full Text Available The perception of brightness depends on spatial context: the same stimulus can appear light or dark depending on what surrounds it. A less well-known but equally important contextual phenomenon is that the colour of a stimulus can also alter its brightness. Specifically, stimuli that are more saturated (i.e. purer in colour) appear brighter than stimuli that are less saturated at the same luminance. Similarly, stimuli that are red or blue appear brighter than equiluminant yellow and green stimuli. This non-linear relationship between stimulus intensity and brightness, called the Helmholtz-Kohlrausch (HK) effect, was first described in the nineteenth century but has never been explained. Here, we take advantage of the relative simplicity of this 'illusion' to explain it and contextual effects more generally, by using a simple Bayesian ideal observer model of the human visual ecology. We also use fMRI brain scans to identify the neural correlates of brightness without changing the spatial context of the stimulus, which has complicated the interpretation of related fMRI studies. Rather than modelling human vision directly, we use a Bayesian ideal observer to model human visual ecology. We show that the HK effect is a result of encoding the non-linear statistical relationship between retinal images and natural scenes that would have been experienced by the human visual system in the past. We further show that the complexity of this relationship is due to the response functions of the cone photoreceptors, which themselves are thought to represent an efficient solution to encoding the statistics of images. Finally, we show that the locus of the response to the relationship between images and scenes lies in the primary visual cortex (V1), if not earlier in the visual system, since the brightness of colours (as opposed to their luminance) accords with activity in V1 as measured with fMRI. The data suggest that perceptions of brightness represent a robust

  9. The Bright Universe Cosmology

    International Nuclear Information System (INIS)

    Surdin, M.

    1980-01-01

    It is shown that viewed from the 'outside', our universe is a black hole. Hence the 'inside' cosmology considered is termed the Bright Universe Cosmology. The model proposed avoids the singularities of cosmologies of the Big Bang variety; it gives a good account of the redshifts, the cosmic background radiation and the number counts; it also gives a satisfactory explanation of the 'large numbers coincidence' and of the variation in time of fundamental constants. (Auth.)

  10. Photoionization modeling of the LWS fine-structure lines in IR bright galaxies

    Science.gov (United States)

    Satyapal, S.; Luhman, M. L.; Fischer, J.; Greenhouse, M. A.; Wolfire, M. G.

    1997-01-01

    The long wavelength spectrometer (LWS) fine structure line spectra from infrared luminous galaxies were modeled using stellar evolutionary synthesis models combined with photoionization and photodissociation region models. The calculations were carried out by using the computational code CLOUDY. Starburst and active galactic nuclei models are presented. The effects of dust in the ionized region are examined.

  11. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.

    2013-12-24

    Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.

  12. Sand Point, Alaska Coastal Digital Elevation Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Geophysical Data Center (NGDC) is building high-resolution digital elevation models (DEMs) for select U.S. coastal regions. These integrated...

  13. Electron beam brightness with field immersed emission

    International Nuclear Information System (INIS)

    Boyd, J.K.; Neil, V.K.

    1985-01-01

    The beam quality or brightness of an electron beam produced with field immersed emission is studied with two models. First, an envelope formulation is used to determine the scaling of brightness with current, magnetic field and cathode radius, and examine the equilibrium beam radius. Second, the DPC computer code is used to calculate the brightness of two electron beam sources

  14. Sand Point, Alaska Tsunami Forecast Grids for MOST Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Sand Point, Alaska Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model....

  15. Toke Point, Washington Tsunami Forecast Grids for MOST Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Toke Point, Washington Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model....

  16. Self-exciting point process in modeling earthquake occurrences

    International Nuclear Information System (INIS)

    Pratiwi, H.; Slamet, I.; Respatiwulan; Saputro, D. R. S.

    2017-01-01

    In this paper, we present a procedure for modeling earthquakes based on a spatial-temporal point process. The magnitude distribution is expressed as a truncated exponential, and the event frequency is modeled with a spatial-temporal point process that is characterized uniquely by its associated conditional intensity process. Earthquakes can be regarded as point patterns with a temporal clustering feature, so we use a self-exciting point process to model the conditional intensity function. The choice of main shocks is conducted via the window algorithm of Gardner and Knopoff, and the model can be fitted by the maximum likelihood method for three random variables. (paper)
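
    To illustrate the self-exciting mechanism behind the conditional intensity, the Python sketch below simulates a purely temporal Hawkes process with an exponential kernel using Ogata's thinning algorithm. The parameters are arbitrary, and the magnitude distribution and spatial component of the earthquake model are deliberately omitted.

        # Temporal Hawkes process simulated by thinning.
        import numpy as np

        rng = np.random.default_rng(3)
        mu, alpha, beta, T = 0.5, 0.8, 1.5, 200.0      # background rate, branching ratio, decay, horizon

        def intensity(t, events):
            past = events[events < t]
            return mu + alpha * beta * np.sum(np.exp(-beta * (t - past)))

        events, t = np.array([]), 0.0
        while t < T:
            lam_bar = intensity(t, events) + alpha * beta      # upper bound for the next interval
            t += rng.exponential(1.0 / lam_bar)                # candidate inter-event time
            if t < T and rng.uniform() <= intensity(t, events) / lam_bar:
                events = np.append(events, t)                  # accept the candidate

        print(f"simulated {events.size} events; expected ~{mu * T / (1 - alpha):.0f}")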

  17. Comparison of data-driven and model-driven approaches to brightness temperature diurnal cycle interpolation

    CSIR Research Space (South Africa)

    Van den Bergh, F

    2006-01-01

    Full Text Available This paper presents two new schemes for interpolating missing samples in satellite diurnal temperature cycles (DTCs). The first scheme, referred to here as the cosine model, is an improvement of the model proposed in [2] and combines a cosine...

  18. Dew Point modelling using GEP based multi objective optimization

    OpenAIRE

    Shroff, Siddharth; Dabhi, Vipul

    2013-01-01

    Different techniques are used to model the relationship between temperature, dew point and relative humidity. Gene expression programming (GEP) is capable of modelling complex relationships with great accuracy, while also allowing the extraction of knowledge from the evolved models, in contrast to other learning algorithms. We aim to use gene expression programming for modelling the dew point. Generally, model accuracy is the only objective used by the selection mechanism of GEP. This will evolve...
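
    For comparison with any data-driven model, the conventional closed-form benchmark is the Magnus approximation. The Python sketch below implements it with commonly quoted coefficients; it is a reference formula, not the gene-expression-programming model discussed in the paper.

        # Magnus approximation for dew point from temperature and relative humidity.
        import math

        def dew_point_magnus(temp_c: float, rh_percent: float) -> float:
            """Dew point [degC] from air temperature [degC] and relative humidity [%]."""
            a, b = 17.27, 237.7                     # Magnus coefficients (valid roughly 0-60 degC)
            gamma = a * temp_c / (b + temp_c) + math.log(rh_percent / 100.0)
            return b * gamma / (a - gamma)

        print(f"T = 25 degC, RH = 60% -> dew point = {dew_point_magnus(25.0, 60.0):.1f} degC")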

  19. Point Reyes, California Tsunami Forecast Grids for MOST Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Point Reyes, California Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)...

  20. A hierarchical model exhibiting the Kosterlitz-Thouless fixed point

    International Nuclear Information System (INIS)

    Marchetti, D.H.U.; Perez, J.F.

    1985-01-01

    A hierarchical model for 2-d Coulomb gases displaying a stable line of fixed points describing the Kosterlitz-Thouless phase transition is constructed. For Coulomb gases corresponding to Z_N models these fixed points are stable for an intermediate temperature interval. (Author) [pt

  1. Inferring Land Surface Model Parameters for the Assimilation of Satellite-Based L-Band Brightness Temperature Observations into a Soil Moisture Analysis System

    Science.gov (United States)

    Reichle, Rolf H.; De Lannoy, Gabrielle J. M.

    2012-01-01

    The Soil Moisture and Ocean Salinity (SMOS) satellite mission provides global measurements of L-band brightness temperatures at horizontal and vertical polarization and a variety of incidence angles that are sensitive to moisture and temperature conditions in the top few centimeters of the soil. These L-band observations can therefore be assimilated into a land surface model to obtain surface and root zone soil moisture estimates. As part of the observation operator, such an assimilation system requires a radiative transfer model (RTM) that converts geophysical fields (including soil moisture and soil temperature) into modeled L-band brightness temperatures. At the global scale, the RTM parameters and the climatological soil moisture conditions are still poorly known. Using look-up tables from the literature to estimate the RTM parameters usually results in modeled L-band brightness temperatures that are strongly biased against the SMOS observations, with biases varying regionally and seasonally. Such biases must be addressed within the land data assimilation system. In this presentation, the estimation of the RTM parameters is discussed for the NASA GEOS-5 land data assimilation system, which is based on the ensemble Kalman filter (EnKF) and the Catchment land surface model. In the GEOS-5 land data assimilation system, soil moisture and brightness temperature biases are addressed in three stages. First, the global soil properties and soil hydraulic parameters that are used in the Catchment model were revised to minimize the bias in the modeled soil moisture, as verified against available in situ soil moisture measurements. Second, key parameters of the "tau-omega" RTM were calibrated prior to data assimilation using an objective function that minimizes the climatological differences between the modeled L-band brightness temperatures and the corresponding SMOS observations. Calibrated parameters include soil roughness parameters, vegetation structure parameters
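
    The forward operator at the core of such a system is the zeroth-order tau-omega radiative transfer model. The Python sketch below evaluates it for one polarisation; the reflectivity, optical depth, single-scattering albedo, temperatures and incidence angle are placeholder values, not the calibrated GEOS-5 parameters discussed in the text.

        # Zeroth-order tau-omega model: soil/vegetation state -> L-band brightness temperature.
        import numpy as np

        def tau_omega_tb(T_soil, T_veg, r_soil, tau, omega, theta_deg):
            """Top-of-vegetation brightness temperature [K] for one polarisation."""
            gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))   # vegetation transmissivity
            tb_soil = (1.0 - r_soil) * T_soil * gamma              # attenuated soil emission
            tb_veg = (1.0 - omega) * (1.0 - gamma) * T_veg * (1.0 + r_soil * gamma)
            return tb_soil + tb_veg

        tb = tau_omega_tb(T_soil=295.0, T_veg=293.0, r_soil=0.25, tau=0.12, omega=0.05, theta_deg=40.0)
        print(f"modelled brightness temperature: {tb:.1f} K")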

  2. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

    Full Text Available This article describes a method for constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image, which requires finding corresponding points between the image and the point cloud. Before searching for corresponding points, a quasi-image of the point cloud is generated. The SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is construction of the vector object model. Vectorization is performed by an operator in an interactive mode using a single image, and the spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available: edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
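
    A hedged sketch of the correspondence step, using OpenCV's SIFT implementation in Python (opencv-python 4.4 or later). The file names are hypothetical placeholders, and the matching strategy (brute-force matcher plus Lowe ratio test) is a common default rather than necessarily the authors' exact procedure.

        # SIFT matching between a photograph and a rendered quasi-image of a point cloud.
        import cv2

        photo = cv2.imread("facade_photo.png", cv2.IMREAD_GRAYSCALE)            # hypothetical input
        quasi = cv2.imread("point_cloud_quasi_image.png", cv2.IMREAD_GRAYSCALE) # hypothetical input
        assert photo is not None and quasi is not None, "supply your own input images"

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(photo, None)
        kp2, des2 = sift.detectAndCompute(quasi, None)

        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < 0.75 * n.distance]                 # Lowe ratio test
        print(f"{len(good)} tentative image/point-cloud correspondences")
        # The matched pixel positions (kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) would then
        # feed a space-resection step to recover the exterior orientation parameters.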

  3. Identification of Influential Points in a Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Jan Grosz

    2011-03-01

    Full Text Available The article deals with the detection and identification of influential points in the linear regression model. Three methods of detection of outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. This paper briefly describes theoretical aspects of several robust methods as well. Robust statistics is a powerful tool to increase the reliability and accuracy of statistical modelling and data analysis. A simulation model of the simple linear regression is presented.
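
    Two of the classical diagnostics alluded to above, leverage and Cook's distance, are easy to compute directly. The Python sketch below applies them to a synthetic simple-regression data set containing one deliberately influential observation; the data are illustrative, and this is not the simulation model of the article.

        # Leverage (hat-matrix diagonal) and Cook's distance for simple linear regression.
        import numpy as np

        rng = np.random.default_rng(4)
        x = np.append(rng.uniform(0, 10, 20), 25.0)          # last point has extreme x (high leverage)
        y = 2.0 + 0.5 * x + rng.normal(0, 1, x.size)
        y[-1] += 8.0                                         # ... and a large shift in y

        X = np.column_stack([np.ones_like(x), x])            # design matrix with intercept
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        H = X @ np.linalg.inv(X.T @ X) @ X.T                 # hat matrix
        h = np.diag(H)                                       # leverages
        resid = y - X @ beta
        p = X.shape[1]
        s2 = resid @ resid / (x.size - p)
        cooks_d = resid**2 / (p * s2) * h / (1.0 - h) ** 2   # Cook's distance

        worst = np.argmax(cooks_d)
        print(f"most influential point: index {worst}, leverage {h[worst]:.2f}, Cook's D {cooks_d[worst]:.2f}")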

  4. Analysis of SMOS brightness temperature and vegetation optical depth data with coupled land surface and radiative transfer models in Southern Germany

    Directory of Open Access Journals (Sweden)

    F. Schlenz

    2012-10-01

    Full Text Available Soil Moisture and Ocean Salinity (SMOS) L1c brightness temperature and L2 optical depth data are analysed with a coupled land surface model (PROMET) and radiative transfer model (L-MEB). The coupled models are validated with ground and airborne measurements under contrasting soil moisture, vegetation and land surface temperature conditions during the SMOS Validation Campaign in May and June 2010 in the SMOS test site Upper Danube Catchment in southern Germany. The brightness temperature root-mean-squared errors are between 6 K and 9 K. The L-MEB parameterisation is considered appropriate under local conditions even though it might possibly be further optimised. SMOS L1c brightness temperature data are processed and analysed in the Upper Danube Catchment using the coupled models in 2011 and during the SMOS Validation Campaign 2010 together with airborne L-band brightness temperature data. Only low to fair correlations are found for this comparison (R between 0.1–0.41). SMOS L1c brightness temperature data do not show the expected seasonal behaviour and are positively biased. It is concluded that RFI is responsible for a considerable part of the observed problems in the SMOS data products in the Upper Danube Catchment. This is consistent with the observed dry bias in the SMOS L2 soil moisture products, which can also be related to RFI. It is confirmed that the brightness temperature data from the lower SMOS look angles and the horizontal polarisation are less reliable. This information could be used to improve the brightness temperature data filtering before the soil moisture retrieval. SMOS L2 optical depth values have been compared to modelled data and are not considered a reliable source of information about vegetation due to missing seasonal behaviour and a very high mean value. A fairly strong correlation between SMOS L2 soil moisture and optical depth was found (R = 0.65), even though the two variables are considered independent in the

  5. Four point functions in the SL(2,R) WZW model

    Energy Technology Data Exchange (ETDEWEB)

    Minces, Pablo [Instituto de Astronomia y Fisica del Espacio (IAFE), C.C. 67 Suc. 28, 1428 Buenos Aires (Argentina)]. E-mail: minces@iafe.uba.ar; Nunez, Carmen [Instituto de Astronomia y Fisica del Espacio (IAFE), C.C. 67 Suc. 28, 1428 Buenos Aires (Argentina) and Physics Department, University of Buenos Aires, Ciudad Universitaria, Pab. I, 1428 Buenos Aires (Argentina)]. E-mail: carmen@iafe.uba.ar

    2007-04-19

    We consider winding conserving four point functions in the SL(2,R) WZW model for states in arbitrary spectral flow sectors. We compute the leading order contribution to the expansion of the amplitudes in powers of the cross ratio of the four points on the worldsheet, both in the m- and x-basis, with at least one state in the spectral flow image of the highest weight discrete representation. We also perform certain consistency checks on the winding conserving three point functions.

  6. Four point functions in the SL(2,R) WZW model

    International Nuclear Information System (INIS)

    Minces, Pablo; Nunez, Carmen

    2007-01-01

    We consider winding conserving four point functions in the SL(2,R) WZW model for states in arbitrary spectral flow sectors. We compute the leading order contribution to the expansion of the amplitudes in powers of the cross ratio of the four points on the worldsheet, both in the m- and x-basis, with at least one state in the spectral flow image of the highest weight discrete representation. We also perform certain consistency checks on the winding conserving three point functions.

  7. A two-point kinetic model for the PROTEUS reactor

    International Nuclear Information System (INIS)

    Dam, H. van.

    1995-03-01

    A two-point reactor kinetic model for the PROTEUS reactor is developed and the results are described in terms of frequency-dependent reactivity transfer functions for the core and the reflector. It is shown that at higher frequencies space-dependent effects occur which imply failure of the one-point kinetic model. In the modulus of the transfer functions these effects become apparent above a radian frequency of about 100 s^-1, whereas for the phase behaviour the deviation from a point model already starts at a radian frequency of 10 s^-1. (orig.)

  8. A MARKED POINT PROCESS MODEL FOR VEHICLE DETECTION IN AERIAL LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    A. Börcs

    2012-07-01

    Full Text Available In this paper we present an automated method for vehicle detection in LiDAR point clouds of crowded urban areas collected from an aerial platform. We assume that the input cloud is unordered, but it contains additional intensity and return number information which are jointly exploited by the proposed solution. Firstly, the 3-D point set is segmented into ground, vehicle, building roof, vegetation and clutter classes. Then the points with the corresponding class labels and intensity values are projected to the ground plane, where the optimal vehicle configuration is described by a Marked Point Process (MPP) model of 2-D rectangles. Finally, the Multiple Birth and Death algorithm is utilized to find the configuration with the highest confidence.

  9. TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL

    Directory of Open Access Journals (Sweden)

    N. Zhu

    2016-06-01

    Full Text Available The large number of bolts and screws attached to the subway shield ring plates, along with the many metal brackets and pieces of electrical equipment mounted on the tunnel walls, causes the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), which affect the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data were first projected onto a horizontal plane, and a search algorithm was used to extract the edge points of both sides, which were then used to fit the tunnel central axis. Along the axis the point cloud was segmented regionally, and then fitted as a smooth elliptic cylindrical surface by means of iteration. This processing enabled the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the method based on the elliptic cylindrical model can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new approach to the periodic monitoring of the all-around deformation of tunnel sections during routine subway operation and maintenance.

  10. Statistical properties of several models of fractional random point processes

    Science.gov (United States)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  11. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.

  12. Two point function for a simple general relativistic quantum model

    OpenAIRE

    Colosi, Daniele

    2007-01-01

    We study the quantum theory of a simple general relativistic quantum model of two coupled harmonic oscillators and compute the two-point function following a proposal first introduced in the context of loop quantum gravity.

  13. Modeling of Landslides with the Material Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2008-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  14. Modelling of Landslides with the Material-point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  15. Accuracy limit of rigid 3-point water models

    Science.gov (United States)

    Izadi, Saeed; Onufriev, Alexey V.

    2016-08-01

    Classical 3-point rigid water models are most widely used due to their computational efficiency. Recently, we introduced a new approach to constructing classical rigid water models [S. Izadi et al., J. Phys. Chem. Lett. 5, 3863 (2014)], which permits a virtually exhaustive search for globally optimal model parameters in the sub-space that is most relevant to the electrostatic properties of the water molecule in liquid phase. Here we apply the approach to develop a 3-point Optimal Point Charge (OPC3) water model. OPC3 is significantly more accurate than the commonly used water models of same class (TIP3P and SPCE) in reproducing a comprehensive set of liquid bulk properties, over a wide range of temperatures. Beyond bulk properties, we show that OPC3 predicts the intrinsic charge hydration asymmetry (CHA) of water — a characteristic dependence of hydration free energy on the sign of the solute charge — in very close agreement with experiment. Two other recent 3-point rigid water models, TIP3PFB and H2ODC, each developed by its own, completely different optimization method, approach the global accuracy optimum represented by OPC3 in both the parameter space and accuracy of bulk properties. Thus, we argue that an accuracy limit of practical 3-point rigid non-polarizable models has effectively been reached; remaining accuracy issues are discussed.
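
    The electrostatic properties that such optimizations target can be illustrated by computing the gas-phase dipole moment of a rigid 3-point geometry directly from its point charges. The sketch below uses the well-known SPC/E parameters purely as a familiar example; OPC3's own parameters are not quoted in the abstract and are not assumed here.

      # Minimal sketch: dipole moment of a rigid 3-point water model from its point charges.
      # SPC/E parameters are used as a familiar example; OPC3's parameters are not assumed.
      import numpy as np

      q_H, q_O = 0.4238, -0.8476        # elementary charges (SPC/E)
      r_OH     = 1.0                    # O-H distance, Angstrom (SPC/E)
      theta    = np.radians(109.47)     # H-O-H angle (SPC/E)

      O  = np.zeros(3)
      H1 = r_OH * np.array([np.sin(theta / 2),  np.cos(theta / 2), 0.0])
      H2 = r_OH * np.array([-np.sin(theta / 2), np.cos(theta / 2), 0.0])

      positions = np.array([O, H1, H2])
      charges   = np.array([q_O, q_H, q_H])

      mu_eA = charges @ positions                    # dipole vector in e*Angstrom
      mu_D  = np.linalg.norm(mu_eA) * 4.803          # 1 e*Angstrom ~ 4.803 Debye

      print(f"SPC/E model dipole ~ {mu_D:.2f} D")    # ~2.35 D, vs ~1.85 D for the gas-phase molecule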

  16. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data is reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are recommended.
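
    To make the contrast between constant and time-dependent event rates concrete, the sketch below runs a toy two-state (event-free/post-event) Markov cohort model under both assumptions. All numbers, the cohort size, the time horizon and the Weibull-like time dependence, are invented for illustration and are not taken from the reviewed studies.

      # Toy cohort model contrasting a constant annual event rate with a time-dependent one.
      # All numbers (rates, horizon, Weibull-like shape) are illustrative assumptions.
      import numpy as np

      years = 20
      cohort = 1000.0

      def run(event_prob):
          """event_prob(t) -> annual probability of the event in year t (0-based)."""
          event_free, cumulative_events = cohort, 0.0
          for t in range(years):
              p = event_prob(t)
              events = event_free * p
              event_free -= events
              cumulative_events += events
          return cumulative_events

      constant = run(lambda t: 0.03)                          # 3 %/year throughout
      # Time-dependent risk: hazard grows with time (e.g. ageing cohort), Weibull-like shape.
      time_dep = run(lambda t: min(1.0, 0.015 * (t + 1) ** 0.7))

      print(f"cumulative events, constant rate:        {constant:6.1f}")
      print(f"cumulative events, time-dependent rate:  {time_dep:6.1f}")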

  17. Shape Modelling Using Markov Random Field Restoration of Point Correspondences

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Hilger, Klaus Baggesen

    2003-01-01

    A method for building statistical point distribution models is proposed. The novelty in this paper is the adaptation of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized sh...

  18. From Point Cloud to Textured Model, the Zamani Laser Scanning ...

    African Journals Online (AJOL)


    meshed models based on dense points has received mixed reaction from the wide range of potential end users of the final ... data, can be subdivided into the stages of data acquisition, registration, data cleaning, modelling, hole filling ..... provide management tools for site management at local and regional level. The project ...

  19. FINDING CUBOID-BASED BUILDING MODELS IN POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    W. Nguatem

    2012-07-01

    Full Text Available In this paper, we present an automatic approach for the derivation of 3D building models of level-of-detail 1 (LOD 1) from point clouds obtained from (dense) image matching or, for comparison only, from LIDAR. Our approach makes use of the predominance of vertical structures and orthogonal intersections in architectural scenes. After robustly determining the scene's vertical direction based on the 3D points, we use it as a constraint for a RANSAC-based search for vertical planes in the point cloud. The planes are further analyzed to segment reliable outlines of rectangular surfaces within these planes, which are connected to construct cuboid-based building models. We demonstrate that our approach is robust and effective over a range of real-world input data sets with varying point density, amount of noise, and outliers.
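
    The vertical-plane search described above can be sketched with a plain RANSAC loop in which each candidate plane is built from a point pair plus the known vertical direction, so every hypothesis is vertical by construction. The thresholds, iteration count and synthetic facade points are assumptions, and the later cuboid assembly is not shown.

      # Minimal sketch of a RANSAC search for a vertical plane in a point cloud.
      # The vertical direction is assumed known (here +z); a plane through two points
      # with a normal orthogonal to z is vertical by construction.
      # Thresholds, iteration count and synthetic data are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(2)
      facade = np.column_stack([np.full(500, 4.0) + rng.normal(0, 0.02, 500),   # x ~ 4 (vertical wall)
                                rng.uniform(0, 10, 500),                        # y along the wall
                                rng.uniform(0, 6, 500)])                        # z = height
      clutter = rng.uniform(0, 10, size=(300, 3))
      pts = np.vstack([facade, clutter])

      up = np.array([0.0, 0.0, 1.0])
      best_inliers, best_plane = 0, None

      for _ in range(200):                                   # RANSAC iterations (assumed)
          p, q = pts[rng.choice(len(pts), 2, replace=False)]
          n = np.cross(q - p, up)                            # normal horizontal -> plane vertical
          if np.linalg.norm(n) < 1e-6:
              continue
          n /= np.linalg.norm(n)
          dist = np.abs((pts - p) @ n)
          inliers = int(np.sum(dist < 0.05))                 # 5 cm inlier threshold (assumed)
          if inliers > best_inliers:
              best_inliers, best_plane = inliers, (n, p)

      print("best vertical plane: normal =", np.round(best_plane[0], 3), "inliers =", best_inliers)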

  20. Fixed Points in Discrete Models for Regulatory Genetic Networks

    Directory of Open Access Journals (Sweden)

    Orozco Edusmildo

    2007-01-01

    Full Text Available It is desirable to have efficient mathematical methods to extract information about regulatory interactions between genes from repeated measurements of gene transcript concentrations. One piece of information is of interest when the dynamics reaches a steady state. In this paper we develop tools that enable the detection of steady states that are modeled by fixed points in discrete finite dynamical systems. We discuss two algebraic models, a univariate model and a multivariate model. We show that these two models are equivalent and that one can be converted to the other by means of a discrete Fourier transform. We give a new, more general definition of a linear finite dynamical system and we give a necessary and sufficient condition for such a system to be a fixed point system, that is, all cycles are of length one. We show how this result for generalized linear systems can be used to determine when certain nonlinear systems (monomial dynamical systems over finite fields) are fixed point systems. We also show how it is possible to determine in polynomial time when an ordinary linear system (defined over a finite field) is a fixed point system. We conclude with a necessary condition for a univariate finite dynamical system to be a fixed point system.
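
    The defining property used above (a system is a fixed point system when all of its cycles have length one) can be checked by brute force on small examples. The sketch below enumerates every state of a small monomial dynamical system over GF(2); the update rule is an arbitrary example, not one taken from the paper.

      # Brute-force check of the "fixed point system" property (all limit cycles have length 1)
      # for a small finite dynamical system over GF(2). The example update rule is an
      # arbitrary monomial (AND-product) system chosen for illustration.
      from itertools import product

      def step(state):
          """One synchronous update of a 3-node monomial system over GF(2) (assumed example)."""
          x1, x2, x3 = state
          return (x2 * x3, x1, x1 * x3)          # each coordinate is a monomial in the state

      def is_fixed_point_system(step, n):
          for start in product((0, 1), repeat=n):
              # Iterate until the trajectory enters a cycle, then measure the cycle length.
              seen = {}
              state, t = start, 0
              while state not in seen:
                  seen[state] = t
                  state, t = step(state), t + 1
              if t - seen[state] != 1:           # cycle longer than a fixed point
                  return False
          return True

      print("fixed point system:", is_fixed_point_system(step, 3))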

  1. New analytically solvable models of relativistic point interactions

    International Nuclear Information System (INIS)

    Gesztesy, F.; Seba, P.

    1987-01-01

    Two new analytically solvable models of relativistic point interactions in one dimension (being natural extensions of the nonrelativistic δ- resp. δ'-interaction) are considered. Their spectral properties in the case of finitely many point interactions as well as in the periodic case are fully analyzed. Moreover the spectrum is explicitly determined in the case of independent, identically distributed random coupling constants, and the analog of the Saxon and Hutner conjecture concerning gaps in the energy spectrum of such systems is derived.

  2. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    Science.gov (United States)

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

    Major rivers in developing and emerging countries suffer increasingly from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point- and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results for the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow flowing surface water network as well as nitrogen emission to the air from the warm oxygen deficient waters are certainly partly responsible, but also wetlands along the river banks could play an important role as nutrient sinks.

  3. Modeling the Distributions of Brightness Temperatures of a Cropland Study Area Using a Model that Combines Fast Radiosity and Energy Budget Methods

    Directory of Open Access Journals (Sweden)

    Zunjian Bian

    2018-05-01

    Full Text Available Land surface temperatures (LSTs) obtained from remote sensing data are crucial in monitoring the conditions of crops and urban heat islands. However, since retrieved LSTs represent only the average temperature states of pixels, the distributions of temperatures within individual pixels remain unknown. Such data cannot satisfy the requirements of applications such as precision agriculture. Therefore, in this paper, we propose a model that combines a fast radiosity model, the Radiosity Applicable to Porous IndiviDual Objects (RAPID) model, and energy budget methods to dynamically simulate brightness temperatures (BTs) over complex surfaces. This model represents a model-based tool that can be used to estimate temperature distributions using fine-scale visible as well as near-infrared (VNIR) data and temporal variations in meteorological conditions. The proposed model is tested over a study area in an artificial oasis in Northwestern China. The simulated BTs agree well with those measured with the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). The results reflect root mean squared errors (RMSEs) less than 1.6 °C and coefficients of determination (R²) greater than 0.7. In addition, compared to the leaf area index (LAI), this model displays high sensitivity to wind speed during validation. Although simplifications may be adopted for use in specific simulations, this proposed model can be used to support in situ measurements and to provide reference data over heterogeneous vegetation surfaces.

  4. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    OpenAIRE

    J. Tang; Y. Wang; Y. Zhao; Y. Zhao; W. Hao; X. Ning; K. Lv; Z. Shi; M. Zhao

    2017-01-01

    Leaves falling gently or fluttering are common phenomenon in nature scenes. The authenticity of leaves falling plays an important part in the dynamic modeling of natural scenes. The leaves falling model has a widely applications in the field of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point cloud in this paper. According to the shape, the weight of leaves and the wind speed, three basic trajectories of leaves falling are defined, which ar...

  5. A point particle model of lightly bound skyrmions

    Directory of Open Access Journals (Sweden)

    Mike Gillard

    2017-04-01

    Full Text Available A simple model of the dynamics of lightly bound skyrmions is developed in which skyrmions are replaced by point particles, each carrying an internal orientation. The model accounts well for the static energy minimizers of baryon number 1≤B≤8 obtained by numerical simulation of the full field theory. For 9≤B≤23, a large number of static solutions of the point particle model are found, all closely resembling size B subsets of a face centred cubic lattice, with the particle orientations dictated by a simple colouring rule. Rigid body quantization of these solutions is performed, and the spin and isospin of the corresponding ground states extracted. As part of the quantization scheme, an algorithm to compute the symmetry group of an oriented point cloud, and to determine its corresponding Finkelstein–Rubinstein constraints, is devised.

  6. Predicting acid dew point with a semi-empirical model

    International Nuclear Information System (INIS)

    Xiang, Baixiang; Tang, Bin; Wu, Yuxin; Yang, Hairui; Zhang, Man; Lu, Junfu

    2016-01-01

    Highlights: • The previous semi-empirical models are systematically studied. • An improved thermodynamic correlation is derived. • A semi-empirical prediction model is proposed. • The proposed semi-empirical model is validated. - Abstract: Decreasing the temperature of the exhaust flue gas in boilers is one of the most effective ways to further improve the thermal efficiency and electrostatic precipitator efficiency and to decrease the water consumption of the desulfurization tower; however, when this temperature falls below the acid dew point, fouling and corrosion occur on the heating surfaces in the second pass of boilers. Accurate prediction of the acid dew point is therefore essential. By investigating the previous models for acid dew point prediction, an improved thermodynamic correlation between the acid dew point and its influencing factors is derived first. A semi-empirical prediction model is then proposed, which is validated against both field test and experimental data and compared with the previous models.

  7. Burkina Faso - BRIGHT II

    Data.gov (United States)

    Millennium Challenge Corporation — Millennium Challenge Corporation hired Mathematica Policy Research to conduct an independent evaluation of the BRIGHT II program. The three main research questions...

  8. An Improved Nonlinear Five-Point Model for Photovoltaic Modules

    Directory of Open Access Journals (Sweden)

    Sakaros Bogning Dongue

    2013-01-01

    Full Text Available This paper presents an improved nonlinear five-point model capable of analytically describing the electrical behavior of a photovoltaic module for any generic operating condition of temperature and solar irradiance. The models used to replicate the electrical behavior of operating PV modules are usually based on simplified assumptions which provide a convenient mathematical model that can be used in conventional simulation tools. Unfortunately, these assumptions cause some inaccuracies, and hence unrealistic economic returns are predicted. As an alternative, we exploited the advantages of a nonlinear analytical five-point model to take into account the non-ideal diode effects and the nonlinear effects, generally ignored, on which PV module operation depends. To verify the capability of our method to fit PV panel characteristics, the procedure was tested on three different panels. Results were compared with the data issued by manufacturers and with the results obtained using the five-parameter model proposed by other authors.
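
    A widely used baseline related to this kind of model is the five-parameter single-diode equation I = Iph - I0[exp((V + I·Rs)/(n·Vt)) - 1] - (V + I·Rs)/Rsh. The sketch below solves it for the current at a given voltage by Newton iteration; the parameter values are illustrative only, and this is the generic single-diode formulation, not the specific improved five-point model proposed in the paper.

      # Generic five-parameter single-diode PV model solved for current at a given voltage.
      # Parameter values are illustrative only (not taken from the paper or any datasheet).
      import math

      I_ph, I_0 = 8.2, 1.0e-9          # photocurrent and diode saturation current (A)
      R_s, R_sh = 0.25, 300.0          # series and shunt resistance (ohm)
      n, N_s, T = 1.3, 60, 298.15      # ideality factor, cells in series, temperature (K)
      V_t = N_s * n * 1.380649e-23 * T / 1.602176634e-19   # thermal voltage of the whole module

      def current(V, tol=1e-10, max_iter=100):
          """Solve f(I) = I_ph - I_0*(exp((V+I*R_s)/V_t)-1) - (V+I*R_s)/R_sh - I = 0 by Newton."""
          I = I_ph                                         # reasonable starting guess
          for _ in range(max_iter):
              e = math.exp((V + I * R_s) / V_t)
              f = I_ph - I_0 * (e - 1.0) - (V + I * R_s) / R_sh - I
              df = -I_0 * e * R_s / V_t - R_s / R_sh - 1.0
              step = f / df
              I -= step
              if abs(step) < tol:
                  break
          return I

      for V in (0.0, 10.0, 20.0, 30.0):
          print(f"V = {V:5.1f} V   I = {current(V):6.3f} A   P = {V * current(V):7.2f} W")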

  9. Dim point target detection against bright background

    Science.gov (United States)

    Zhang, Yao; Zhang, Qiheng; Xu, Zhiyong; Xu, Junping

    2010-05-01

    Detecting a target within a large-field cluttered background from a long distance is a challenging problem because of several difficulties: low contrast between target and background, small target occupancy, illumination non-uniformity caused by lens vignetting, and system noise. The existing approaches to dim target detection can be roughly divided into two categories: detection before tracking (DBT) and tracking before detection (TBD). The DBT-based scheme has been widely used in practical applications due to its simplicity, but it often requires a relatively high signal-to-noise ratio (SNR). In contrast, TBD-based methods can provide impressive detection results even at very low SNR; unfortunately, the large memory requirement and high computational load prevent these methods from being used in real-time tasks. In this paper, we propose a new method for dim target detection that combines the computational efficiency of the DBT-based scheme with the detection capability of the TBD-based approach. Our method first predicts the local background, and then employs energy accumulation and median filtering to remove background clutter. The dim target is finally located by double window filtering together with an improved high-order correlation which speeds up the convergence. The proposed method is implemented on a hardware platform and performs well in outdoor experiments.
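
    The background-prediction and clutter-removal stages described above can be approximated very coarsely with a local background estimate (median filter), subtraction and thresholding. The window size, the SNR of the synthetic frame and the threshold rule below are assumptions, and the improved high-order correlation step is not reproduced.

      # Very coarse sketch of dim point-target detection: estimate the local background with a
      # median filter, subtract it, and threshold the residual. Window size, synthetic scene and
      # threshold rule are assumptions; the paper's high-order correlation stage is not shown.
      import numpy as np
      from scipy.ndimage import median_filter

      rng = np.random.default_rng(3)
      h, w = 128, 128
      yy, xx = np.mgrid[0:h, 0:w]
      background = 200.0 * np.exp(-((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / (2 * 60.0 ** 2))  # vignetting-like
      frame = background + rng.normal(0, 2.0, (h, w))
      frame[40, 90] += 15.0                                   # dim point target

      residual = frame - median_filter(frame, size=9)         # local background removal
      thresh = residual.mean() + 5.0 * residual.std()         # simple k-sigma rule (assumed)
      detections = np.argwhere(residual > thresh)

      print("detections (row, col):", detections.tolist())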

  10. Multi-Valued Modal Fixed Point Logics for Model Checking

    Science.gov (United States)

    Nishizawa, Koki

    In this paper, I will show how multi-valued logics are used for model checking. Model checking is an automatic technique to analyze correctness of hardware and software systems. A model checker is based on a temporal logic or a modal fixed point logic. That is to say, a system to be checked is formalized as a Kripke model, a property to be satisfied by the system is formalized as a temporal formula or a modal formula, and the model checker checks that the Kripke model satisfies the formula. Although most existing model checkers are based on 2-valued logics, recently new attempts have been made to extend the underlying logics of model checkers to multi-valued logics. I will summarize these new results.
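
    The fixed-point flavour of such logics is easiest to see in the two-valued case: the CTL property EF p ("a p-state is reachable") is the least fixed point of Z = p ∨ EX Z. The sketch below computes it on a tiny hand-made Kripke model; the multi-valued generalization replaces the Boolean lattice with a richer one and is not shown.

      # Least-fixed-point computation of EF p ("a p-state is reachable") on a small Kripke model.
      # The model and labelling are made up for illustration; multi-valued lattices are not shown.
      states = {0, 1, 2, 3}
      transitions = {0: {1}, 1: {2}, 2: {2}, 3: {3}}      # successor relation
      label_p = {2}                                       # states where atomic proposition p holds

      def pre_exists(target):
          """States with at least one successor inside 'target' (the EX operator)."""
          return {s for s in states if transitions[s] & target}

      def ef(p_states):
          """Least fixed point of Z = p_states | pre_exists(Z), computed by iteration."""
          z = set()
          while True:
              new = p_states | pre_exists(z)
              if new == z:
                  return z
              z = new

      print("EF p holds in states:", sorted(ef(label_p)))   # expected: [0, 1, 2]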

  11. A case study on point process modelling in disease mapping

    DEFF Research Database (Denmark)

    Møller, Jesper; Waagepetersen, Rasmus Plenge; Benes, Viktor

    2005-01-01

    of the risk on the covariates. Instead of using the common areal level approaches we base the analysis on a Bayesian approach for a log Gaussian Cox point process with covariates. Posterior characteristics for a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo methods. A particular problem which is thoroughly discussed is to determine a model for the background population density. The risk map shows a clear dependency with the population intensity models and the basic model which is adopted for the population intensity determines what covariates influence the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics.

  12. Multivariate Product-Shot-noise Cox Point Process Models

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Mateu, Jorge

    We introduce a new multivariate product-shot-noise Cox process which is useful for modeling multi-species spatial point patterns with clustering intra-specific interactions and neutral, negative or positive inter-specific interactions. The auto and cross pair correlation functions of the process can be obtained in closed analytical forms and approximate simulation of the process is straightforward. We use the proposed process to model interactions within and among five tree species in the Barro Colorado Island plot.

  13. The quantum nonlinear Schroedinger model with point-like defect

    International Nuclear Information System (INIS)

    Caudrelier, V; Mintchev, M; Ragoucy, E

    2004-01-01

    We establish a family of point-like impurities which preserve the quantum integrability of the nonlinear Schroedinger model in 1+1 spacetime dimensions. We briefly describe the construction of the exact second quantized solution of this model in terms of an appropriate reflection-transmission algebra. The basic physical properties of the solution, including the spacetime symmetry of the bulk scattering matrix, are also discussed. (letter to the editor)

  14. BrightFocus Foundation

    Science.gov (United States)

    BrightFocus Foundation: Investing in Science to Save Mind and Sight. The foundation provides information and support for people managing mind and sight diseases.

  15. Integrated modeling and analysis methodology for precision pointing applications

    Science.gov (United States)

    Gutierrez, Homero L.

    2002-07-01

    Space-based optical systems that perform tasks such as laser communications, Earth imaging, and astronomical observations require precise line-of-sight (LOS) pointing. A general approach is described for integrated modeling and analysis of these types of systems within the MATLAB/Simulink environment. The approach can be applied during all stages of program development, from early conceptual design studies to hardware implementation phases. The main objective is to predict the dynamic pointing performance subject to anticipated disturbances and noise sources. Secondary objectives include assessing the control stability, levying subsystem requirements, supporting pointing error budgets, and performing trade studies. The integrated model resides in Simulink, and several MATLAB graphical user interfaces (GUIs) allow the user to configure the model, select analysis options, run analyses, and process the results. A convenient parameter naming and storage scheme, as well as model conditioning and reduction tools and run-time enhancements, are incorporated into the framework. This enables the proposed architecture to accommodate models of realistic complexity.

  16. Comprehensive overview of the Point-by-Point model of prompt emission in fission

    Energy Technology Data Exchange (ETDEWEB)

    Tudora, A. [University of Bucharest, Faculty of Physics, Bucharest Magurele (Romania); Hambsch, F.J. [European Commission, Joint Research Centre, Directorate G - Nuclear Safety and Security, Unit G2, Geel (Belgium)

    2017-08-15

    The investigation of prompt emission in fission is very important in understanding the fission process and to improve the quality of evaluated nuclear data required for new applications. In the last decade remarkable efforts were made for both the development of prompt emission models and the experimental investigation of the properties of fission fragments and the prompt neutron and γ-ray emission. The accurate experimental data concerning the prompt neutron multiplicity as a function of fragment mass and total kinetic energy for ²⁵²Cf(SF) and ²³⁵U(n,f) recently measured at JRC-Geel (as well as other various prompt emission data) allow a consistent and very detailed validation of the Point-by-Point (PbP) deterministic model of prompt emission. The PbP model results describe very well a large variety of experimental data starting from the multi-parametric matrices of prompt neutron multiplicity ν(A,TKE) and γ-ray energy Eγ(A,TKE) which validate the model itself, passing through different average prompt emission quantities as a function of A (e.g., ν(A), Eγ(A), ⟨ε⟩(A) etc.), as a function of TKE (e.g., ν(TKE), Eγ(TKE)) up to the prompt neutron distribution P(ν) and the total average prompt neutron spectrum. The PbP model does not use free or adjustable parameters. To calculate the multi-parametric matrices it needs only data included in the reference input parameter library RIPL of IAEA. To provide average prompt emission quantities as a function of A, of TKE and total average quantities the multi-parametric matrices are averaged over reliable experimental fragment distributions. The PbP results are also in agreement with the results of the Monte Carlo prompt emission codes FIFRELIN, CGMF and FREYA. The good description of a large variety of experimental data proves the capability of the PbP model to be used in nuclear data evaluations and its reliability to predict prompt emission data for fissioning

  17. Recent tests of the equilibrium-point hypothesis (lambda model).

    Science.gov (United States)

    Feldman, A G; Ostry, D J; Levin, M F; Gribble, P L; Mitnitski, A B

    1998-07-01

    The lambda model of the equilibrium-point hypothesis (Feldman & Levin, 1995) is an approach to motor control which, like physics, is based on a logical system coordinating empirical data. The model has gone through an interesting period. On one hand, several nontrivial predictions of the model have been successfully verified in recent studies. In addition, the explanatory and predictive capacity of the model has been enhanced by its extension to multimuscle and multijoint systems. On the other hand, claims have recently appeared suggesting that the model should be abandoned. The present paper focuses on these claims and concludes that they are unfounded. Much of the experimental data that have been used to reject the model are actually consistent with it.

  18. The Comparison of Point Data Models for the Output of WRF Hydro Model in the IDV

    Science.gov (United States)

    Ho, Y.; Weber, J.

    2017-12-01

    WRF Hydro netCDF output files contain streamflow, flow depth, longitude, latitude, altitude and stream order values for each forecast point. However, the data are not CF compliant. The total number of forecast points for the US CONUS is approximately 2.7 million and it is a big challenge for any visualization and analysis tool. The IDV point cloud display shows point data as a set of points colored by parameter. This display is very efficient compared to a standard point type display for rendering a large number of points. The one problem we have is that the data I/O can be a bottleneck issue when dealing with a large collection of point input files. In this presentation, we will experiment with different point data models and their APIs to access the same WRF Hydro model output. The results will help us construct a CF compliant netCDF point data format for the community.
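
    One way to approach a CF-compliant point layout is the discrete-sampling-geometry "point" featureType, in which each forecast point is one element of an observation dimension. The sketch below writes such a file with the netCDF4 library; the variable names, units and synthetic values are assumptions for illustration, not the actual WRF Hydro schema.

      # Sketch of writing a CF-style "point" featureType netCDF file with netCDF4.
      # Variable names, units and the synthetic values are assumptions, not the WRF Hydro schema.
      import numpy as np
      from netCDF4 import Dataset

      n_points = 1000                                    # a small stand-in for ~2.7 million points
      rng = np.random.default_rng(4)

      with Dataset("hydro_points_cf.nc", "w") as nc:
          nc.Conventions = "CF-1.8"
          nc.featureType = "point"

          nc.createDimension("obs", n_points)

          lat = nc.createVariable("lat", "f4", ("obs",))
          lon = nc.createVariable("lon", "f4", ("obs",))
          flow = nc.createVariable("streamflow", "f4", ("obs",))

          lat.units, lat.standard_name = "degrees_north", "latitude"
          lon.units, lon.standard_name = "degrees_east", "longitude"
          flow.units = "m3 s-1"
          flow.coordinates = "lat lon"

          lat[:] = rng.uniform(25, 50, n_points)
          lon[:] = rng.uniform(-125, -65, n_points)
          flow[:] = rng.gamma(2.0, 5.0, n_points)

      print("wrote hydro_points_cf.nc with", n_points, "points")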

  19. Modeling molecular boiling points using computed interaction energies.

    Science.gov (United States)

    Peterangelo, Stephen C; Seybold, Paul G

    2017-12-20

    The noncovalent van der Waals interactions between molecules in liquids are typically described in textbooks as occurring between the total molecular dipoles (permanent, induced, or transient) of the molecules. This notion was tested by examining the boiling points of 67 halogenated hydrocarbon liquids using quantum chemically calculated molecular dipole moments, ionization potentials, and polarizabilities obtained from semi-empirical (AM1 and PM3) and ab initio Hartree-Fock [HF 6-31G(d), HF 6-311G(d,p)], and density functional theory [B3LYP/6-311G(d,p)] methods. The calculated interaction energies and an empirical measure of hydrogen bonding were employed to model the boiling points of the halocarbons. It was found that only terms related to London dispersion energies and hydrogen bonding proved significant in the regression analyses, and the performances of the models generally improved at higher levels of quantum chemical computation. An empirical estimate for the molecular polarizabilities was also tested, and the best models for the boiling points were obtained using either this empirical polarizability itself or the polarizabilities calculated at the B3LYP/6-311G(d,p) level, along with the hydrogen-bonding parameter. The results suggest that the cohesive forces are more appropriately described as resulting from highly localized interactions rather than interactions between the global molecular dipoles.

  20. Third generation masses from a two Higgs model fixed point

    International Nuclear Information System (INIS)

    Froggatt, C.D.; Knowles, I.G.; Moorhouse, R.G.

    1990-01-01

    The large mass ratio between the top and bottom quarks may be attributed to a hierarchy in the vacuum expectation values of scalar doublets. We consider an effective renormalisation group fixed point determination of the quartic scalar and third generation Yukawa couplings in such a two doublet model. This predicts a mass m_t = 220 GeV and a mass ratio m_b/m_τ = 2.6. In its simplest form the model also predicts the scalar masses, including a light scalar with a mass of order the b quark mass. Experimental implications are discussed. (orig.)

  1. Stunningly bright optical emission

    Science.gov (United States)

    Heinke, Craig O.

    2017-12-01

    The detection of bright, rapid optical pulsations from pulsar PSR J1023+0038 has provided a surprise for researchers working on neutron stars. This discovery poses more questions than it answers and will spur on future work and instrumentation.

  2. Analysis of relationship between registration performance of point cloud statistical model and generation method of corresponding points

    International Nuclear Information System (INIS)

    Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata

    2010-01-01

    When constructing a statistical point cloud model, we usually need to compute corresponding points, and the resulting statistical model differs depending on the method used to compute them. This article examines the effect that different methods of computing corresponding points have on statistical models of human organs. We validated the performance of the statistical models by registering them to an organ surface in a 3D medical image. We compare two methods for computing corresponding points. The first, Generalized Multi-Dimensional Scaling (GMDS), determines the corresponding points from the shapes of two curved surfaces. The second, the entropy-based particle system, chooses corresponding points by statistically analyzing a number of curved surfaces. Using these methods we constructed the statistical models, and with these models we performed registration against the medical image. For the estimation we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two different methods of computing corresponding points affect the statistical model through the change in the probability density at each point. (author)

  3. Zirconium - ab initio modelling of point defects diffusion

    International Nuclear Information System (INIS)

    Gasca, Petrica

    2010-01-01

    Zirconium, in alloyed form, is the main element of the cladding found in pressurized water reactors. Under irradiation the cladding elongates significantly, a phenomenon attributed to the growth of vacancy dislocation loops in the basal planes of the hexagonal close-packed structure. Understanding the atomic-scale mechanisms behind this process motivated this work. Using ab initio atomistic modeling we studied the structure and mobility of point defects in zirconium. We found four interstitial point defect configurations with formation energies within an interval of 0.11 eV. The study of the migration paths yielded activation energies that were used as input parameters for a kinetic Monte Carlo code developed to calculate the diffusion coefficient of the interstitial point defect. Our results suggest that migration parallel to the basal plane is twice as fast as migration parallel to the c direction, with an activation energy of 0.08 eV independent of the direction. The vacancy diffusion coefficient, estimated with a two-jump model, is also anisotropic, with a faster process in the basal planes than perpendicular to them. The influence of hydrogen on the nucleation of vacancy dislocation loops was also studied, motivated by recent experimental observations of accelerated cladding growth in the presence of this element. [fr]
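
    The diffusion-coefficient extraction mentioned above can be illustrated with a toy kinetic Monte Carlo random walk on an hcp-like lattice with separate in-plane and out-of-plane jump rates. The rates, jump distances and lattice geometry below are placeholders (only roughly Zr-like), not the thesis's ab initio barriers, and D is estimated per direction from the mean-square displacement.

      # Toy kinetic Monte Carlo estimate of an anisotropic diffusion coefficient: a random walk
      # with different jump rates in the basal plane and along c. Rates and jump lengths are
      # placeholders, not the ab initio values from the thesis.
      import numpy as np

      rate_basal, rate_c = 1.0e12, 2.5e11      # jump frequencies (1/s), assumed
      a_basal, a_c = 3.23e-10, 2.57e-10        # jump distances (m), roughly hcp-Zr-like, illustrative

      # Six basal jump directions and two along +/- c, each with its own rate.
      directions = [np.array([np.cos(k * np.pi / 3), np.sin(k * np.pi / 3), 0.0]) * a_basal
                    for k in range(6)]
      directions += [np.array([0.0, 0.0, a_c]), np.array([0.0, 0.0, -a_c])]
      directions = np.array(directions)
      rates = np.array([rate_basal] * 6 + [rate_c] * 2)
      prob, total_rate = rates / rates.sum(), rates.sum()

      rng = np.random.default_rng(5)
      n_walkers, n_steps = 2000, 5000
      disp = np.zeros((n_walkers, 3))
      time = np.zeros(n_walkers)

      for _ in range(n_steps):
          choice = rng.choice(len(directions), size=n_walkers, p=prob)
          disp += directions[choice]
          time += rng.exponential(1.0 / total_rate, n_walkers)    # residence time per KMC step

      t_mean = time.mean()
      D_basal = np.mean(disp[:, 0] ** 2 + disp[:, 1] ** 2) / (4.0 * t_mean)   # 2-D in-plane
      D_c     = np.mean(disp[:, 2] ** 2) / (2.0 * t_mean)                     # 1-D along c

      print(f"D_basal ~ {D_basal:.3e} m^2/s   D_c ~ {D_c:.3e} m^2/s   ratio {D_basal / D_c:.2f}")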

  4. Dissipative N-point-vortex Models in the Plane

    Science.gov (United States)

    Shashikanth, Banavara N.

    2010-02-01

    A method is presented for constructing point vortex models in the plane that dissipate the Hamiltonian function at any prescribed rate and yet conserve the level sets of the invariants of the Hamiltonian model arising from the SE (2) symmetries. The method is purely geometric in that it uses the level sets of the Hamiltonian and the invariants to construct the dissipative field and is based on elementary classical geometry in ℝ3. Extension to higher-dimensional spaces, such as the point vortex phase space, is done using exterior algebra. The method is in fact general enough to apply to any smooth finite-dimensional system with conserved quantities, and, for certain special cases, the dissipative vector field constructed can be associated with an appropriately defined double Nambu-Poisson bracket. The most interesting feature of this method is that it allows for an infinite sequence of such dissipative vector fields to be constructed by repeated application of a symmetric linear operator (matrix) at each point of the intersection of the level sets.

  5. Energy-exchange collisions of dark-bright-bright vector solitons.

    Science.gov (United States)

    Radhakrishnan, R; Manikandan, N; Aravinthan, K

    2015-12-01

    We find a dark component guiding the practically interesting bright-bright vector one-soliton to two different parametric domains giving rise to different physical situations by constructing a more general form of three-component dark-bright-bright mixed vector one-soliton solution of the generalized Manakov model with nine free real parameters. Moreover our main investigation of the collision dynamics of such mixed vector solitons by constructing the multisoliton solution of the generalized Manakov model with the help of Hirota technique reveals that the dark-bright-bright vector two-soliton supports energy-exchange collision dynamics. In particular the dark component preserves its initial form and the energy-exchange collision property of the bright-bright vector two-soliton solution of the Manakov model during collision. In addition the interactions between bound state dark-bright-bright vector solitons reveal oscillations in their amplitudes. A similar kind of breathing effect was also experimentally observed in the Bose-Einstein condensates. Some possible ways are theoretically suggested not only to control this breathing effect but also to manage the beating, bouncing, jumping, and attraction effects in the collision dynamics of dark-bright-bright vector solitons. The role of multiple free parameters in our solution is examined to define polarization vector, envelope speed, envelope width, envelope amplitude, grayness, and complex modulation of our solution. It is interesting to note that the polarization vector of our mixed vector one-soliton evolves in sphere or hyperboloid depending upon the initial parametric choices.

  6. a Modeling Method of Fluttering Leaves Based on Point Cloud

    Science.gov (United States)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are common phenomenon in nature scenes. The authenticity of leaves falling plays an important part in the dynamic modeling of natural scenes. The leaves falling model has a widely applications in the field of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point cloud in this paper. According to the shape, the weight of leaves and the wind speed, three basic trajectories of leaves falling are defined, which are the rotation falling, the roll falling and the screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy the needs of real-time in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  7. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    Directory of Open Access Journals (Sweden)

    J. Tang

    2017-09-01

    Full Text Available Leaves falling gently or fluttering are common phenomenon in nature scenes. The authenticity of leaves falling plays an important part in the dynamic modeling of natural scenes. The leaves falling model has a widely applications in the field of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point cloud in this paper. According to the shape, the weight of leaves and the wind speed, three basic trajectories of leaves falling are defined, which are the rotation falling, the roll falling and the screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy the needs of real-time in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  8. A relativistic point coupling model for nuclear structure calculations

    International Nuclear Information System (INIS)

    Buervenich, T.; Maruhn, J.A.; Madland, D.G.; Reinhard, P.G.

    2002-01-01

    A relativistic point coupling model is discussed focusing on a variety of aspects. In addition to the coupling using various bilinear Dirac invariants, derivative terms are also included to simulate finite-range effects. The formalism is presented for nuclear structure calculations of ground state properties of nuclei in the Hartree and Hartree-Fock approximations. Different fitting strategies for the determination of the parameters have been applied and the quality of the fit obtainable in this model is discussed. The model is then compared more generally to other mean-field approaches both formally and in the context of applications to ground-state properties of known and superheavy nuclei. Perspectives for further extensions such as an exact treatment of the exchange terms using a higher-order Fierz transformation are discussed briefly. (author)

  9. Self-Exciting Point Process Modeling of Conversation Event Sequences

    Science.gov (United States)

    Masuda, Naoki; Takaguchi, Taro; Sato, Nobuo; Yano, Kazuo

    Self-exciting processes of Hawkes type have been used to model various phenomena including earthquakes, neural activities, and views of online videos. Studies of temporal networks have revealed that sequences of social interevent times for individuals are highly bursty. We examine some basic properties of event sequences generated by the Hawkes self-exciting process to show that it generates bursty interevent times for a wide parameter range. Then, we fit the model to the data of conversation sequences recorded in company offices in Japan. In this way, we can estimate relative magnitudes of the self excitement, its temporal decay, and the base event rate independent of the self excitation. These variables highly depend on individuals. We also point out that the Hawkes model has an important limitation that the correlation in the interevent times and the burstiness cannot be independently modulated.
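
    The burstiness statement above can be reproduced numerically. The sketch below simulates a univariate Hawkes process with an exponential kernel by Ogata's thinning algorithm and reports the coefficient of variation of the interevent times (values well above 1 indicate bursty behaviour relative to a Poisson process). The parameter values are arbitrary illustrations, not fits to the office conversation data.

      # Simulate a univariate Hawkes process (exponential kernel) by Ogata's thinning algorithm
      # and measure the burstiness of the interevent times. Parameters are illustrative only.
      import numpy as np

      mu, alpha, beta = 0.2, 0.8, 1.2          # base rate, excitation strength, decay (alpha/beta < 1)
      T = 2000.0                               # simulation horizon

      def intensity(t, events):
          past = events[events < t]
          return mu + alpha * np.sum(np.exp(-beta * (t - past)))

      rng = np.random.default_rng(6)
      events, t = [], 0.0
      while t < T:
          lam_bar = intensity(t, np.array(events)) + alpha     # upper bound until the next event
          t += rng.exponential(1.0 / lam_bar)
          if t < T and rng.uniform() * lam_bar <= intensity(t, np.array(events)):
              events.append(t)

      iet = np.diff(events)
      cv = iet.std() / iet.mean()               # coefficient of variation; 1 for a Poisson process
      print(f"{len(events)} events, mean rate {len(events)/T:.3f}, interevent CV = {cv:.2f}")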

  10. FIRST PRISMATIC BUILDING MODEL RECONSTRUCTION FROM TOMOSAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    Y. Sun

    2016-06-01

    Full Text Available This paper demonstrates for the first time the potential of explicitly modelling the individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting off the ground terrain. The DSM is smoothed using the BM3D denoising method proposed in (Dabov et al., 2007) and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height and polygon complexity constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. The coarse outline of each roof segment is then reconstructed and later refined using a quadtree-based regularization plus a zig-zag line simplification scheme. Finally, a height is associated with each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images with the Tomo-GENESIS software developed at DLR.

  11. The Critical Point Entanglement and Chaos in the Dicke Model

    Directory of Open Access Journals (Sweden)

    Lina Bao

    2015-07-01

    Full Text Available Ground state properties and level statistics of the Dicke model for a finite number of atoms are investigated based on a progressive diagonalization scheme (PDS). Particle number statistics, the entanglement measure and the Shannon information entropy at the resonance point in cases with a finite number of atoms as functions of the coupling parameter are calculated. It is shown that the entanglement measure defined in terms of the normalized von Neumann entropy of the reduced density matrix of the atoms reaches its maximum value at the critical point of the quantum phase transition where the system is most chaotic. Noticeable change in the Shannon information entropy near or at the critical point of the quantum phase transition is also observed. In addition, the quantum phase transition may be observed not only in the ground state mean photon number and the ground state atomic inversion as shown previously, but also in fluctuations of these two quantities in the ground state, especially in the atomic inversion fluctuation.

  12. Multiplicative point process as a model of trading activity

    Science.gov (United States)

    Gontis, V.; Kaulakys, B.

    2004-11-01

    Signals consisting of a sequence of pulses show that the inherent origin of 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically, as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces spectral properties of the real markets and explains the mechanism of power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating this statistics.

  13. Two-point model for electron transport in EBT

    International Nuclear Information System (INIS)

    Chiu, S.C.; Guest, G.E.

    1980-01-01

    The electron transport in EBT is simulated by a two-point model corresponding to the central plasma and the edge. The central plasma is assumed to obey neoclassical collisionless transport. The edge plasma is assumed turbulent and modeled by Bohm diffusion. The steady-state temperatures and densities in both regions are obtained as functions of neutral influx and microwave power. It is found that as the neutral influx decreases and power increases, the edge density decreases while the core density increases. We conclude that if ring instability is responsible for the T-M mode transition, and if stability is correlated with cold electron density at the edge, it will depend sensitively on ambient gas pressure and microwave power

  14. A Thermodynamic Point of View on Dark Energy Models

    Directory of Open Access Journals (Sweden)

    Vincenzo F. Cardone

    2017-07-01

    Full Text Available We present a conjugate analysis of two different dark energy models, namely the Barboza–Alcaniz parameterization and the phenomenologically-motivated Hobbit model, investigating both their agreement with observational data and their thermodynamical properties. We successfully fit a wide dataset including the Hubble diagram of Type Ia Supernovae, the Hubble rate expansion parameter as measured from cosmic chronometers, the baryon acoustic oscillations (BAO) standard ruler data and the Planck distance priors. This analysis allows us to constrain the model parameters, thus pointing at the region of the wide parameters space, which is worth focusing on. As a novel step, we exploit the strong connection between gravity and thermodynamics to further check models’ viability by investigating their thermodynamical quantities. In particular, we study whether the cosmological scenario fulfills the generalized second law of thermodynamics, and moreover, we contrast the two models, asking whether the evolution of the total entropy is in agreement with the expectation for a closed system. As a general result, we discuss whether thermodynamic constraints can be a valid complementary way to both constrain dark energy models and differentiate among rival scenarios.

  15. Two-point functions in a holographic Kondo model

    Science.gov (United States)

    Erdmenger, Johanna; Hoyos, Carlos; O'Bannon, Andy; Papadimitriou, Ioannis; Probst, Jonas; Wu, Jackson M. S.

    2017-03-01

    We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0 + 1)-dimensional impurity spin of a gauged SU(N) interacting with a (1 + 1)-dimensional, large-N, strongly-coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O†O, which is marginally relevant, and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1 + 1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0 + 1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green's function of the form −i⟨O⟩², which is characteristic of a Kondo resonance.

  16. SALLY, Dynamic Behaviour of Reactor Cooling Channel by Point Model

    International Nuclear Information System (INIS)

    Reiche, Chr.; Ziegenbein, D.

    1981-01-01

    1 - Nature of the physical problem solved: The dynamic behaviour of a cooling channel is calculated. Starting from an equilibrium state, a perturbation is introduced into the system; this may be an external reactivity perturbation or a change in the coolant velocity or coolant temperature. The neutron kinetics is treated in the framework of the one-point model. The cooling channel consists of a cladded and cooled fuel rod. The temperature distribution is represented as an array on a mesh of radial zones and axial layers. Heat transfer is considered in the radial direction only; the thermodynamic coupling of the different layers is provided by the coolant flow. The thermal material parameters are taken to be temperature independent. Reactivity feedback is introduced by means of reactivity coefficients for fuel, cladding, and coolant. Doppler broadening is included. The first cooling cycle can be taken into account by a simple model. 2 - Method of solution: The point kinetics equations are integrated numerically with the P11 scheme. The system of temperature equations with constant heat resistance coefficients is solved by the method of factorization. 3 - Restrictions on the complexity of the problem: Given limits are: 10 radial fuel zones, 25 axial layers, 6 groups of delayed neutrons
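
    The one-point kinetics that SALLY integrates can be sketched (without the thermal feedback and coolant-channel model) as the standard six-delayed-group system solved with a stiff ODE integrator. The delayed-neutron constants below are typical thermal-fission values and the step reactivity insertion is an arbitrary test perturbation, not a SALLY input.

      # Point-kinetics equations with six delayed-neutron groups, solved for a step reactivity
      # insertion. Delayed-neutron data are typical thermal-fission values; no thermal feedback
      # or coolant-channel model is included, unlike the full SALLY code.
      import numpy as np
      from scipy.integrate import solve_ivp

      beta_i = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
      lam_i  = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # 1/s
      beta   = beta_i.sum()
      Lambda = 1.0e-4                                                  # generation time, s (assumed)
      rho    = 0.3 * beta                                              # step reactivity insertion (assumed)

      def rhs(t, y):
          n, c = y[0], y[1:]
          dn = (rho - beta) / Lambda * n + np.dot(lam_i, c)
          dc = beta_i / Lambda * n - lam_i * c
          return np.concatenate(([dn], dc))

      n0 = 1.0
      c0 = beta_i / (Lambda * lam_i) * n0            # equilibrium precursor concentrations
      sol = solve_ivp(rhs, (0.0, 10.0), np.concatenate(([n0], c0)),
                      method="BDF", t_eval=[0.1, 1.0, 5.0, 10.0], rtol=1e-8)

      for t, n in zip(sol.t, sol.y[0]):
          print(f"t = {t:5.1f} s   relative power = {n:.3f}")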

  17. Two-point functions in a holographic Kondo model

    Energy Technology Data Exchange (ETDEWEB)

    Erdmenger, Johanna [Institut für Theoretische Physik und Astrophysik, Julius-Maximilians-Universität Würzburg,Am Hubland, D-97074 Würzburg (Germany); Max-Planck-Institut für Physik (Werner-Heisenberg-Institut),Föhringer Ring 6, D-80805 Munich (Germany); Hoyos, Carlos [Department of Physics, Universidad de Oviedo, Avda. Calvo Sotelo 18, 33007, Oviedo (Spain); O’Bannon, Andy [STAG Research Centre, Physics and Astronomy, University of Southampton,Highfield, Southampton SO17 1BJ (United Kingdom); Papadimitriou, Ioannis [SISSA and INFN - Sezione di Trieste, Via Bonomea 265, I 34136 Trieste (Italy); Probst, Jonas [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom); Wu, Jackson M.S. [Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487 (United States)

    2017-03-07

    We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0+1)-dimensional impurity spin of a gauged SU(N) interacting with a (1+1)-dimensional, large-N, strongly-coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O†O, which is marginally relevant, and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1+1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0+1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green’s function of the form −i⟨O⟩², which is characteristic of a Kondo resonance.

  18. A CASE STUDY ON POINT PROCESS MODELLING IN DISEASE MAPPING

    Directory of Open Access Journals (Sweden)

    Viktor Beneš

    2011-05-01

    Full Text Available We consider a data set of locations where people in Central Bohemia have been infected by tick-borne encephalitis (TBE), and where population census data and covariates concerning vegetation and altitude are available. The aims are to estimate the risk map of the disease and to study the dependence of the risk on the covariates. Instead of using the common area level approaches we base the analysis on a Bayesian approach for a log Gaussian Cox point process with covariates. Posterior characteristics for a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo methods. A particular problem which is thoroughly discussed is to determine a model for the background population density. The risk map shows a clear dependency with the population intensity models and the basic model which is adopted for the population intensity determines what covariates influence the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics.
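
    The log Gaussian Cox construction itself can be illustrated by simulating one on a coarse grid: draw a Gaussian random field for the log-intensity, exponentiate it, and draw Poisson counts per cell. The covariance parameters, grid size and mean below are arbitrary illustrative choices, and the Bayesian posterior (MCMC) computation used in the paper is not reproduced.

      # Simulate a log Gaussian Cox process on a grid: Gaussian random field -> exp -> Poisson counts.
      # Covariance parameters, grid size and mean are arbitrary illustrative choices; the Bayesian
      # posterior computation (MCMC) used in the paper is not reproduced.
      import numpy as np

      n = 30                                           # grid is n x n cells
      cell_area = 1.0
      mu, sigma2, scale = -2.0, 1.0, 5.0               # mean, variance, correlation length (assumed)

      # Coordinates of cell centres and an exponential covariance matrix.
      xs, ys = np.meshgrid(np.arange(n), np.arange(n))
      coords = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
      dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
      cov = sigma2 * np.exp(-dists / scale)

      rng = np.random.default_rng(7)
      log_intensity = rng.multivariate_normal(np.full(n * n, mu), cov)
      intensity = np.exp(log_intensity).reshape(n, n)

      counts = rng.poisson(intensity * cell_area)      # conditionally Poisson given the field
      print("total points:", counts.sum(), " max cell intensity:", intensity.max().round(2))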

  19. Defining the end-point of mastication: A conceptual model.

    Science.gov (United States)

    Gray-Stuart, Eli M; Jones, Jim R; Bronlund, John E

    2017-10-01

    The great risks of swallowing are choking and aspiration of food into the lungs. Both are rare in normal functioning humans, which is remarkable given the diversity of foods and the estimated 10 million swallows performed in a lifetime. Nevertheless, it remains a major challenge to define the food properties that are necessary to ensure a safe swallow. Here, the mouth is viewed as a well-controlled processor where mechanical sensory assessment occurs throughout the occlusion-circulation cycle of mastication. Swallowing is a subsequent action. It is proposed here that, during mastication, temporal maps of interfacial property data are generated, which the central nervous system compares against a series of criteria in order to be sure that the bolus is safe to swallow. To determine these criteria, an engineering hazard analysis tool, alongside an understanding of fluid and particle mechanics, is used to deduce the mechanisms by which food may deposit or become stranded during swallowing. These mechanisms define the food properties that must be avoided. By inverting the thinking, from hazards to ensuring safety, six criteria arise which are necessary for a safe-to-swallow bolus. A new conceptual model is proposed to define when food is safe to swallow during mastication. This significantly advances earlier mouth models. The conceptual model proposed in this work provides a framework of decision-making to define when food is safe to swallow. This will be of interest to designers of dietary foods, foods for dysphagia sufferers and will aid the further development of mastication robots for preparation of artificial boluses for digestion research. It enables food designers to influence the swallow-point properties of their products. For example, a product may be designed to satisfy five of the criteria for a safe-to-swallow bolus, which means the sixth criterion and its attendant food properties define the swallow-point. Alongside other organoleptic factors, these

  20. MIDAS/PK code development using point kinetics model

    International Nuclear Information System (INIS)

    Song, Y. M.; Park, S. H.

    1999-01-01

    In this study, a MIDAS/PK code has been developed for analyzing the ATWS (Anticipated Transients Without Scram), which can be one of the severe accident initiating events. MIDAS is an integrated computer code based on the MELCOR code, developed by the Korea Atomic Energy Research Institute to support a severe accident risk reduction strategy. Meanwhile, the Chexal-Layman correlation in the current MELCOR, which was developed under BWR conditions, appears to be inappropriate for a PWR. To provide ATWS analysis capability to the MIDAS code, a point kinetics module, PKINETIC, has first been developed as a stand-alone code whose reference model was selected from the current accident analysis codes. In the next step, the MIDAS/PK code has been developed by coupling PKINETIC with the MIDAS code, inter-connecting several thermal-hydraulic parameters between the two codes. Since the major concern in the ATWS analysis is the primary peak pressure during the first few minutes of the accident, the peak pressures from the PKINETIC module and from MIDAS/PK are compared with RETRAN calculations, showing good agreement between them. The MIDAS/PK code is considered valuable for deterministically analyzing the plant response during ATWS, especially for the early domestic Westinghouse plants which rely on operator procedures instead of an AMSAC (ATWS Mitigating System Actuation Circuitry) against ATWS. This ATWS analysis capability is also important from the viewpoint of accident management and mitigation.
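    As an illustration of the point-kinetics formulation on which a module like PKINETIC is based, the sketch below integrates the standard six-delayed-group point-kinetics equations for a step reactivity insertion. The group constants, generation time and reactivity value are illustrative assumptions, not MIDAS/PK data.

```python
# Hedged sketch: standard six-group point-kinetics equations for a step
# reactivity insertion. Group constants below are illustrative, not MIDAS/PK data.
import numpy as np
from scipy.integrate import solve_ivp

beta_i = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
lam_i = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])  # decay constants (1/s)
beta = beta_i.sum()
Lambda = 2.0e-5        # prompt neutron generation time (s), assumed
rho = 0.1 * beta       # step reactivity insertion, assumed

def pk_rhs(t, y):
    n, C = y[0], y[1:]
    dn = (rho - beta) / Lambda * n + np.dot(lam_i, C)
    dC = beta_i / Lambda * n - lam_i * C
    return np.concatenate(([dn], dC))

n0 = 1.0
C0 = beta_i / (lam_i * Lambda) * n0   # equilibrium precursor concentrations
sol = solve_ivp(pk_rhs, (0.0, 10.0), np.concatenate(([n0], C0)), max_step=0.01)
print("relative power at t = 10 s:", sol.y[0, -1])
```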

  1. Modeling elephant-mediated cascading effects of water point closure.

    Science.gov (United States)

    Hilbers, Jelle P; Van Langevelde, Frank; Prins, Herbert H T; Grant, C C; Peel, Mike J S; Coughenour, Michael B; De Knegt, Henrik J; Slotow, Rob; Smit, Izak P J; Kiker, Greg A; De Boer, Willem F

    2015-03-01

    Wildlife management to reduce the impact of wildlife on their habitat can be done in several ways, among which removing animals (by either culling or translocation) is most often used. There are, however, alternative ways to control wildlife densities, such as opening or closing water points. The effects of these alternatives are poorly studied. In this paper, we focus on manipulating large herbivores through the closure of water points (WPs). Removal of artificial WPs has been suggested in order to change the distribution of African elephants, which occur in high densities in national parks in Southern Africa and are thought to have a destructive effect on the vegetation. Here, we modeled the long-term effects of different scenarios of WP closure on the spatial distribution of elephants, and the consequential effects on the vegetation and other herbivores in Kruger National Park, South Africa. Using a dynamic ecosystem model, SAVANNA, scenarios were evaluated that varied in the availability of artificial WPs, levels of natural water, and elephant densities. Our modeling results showed that elephants can indirectly negatively affect the distributions of meso-mixed feeders, meso-browsers, and some meso-grazers under wet conditions. The closure of artificial WPs hardly had any effect during these natural wet conditions. Under dry conditions, the spatial distribution of both elephant bulls and cows changed when the availability of artificial water was severely reduced in the model. These changes in spatial distribution triggered changes in the spatial availability of woody biomass over the simulation period of 80 years, and this led to changes in the rest of the herbivore community, resulting in increased densities of all herbivores, except for giraffe and steenbok, in areas close to rivers. The spatial distributions of elephant bulls and cows were less affected by the closure of WPs than those of most of the other herbivore species. Our study contributes to ecologically

  2. Mechanistic spatio-temporal point process models for marked point processes, with a view to forest stand data

    DEFF Research Database (Denmark)

    Møller, Jesper; Ghorbani, Mohammad; Rubak, Ege Holger

    We show how a spatial point process, where to each point there is associated a random quantitative mark, can be identified with a spatio-temporal point process specified by a conditional intensity function. For instance, the points can be tree locations, the marks can express the size of trees, and the conditional intensity function can describe the distribution of a tree (i.e., its location and size) conditionally on the larger trees. This enables us to construct parametric statistical models which are easily interpretable and where likelihood-based inference is tractable. In particular, we consider maximum...

  3. Forecasting Macedonian Business Cycle Turning Points Using Qual Var Model

    Directory of Open Access Journals (Sweden)

    Petrovska Magdalena

    2016-09-01

    Full Text Available This paper aims at assessing the usefulness of leading indicators in business cycle research and forecasting. Initially we test the predictive power of the economic sentiment indicator (ESI) within a static probit model as a leading indicator, commonly perceived to be able to provide a reliable summary of the current economic conditions. We further proceed by analyzing how well an extended set of indicators performs in forecasting turning points of the Macedonian business cycle by employing the Qual VAR approach of Dueker (2005). In continuation, we evaluate the quality of the selected indicators in a pseudo-out-of-sample context. The results show that the use of survey-based indicators as a complement to macroeconomic data works satisfactorily well in capturing the business cycle developments in Macedonia.
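    The static probit step described above can be sketched with statsmodels as below; the recession indicator and the sentiment series are synthetic placeholders, not the Macedonian data, and the specification is only illustrative.

```python
# Hedged sketch: static probit for turning-point probabilities, on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
esi = rng.normal(100.0, 10.0, size=120)                 # synthetic sentiment indicator
latent = -0.08 * (esi - 100.0) + rng.normal(size=120)
recession = (latent > 0.5).astype(int)                  # synthetic binary regime flag

X = sm.add_constant(esi)
probit = sm.Probit(recession, X).fit(disp=False)
print(probit.params)
print("P(recession | ESI = 90):", probit.predict([[1.0, 90.0]])[0])
```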

  4. High Brightness OLED Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Spindler, Jeffrey [OLEDWorks LLC; Kondakova, Marina [OLEDWorks LLC; Boroson, Michael [OLEDWorks LLC; Hamer, John [OLEDWorks LLC

    2016-05-25

    In this work we describe the technology developments behind our current and future generations of high brightness OLED lighting panels. We have developed white and amber OLEDs with excellent performance based on the stacking approach. Current products achieve 40-60 lm/W, while future developments focus on achieving 80 lm/W or higher.

  5. High brightness ion source

    International Nuclear Information System (INIS)

    Dreyfus, R.W.; Hodgson, R.T.

    1975-01-01

    A high brightness ion beam is obtainable by using lasers to excite atoms or molecules from the ground state to an ionized state in increments, rather than in one step. The spectroscopic resonances of the atom or molecule are exploited so that relatively long-wavelength, low-power lasers can be used to obtain such an ion beam.

  6. Quantitative Image Restoration in Bright Field Optical Microscopy.

    Science.gov (United States)

    Gutiérrez-Medina, Braulio; Sánchez Miranda, Manuel de Jesús

    2017-11-07

    Bright field (BF) optical microscopy is regarded as a poor method to observe unstained biological samples due to intrinsic low image contrast. We introduce quantitative image restoration in bright field (QRBF), a digital image processing method that restores out-of-focus BF images of unstained cells. Our procedure is based on deconvolution, using a point spread function modeled from theory. By comparing with reference images of bacteria observed in fluorescence, we show that QRBF faithfully recovers shape and enables quantification of the size of individual cells, even from a single input image. We applied QRBF in a high-throughput image cytometer to assess shape changes in Escherichia coli during hyperosmotic shock, finding size heterogeneity. We demonstrate that QRBF is also applicable to eukaryotic cells (yeast). Altogether, digital restoration emerges as a straightforward alternative to methods designed to generate contrast in BF imaging for quantitative analysis. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
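    The restoration step described above amounts to deconvolving the defocused bright-field image with a modeled point spread function. A minimal sketch using Richardson-Lucy deconvolution from scikit-image follows, with a Gaussian PSF standing in for the theory-based PSF of the paper (an assumption, as is the toy image).

```python
# Hedged sketch: deconvolution of a blurred bright-field-like image with a modeled
# PSF. A Gaussian PSF and a toy rod-shaped object stand in for the paper's setup.
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

def gaussian_psf(size=15, sigma=3.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

rng = np.random.default_rng(1)
cell = np.zeros((128, 128))
cell[48:80, 60:68] = 1.0                      # toy rod-shaped "cell"
psf = gaussian_psf()
blurred = fftconvolve(cell, psf, mode="same") + 0.01 * rng.normal(size=cell.shape)

restored = richardson_lucy(np.clip(blurred, 0.0, 1.0), psf, 30)  # 30 iterations
print("restored image range:", restored.min(), restored.max())
```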

  7. Intermittent episodes of bright light suppress myopia in the chicken more than continuous bright light.

    Science.gov (United States)

    Lan, Weizhong; Feldkaemper, Marita; Schaeffel, Frank

    2014-01-01

    Bright light has been shown to be a powerful inhibitor of myopia development in animal models. We studied which temporal patterns of bright light are the most potent in suppressing deprivation myopia in chickens. Eight-day-old chickens wore diffusers over one eye to induce deprivation myopia. A reference group (n = 8) was kept under office-like illuminance (500 lux) at a 10:14 light:dark cycle. Episodes of bright light (15 000 lux) were superimposed on this background as follows. Paradigm I: exposure to constant bright light for either 1 hour (n = 5), 2 hours (n = 5), 5 hours (n = 4) or 10 hours (n = 4). Paradigm II: exposure to repeated cycles of bright light with 50% duty cycle and either 60-minute (n = 7), 30-minute (n = 8), 15-minute (n = 6), 7-minute (n = 7) or 1-minute (n = 7) periods, provided for 10 hours. Refraction and axial length were measured prior to and immediately after the 5-day experiment. Relative changes were analyzed by paired t-tests, and differences among groups were tested by one-way ANOVA. Compared with the reference group, exposure to continuous bright light for 1 or 2 hours every day had no significant protective effect against deprivation myopia. Inhibition of myopia became significant after 5 hours of bright light exposure, but extending the duration to 10 hours did not offer an additional benefit. In comparison, repeated cycles of 1:1 or 7:7 minutes of bright light enhanced the protective effect against myopia and could fully suppress its development. The protective effect of bright light depends on the exposure duration and, for the intermittent form, on the frequency of the cycle. Compared to the saturation effect of continuous bright light, low-frequency cycles of bright light (1:1 min) provided the strongest inhibition effect. However, our quantitative results probably cannot be directly translated to humans, but rather need further refinement in clinical studies.

  8. Impact of selected troposphere models on Precise Point Positioning convergence

    Science.gov (United States)

    Kalita, Jakub; Rzepecka, Zofia

    2016-04-01

    The Precise Point Positioning (PPP) absolute method is currently intensively investigated in order to reach fast convergence times. Among the various sources that influence the convergence of PPP, the tropospheric delay is one of the most important. Numerous models of tropospheric delay have been developed and applied to PPP processing. However, with rare exceptions, the quality of those models does not allow fixing the zenith path delay tropospheric parameter, leaving the difference between nominal and final values to the estimation process. Here we present a comparison of several PPP result sets, each of which is based on a different troposphere model. The respective nominal values are adopted from the models VMF1, GPT2w, MOPS and ZERO-WET. The PPP solution admitted as reference is based on the final troposphere product from the International GNSS Service (IGS). The VMF1 mapping function was used for all processing variants in order to provide the capability to compare the impact of the applied nominal values. The worst case initiates the zenith wet delay with a zero value (ZERO-WET). The impact from all possible models for tropospheric nominal values should fit inside the IGS and ZERO-WET border variants. The analysis is based on data from seven IGS stations located in the mid-latitude European region from the year 2014. For the purpose of this study, several days with the most active troposphere were selected for each of the stations. All the PPP solutions were determined using the gLAB open-source software, with the Kalman filter implemented independently by the authors of this work. The processing was performed on 1-hour slices of observation data. In addition to the analysis of the output processing files, the presented study contains a detailed analysis of the tropospheric conditions for the selected data. The overall results show that for the height component the VMF1 model outperforms GPT2w and MOPS by 35-40% and the ZERO-WET variant by 150%. In most of the cases all solutions converge to the same values during first
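    For context on the nominal values being compared, the sketch below evaluates the widely used Saastamoinen zenith hydrostatic delay from surface pressure, latitude and height. The formula is standard, but the station values are illustrative and this is not the VMF1/GPT2w implementation used in the study.

```python
# Hedged sketch: Saastamoinen zenith hydrostatic delay (ZHD), a common nominal
# troposphere value in PPP processing. Station values below are illustrative.
import math

def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
    """Zenith hydrostatic delay in meters from surface pressure (hPa)."""
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.28e-6 * height_m
    )

# Example: mid-latitude European station at standard pressure
print(saastamoinen_zhd(1013.25, math.radians(52.0), 100.0), "m")  # roughly 2.3 m
```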

  9. Atmospheric mercury dispersion modelling from two nearest hypothetical point sources

    Energy Technology Data Exchange (ETDEWEB)

    Al Razi, Khandakar Md Habib; Hiroshi, Moritomi; Shinji, Kambara [Environmental and Renewable Energy System (ERES), Graduate School of Engineering, Gifu University, Yanagido, Gifu City, 501-1193 (Japan)

    2012-07-01

    Japan's coastal areas are still relatively environmentally sound, though there are multiple air emission sources originating from several developmental activities such as automobile industries, the operation of thermal power plants, and mobile-source pollution. Mercury is known to be a potential air pollutant in the region apart from SOx, NOx, CO and ozone. Mercury contamination in water bodies and other ecosystems due to deposition of atmospheric mercury is considered a serious environmental concern. Identification of the sources contributing to high atmospheric mercury levels will be useful for formulating pollution control and mitigation strategies in the region. In Japan, mercury and its compounds were categorized as hazardous air pollutants in 1996 and are on the list of 'Substances Requiring Priority Action' published by the Central Environmental Council of Japan. The Air Quality Management Division of the Environmental Bureau, Ministry of the Environment, Japan, selected the current annual mean environmental air quality standard for mercury and its compounds of 0.04 µg/m3. Long-term exposure to mercury and its compounds can have a carcinogenic effect, inducing, e.g., Minamata disease. This study evaluates the impact of mercury emissions on air quality in the coastal area of Japan. The average yearly emission of mercury from an elevated point source in this area, together with the background concentration and one year of meteorological data, was used to predict the ground-level concentration of mercury. To estimate the concentration of mercury and its compounds in the air of the local area, two different simulation models have been used. The first is the National Institute of Advanced Industrial Science and Technology Atmospheric Dispersion Model for Exposure and Risk Assessment (AIST-ADMER), which estimates regional atmospheric concentration and distribution. The second is the Hybrid Single Particle Lagrangian Integrated Trajectory Model (HYSPLIT), which estimates the atmospheric

  10. Two-point boundary correlation functions of dense loop models

    Directory of Open Access Journals (Sweden)

    Alexi Morin-Duchesne, Jesper Lykke Jacobsen

    2018-06-01

    Full Text Available We investigate six types of two-point boundary correlation functions in the dense loop model. These are defined as ratios $Z/Z^0$ of partition functions on the $m\times n$ square lattice, with the boundary condition for $Z$ depending on two points $x$ and $y$. We consider: the insertion of an isolated defect (a) and a pair of defects (b) in a Dirichlet boundary condition, the transition (c) between Dirichlet and Neumann boundary conditions, and the connectivity of clusters (d), loops (e) and boundary segments (f) in a Neumann boundary condition. For the model of critical dense polymers, corresponding to a vanishing loop weight ($\beta = 0$), we find determinant and pfaffian expressions for these correlators. We extract the conformal weights of the underlying conformal fields and find $\Delta = -\frac18$, $0$, $-\frac3{32}$, $\frac38$, $1$, $\tfrac{\theta}{\pi}(1+\tfrac{2\theta}{\pi})$, where $\theta$ encodes the weight of one class of loops for the correlator of type f. These results are obtained by analysing the asymptotics of the exact expressions, and by using the Cardy-Peschel formula in the case where $x$ and $y$ are set to the corners. For type b, we find a $\log|x-y|$ dependence from the asymptotics, and a $\ln(\ln n)$ term in the corner free energy. This is consistent with the interpretation of the boundary condition of type b as the insertion of a logarithmic field belonging to a rank two Jordan cell. For the other values of $\beta = 2 \cos \lambda$, we use the hypothesis of conformal invariance to predict the conformal weights and find $\Delta = \Delta_{1,2}$, $\Delta_{1,3}$, $\Delta_{0,\frac12}$, $\Delta_{1,0}$, $\Delta_{1,-1}$ and $\Delta_{\frac{2\theta}\lambda+1,\frac{2\theta}\lambda+1}$, extending the results of critical dense polymers. With the results for type f, we reproduce a Coulomb gas prediction for the valence bond entanglement entropy of Jacobsen and Saleur.

  11. Generation of a statistical shape model with probabilistic point correspondences and the expectation maximization- iterative closest point algorithm

    International Nuclear Information System (INIS)

    Hufnagel, Heike; Pennec, Xavier; Ayache, Nicholas; Ehrhardt, Jan; Handels, Heinz

    2008-01-01

    Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist, but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest point (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) are then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and 'specificity', the estimates were very satisfactory.

  12. Multiscale Modeling of Point and Line Defects in Cubic Lattices

    National Research Council Canada - National Science Library

    Chung, P. W; Clayton, J. D

    2007-01-01

    .... This multiscale theory explicitly captures heterogeneity in microscopic atomic motion in crystalline materials, attributed, for example, to the presence of various point and line lattice defects...

  13. Kiloamp high-brightness beams

    International Nuclear Information System (INIS)

    Caporaso, G.J.

    1987-01-01

    Brightness preservation of high-current relativistic electron beams under two different types of transport is discussed. Recent progress in improving the brightness of laser-guided beams in the Advanced Test Accelerator is reviewed. A strategy for the preservation of the brightness of space-charge-dominated beams in a solenoidal transport system is presented

  14. Integrating SMOS brightness temperatures with a new conceptual spatially distributed hydrological model for improving flood and drought predictions at large scale.

    Science.gov (United States)

    Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick

    2017-04-01

    ... SUPERFLEX is capable of predicting runoff, soil moisture, and SMOS-like brightness temperature time series. Such a model is traditionally calibrated using only discharge measurements. In this study we designed a multi-objective calibration procedure based on both discharge measurements and SMOS-derived brightness temperature observations in order to evaluate the added value of remotely sensed soil moisture data in the calibration process. As a test case we set up the SUPERFLEX model for the large-scale Murray-Darling catchment in Australia (about 1 million km2). When compared to in situ soil moisture time series, model predictions show good agreement, resulting in correlation coefficients exceeding 70% and root mean squared errors below 1%. When benchmarked against the physically based land surface model CLM, SUPERFLEX exhibits similar performance levels. By adapting the runoff routing function within the SUPERFLEX model, the predicted discharge results in a Nash-Sutcliffe efficiency exceeding 0.7 over both the calibration and the validation periods.
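    A minimal sketch of the Nash-Sutcliffe efficiency used above to judge the discharge predictions; the observed and simulated series are placeholders.

```python
# Hedged sketch: Nash-Sutcliffe efficiency (NSE) between observed and simulated discharge.
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

obs = np.array([12.0, 15.0, 30.0, 22.0, 18.0])   # placeholder discharge series
sim = np.array([11.0, 16.0, 27.0, 24.0, 17.0])
print("NSE:", nash_sutcliffe(obs, sim))
```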

  15. Intermittent Episodes of Bright Light Suppress Myopia in the Chicken More than Continuous Bright Light

    OpenAIRE

    Lan, Weizhong; Feldkaemper, Marita; Schaeffel, Frank

    2014-01-01

    PURPOSE: Bright light has been shown to be a powerful inhibitor of myopia development in animal models. We studied which temporal patterns of bright light are the most potent in suppressing deprivation myopia in chickens. METHODS: Eight-day-old chickens wore diffusers over one eye to induce deprivation myopia. A reference group (n = 8) was kept under office-like illuminance (500 lux) at a 10:14 light:dark cycle. Episodes of bright light (15 000 lux) were superimposed on this background as follows....

  16. HYDROLOGY AND SEDIMENT MODELING USING THE BASINS NON-POINT SOURCE MODEL

    Science.gov (United States)

    The Non-Point Source Model (Hydrologic Simulation Program-Fortran, or HSPF) within the EPA Office of Water's BASINS watershed modeling system was used to simulate streamflow and total suspended solids within Contentnea Creek, North Carolina, which is a tributary of the Neuse Rive...

  17. Temperature distribution model for the semiconductor dew point detector

    Science.gov (United States)

    Weremczuk, Jerzy; Gniazdowski, Z.; Jachowicz, Ryszard; Lysko, Jan M.

    2001-08-01

    The simulation results of the temperature distribution in a new type of silicon dew point detector are presented in this paper. Calculations were performed using the SMACEF simulation program. In addition to the impedance detector used for dew point detection, the fabricated structures contained a resistive four-terminal thermometer and two heaters. Two detector structures, the first located on a silicon membrane and the second placed on the bulk material, were compared in this paper.

  18. Time-series photometric spot modeling. 2: Fifteen years of photometry of the bright RS CVn binary HR 7275

    Science.gov (United States)

    Strassmeier, K. G.; Hall, D. S.; Henry, G. W.

    1994-01-01

    We present a time-dependent spot modeling analysis of 15 consecutive years of V-band photometry of the long-period (P_orb = 28.6 days) RS CVn binary HR 7275. This is one of the longest uninterrupted intervals over which a spotted star has been observed. The spot modeling analysis yields a total of 20 different spots throughout the time span of our observations. The distribution of the observed spot migration rates is consistent with solar-type differential rotation and suggests a lower limit of the differential-rotation coefficient of 0.022 +/- 0.004. The observed maximum lifetime of a single spot (or spot group) is 4.5 years, the minimum lifetime is approximately one year, but an average spot lives for 2.2 years. If we assume that the mechanical shear by differential rotation sets the upper limit to the spot lifetime, the observed maximum lifetime in turn sets an upper limit to the differential-rotation coefficient, namely 0.04 +/- 0.01. This would be differential rotation just 5 to 8 times weaker than the solar value and one of the strongest among active binaries. We found no conclusive evidence for the existence of a periodic phenomenon that could be attributed to a stellar magnetic cycle.

  19. A New Blind Pointing Model Improves Large Reflector Antennas Precision Pointing at Ka-Band (32 GHz)

    Science.gov (United States)

    Rochblatt, David J.

    2009-01-01

    The National Aeronautics and Space Administration (NASA), Jet Propulsion Laboratory (JPL)-Deep Space Network (DSN) subnet of 34-m Beam Waveguide (BWG) Antennas was recently upgraded with Ka-Band (32-GHz) frequency feeds for space research and communication. For normal telemetry tracking a Ka-Band monopulse system is used, which typically yields 1.6-mdeg mean radial error (MRE) pointing accuracy on the 34-m diameter antennas. However, for the monopulse to be able to acquire and lock, for special radio science applications where monopulse cannot be used, or as a back-up for the monopulse, high-precision open-loop blind pointing is required. This paper describes a new 4th order pointing model and calibration technique, which was developed and applied to the DSN 34-m BWG antennas yielding 1.8 to 3.0-mdeg MRE pointing accuracy and amplitude stability of 0.2 dB, at Ka-Band, and successfully used for the CASSINI spacecraft occultation experiment at Saturn and Titan. In addition, the new 4th order pointing model was used during a telemetry experiment at Ka-Band (32 GHz) utilizing the Mars Reconnaissance Orbiter (MRO) spacecraft while at a distance of 0.225 astronomical units (AU) from Earth and communicating with a DSN 34-m BWG antenna at a record high rate of 6-megabits per second (Mb/s).

  20. A case study on point process modelling in disease mapping

    Czech Academy of Sciences Publication Activity Database

    Beneš, Viktor; Bodlák, M.; Moller, J.; Waagepetersen, R.

    2005-01-01

    Roč. 24, č. 3 (2005), s. 159-168 ISSN 1580-3139 R&D Projects: GA MŠk 0021620839; GA ČR GA201/03/0946 Institutional research plan: CEZ:AV0Z10750506 Keywords : log Gaussian Cox point process * Bayesian estimation Subject RIV: BB - Applied Statistics, Operational Research

  1. Business models & business cases for point-of-care testing

    NARCIS (Netherlands)

    Staring, A.J.; Meertens, L. O.; Sikkel, N.

    2016-01-01

    Point-Of-Care Testing (POCT) enables clinical tests at or near the patient, with test results that are available instantly or in a very short time frame, to assist caregivers with immediate diagnosis and/or clinical intervention. The goal of POCT is to provide accurate, reliable, fast, and

  2. Modeling elephant-mediated cascading effects of water point closure

    NARCIS (Netherlands)

    Hilbers, J.P.; Langevelde, van F.; Prins, H.H.T.; Grant, C.C.; Peel, M.; Coughenour, M.B.; Knegt, de H.J.; Slotow, R.; Smit, I.; Kiker, G.A.; Boer, de W.F.

    2015-01-01

    Wildlife management to reduce the impact of wildlife on their habitat can be done in several ways, among which removing animals (by either culling or translocation) is most often used. There are, however, alternative ways to control wildlife densities, such as opening or closing water points. The

  3. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    Science.gov (United States)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

    We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. In particular, we focus on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model which maximizes some probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting some results on industrial data sets.

  4. Model uncertainty from a regulatory point of view

    International Nuclear Information System (INIS)

    Abramson, L.R.

    1994-01-01

    This paper discusses model uncertainty in the larger context of knowledge and random uncertainty. It explores some regulatory implications of model uncertainty and argues that, from a regulator's perspective, a conservative approach must be taken. As a consequence of this perspective, averaging over model results is ruled out

  5. High brightness electron accelerator

    International Nuclear Information System (INIS)

    Sheffield, R.L.; Carlsten, B.E.; Young, L.M.

    1994-01-01

    A compact high brightness linear accelerator is provided for use, e.g., in a free electron laser. The accelerator has a first plurality of accelerating cavities having end walls with four coupling slots for accelerating electrons to high velocities in the absence of quadrupole fields. A second plurality of cavities receives the high velocity electrons for further acceleration, where each of the second cavities has end walls with two coupling slots for acceleration in the absence of dipole fields. The accelerator also includes a first cavity with an extended length to provide for phase matching the electron beam along the accelerating cavities. A solenoid is provided about the photocathode that emits the electrons, where the solenoid is configured to provide a substantially uniform magnetic field over the photocathode surface to minimize emittance of the electrons as the electrons enter the first cavity. 5 figs

  6. Neutral-point voltage dynamic model of three-level NPC inverter for reactive load

    DEFF Research Database (Denmark)

    Maheshwari, Ram Krishan; Munk-Nielsen, Stig; Busquets-Monge, Sergio

    2012-01-01

    A three-level neutral-point-clamped inverter needs a controller for the neutral-point voltage. Typically, the controller design is based on a dynamic model. The dynamic model of the neutral-point voltage depends on the pulse width modulation technique used for the inverter. A pulse width modulati...

  7. A Bayesian MCMC method for point process models with intractable normalising constants

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2004-01-01

    ... to simulate from the "unknown distribution", perfect simulation algorithms become useful. We illustrate the method in cases where the likelihood is given by a Markov point process model. In particular, we consider semi-parametric Bayesian inference in connection with both inhomogeneous Markov point process models and pairwise interaction point processes.

  8. Extreme Ultraviolet Explorer Bright Source List

    Science.gov (United States)

    Malina, Roger F.; Marshall, Herman L.; Antia, Behram; Christian, Carol A.; Dobson, Carl A.; Finley, David S.; Fruscione, Antonella; Girouard, Forrest R.; Hawkins, Isabel; Jelinsky, Patrick

    1994-01-01

    Initial results from the analysis of the Extreme Ultraviolet Explorer (EUVE) all-sky survey (58-740 A) and deep survey (67-364 A) are presented through the EUVE Bright Source List (BSL). The BSL contains 356 confirmed extreme ultraviolet (EUV) point sources with supporting information, including positions, observed EUV count rates, and the identification of possible optical counterparts. One-hundred twenty-six sources have been detected longward of 200 A.

  9. Generalized correlation of latent heats of vaporization of coal liquid model compounds between their freezing points and critical points

    Energy Technology Data Exchange (ETDEWEB)

    Sivaraman, A.; Kobuyashi, R.; Mayee, J.W.

    1984-02-01

    Based on Pitzer's three-parameter corresponding states principle, the authors have developed a correlation of the latent heat of vaporization of aromatic coal liquid model compounds for a temperature range from the freezing point to the critical point. An expansion of the form $L = L_0 + \omega L_1$ is used for the dimensionless latent heat of vaporization. This model utilizes a nonanalytic functional form based on results derived from renormalization group theory of fluids in the vicinity of the critical point. A simple expression for the latent heat of vaporization, $L = D_1\epsilon^{0.3333} + D_2\epsilon^{0.8333} + D_4\epsilon^{1.2083} + E_1\epsilon + E_2\epsilon^2 + E_3\epsilon^3$, is cast in a corresponding states principle correlation for coal liquid compounds. Benzene, the basic constituent of the functional groups of the multi-ring coal liquid compounds, is used as the reference compound in the present correlation. This model works very well at both low and high reduced temperatures approaching the critical point ($0.02 < \epsilon < 0.69$, where $\epsilon = (T_c - T)/T_c$). About 16 compounds, including single-, two-, and three-ring compounds, have been tested, and the percent root-mean-square deviations in latent heat of vaporization reported and estimated through the model are 0.42 to 5.27%. Tables of the coefficients of $L_0$ and $L_1$ are presented. The contributing terms of the latent heat of vaporization function are also presented in a table for small increments of $\epsilon$.
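    Once the coefficients are known, the expansion above can be evaluated directly; the sketch below codes the nonanalytic form in the reduced temperature ε with placeholder coefficients (the D and E values are hypothetical, the paper's tables provide the fitted ones).

```python
# Hedged sketch: evaluating the nonanalytic latent-heat expansion
# L = D1*eps**0.3333 + D2*eps**0.8333 + D4*eps**1.2083 + E1*eps + E2*eps**2 + E3*eps**3
# with eps = (Tc - T)/Tc. Coefficients are placeholders, not the paper's values.

def latent_heat(T, Tc, D1, D2, D4, E1, E2, E3):
    eps = (Tc - T) / Tc
    return (D1 * eps**0.3333 + D2 * eps**0.8333 + D4 * eps**1.2083
            + E1 * eps + E2 * eps**2 + E3 * eps**3)

# Example: benzene-like critical temperature with made-up coefficients
print(latent_heat(T=350.0, Tc=562.0, D1=7.5, D2=2.0, D4=-1.0, E1=0.5, E2=-0.2, E3=0.1))
```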

  10. Using Pareto points for model identification in predictive toxicology

    Science.gov (United States)

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of the toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration, but the management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to the user. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
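    A minimal sketch of identifying Pareto-optimal (non-dominated) models when each model in a collection is scored on two criteria to be maximized, e.g. predictive accuracy and applicability-domain coverage; the scores are synthetic placeholders.

```python
# Hedged sketch: pick Pareto-optimal models from a collection scored on two
# criteria (both to be maximized). Scores below are synthetic placeholders.
def pareto_front(scores):
    """Return indices of non-dominated points; scores is a list of (a, b) tuples."""
    front = []
    for i, (a_i, b_i) in enumerate(scores):
        dominated = any(
            (a_j >= a_i and b_j >= b_i) and (a_j > a_i or b_j > b_i)
            for j, (a_j, b_j) in enumerate(scores) if j != i
        )
        if not dominated:
            front.append(i)
    return front

model_scores = [(0.81, 0.40), (0.78, 0.65), (0.70, 0.90), (0.60, 0.60)]
print("Pareto-optimal model indices:", pareto_front(model_scores))
```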

  11. AUTOMATED CALIBRATION OF FEM MODELS USING LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    B. Riveiro

    2018-05-01

    Full Text Available In the present work, we aim to estimate the elastic parameters of beams through the combined use of precision geomatic techniques (laser scanning) and structural behaviour simulation tools. The study has two aims: on the one hand, to develop an algorithm able to automatically interpret point clouds acquired by laser scanning systems of beams subjected to different load situations in experimental tests; and on the other hand, to minimize the differences between the deformation values given by simulation tools and those measured by laser scanning. In this way, the elastic parameters and boundary conditions of the structural element can be identified so that surface stresses can be estimated more easily.

  12. Robust fitting of diurnal brightness temperature cycle

    CSIR Research Space (South Africa)

    Udahemuka, G

    2007-11-01

    Full Text Available ... for the pixel concerned. Robust fitting of the observed Diurnal Temperature Cycle (DTC) of a given pixel, taken over a day without cloud cover or other abnormal conditions such as fire, can give a data-based brightness temperature model for that pixel...
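    A minimal sketch of robustly fitting a simplified diurnal temperature cycle, modeled here as a mean plus a single daily cosine harmonic, with a soft-L1 loss so that outliers such as cloud- or fire-affected samples are down-weighted. The model form, data and parameters are simplifying assumptions, not the CSIR implementation.

```python
# Hedged sketch: robust fit of a simplified diurnal temperature cycle (DTC),
# modeled as mean + daily cosine harmonic, using a soft-L1 loss so that
# outliers (cloud/fire contaminated samples) have less influence.
import numpy as np
from scipy.optimize import least_squares

def dtc_model(params, t_hours):
    t0, amp, t_max = params
    return t0 + amp * np.cos(2.0 * np.pi * (t_hours - t_max) / 24.0)

def residuals(params, t_hours, bt_obs):
    return dtc_model(params, t_hours) - bt_obs

rng = np.random.default_rng(2)
t = np.arange(0.0, 24.0, 0.5)
bt = dtc_model([300.0, 8.0, 13.0], t) + rng.normal(0.0, 0.5, t.size)
bt[30] += 15.0   # a fire-like outlier

fit = least_squares(residuals, x0=[295.0, 5.0, 12.0], args=(t, bt), loss="soft_l1")
print("fitted mean, amplitude, time of maximum:", fit.x)
```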

  13. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
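    A minimal sketch of the estimation stage of the Box-Jenkins procedure using statsmodels, fitting an ARIMA(1,0,1) to a synthetic GPA-like series; the order and the data are illustrative assumptions, not the study's identification result.

```python
# Hedged sketch: fitting an ARIMA model to a synthetic sequential series with
# statsmodels. The (1, 0, 1) order is an illustrative choice, not the study's.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
n = 120
noise = rng.normal(0.0, 0.1, n)
gpa = np.empty(n)
gpa[0] = 3.0
for t in range(1, n):                      # simple AR(1)-like synthetic series
    gpa[t] = 3.0 + 0.6 * (gpa[t - 1] - 3.0) + noise[t]

fit = ARIMA(gpa, order=(1, 0, 1)).fit()
print(fit.summary().tables[1])
print("next-period forecast:", fit.forecast(steps=1))
```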

  14. Sand Point, Alaska MHW Coastal Digital Elevation Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Geophysical Data Center (NGDC) is building high-resolution digital elevation models (DEMs) for select U.S. coastal regions. These integrated...

  15. Lyapunov functions for the fixed points of the Lorenz model

    International Nuclear Information System (INIS)

    Bakasov, A.A.; Govorkov, B.B. Jr.

    1992-11-01

    We have shown how explicit Lyapunov functions can be constructed in the framework of a regular procedure suggested and completed by Lyapunov a century ago (the 'method of critical cases'). The method completely covers all of the practically encountered subtle cases of stability analysis for ordinary differential equations where linear stability analysis fails. These subtle cases, 'the critical cases' according to Lyapunov, include both bifurcations of solutions and solutions of systems with symmetry. Properly specialized, and genuinely powerful in the case of ODEs, Lyapunov's method can be formulated in simple language and should attract wide interest from the physics audience. The method leads inevitably to the construction of an explicit Lyapunov function, automatically takes the Fredholm alternative into account, and avoids infinite-step calculations. An easy and transparent physical interpretation of the Lyapunov function as a potential or as a time-dependent entropy provides more details about the local dynamics of the system at non-equilibrium phase transition points. Another advantage is that this Lyapunov method consists of a set of very detailed explicit prescriptions which allow one to easily program the method for a symbolic processor. In this work, the Lyapunov theory for critical cases is applied to the real Lorenz equations, and it is shown, in particular, that increasing σ at the Hopf bifurcation point suppresses the contribution of one of the variables to the destabilization of the system. The relation of the method to contemporary methods and its place among them are clearly and extensively discussed. Thanks to the appendices, the paper is self-contained and does not require the reader to consult results published only in Russian. (author). 38 refs
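    For reference, the sketch below computes the nonzero fixed points of the Lorenz system and integrates the equations from a point near one of them with the classic parameter values. It only illustrates the system being analyzed, not the explicit Lyapunov-function construction of the paper.

```python
# Hedged sketch: fixed points of the Lorenz system and a short integration near
# one of them (classic parameters). Illustrates the system under study only.
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, s):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Nonzero fixed points C± = (±sqrt(beta*(rho-1)), ±sqrt(beta*(rho-1)), rho-1)
r = np.sqrt(beta * (rho - 1.0))
c_plus = np.array([r, r, rho - 1.0])
print("fixed point C+:", c_plus, "residual:", lorenz(0.0, c_plus))

sol = solve_ivp(lorenz, (0.0, 20.0), c_plus + 1e-3, max_step=0.01)
print("state after 20 time units starting near C+:", sol.y[:, -1])
```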

  16. Intermittent episodes of bright light suppress myopia in the chicken more than continuous bright light.

    Directory of Open Access Journals (Sweden)

    Weizhong Lan

    Full Text Available PURPOSE: Bright light has been shown to be a powerful inhibitor of myopia development in animal models. We studied which temporal patterns of bright light are the most potent in suppressing deprivation myopia in chickens. METHODS: Eight-day-old chickens wore diffusers over one eye to induce deprivation myopia. A reference group (n = 8) was kept under office-like illuminance (500 lux) at a 10:14 light:dark cycle. Episodes of bright light (15 000 lux) were superimposed on this background as follows. Paradigm I: exposure to constant bright light for either 1 hour (n = 5), 2 hours (n = 5), 5 hours (n = 4) or 10 hours (n = 4). Paradigm II: exposure to repeated cycles of bright light with 50% duty cycle and either 60-minute (n = 7), 30-minute (n = 8), 15-minute (n = 6), 7-minute (n = 7) or 1-minute (n = 7) periods, provided for 10 hours. Refraction and axial length were measured prior to and immediately after the 5-day experiment. Relative changes were analyzed by paired t-tests, and differences among groups were tested by one-way ANOVA. RESULTS: Compared with the reference group, exposure to continuous bright light for 1 or 2 hours every day had no significant protective effect against deprivation myopia. Inhibition of myopia became significant after 5 hours of bright light exposure, but extending the duration to 10 hours did not offer an additional benefit. In comparison, repeated cycles of 1:1 or 7:7 minutes of bright light enhanced the protective effect against myopia and could fully suppress its development. CONCLUSIONS: The protective effect of bright light depends on the exposure duration and, for the intermittent form, on the frequency of the cycle. Compared to the saturation effect of continuous bright light, low-frequency cycles of bright light (1:1 min) provided the strongest inhibition effect. However, our quantitative results probably cannot be directly translated to humans, but rather need further refinement in clinical studies.

  17. Development and evaluation of spatial point process models for epidermal nerve fibers.

    Science.gov (United States)

    Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila

    2013-06-01

    We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.
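    A minimal sketch of the second, cluster-type construction described above: base points from a homogeneous Poisson process on a unit square and, for each base, a Poisson number of end points displaced by short Gaussian offsets. The intensity, mean cluster size and dispersion are placeholder values, not fitted ENF parameters.

```python
# Hedged sketch: base points from a homogeneous Poisson process, with end points
# clustered around each base (Poisson counts, Gaussian offsets). Parameter values
# are placeholders, not fitted ENF parameters.
import numpy as np

rng = np.random.default_rng(4)
lam_base = 50.0        # expected number of base points per unit area
mean_ends = 3.0        # expected end points (fiber endings) per base
sigma = 0.02           # spatial spread of end points around their base

n_base = rng.poisson(lam_base)
bases = rng.uniform(0.0, 1.0, size=(n_base, 2))

end_points, parent = [], []
for i, b in enumerate(bases):
    k = rng.poisson(mean_ends)
    end_points.append(b + rng.normal(0.0, sigma, size=(k, 2)))
    parent.extend([i] * k)
end_points = np.vstack(end_points) if end_points else np.empty((0, 2))

print("bases:", n_base, "end points:", len(end_points),
      "mean ends per base:", len(end_points) / max(n_base, 1))
```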

  18. RANS-VOF modelling of the Wavestar point absorber

    DEFF Research Database (Denmark)

    Ransley, E. J.; Greaves, D. M.; Raby, A.

    2017-01-01

    Highlights: A fully nonlinear, coupled model of the Wavestar WEC has been created using the open-source CFD software OpenFOAM®. The response of the Wavestar WEC is simulated in regular waves with different steepness. Predictions of body motion, surface elevation, fluid velocity, pressure and load ...

  19. Demystifying the cytokine network: Mathematical models point the way.

    Science.gov (United States)

    Morel, Penelope A; Lee, Robin E C; Faeder, James R

    2017-10-01

    Cytokines provide the means by which immune cells communicate with each other and with parenchymal cells. There are over one hundred cytokines and many exist in families that share receptor components and signal transduction pathways, creating complex networks. Reductionist approaches to understanding the role of specific cytokines, through the use of gene-targeted mice, have revealed further complexity in the form of redundancy and pleiotropy in cytokine function. Creating an understanding of the complex interactions between cytokines and their target cells is challenging experimentally. Mathematical and computational modeling provides a robust set of tools by which complex interactions between cytokines can be studied and analyzed, in the process creating novel insights that can be further tested experimentally. This review will discuss and provide examples of the different modeling approaches that have been used to increase our understanding of cytokine networks. This includes discussion of knowledge-based and data-driven modeling approaches and the recent advance in single-cell analysis. The use of modeling to optimize cytokine-based therapies will also be discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Study of Three-Dimensional Image Brightness Loss in Stereoscopy

    Directory of Open Access Journals (Sweden)

    Hsing-Cheng Yu

    2015-10-01

    Full Text Available When viewing three-dimensional (3D) images, whether in cinemas or on stereoscopic televisions, viewers experience the same problem of image brightness loss. This study aims to investigate image brightness loss in 3D displays, with the primary aim being to quantify the image brightness degradation in the 3D mode. A further aim is to determine the image brightness relationship to the corresponding two-dimensional (2D) images in order to adjust the 3D-image brightness values. In addition, the photographic principle is used in this study to measure metering values by capturing 2D and 3D images on television screens. By analyzing these images with statistical product and service solutions (SPSS) software, the image brightness values can be estimated using the statistical regression model, which can also indicate the impact of various environmental factors or hardware on the image brightness. In analysis of the experimental results, comparison of the image brightness between 2D and 3D images indicates 60.8% degradation in the 3D image brightness amplitude. The experimental values, from 52.4% to 69.2%, are within the 95% confidence interval.

  1. Numerical Modeling of a Wave Energy Point Absorber

    DEFF Research Database (Denmark)

    Hernandez, Lorenzo Banos; Frigaard, Peter; Kirkegaard, Poul Henning

    2009-01-01

    The present study deals with numerical modelling of the Wave Star Energy WSE device. Hereby, linear potential theory is applied via a BEM code on the wave hydrodynamics exciting the floaters. Time and frequency domain solutions of the floater response are determined for regular and irregular seas. Furthermore, these results are used to estimate the power and the energy absorbed by a single oscillating floater. Finally, a latching control strategy is analysed in open-loop configuration for energy maximization.

  2. Numerical schemes for one-point closure turbulence models

    International Nuclear Information System (INIS)

    Larcher, Aurelien

    2010-01-01

    First-order Reynolds Averaged Navier-Stokes (RANS) turbulence models are studied in this thesis. The latter consist of the Navier-Stokes equations, supplemented with a system of balance equations describing the evolution of characteristic scalar quantities called 'turbulent scales'. In so doing, the contribution of the turbulent agitation to the momentum can be determined by adding a diffusive coefficient (called 'turbulent viscosity'), defined as a function of the turbulent scales, to the Navier-Stokes equations. The numerical analysis problems studied in this dissertation are treated in the framework of a fractional-step algorithm, consisting of an approximation on regular meshes of the Navier-Stokes equations by the nonconforming Crouzeix-Raviart finite elements, and a set of scalar convection-diffusion balance equations discretized by the standard finite volume method. A monotone numerical scheme based on the standard finite volume method is proposed to ensure that the turbulent scales, like the turbulent kinetic energy (k) and its dissipation rate (ε), remain positive in the case of the standard k-ε model, as well as the k-ε RNG and the extended k-ε-ν2 models. The convergence of the proposed numerical scheme is then studied on a system composed of the incompressible Stokes equations and a steady convection-diffusion equation, which are coupled by the viscosities and the turbulent production term. This reduced model exhibits the main difficulty encountered in the analysis of such problems: the definition of the turbulent production term leads to a class of convection-diffusion problems with an irregular right-hand side belonging to L1. Finally, to step towards the unsteady problem, the convergence of the finite volume scheme for a model convection-diffusion equation with L1 data is proved. The a priori estimates on the solution and on its time derivative are obtained in discrete norms, for

  3. Geographical point cloud modelling with the 3D medial axis transform

    NARCIS (Netherlands)

    Peters, R.Y.

    2018-01-01

    A geographical point cloud is a detailed three-dimensional representation of the geometry of our geographic environment.
    Using geographical point cloud modelling, we are able to extract valuable information from geographical point clouds that can be used for applications in asset management,

  4. A Massless-Point-Charge Model for the Electron

    Directory of Open Access Journals (Sweden)

    Daywitt W. C.

    2010-04-01

    Full Text Available “It is rather remarkable that the modern concept of electrodynamics is not quite 100 years old and yet still does not rest firmly upon uniformly accepted theoretical foundations. Maxwell’s theory of the electromagnetic field is firmly ensconced in modern physics, to be sure, but the details of how charged particles are to be coupled to this field remain somewhat uncertain, despite the enormous advances in quantum electrodynamics over the past 45 years. Our theories remain mathematically ill-posed and mired in conceptual ambiguities which quantum mechanics has only moved to another arena rather than resolve. Fundamentally, we still do not understand just what is a charged particle” [1, p.367]. As a partial answer to the preceding quote, this paper presents a new model for the electron that combines the seminal work of Puthoff [2] with the theory of the Planck vacuum (PV) [3], the basic idea for the model following from [2] with the PV theory adding some important details.

  5. A Massless-Point-Charge Model for the Electron

    Directory of Open Access Journals (Sweden)

    Daywitt W. C.

    2010-04-01

    Full Text Available "It is rather remarkable that the modern concept of electrodynamics is not quite 100 years old and yet still does not rest firmly upon uniformly accepted theoretical foundations. Maxwell's theory of the electromagnetic field is firmly ensconced in modern physics, to be sure, but the details of how charged particles are to be coupled to this field remain somewhat uncertain, despite the enormous advances in quantum electrodynamics over the past 45 years. Our theories remain mathematically ill-posed and mired in conceptual ambiguities which quantum mechanics has only moved to another arena rather than resolve. Fundamentally, we still do not understand just what is a charged particle" (Grandy W.T. Jr. Relativistic quantum mechanics of leptons and fields. Kluwer Academic Publishers, Dordrecht-London, 1991, p.367). As a partial answer to the preceding quote, this paper presents a new model for the electron that combines the seminal work of Puthoff with the theory of the Planck vacuum (PV), the basic idea for the model following from Puthoff with the PV theory adding some important details.

  6. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

    The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities for analysing and estimating unemployment and its spatial distribution across any region. The labor force survey chooses, according to pre-established sampling criteria, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sample sizes in small areas, tend to produce fairly large sampling variations; therefore model-based methods, which tend to

  7. Marked point process for modelling seismic activity (case study in Sumatra and Java)

    Science.gov (United States)

    Pratiwi, Hasih; Sulistya Rini, Lia; Wayan Mangku, I.

    2018-05-01

    Earthquakes are natural phenomena that are random and irregular in space and time. The occurrence of an earthquake at a given location is still difficult to forecast, so earthquake forecasting methodology continues to be developed from both the seismological and the stochastic points of view. To describe such random phenomena, in both space and time, a point process approach can be used. There are two types of point processes: temporal point processes and spatial point processes. A temporal point process relates to events observed over time as a sequence in time, whereas a spatial point process describes the location of objects in two- or three-dimensional space. The points of a point process can be labelled with additional information called marks. A marked point process can be considered as a pair (x, m), where x is the location of a point and m is the mark attached to that point. This study aims to model a marked point process indexed by time for earthquake data in Sumatra Island and Java Island. This model can be used to analyse seismic activity through its intensity function by considering the history of the process up to time t. Based on data obtained from the U.S. Geological Survey from 1973 to 2017 with a magnitude threshold of 5, we obtained maximum likelihood estimates for the parameters of the intensity function. The estimation of the model parameters shows that the seismic activity in Sumatra Island is greater than in Java Island.
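    A minimal sketch of a marked temporal point process in this spirit: event times from a homogeneous Poisson process and marks (magnitudes above the threshold of 5) from an exponential, Gutenberg-Richter-like law, together with the obvious maximum likelihood estimates. The rate and the magnitude parameter are placeholders, not the Sumatra/Java estimates.

```python
# Hedged sketch: a marked temporal point process for seismicity. Event times
# follow a homogeneous Poisson process; marks (magnitude excesses over 5) follow
# an exponential law. Rate and beta are placeholders, not the paper's fits.
import numpy as np

rng = np.random.default_rng(5)
T = 365.0 * 10.0        # observation window in days
rate = 0.05             # events per day (assumed)
beta = 2.0              # exponential rate of the magnitude excess (assumed)
m_min = 5.0             # magnitude threshold

n = rng.poisson(rate * T)
times = np.sort(rng.uniform(0.0, T, n))
mags = m_min + rng.exponential(1.0 / beta, n)   # marks attached to each event time

# Maximum likelihood estimates for this simple marked Poisson model
rate_hat = n / T
beta_hat = 1.0 / np.mean(mags - m_min)
print("events:", n, "rate_hat:", rate_hat, "beta_hat:", beta_hat)
```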

  8. Predictive error dependencies when using pilot points and singular value decomposition in groundwater model calibration

    DEFF Research Database (Denmark)

    Christensen, Steen; Doherty, John

    2008-01-01

    A significant practical problem with the pilot point method is to choose the location of the pilot points. We present a method that is intended to relieve the modeler from much of this responsibility. The basic idea is that a very large number of pilot points are distributed more or less uniformly over the model area. Singular value decomposition (SVD) of the (possibly weighted) sensitivity matrix of the pilot point based model produces eigenvectors of which we pick a small number corresponding to significant eigenvalues. Super parameters are defined as factors through which parameter combinations corresponding to the chosen eigenvectors are multiplied to obtain the pilot point values. The model can thus be transformed from having many pilot-point parameters to having a few super parameters that can be estimated by nonlinear regression on the basis of the available observations. (This...
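    A minimal sketch of the super-parameter construction: take the SVD of a (possibly weighted) sensitivity matrix, keep the right singular vectors belonging to the significant singular values, and map a few super parameters back to many pilot-point values. The matrix here is random, standing in for the model's actual sensitivities.

```python
# Hedged sketch: super parameters from the truncated SVD of a sensitivity matrix
# (observations x pilot points). Random numbers stand in for model sensitivities.
import numpy as np

rng = np.random.default_rng(6)
n_obs, n_pilot = 40, 200
J = rng.normal(size=(n_obs, n_pilot))      # (weighted) sensitivity matrix stand-in

U, s, Vt = np.linalg.svd(J, full_matrices=False)
k = int(np.sum(s > 0.05 * s[0]))           # keep "significant" singular values
V_k = Vt[:k].T                             # n_pilot x k basis of parameter space

super_params = np.zeros(k)                 # estimated by nonlinear regression in practice
pilot_values = V_k @ super_params          # map super parameters -> pilot point values
print("retained super parameters:", k, "pilot point values:", pilot_values.shape)
```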

  9. Improved point-kinetics model for the BWR control rod drop accident

    International Nuclear Information System (INIS)

    Neogy, P.; Wakabayashi, T.; Carew, J.F.

    1985-01-01

    A simple prescription to account for spatial feedback weighting effects in RDA (rod drop accident) point-kinetics analyses has been derived and tested. The point-kinetics feedback model is linear in the core peaking factor, F_Q, and in the core average void fraction and fuel temperature. Comparison with detailed spatial kinetics analyses indicates that the improved point-kinetics model provides an accurate description of the BWR RDA.

  10. A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2016-06-01

    Full Text Available This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). It is crucial to measure SimMC for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to those traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and laser scanning data.
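    A minimal sketch of the two distance terms via k-d trees: DistMC as the (here unweighted) average distance from points sampled on the model to the point cloud, and DistCM as the average distance from cloud points to the sampled model. The sampling, the unit surface area and the lack of weighting are simplifying assumptions.

```python
# Hedged sketch: distances between a sampled model surface and a point cloud
# using k-d trees. DistMC/DistCM are simplified (unweighted) versions of the
# quantities described above.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
model_samples = rng.uniform(0.0, 1.0, size=(2000, 3))   # points sampled on the model
point_cloud = model_samples[:1500] + rng.normal(0.0, 0.01, size=(1500, 3))

dist_mc = cKDTree(point_cloud).query(model_samples)[0].mean()  # model -> cloud
dist_cm = cKDTree(model_samples).query(point_cloud)[0].mean()  # cloud -> model

model_area = 1.0                       # stand-in for the model's surface area
sim_mc = model_area / dist_mc          # similarity ratio in the spirit of SimMC
print("DistMC:", dist_mc, "DistCM:", dist_cm, "SimMC (unweighted):", sim_mc)
```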

  11. Modeling a point-source release of 1,1,1-trichloroethane using EPA's SCREEN model

    International Nuclear Information System (INIS)

    Henriques, W.D.; Dixon, K.R.

    1994-01-01

    Using data from the Environmental Protection Agency's Toxic Release Inventory 1988 (EPA TRI88), pollutant concentration estimates were modeled for a point source air release of 1,1,1-trichloroethane at the Savannah River Plant located in Aiken, South Carolina. Estimates were calculated using the EPA's SCREEN model utilizing typical meteorological conditions to determine maximum impact of the plume under different mixing conditions for locations within 100 meters of the stack. Input data for the SCREEN model were then manipulated to simulate the impact of the release under urban conditions (for the purpose of assessing future land-use considerations) and under flare release options to determine if these parameters lessen or increase the probability of human or wildlife exposure to significant concentrations. The results were then compared to EPA reference concentrations (RfC) in order to assess the size of the buffer around the stack which may potentially have levels that exceed this level of safety

  12. The ZTF Bright Transient Survey

    Science.gov (United States)

    Fremling, C.; Sharma, Y.; Kulkarni, S. R.; Miller, A. A.; Taggart, K.; Perley, D. A.; Gooba, A.

    2018-06-01

    As a supplement to the Zwicky Transient Facility (ZTF; ATel #11266) public alerts (ATel #11685) we plan to report (following ATel #11615) bright probable supernovae identified in the raw alert stream from the ZTF Northern Sky Survey ("Celestial Cinematography"; see Bellm & Kulkarni, 2017, Nature Astronomy 1, 71) to the Transient Name Server (https://wis-tns.weizmann.ac.il) on a daily basis; the ZTF Bright Transient Survey (BTS; see Kulkarni et al., 2018; arXiv:1710.04223).

  13. Intrinsic brightness temperatures of blazar jets at 15 GHz

    Directory of Open Access Journals (Sweden)

    Hovatta Talvikki

    2013-12-01

    We have developed a new Bayesian Markov Chain Monte Carlo method to deconvolve light curves of blazars into individual flares, including proper estimation of the fit errors. We use the method to fit 15 GHz light curves obtained within the OVRO 40-m blazar monitoring program where a large number of AGN have been monitored since 2008 in support of the Fermi Gamma-Ray Space Telescope mission. The time scales obtained from the fitted models are used to calculate the variability brightness temperature of the sources. Additionally, we have calculated brightness temperatures of a sample of these objects using Very Long Baseline Array data from the MOJAVE survey. Combining these two data sets enables us to study the intrinsic brightness temperature distribution in these blazars at 15 GHz. Our preliminary results indicate that the mean intrinsic brightness temperature in a sample of 14 sources is near the equipartition brightness temperature of ~ 10^11 K.

  14. Modeling of Maximum Power Point Tracking Controller for Solar Power System

    Directory of Open Access Journals (Sweden)

    Aryuanto Soetedjo

    2012-09-01

    In this paper, a Maximum Power Point Tracking (MPPT) controller for a solar power system is modeled using MATLAB Simulink. The model consists of a PV module, a buck converter, and the MPPT controller. The contribution of the work lies in the modeling of the buck converter, which allows the input voltage of the converter (i.e., the output voltage of the PV module) to be changed by varying the duty cycle, so that the maximum power point can be tracked when the environment changes. The simulation results show that the developed model performs well in tracking the maximum power point (MPP) of the PV module using the Perturb and Observe (P&O) algorithm.
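
    A minimal sketch of the Perturb and Observe logic, written in plain Python rather than Simulink; the toy P-V curve and step size are assumptions, and in the actual model the perturbation acts on the buck converter duty cycle rather than directly on the operating voltage.

```python
import numpy as np

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy P-V characteristic (illustrative only): current collapses near open-circuit voltage."""
    i = i_sc * (1.0 - np.exp((v - v_oc) / 3.0))
    return max(v * i, 0.0)

def perturb_and_observe(v0=20.0, dv=0.5, steps=50):
    """Keep stepping the operating point in the direction that increased power; reverse otherwise."""
    v, p_prev, direction = v0, pv_power(v0), +1
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:            # power dropped -> reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

print("operating point near MPP (V, W):", perturb_and_observe())
```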

  15. Structured spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...... dataset consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent....

  16. Joint Clustering and Component Analysis of Correspondenceless Point Sets: Application to Cardiac Statistical Modeling.

    Science.gov (United States)

    Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F

    2015-01-01

    Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets, and the principal modes of variations in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.

  17. Electromagnetically induced transparency control in terahertz metasurfaces based on bright-bright mode coupling

    Science.gov (United States)

    Yahiaoui, R.; Burrow, J. A.; Mekonen, S. M.; Sarangan, A.; Mathews, J.; Agha, I.; Searles, T. A.

    2018-04-01

    We demonstrate a classical analog of electromagnetically induced transparency (EIT) in a highly flexible planar terahertz metamaterial (MM) comprised of three-gap split-ring resonators. The keys to achieve EIT in this system are the frequency detuning and hybridization processes between two bright modes coexisting in the same unit cell as opposed to bright-dark modes. We present experimental verification of two bright modes coupling for a terahertz EIT-MM in the context of numerical results and theoretical analysis based on a coupled Lorentz oscillator model. In addition, a hybrid variation of the EIT-MM is proposed and implemented numerically to dynamically tune the EIT window by incorporating photosensitive silicon pads in the split gap region of the resonators. As a result, this hybrid MM enables the active optical control of a transition from the on state (EIT mode) to the off state (dipole mode).
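
    A sketch of the two-bright-mode coupled Lorentz oscillator picture described above; all resonance frequencies, damping rates, coupling constants and field couplings below are illustrative assumptions, not the fitted parameters of the measured metasurface.

```python
import numpy as np

w1, w2 = 1.00, 1.08          # detuned resonance frequencies of the two bright modes (arb. units)
g1, g2 = 0.010, 0.030        # damping rates
k12 = 0.04                   # near-field coupling strength
f1, f2 = 1.0, 0.8            # coupling of each mode to the incident terahertz field

w = np.linspace(0.9, 1.2, 600)
D1 = w1**2 - w**2 - 1j * g1 * w
D2 = w2**2 - w**2 - 1j * g2 * w

# solve  D1*x1 + k12*x2 = f1*E ,  k12*x1 + D2*x2 = f2*E  with E = 1
det = D1 * D2 - k12**2
x1 = (f1 * D2 - f2 * k12) / det
x2 = (f2 * D1 - f1 * k12) / det

absorption = w * np.imag(f1 * x1 + f2 * x2)   # proportional to the power taken from the drive
dip = 200 + np.argmin(absorption[200:400])    # transparency window between the two resonances
print("EIT-like transparency window near w =", w[dip])
```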

  18. Finding Non-Zero Stable Fixed Points of the Weighted Kuramoto model is NP-hard

    OpenAIRE

    Taylor, Richard

    2015-01-01

    The Kuramoto model when considered over the full space of phase angles [0, 2π) can have multiple stable fixed points which form basins of attraction in the solution space. In this paper we illustrate the fundamentally complex relationship between the network topology and the solution space by showing that determining the possibility of multiple stable fixed points from the network topology is NP-hard for the weighted Kuramoto Model. In the case of the unweighted model this problem is shown...

  19. A new perspective on the infrared brightness temperature ...

    Indian Academy of Sciences (India)

    … temperatures clearly discriminates the cloud pixels of deep convective and … utilized in the modelling of the histogram of infrared brightness temperature of deep convective and … Henderson-Sellers A 1978 Surface type and its effect.

  20. A Biomechanical Model of Single-joint Arm Movement Control Based on the Equilibrium Point Hypothesis

    OpenAIRE

    Masataka, SUZUKI; Yoshihiko, YAMAZAKI; Yumiko, TANIGUCHI; Department of Psychology, Kinjo Gakuin University; Department of Health and Physical Education, Nagoya Institute of Technology; College of Human Life and Environment, Kinjo Gakuin University

    2003-01-01

    SUZUKI,M., YAMAZAKI,Y. and TANIGUCHI,Y., A Biomechanical Model of Single-joint Arm Movement Control Based on the Equilibrium Point Hypothesis. Adv. Exerc. Sports Physiol., Vol.9, No.1 pp.7-25, 2003. According to the equilibrium point hypothesis of motor control, control action of muscles is not explicitly computed, but rather arises as a consequence of interaction among moving equilibrium point, reflex feedback and muscle mechanical properties. This approach is attractive as it obviates the n...

  1. On the asymptotic ergodic capacity of FSO links with generalized pointing error model

    KAUST Repository

    Al-Quwaiee, Hessa

    2015-09-11

    Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need an effective mathematical model for them. Scintillations are typically modeled by the log-normal and Gamma-Gamma distributions for weak and strong turbulence conditions, respectively. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive the asymptotic ergodic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. © 2015 IEEE.
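
    Because the Beckmann distribution describes the radial displacement produced by two independent, non-zero-mean Gaussian offsets, the generalized pointing error is easy to sample; the sketch below does so and applies a common Gaussian-beam pointing-loss factor. The offsets, variances and beam width are assumed values, not those used in the paper.

```python
import numpy as np

def sample_beckmann_radial(n, mu=(0.1, 0.05), sigma=(0.3, 0.2), rng=None):
    """Radial displacement r = sqrt(x^2 + y^2) with independent Gaussian x, y of
    non-zero means and unequal variances, i.e. a Beckmann-distributed pointing error."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(mu[0], sigma[0], n)
    y = rng.normal(mu[1], sigma[1], n)
    return np.hypot(x, y)

def pointing_loss(r, a0=1.0, w_eq=1.0):
    """Common Gaussian-beam approximation of the collected-power fraction for
    a radial pointing displacement r (w_eq is the equivalent beam width)."""
    return a0 * np.exp(-2.0 * r**2 / w_eq**2)

r = sample_beckmann_radial(100_000)
print("Monte Carlo mean pointing-loss factor:", pointing_loss(r).mean())
```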

  2. Point kinetics model with one-dimensional (radial) heat conduction formalism

    International Nuclear Information System (INIS)

    Jain, V.K.

    1989-01-01

    A point-kinetics model with one-dimensional (radial) heat conduction formalism has been developed. The heat conduction formalism is based on corner-mesh finite difference method. To get average temperatures in various conducting regions, a novel weighting scheme has been devised. The heat conduction model has been incorporated in the point-kinetics code MRTF-FUEL. The point-kinetics equations are solved using the method of real integrating factors. It has been shown by analysing the simulation of hypothetical loss of regulation accident in NAPP reactor that the model is superior to the conventional one in accuracy and speed of computation. (author). 3 refs., 3 tabs
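
    For orientation only, a one-delayed-group point kinetics solver is sketched below; the six delayed groups, the radial heat-conduction feedback and the integrating-factor solution method of the cited code are not reproduced, and all constants are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, lam, Lambda = 0.0065, 0.08, 1.0e-4   # delayed fraction, precursor decay const (1/s), generation time (s)

def rho_step(t):
    """Illustrative reactivity input: a +100 pcm step at t = 1 s."""
    return 0.001 if t > 1.0 else 0.0

def point_kinetics(t, y):
    n, c = y                                # relative power, precursor concentration
    dndt = (rho_step(t) - beta) / Lambda * n + lam * c
    dcdt = beta / Lambda * n - lam * c
    return [dndt, dcdt]

y0 = [1.0, beta / (lam * Lambda)]           # start from equilibrium precursor concentration
sol = solve_ivp(point_kinetics, (0.0, 10.0), y0, method="Radau", max_step=0.01)
print("relative power at t = 10 s:", sol.y[0, -1])
```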

  3. Prediction model for initial point of net vapor generation for low-flow boiling

    International Nuclear Information System (INIS)

    Sun Qi; Zhao Hua; Yang Ruichang

    2003-01-01

    The prediction of the initial point of net vapor generation is significant for the calculation of phase distribution in sub-cooled boiling. However, most of the investigations were developed in high-flow boiling, and there is no common model that could be successfully applied for the low-flow boiling. A predictive model for the initial point of net vapor generation for low-flow forced convection and natural circulation is established here, by the analysis of evaporation and condensation heat transfer. The comparison between experimental data and calculated results shows that this model can predict the net vapor generation point successfully in low-flow sub-cooled boiling

  4. Direct imaging of phase objects enables conventional deconvolution in bright field light microscopy.

    Directory of Open Access Journals (Sweden)

    Carmen Noemí Hernández Candia

    In transmitted optical microscopy, absorption structure and phase structure of the specimen determine the three-dimensional intensity distribution of the image. The elementary impulse responses of the bright field microscope therefore consist of separate absorptive and phase components, precluding general application of linear, conventional deconvolution processing methods to improve image contrast and resolution. However, conventional deconvolution can be applied in the case of pure phase (or pure absorptive) objects if the corresponding phase (or absorptive) impulse responses of the microscope are known. In this work, we present direct measurements of the phase point- and line-spread functions of a high-aperture microscope operating in transmitted bright field. Polystyrene nanoparticles and microtubules (biological polymer filaments) serve as the pure phase point and line objects, respectively, that are imaged with high contrast and low noise using standard microscopy plus digital image processing. Our experimental results agree with a proposed model for the response functions, and confirm previous theoretical predictions. Finally, we use the measured phase point-spread function to apply conventional deconvolution on the bright field images of living, unstained bacteria, resulting in improved definition of cell boundaries and sub-cellular features. These developments demonstrate practical application of standard restoration methods to improve imaging of phase objects such as cells in transmitted light microscopy.
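
    Conventional (linear) deconvolution with a measured PSF can be illustrated with a simple Wiener filter; the Gaussian PSF and the scalar regularisation constant below are stand-ins for the measured phase point-spread function and the noise statistics.

```python
import numpy as np
from scipy.signal import fftconvolve

def wiener_deconvolve(image, psf, nsr=1e-3):
    """Linear (Wiener) deconvolution with a known PSF; nsr is a scalar noise-to-signal
    ratio used as regularisation."""
    h = np.zeros_like(image, dtype=float)                 # embed the PSF in an image-sized frame
    h[:psf.shape[0], :psf.shape[1]] = psf
    h = np.roll(h, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))  # centre PSF at the origin
    H, G = np.fft.fft2(h), np.fft.fft2(image)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + nsr)))

# toy usage: blur a synthetic "phase object" with a Gaussian stand-in PSF, then restore it
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / 8.0)
psf /= psf.sum()
obj = np.zeros((128, 128))
obj[60:68, 60:68] = 1.0
blurred = fftconvolve(obj, psf, mode="same")
restored = wiener_deconvolve(blurred, psf)
print("mean absolute restoration error:", np.abs(restored - obj).mean())
```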

  5. Point-Structured Human Body Modeling Based on 3D Scan Data

    Directory of Open Access Journals (Sweden)

    Ming-June Tsai

    2018-01-01

    A novel point-structured geometrical model for a realistic human body is introduced in this paper. The technique is based on feature extraction from 3D body scan data. Anatomic features such as the neck, the armpits, the crotch points, and other major feature points are recognized. The body data are then segmented into six major parts. A body model is then constructed by re-sampling the scanned data to create a point-structured mesh. The body model contains geodetic landmarks in latitudinal and longitudinal curves passing through those feature points. The body model preserves the full body shape and all the body dimensions but requires little storage space. Therefore, the body model can be used as a mannequin in the garment industry, or as a manikin in various human-factors designs, but the most important application is as a virtual character to animate body motion in mocap (motion capture) systems. By adding suitable joint freedoms between the segmented body links, kinematic and dynamic properties from motion theory can be applied to the body model. As a result, a 3D virtual character that fully resembles the original scanned individual vividly animates the body motions. The gaps between the body segments due to motion can be filled by a skin blending technique that exploits the characteristics of the point-structured model. The model has the potential to serve as a standardized data type for archiving body information for custom-made products.

  6. The importance of topographically corrected null models for analyzing ecological point processes.

    Science.gov (United States)

    McDowall, Philip; Lynch, Heather J

    2017-07-01

    Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.

  7. Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups

    Science.gov (United States)

    Casas, Lluís; Estop, Eugènia

    2015-01-01

    Both, virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to study point-symmetry. The use of 3D printing to…

  8. Possible Bright Starspots on TRAPPIST-1

    Science.gov (United States)

    Morris, Brett M.; Agol, Eric; Davenport, James R. A.; Hawley, Suzanne L.

    2018-04-01

    The M8V star TRAPPIST-1 hosts seven roughly Earth-sized planets and is a promising target for exoplanet characterization. Kepler/K2 Campaign 12 observations of TRAPPIST-1 in the optical show an apparent rotational modulation with a 3.3-day period, though that rotational signal is not readily detected in the Spitzer light curve at 4.5 μm. If the rotational modulation is due to starspots, persistent dark spots can be excluded from the lack of photometric variability in the Spitzer light curve. We construct a photometric model for rotational modulation due to photospheric bright spots on TRAPPIST-1 that is consistent with both the Kepler and Spitzer light curves. The maximum-likelihood model with three spots has typical spot sizes of R_spot/R_⋆ ≈ 0.004 at temperature T_spot ≳ 5300 ± 200 K. We also find that large flares are observed more often when the brightest spot is facing the observer, suggesting a correlation between the position of the bright spots and flare events. In addition, these flares may occur preferentially when the spots are increasing in brightness, which suggests that the 3.3-day periodicity may not be a rotational signal, but rather a characteristic timescale of active regions.

  9. Statistical imitation system using relational interest points and Gaussian mixture models

    CSIR Research Space (South Africa)

    Claassens, J

    2009-11-01

    Full Text Available The author proposes an imitation system that uses relational interest points (RIPs) and Gaussian mixture models (GMMs) to characterize a behaviour. The system's structure is inspired by the Robot Programming by Demonstration (RDP) paradigm...

  10. Best for the bright?

    DEFF Research Database (Denmark)

    Aarkrog, Vibe

    2012-01-01

    takes place in school-based education, (Brooker and Butler, 1997). In the Danish dual system the general training will typically take place in VET colleges. However, in this article about the new apprenticeship model in Denmark, the issue is whether the general parts of the VET curricula can...

  11. Teradiode's high brightness semiconductor lasers

    Science.gov (United States)

    Huang, Robin K.; Chann, Bien; Burgess, James; Lochman, Bryan; Zhou, Wang; Cruz, Mike; Cook, Rob; Dugmore, Dan; Shattuck, Jeff; Tayebati, Parviz

    2016-03-01

    TeraDiode is manufacturing multi-kW-class ultra-high brightness fiber-coupled direct diode lasers for industrial applications. A fiber-coupled direct diode laser delivering 4,680 W from a 100 μm core diameter fiber has a beam parameter product (BPP) of 3.5 mm-mrad and is the lowest-BPP multi-kW-class direct diode laser yet reported. This laser is suitable for industrial materials processing applications, including sheet metal cutting and welding. This 4-kW fiber-coupled direct diode laser has comparable brightness to that of industrial fiber lasers and CO2 lasers, and is over 10x brighter than state-of-the-art direct diode lasers. We have also demonstrated novel high peak power lasers and high brightness Mid-Infrared Lasers.

  12. On the Asymptotic Capacity of Dual-Aperture FSO Systems with a Generalized Pointing Error Model

    KAUST Repository

    Al-Quwaiee, Hessa

    2016-06-28

    Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need an effective mathematical model for them. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive a generic expression for the asymptotic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. Finally, the asymptotic channel capacity formulas are extended to quantify the performance of FSO systems with selection and switched-and-stay diversity.

  13. A Matérn model of the spatial covariance structure of point rain rates

    KAUST Repository

    Sun, Ying

    2014-07-15

    It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.
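
    For reference, the Matérn covariance referred to above can be evaluated directly; the parameterisation below is generic (variance, range, smoothness), not the values estimated from the rain gauge data, and ν = 0.5 recovers the exponential model used for comparison.

```python
import numpy as np
from scipy.special import gamma, kv

def matern_cov(h, sigma2=1.0, phi=1.0, nu=0.5):
    """Matern covariance C(h) = sigma2 * 2^(1-nu)/Gamma(nu) * (h/phi)^nu * K_nu(h/phi)."""
    h = np.atleast_1d(np.asarray(h, dtype=float))
    c = np.full(h.shape, sigma2)                  # C(0) = sigma2
    nz = h > 0
    u = h[nz] / phi
    c[nz] = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * (u ** nu) * kv(nu, u)
    return c

lags = np.linspace(0.0, 5.0, 6)
print(matern_cov(lags, nu=0.5))   # nu = 0.5 ...
print(np.exp(-lags))              # ... reduces to the exponential covariance exp(-h/phi)
```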

  14. A Matérn model of the spatial covariance structure of point rain rates

    KAUST Repository

    Sun, Ying; Bowman, Kenneth P.; Genton, Marc G.; Tokay, Ali

    2014-01-01

    It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.

  15. Spatial Mixture Modelling for Unobserved Point Processes: Examples in Immunofluorescence Histology.

    Science.gov (United States)

    Ji, Chunlin; Merl, Daniel; Kepler, Thomas B; West, Mike

    2009-12-04

    We discuss Bayesian modelling and computational methods in analysis of indirectly observed spatial point processes. The context involves noisy measurements on an underlying point process that provide indirect and noisy data on locations of point outcomes. We are interested in problems in which the spatial intensity function may be highly heterogeneous, and so is modelled via flexible nonparametric Bayesian mixture models. Analysis aims to estimate the underlying intensity function and the abundance of realized but unobserved points. Our motivating applications involve immunological studies of multiple fluorescent intensity images in sections of lymphatic tissue where the point processes represent geographical configurations of cells. We are interested in estimating intensity functions and cell abundance for each of a series of such data sets to facilitate comparisons of outcomes at different times and with respect to differing experimental conditions. The analysis is heavily computational, utilizing recently introduced MCMC approaches for spatial point process mixtures and extending them to the broader new context here of unobserved outcomes. Further, our example applications are problems in which the individual objects of interest are not simply points, but rather small groups of pixels; this implies a need to work at an aggregate pixel region level and we develop the resulting novel methodology for this. Two examples with immunofluorescence histology data demonstrate the models and computational methodology.

  16. One loop beta functions and fixed points in higher derivative sigma models

    International Nuclear Information System (INIS)

    Percacci, Roberto; Zanusso, Omar

    2010-01-01

    We calculate the one loop beta functions of nonlinear sigma models in four dimensions containing general two- and four-derivative terms. In the O(N) model there are four such terms and nontrivial fixed points exist for all N≥4. In the chiral SU(N) models there are in general six couplings, but only five for N=3 and four for N=2; we find fixed points only for N=2, 3. In the approximation considered, the four-derivative couplings are asymptotically free but the coupling in the two-derivative term has a nonzero limit. These results support the hypothesis that certain sigma models may be asymptotically safe.

  17. Some application of the model of partition points on a one-dimensional lattice

    International Nuclear Information System (INIS)

    Mejdani, R.

    1991-07-01

    We have shown that, by using a model of a gas of partition points on a one-dimensional lattice, we can recover results on enzyme kinetics and the average domain size that we had previously obtained using correlated-walk theory or a probabilistic (combinatorial) approach. We have also discussed the problem of the spread of an infectious disease in relation to the stochastic model of partition points. We believe that this model, being very simple and mathematically transparent, can be advantageous for other theoretical investigations in chemistry and modern biology. (author). 14 refs, 6 figs, 1 tab

  18. Benchmark models, planes lines and points for future SUSY searches at the LHC

    International Nuclear Information System (INIS)

    AbdusSalam, S.S.; Allanach, B.C.; Dreiner, H.K.

    2012-03-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  19. Benchmark models, planes lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  20. Benchmark Models, Planes, Lines and Points for Future SUSY Searches at the LHC

    CERN Document Server

    AbdusSalam, S S; Dreiner, H K; Ellis, J; Ellwanger, U; Gunion, J; Heinemeyer, S; Krämer, M; Mangano, M L; Olive, K A; Rogerson, S; Roszkowski, L; Schlaffer, M; Weiglein, G

    2011-01-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  1. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we get the closed-form solutions of the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.

  2. Models of bright storm clouds and related dark ovals in Saturn's Storm Alley as constrained by 2008 Cassini/VIMS spectra

    Science.gov (United States)

    Sromovsky, L. A.; Baines, K. H.; Fry, P. M.

    2018-03-01

    A 5° latitude band on Saturn centered near planetocentric latitude 36°S is known as "Storm Alley" because it has been for several extended periods a site of frequent lightning activity and associated thunderstorms, first identified by Porco et al. (2005). The thunderstorms appeared as bright clouds at short and long continuum wavelengths, and over a period of a week or so transformed into dark ovals (Dyudina et al., 2007). The ovals were found to be dark over a wide spectral range, which led Baines et al. (2009) to suggest the possibility that a broadband absorber such as soot produced by lightning could play a significant role in darkening the clouds relative to their surroundings. Here we show that an alternative explanation, which is that the clouds are less reflective because of reduced optical depth, provides an excellent fit to near infrared spectra of similar features obtained by the Cassini Visual and Infrared Mapping Spectrometer (VIMS) in 2008, and leads to a plausible scenario for cloud evolution. We find that the background clouds and the oval clouds are both dominated by the optical properties of a ubiquitous upper cloud layer, which has the same particle size in both regions, but about half the optical depth and physical thickness in the dark oval regions. The dark oval regions are also marked by enhanced emissions in the 5-μm window region, a result of lower optical depth of the deep cloud layer near 3.1-3.8 bar, presumably composed of ammonium hydrosulfide (NH4SH). The bright storm clouds completely block this deep thermal emission with a thick layer of ammonia (NH3) clouds extending from the middle of the main visible cloud layer probably as deep as the 1.7-bar NH3 condensation level. Other condensates might also be present at higher pressures, but are obscured by the NH3 cloud. The strong 3-μm spectral absorption that was displayed by Saturn's Great Storm of 2010-2011 (Sromovsky et al., 2013) is weaker in these storms because the contrast is

  3. State-Of High Brightness RF Photo-Injector Design

    Science.gov (United States)

    Ferrario, Massimo; Clendenin, Jym; Palmer, Dennis; Rosenzweig, James; Serafini, Luca

    2000-04-01

    The art of designing optimized high-brightness electron RF photo-injectors has moved in the last decade from a cut-and-try procedure, guided by experimental experience and time-consuming particle tracking simulations, to fast parameter-space scanning, guided by recent analytical results and a fast-running semi-analytical code, so as to reach the optimum operating point corresponding to maximum beam brightness. Scaling laws and the theory of the invariant envelope provide designers with excellent tools for a first choice of parameters, and the code HOMDYN, based on a multi-slice envelope description of the beam dynamics, is tailored to describe the space-charge-dominated dynamics of laminar beams in the presence of time-dependent space charge forces, giving rise to a very fast modeling capability for photo-injector design. We report in this talk the results of a recent beam dynamics study, motivated by the need to redesign the LCLS photoinjector. During this work a new effective working point for a split RF photoinjector was discovered by means of the previously mentioned approach. With a proper choice of RF gun and solenoid parameters, the emittance evolution shows a double-minimum behavior in the drift region. If the booster is located where the relative emittance maximum and the envelope waist occur, the second emittance minimum can be shifted to the booster exit and frozen at a very low level (0.3 mm-mrad for a 1 nC flat-top bunch), provided that the invariant envelope matching conditions are satisfied.

  4. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    Science.gov (United States)

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.

  5. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    Science.gov (United States)

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostics tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically orientated diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression with inherent assumptions about the data and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this
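
    The analytical diagnostic mentioned above is easiest to see for an ordinary linear regression, where Cook's distance and brute-force case deletion coincide; the data, the linear model and the planted outlier below are assumptions for illustration, not the rating-curve or GR4J case studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=n)
y[10] += 5.0                                        # plant one influential observation

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
s2 = resid @ resid / (n - p)
h = np.diag(X @ XtX_inv @ X.T)                      # leverages (diagonal of the hat matrix)

# analytical Cook's distance for every observation
cooks_d = (resid**2 / (p * s2)) * h / (1.0 - h) ** 2

# brute-force case deletion: refit without observation i and compare fitted values
cooks_cd = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    b_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    diff = X @ (beta_hat - b_i)
    cooks_cd[i] = diff @ diff / (p * s2)

print("most influential point:", int(np.argmax(cooks_d)), int(np.argmax(cooks_cd)))
```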

  6. Structured Spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    2010-01-01

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...... data set consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent....

  7. Simulation of ultrasonic surface waves with multi-Gaussian and point source beam models

    International Nuclear Information System (INIS)

    Zhao, Xinyu; Schmerr, Lester W. Jr.; Li, Xiongbing; Sedov, Alexander

    2014-01-01

    In the past decade, multi-Gaussian beam models have been developed to solve many complicated bulk wave propagation problems. However, to date those models have not been extended to simulate the generation of Rayleigh waves. Here we will combine Gaussian beams with an explicit high frequency expression for the Rayleigh wave Green function to produce a three-dimensional multi-Gaussian beam model for the fields radiated from an angle beam transducer mounted on a solid wedge. Simulation results obtained with this model are compared to those of a point source model. It is shown that the multi-Gaussian surface wave beam model agrees well with the point source model while being computationally much more efficient

  8. Modelling of thermal field and point defect dynamics during silicon single crystal growth using CZ technique

    Science.gov (United States)

    Sabanskis, A.; Virbulis, J.

    2018-05-01

    Mathematical modelling is employed to numerically analyse the dynamics of the Czochralski (CZ) silicon single crystal growth. The model is axisymmetric, its thermal part describes heat transfer by conduction and thermal radiation, and allows to predict the time-dependent shape of the crystal-melt interface. Besides the thermal field, the point defect dynamics is modelled using the finite element method. The considered process consists of cone growth and cylindrical phases, including a short period of a reduced crystal pull rate, and a power jump to avoid large diameter changes. The influence of the thermal stresses on the point defects is also investigated.

  9. A Riccati-Based Interior Point Method for Efficient Model Predictive Control of SISO Systems

    DEFF Research Database (Denmark)

    Hagdrup, Morten; Johansson, Rolf; Bagterp Jørgensen, John

    2017-01-01

    model parts separate. The controller is designed based on the deterministic model, while the Kalman filter results from the stochastic part. The controller is implemented as a primal-dual interior point (IP) method using Riccati recursion and the computational savings possible for SISO systems...

  10. Markov Random Field Restoration of Point Correspondences for Active Shape Modelling

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen; Paulsen, Rasmus Reinhold; Larsen, Rasmus

    2004-01-01

    In this paper it is described how to build a statistical shape model using a training set with a sparse of landmarks. A well defined model mesh is selected and fitted to all shapes in the training set using thin plate spline warping. This is followed by a projection of the points of the warped...

  11. Point vortex modelling of the wake dynamics behind asymmetric vortex generator arrays

    NARCIS (Netherlands)

    Baldacchino, D.; Simao Ferreira, C.; Ragni, D.; van Bussel, G.J.W.

    2016-01-01

    In this work, we present a simple inviscid point vortex model to study the dynamics of asymmetric vortex rows, as might appear behind misaligned vortex generator vanes. Starting from the existing solution of the infinite vortex cascade, a numerical model of four base-vortices is chosen to represent
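
    A minimal sketch of point-vortex advection (each vortex is moved by the velocity induced by all the others); the four circulations and initial positions below are arbitrary assumptions, not the calibrated base-vortex configuration of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma = np.array([1.0, -1.0, 0.8, -0.8])     # circulations of the four vortices (arbitrary)

def biot_savart(t, z):
    """Complex-velocity form: u - i v at vortex k is sum_j Gamma_j / (2 pi i (z_k - z_j))."""
    pos = z[:4] + 1j * z[4:]
    w = np.zeros(4, dtype=complex)
    for k in range(4):
        dz = pos[k] - np.delete(pos, k)      # distances to the other vortices (no self-induction)
        w[k] = np.sum(np.delete(gamma, k) / (2.0j * np.pi * dz))
    vel = np.conj(w)                         # back to u + i v
    return np.concatenate([vel.real, vel.imag])

z0 = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.5, 0.0, 0.5])   # x-coordinates then y-coordinates
sol = solve_ivp(biot_savart, (0.0, 10.0), z0, max_step=0.01)
print("final vortex positions:", sol.y[:, -1])
```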

  12. An application of a discrete fixed point theorem to the Cournot model

    OpenAIRE

    Sato, Junichi

    2008-01-01

    In this paper, we apply a discrete fixed point theorem of [7] to the Cournot model [1]. Then we can deal with the Cournot model where the production of the enterprises is discrete. To handle it, we define a discrete Cournot-Nash equilibrium, and prove its existence.

  13. Three-dimensional point-cloud room model in room acoustics simulations

    DEFF Research Database (Denmark)

    Markovic, Milos; Olesen, Søren Krarup; Hammershøi, Dorte

    2013-01-01

    acquisition and its representation with a 3D point-cloud model, as well as utilization of such a model for the room acoustics simulations. A room is scanned with a commercially available input device (Kinect for Xbox360) in two different ways; the first one involves the device placed in the middle of the room...... and rotated around the vertical axis while for the second one the device is moved within the room. Benefits of both approaches were analyzed. The device's depth sensor provides a set of points in a three-dimensional coordinate system which represents scanned surfaces of the room interior. These data are used...... to build a 3D point-cloud model of the room. Several models are created to meet requirements of different room acoustics simulation algorithms: plane fitting and uniform voxel grid for geometric methods and triangulation mesh for the numerical methods. Advantages of the proposed method over the traditional...

  14. Three-dimensional point-cloud room model for room acoustics simulations

    DEFF Research Database (Denmark)

    Markovic, Milos; Olesen, Søren Krarup; Hammershøi, Dorte

    2013-01-01

    acquisition and its representation with a 3D point-cloud model, as well as utilization of such a model for the room acoustics simulations. A room is scanned with a commercially available input device (Kinect for Xbox360) in two different ways; the first one involves the device placed in the middle of the room...... and rotated around the vertical axis while for the second one the device is moved within the room. Benefits of both approaches were analyzed. The device's depth sensor provides a set of points in a three-dimensional coordinate system which represents scanned surfaces of the room interior. These data are used...... to build a 3D point-cloud model of the room. Several models are created to meet requirements of different room acoustics simulation algorithms: plane fitting and uniform voxel grid for geometric methods and triangulation mesh for the numerical methods. Advantages of the proposed method over the traditional...

  15. A travel time forecasting model based on change-point detection method

    Science.gov (United States)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. A travel time forecasting model is proposed for urban road traffic sensor data based on the method of change-point detection in this paper. The first-order differential operation is used for preprocessing over the actual loop data; a change-point detection algorithm is designed to classify the sequence of a large number of travel time data items into several patterns; then a travel time forecasting model is established based on an autoregressive integrated moving average (ARIMA) model. By computer simulation, different control parameters are chosen for adaptive change point search for travel time series, which is divided into several sections of similar state. Then a linear weight function is used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
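
    A toy sketch of the pipeline described above: detect change points on the first-order differenced series, then forecast from the most recent segment. The simple threshold detector and the weighted linear fit (standing in for the per-pattern ARIMA model) are assumptions, as are all thresholds and the synthetic series.

```python
import numpy as np

def detect_change_points(x, window=12, thresh=3.0):
    """Flag indices where the first-order difference exceeds `thresh` robust standard deviations."""
    d = np.diff(x)
    mad = np.median(np.abs(d - np.median(d))) + 1e-9
    return [i + 1 for i in range(window, len(d)) if abs(d[i]) > thresh * 1.4826 * mad]

def segment_forecast(x, cps, horizon=5):
    """Forecast by extrapolating a weighted linear fit of the last detected segment."""
    start = cps[-1] if cps else 0
    seg = np.asarray(x[start:], dtype=float)
    t = np.arange(len(seg))
    w = np.linspace(0.2, 1.0, len(seg))          # heavier weight on recent travel times
    slope, intercept = np.polyfit(t, seg, 1, w=w)
    return slope * (len(seg) + np.arange(horizon)) + intercept

# synthetic travel-time series (minutes) with one regime shift
rng = np.random.default_rng(0)
x = np.concatenate([10 + rng.normal(0, 0.3, 100), 16 + rng.normal(0, 0.5, 60)])
cps = detect_change_points(x)
print("detected change points:", cps, " forecast:", segment_forecast(x, cps))
```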

  16. Bayesian Modeling for Identification and Estimation of the Learning Effects of Pointing Tasks

    Science.gov (United States)

    Kyo, Koki

    Recently, in the field of human-computer interaction, a model containing the systematic factor and human factor has been proposed to evaluate the performance of the input devices of a computer. This is called the SH-model. In this paper, in order to extend the range of application of the SH-model, we propose some new models based on the Box-Cox transformation and apply a Bayesian modeling method for identification and estimation of the learning effects of pointing tasks. We consider the parameters describing the learning effect as random variables and introduce smoothness priors for them. Illustrative results show that the newly-proposed models work well.

  17. Model for a Ferromagnetic Quantum Critical Point in a 1D Kondo Lattice

    Science.gov (United States)

    Komijani, Yashar; Coleman, Piers

    2018-04-01

    Motivated by recent experiments, we study a quasi-one-dimensional model of a Kondo lattice with ferromagnetic coupling between the spins. Using bosonization and dynamical large-N techniques, we establish the presence of a Fermi liquid and a magnetic phase separated by a local quantum critical point, governed by the Kondo breakdown picture. Thermodynamic properties are studied and a gapless charged mode at the quantum critical point is highlighted.

  18. Sigma models in the presence of dynamical point-like defects

    International Nuclear Information System (INIS)

    Doikou, Anastasia; Karaiskos, Nikos

    2013-01-01

    Point-like Liouville integrable dynamical defects are introduced in the context of the Landau–Lifshitz and Principal Chiral (Faddeev–Reshetikhin) models. Based primarily on the underlying quadratic algebra we identify the first local integrals of motion, the associated Lax pairs as well as the relevant sewing conditions around the defect point. The involution of the integrals of motion is shown taking into account the sewing conditions.

  19. An inversion-relaxation approach for sampling stationary points of spin model Hamiltonians

    International Nuclear Information System (INIS)

    Hughes, Ciaran; Mehta, Dhagash; Wales, David J.

    2014-01-01

    Sampling the stationary points of a complicated potential energy landscape is a challenging problem. Here, we introduce a sampling method based on relaxation from stationary points of the highest index of the Hessian matrix. We illustrate how this approach can find all the stationary points for potentials or Hamiltonians bounded from above, which includes a large class of important spin models, and we show that it is far more efficient than previous methods. For potentials unbounded from above, the relaxation part of the method is still efficient in finding minima and transition states, which are usually the primary focus of attention for atomistic systems

  20. Elastic-plastic adhesive contact of rough surfaces using n-point asperity model

    International Nuclear Information System (INIS)

    Sahoo, Prasanta; Mitra, Anirban; Saha, Kashinath

    2009-01-01

    This study considers an analysis of the elastic-plastic contact of rough surfaces in the presence of adhesion using an n-point asperity model. The multiple-point asperity model, developed by Hariri et al (2006 Trans ASME: J. Tribol. 128 505-14) is integrated into the elastic-plastic adhesive contact model developed by Roy Chowdhury and Ghosh (1994 Wear 174 9-19). This n-point asperity model differs from the conventional Greenwood and Williamson model (1966 Proc. R. Soc. Lond. A 295 300-19) in considering the asperities not as fixed entities but as those that change through the contact process, and hence it represents the asperities in a more realistic manner. The newly defined adhesion index and plasticity index defined for the n-point asperity model are used to consider the different conditions that arise because of varying load, surface and material parameters. A comparison between the load-separation behaviour of the new model and the conventional one shows a significant difference between the two depending on combinations of mean separation, adhesion index and plasticity index.

  1. A Labeling Model Based on the Region of Movability for Point-Feature Label Placement

    Directory of Open Access Journals (Sweden)

    Lin Li

    2016-09-01

    Automatic point-feature label placement (PFLP) is a fundamental task for map visualization. As the dominant solutions to the PFLP problem, fixed-position and slider models have been widely studied in previous research. However, the candidate labels generated with these models are set to certain fixed positions or a specified track line for sliding. Thus, the whole surrounding space of a point feature is not sufficiently used for labeling. Hence, this paper proposes a novel label model based on the region of movability, which comes from plane collision detection theory. The model defines a complete conflict-free search space for label placement. On the premise of no conflict with the point, line, and area features, the proposed model utilizes the surrounding zone of the point feature to generate candidate label positions. By combining it with a heuristic search method, the model achieves high-quality label placement. In addition, the flexibility of the proposed model enables the placement of arbitrarily shaped labels.

  2. Modeling and measurement of boiling point elevation during water vaporization from aqueous urea for SCR applications

    International Nuclear Information System (INIS)

    Dan, Ho Jin; Lee, Joon Sik

    2016-01-01

    Understanding of water vaporization is the first step to anticipate the conversion process of urea into ammonia in the exhaust stream. As aqueous urea is a mixture and the urea in the mixture acts as a non-volatile solute, its colligative properties should be considered during water vaporization. The elevation of boiling point for urea water solution is measured with respect to urea mole fraction. With the boiling-point elevation relation, a model for water vaporization is proposed underlining the correction of the heat of vaporization of water in the urea water mixture due to the enthalpy of urea dissolution in water. The model is verified by the experiments of water vaporization as well. Finally, the water vaporization model is applied to the water vaporization of aqueous urea droplets. It is shown that urea decomposition can begin before water evaporation finishes due to the boiling-point elevation
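
    For reference, the colligative law behind the measured elevation is the standard ebullioscopic relation (written here with generic symbols, not the paper's fitted correlation):

```latex
\Delta T_b \;=\; T_{b,\mathrm{solution}} - T_{b,\mathrm{water}} \;\approx\; i \, K_b \, m
```

    where i is the van't Hoff factor (close to 1 for undissociated urea), K_b ≈ 0.512 K·kg/mol is the ebullioscopic constant of water, and m is the molality of urea; the corrected heat of vaporization additionally accounts for the enthalpy of urea dissolution, as stated in the abstract.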

  3. Synthesis of Numerical Methods for Modeling Wave Energy Converter-Point Absorbers: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y.; Yu, Y. H.

    2012-05-01

    During the past few decades, wave energy has received significant attention among all ocean energy formats. Industry has proposed hundreds of prototypes such as an oscillating water column, a point absorber, an overtopping system, and a bottom-hinged system. In particular, many researchers have focused on modeling the floating-point absorber as the technology to extract wave energy. Several modeling methods have been used such as the analytical method, the boundary-integral equation method, the Navier-Stokes equations method, and the empirical method. However, no standardized method has been decided. To assist the development of wave energy conversion technologies, this report reviews the methods for modeling the floating-point absorber.

  4. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    Energy Technology Data Exchange (ETDEWEB)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C., E-mail: david.goes@poli.ufrj.br, E-mail: aquilino@lmp.ufrj.br, E-mail: alessandro@con.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear

    2017-11-01

    Point reactor kinetics equations are the simplest way to observe the time behavior of neutron production in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to review the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is taken into account. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that adapts the point reactor kinetics equations to this more realistic scenario. (author)

  5. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    International Nuclear Information System (INIS)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C.

    2017-01-01

    Point reactor kinetics equations are the simplest way to observe the time behavior of neutron production in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to review the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is taken into account. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that adapts the point reactor kinetics equations to this more realistic scenario. (author)

  6. Improving the Pattern Reproducibility of Multiple-Point-Based Prior Models Using Frequency Matching

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus

    2014-01-01

    Some multiple-point-based sampling algorithms, such as the snesim algorithm, rely on sequential simulation. The conditional probability distributions that are used for the simulation are based on statistics of multiple-point data events obtained from a training image. During the simulation, data...... events with zero probability in the training image statistics may occur. This is handled by pruning the set of conditioning data until an event with non-zero probability is found. The resulting probability distribution sampled by such algorithms is a pruned mixture model. The pruning strategy leads...... to a probability distribution that lacks some of the information provided by the multiple-point statistics from the training image, which reduces the reproducibility of the training image patterns in the outcome realizations. When pruned mixture models are used as prior models for inverse problems, local re...

  7. Tricritical point in quantum phase transitions of the Coleman–Weinberg model at Higgs mass

    International Nuclear Information System (INIS)

    Fiolhais, Miguel C.N.; Kleinert, Hagen

    2013-01-01

    The tricritical point, which separates first and second order phase transitions in three-dimensional superconductors, is studied in the four-dimensional Coleman–Weinberg model, and the similarities as well as the differences with respect to the three-dimensional result are exhibited. The position of the tricritical point in the Coleman–Weinberg model is derived and found to be in agreement with the Thomas–Fermi approximation in the three-dimensional Ginzburg–Landau theory. From this we deduce a special role of the tricritical point for the Standard Model Higgs sector in the scope of the latest experimental results, which suggests the unexpected relevance of tricritical behavior in the electroweak interactions.

  8. Modeling and measurement of boiling point elevation during water vaporization from aqueous urea for SCR applications

    Energy Technology Data Exchange (ETDEWEB)

    Dan, Ho Jin; Lee, Joon Sik [Seoul National University, Seoul (Korea, Republic of)

    2016-03-15

    Understanding of water vaporization is the first step to anticipate the conversion process of urea into ammonia in the exhaust stream. As aqueous urea is a mixture and the urea in the mixture acts as a non-volatile solute, its colligative properties should be considered during water vaporization. The elevation of boiling point for urea water solution is measured with respect to urea mole fraction. With the boiling-point elevation relation, a model for water vaporization is proposed underlining the correction of the heat of vaporization of water in the urea water mixture due to the enthalpy of urea dissolution in water. The model is verified by the experiments of water vaporization as well. Finally, the water vaporization model is applied to the water vaporization of aqueous urea droplets. It is shown that urea decomposition can begin before water evaporation finishes due to the boiling-point elevation.
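
    To make the colligative argument concrete, the sketch below estimates the boiling-point elevation of aqueous urea from the standard ideal dilute-solution relation dT_b ~ K_b * m, with K_b ~ 0.512 K·kg/mol for water. This generic textbook relation stands in for the measured correlation of the paper, and the chosen mole fractions are arbitrary.

      # Ideal-solution estimate of boiling-point elevation for aqueous urea.
      # Generic colligative-property sketch, not the paper's fitted relation.
      K_B_WATER = 0.512       # ebullioscopic constant of water [K kg/mol]
      M_WATER = 0.018015      # molar mass of water [kg/mol]

      def boiling_point_elevation(x_urea: float) -> float:
          """Return dT_b [K] for a urea mole fraction x_urea (ideal, dilute limit)."""
          x_water = 1.0 - x_urea
          molality = x_urea / (x_water * M_WATER)   # mol urea per kg water
          return K_B_WATER * molality

      for x in (0.05, 0.10, 0.15, 0.20):   # e.g. 32.5 wt% AdBlue corresponds to roughly x ~ 0.13
          print(f"x_urea = {x:.2f} -> dT_b ~ {boiling_point_elevation(x):.1f} K")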

  9. Supervised Outlier Detection in Large-Scale Mvs Point Clouds for 3d City Modeling Applications

    Science.gov (United States)

    Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.

    2018-05-01

    We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results. Although most outliers can be identified correctly (high recall), many inliers are erroneously removed (low precision), too. This aggravates object 3D reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roof, ground, vegetation, fields, and water). Performance of learned filtering is evaluated on several large SfM point clouds of cities. We find that results confirm our underlying assumption that discriminatively learning inlier-outlier distributions does improve precision over global heuristics by up to ≈ 12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈ 10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
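
    A minimal sketch of the per-class inlier/outlier classification idea using scikit-learn's RandomForestClassifier; the per-point features, the synthetic training data and the three semantic classes below are made-up stand-ins, not the descriptors or data of the paper.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)

      def make_fake_points(n, outlier_frac):
          """Synthetic per-point features: [local density, distance to nearest neighbor, height]."""
          n_out = int(n * outlier_frac)
          inliers = rng.normal([5.0, 0.1, 2.0], [1.0, 0.05, 1.0], size=(n - n_out, 3))
          outliers = rng.normal([0.5, 1.5, 8.0], [0.3, 0.5, 4.0], size=(n_out, 3))
          X = np.vstack([inliers, outliers])
          y = np.hstack([np.zeros(n - n_out, dtype=int), np.ones(n_out, dtype=int)])  # 1 = outlier
          return X, y

      # One classifier per (hypothetical) semantic class, as in the semantic variant.
      classifiers = {}
      for sem_class, frac in [("facade", 0.05), ("vegetation", 0.15), ("water", 0.30)]:
          X, y = make_fake_points(2000, frac)
          classifiers[sem_class] = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

      X_new, y_new = make_fake_points(500, 0.10)
      pred = classifiers["facade"].predict(X_new)
      recall = float((pred[y_new == 1] == 1).mean())
      print(f"flagged {int(pred.sum())} points as outliers; recall on true outliers = {recall:.2f}")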

  10. A 3D Printing Model Watermarking Algorithm Based on 3D Slicing and Feature Points

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    With the increase of three-dimensional (3D) printing applications in many areas of life, a large amount of 3D printing data is copied, shared, and used several times without any permission from the original providers. Therefore, copyright protection and ownership identification for 3D printing data in communications or commercial transactions are practical issues. This paper presents a novel watermarking algorithm for 3D printing models based on embedding watermark data into the feature points of a 3D printing model. Feature points are determined and computed by the 3D slicing process along the Z axis of a 3D printing model. The watermark data is embedded into a feature point of a 3D printing model by changing the vector length of the feature point in OXY space based on the reference length. The x and y coordinates of the feature point will then be changed according to the changed vector length that has been embedded with a watermark. Experimental results verified that the proposed algorithm is invisible and robust to geometric attacks, such as rotation, scaling, and translation. The proposed algorithm provides a better method than the conventional works, and the accuracy of the proposed algorithm is much higher than previous methods.
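
    The embedding step described above (nudging the length of a feature point's XY vector relative to a reference length to encode a bit) might be sketched as follows; the parity-of-quantization-index rule, the step size and the reference length are illustrative assumptions rather than the exact scheme of the paper.

      import math

      def embed_bit(x, y, bit, ref_len=1.0, step=0.01):
          """Embed one watermark bit by nudging the XY vector length of a feature point.

          The length (in units of ref_len * step) is quantized so that the parity of the
          quantization index encodes the bit; z is untouched because the features come
          from slicing along the Z axis.
          """
          r = math.hypot(x, y)
          if r == 0.0:
              return x, y
          k = round(r / (ref_len * step))
          if k % 2 != bit:              # force the parity of the index to match the bit
              k += 1
          scale = (k * ref_len * step) / r
          return x * scale, y * scale

      def extract_bit(x, y, ref_len=1.0, step=0.01):
          return round(math.hypot(x, y) / (ref_len * step)) % 2

      x, y = 12.34, -7.89
      xw, yw = embed_bit(x, y, bit=1)
      print("extracted bit:", extract_bit(xw, yw),
            "displacement:", round(math.hypot(xw - x, yw - y), 4))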

  11. Pseudo-critical point in anomalous phase diagrams of simple plasma models

    International Nuclear Information System (INIS)

    Chigvintsev, A Yu; Iosilevskiy, I L; Noginova, L Yu

    2016-01-01

    Anomalous phase diagrams in a subclass of simplified (“non-associative”) Coulomb models are under discussion. The common feature of this subclass is the absence, by definition, of individual correlations between charges of opposite sign. Examples are the modified OCP of ions on a uniformly compressible background of an ideal Fermi gas of electrons, OCP(∼), or a superposition of two non-ideal OCP(∼) models of ions and electrons, etc. In contrast to the ordinary OCP model on a non-compressible (“rigid”) background, OCP(#), two new phase transitions with an upper critical point, boiling and sublimation, appear in the OCP(∼) phase diagram in addition to the well-known Wigner crystallization. The point is that the topology of the phase diagram in OCP(∼) becomes anomalous at high enough values of the ionic charge number Z. Namely, only one unified crystal-fluid phase transition without a critical point exists, as a continuous superposition of melting and sublimation, in OCP(∼) in the interval (Z_1 < Z < Z_2). Most remarkable is the appearance of pseudo-critical points at both boundary values Z = Z_1 ≈ 35.5 and Z = Z_2 ≈ 40.0. It should be stressed that the critical isotherm is exactly cubic in both of these pseudo-critical points. In this study we have improved our previous calculations and utilized a more complicated equation of state for the model components, provided by Chabrier and Potekhin (1998 Phys. Rev. E 58 4941). (paper)

  12. Pseudo-critical point in anomalous phase diagrams of simple plasma models

    Science.gov (United States)

    Chigvintsev, A. Yu; Iosilevskiy, I. L.; Noginova, L. Yu

    2016-11-01

    Anomalous phase diagrams in a subclass of simplified (“non-associative”) Coulomb models are under discussion. The common feature of this subclass is the absence, by definition, of individual correlations between charges of opposite sign. Examples are the modified OCP of ions on a uniformly compressible background of an ideal Fermi gas of electrons, OCP(∼), or a superposition of two non-ideal OCP(∼) models of ions and electrons, etc. In contrast to the ordinary OCP model on a non-compressible (“rigid”) background, OCP(#), two new phase transitions with an upper critical point, boiling and sublimation, appear in the OCP(∼) phase diagram in addition to the well-known Wigner crystallization. The point is that the topology of the phase diagram in OCP(∼) becomes anomalous at high enough values of the ionic charge number Z. Namely, only one unified crystal-fluid phase transition without a critical point exists, as a continuous superposition of melting and sublimation, in OCP(∼) in the interval (Z_1 < Z < Z_2). Most remarkable is the appearance of pseudo-critical points at both boundary values Z = Z_1 ≈ 35.5 and Z = Z_2 ≈ 40.0. It should be stressed that the critical isotherm is exactly cubic in both of these pseudo-critical points. In this study we have improved our previous calculations and utilized a more complicated equation of state for the model components, provided by Chabrier and Potekhin (1998 Phys. Rev. E 58 4941).

  13. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    Directory of Open Access Journals (Sweden)

    Lei Jia

    The thermostability of protein point mutations is a common concern in protein engineering. An application which predicts the thermostability of mutants can be helpful for guiding the decision making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that consists of thousands of protein mutants' experimentally measured thermostability. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy change calculations from Rosetta, structural information of the point mutations, as well as amino acid physical properties were obtained for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression are used for building the prediction models. Binary and ternary classifications as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods were discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models.
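
    A compact sketch of the ddG-based binary classification setup (stabilizing versus destabilizing single point mutations) with one of the listed learners, random forests; the four descriptors and all numbers below are synthetic stand-ins for the Rosetta, structural and amino-acid-property features used in the study.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      n = 1000

      # Synthetic descriptors: [Rosetta-like ddG estimate, relative solvent accessibility,
      # hydrophobicity change, side-chain volume change]. Values are fabricated.
      X = np.column_stack([
          rng.normal(0.0, 2.0, n),
          rng.uniform(0.0, 1.0, n),
          rng.normal(0.0, 1.5, n),
          rng.normal(0.0, 40.0, n),
      ])
      # Label 1 = destabilizing if a noisy "true" ddG exceeds 0.5 kcal/mol.
      ddG_true = 0.8 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0.0, 0.5, n)
      y = (ddG_true > 0.5).astype(int)

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      scores = cross_val_score(clf, X, y, cv=5)
      print("5-fold CV accuracy:", scores.round(3), "mean:", round(scores.mean(), 3))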

  14. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); CAS Center for Excellence in Tibetan Plateau Earth Sciences, Beijing, 100101 (China); Badal, José, E-mail: badal@unizar.es [Physics of the Earth, Sciences B, University of Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  15. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    International Nuclear Information System (INIS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-01-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  16. Dynamics of Magnetic Bright Points in an Active Region

    Czech Academy of Sciences Publication Activity Database

    Möstl, C.; Hanslmeier, A.; Sobotka, Michal; Puschmann, K.G.; Muthsam, H. J.

    2006-01-01

    Roč. 237, č. 1 (2006), s. 13-23 ISSN 0038-0938 Grant - others:FWF(AT) P-17024 Institutional research plan: CEZ:AV0Z10030501 Keywords : Sun * photosphere * magnetic fields Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 1.887, year: 2006

  17. Reduction of bias in neutron multiplicity assay using a weighted point model

    Energy Technology Data Exchange (ETDEWEB)

    Geist, W. H. (William H.); Krick, M. S. (Merlyn S.); Mayo, D. R. (Douglas R.)

    2004-01-01

    Accurate assay of most common plutonium samples was the development goal for the nondestructive assay technique of neutron multiplicity counting. Over the past 20 years the technique has been proven for relatively pure oxides and small metal items. Unfortunately, the technique results in large biases when assaying large metal items. Limiting assumptions, such as uniform multiplication, in the point model used to derive the multiplicity equations cause these biases for large dense items. A weighted point model has been developed to overcome some of the limitations in the standard point model. Weighting factors are determined from Monte Carlo calculations using the MCNPX code. Monte Carlo calculations give the dependence of the weighting factors on sample mass and geometry, and simulated assays using Monte Carlo give the theoretical accuracy of the weighted-point-model assay. Measured multiplicity data evaluated with both the standard and weighted point models are compared to reference values to give the experimental accuracy of the assay. Initial results show significant promise for the weighted point model in reducing or eliminating biases in the neutron multiplicity assay of metal items. The negative biases observed in the assay of plutonium metal samples are caused by variations in the neutron multiplication for neutrons originating in various locations in the sample. The bias depends on the mass and shape of the sample and depends on the amount and energy distribution of the (α,n) neutrons in the sample. When the standard point model is used, this variable-multiplication bias overestimates the multiplication and alpha values of the sample, and underestimates the plutonium mass. The weighted point model potentially can provide assay accuracy of ≈2% (1 σ) for cylindrical plutonium metal samples < 4 kg with α < 1 without knowing the exact shape of the samples, provided that the (α,n) source is uniformly distributed throughout the

  18. Maximum Power Point Tracking Control of Photovoltaic Systems: A Polynomial Fuzzy Model-Based Approach

    DEFF Research Database (Denmark)

    Rakhshan, Mohsen; Vafamand, Navid; Khooban, Mohammad Hassan

    2018-01-01

    This paper introduces a polynomial fuzzy model (PFM)-based maximum power point tracking (MPPT) control approach to increase the performance and efficiency of the solar photovoltaic (PV) electricity generation. The proposed method relies on a polynomial fuzzy modeling, a polynomial parallel......, a direct maximum power (DMP)-based control structure is considered for MPPT. Using the PFM representation, the DMP-based control structure is formulated in terms of SOS conditions. Unlike the conventional approaches, the proposed approach does not require exploring the maximum power operational point...

  19. Helmholtz bright and boundary solitons

    Energy Technology Data Exchange (ETDEWEB)

    Christian, J M [Joule Physics Laboratory, School of Computing, Science and Engineering, Institute for Materials Research, University of Salford, Salford M5 4WT (United Kingdom); McDonald, G S [Joule Physics Laboratory, School of Computing, Science and Engineering, Institute for Materials Research, University of Salford, Salford M5 4WT (United Kingdom); Chamorro-Posada, P [Departmento de TeorIa de la Senal y Comunicaciones e IngenierIa Telematica, Universidad de Valladolid, ETSI Telecomunicacion, Campus Miguel Delibes s/n, 47011 Valladolid (Spain)

    2007-02-16

    We report, for the first time, exact analytical boundary solitons of a generalized cubic-quintic nonlinear Helmholtz (NLH) equation. These solutions have a linked-plateau topology that is distinct from conventional dark soliton solutions; their amplitude and intensity distributions are spatially delocalized and connect regions of finite and zero wave-field disturbances (suggesting also the classification as 'edge solitons'). Extensive numerical simulations compare the stability properties of recently derived Helmholtz bright solitons, for this type of polynomial nonlinearity, to those of the new boundary solitons. The latter are found to possess a remarkable stability characteristic, exhibiting robustness against perturbations that would otherwise lead to the destabilizing of their bright-soliton counterparts.

  20. A New Sky Brightness Monitor

    Science.gov (United States)

    Crawford, David L.; McKenna, D.

    2006-12-01

    A good estimate of sky brightness and its variations throughout the night, the months, and even the years is an essential bit of knowledge both for good observing and especially as a tool in efforts to minimize sky brightness through local action. Hence a stable and accurate monitor can be a valuable and necessary tool. We have developed such a monitor, with the financial help of Vatican Observatory and Walker Management. The device is now undergoing its Beta test in preparation for production. It is simple, accurate, well calibrated, and automatic, sending its data directly to IDA over the internet via E-mail . Approximately 50 such monitors will be ready soon for deployment worldwide including most major observatories. Those interested in having one should enquire of IDA about details.

  1. Helmholtz bright and boundary solitons

    International Nuclear Information System (INIS)

    Christian, J M; McDonald, G S; Chamorro-Posada, P

    2007-01-01

    We report, for the first time, exact analytical boundary solitons of a generalized cubic-quintic nonlinear Helmholtz (NLH) equation. These solutions have a linked-plateau topology that is distinct from conventional dark soliton solutions; their amplitude and intensity distributions are spatially delocalized and connect regions of finite and zero wave-field disturbances (suggesting also the classification as 'edge solitons'). Extensive numerical simulations compare the stability properties of recently derived Helmholtz bright solitons, for this type of polynomial nonlinearity, to those of the new boundary solitons. The latter are found to possess a remarkable stability characteristic, exhibiting robustness against perturbations that would otherwise lead to the destabilizing of their bright-soliton counterparts

  2. Giant Low Surface Brightness Galaxies

    Science.gov (United States)

    Mishra, Alka; Kantharia, Nimisha G.; Das, Mousumi

    2018-04-01

    In this paper, we present radio observations of the giant low surface brightness (LSB) galaxies made using the Giant Metrewave Radio Telescope (GMRT). LSB galaxies are generally large, dark matter dominated spirals that have low star formation efficiencies and large HI gas disks. Their properties suggest that they are less evolved compared to high surface brightness galaxies. We present GMRT emission maps of LSB galaxies with an optically-identified active nucleus. Using our radio data and archival near-infrared (2MASS) and near-ultraviolet (GALEX) data, we studied morphology and star formation efficiencies in these galaxies. All the galaxies show radio continuum emission mostly associated with the centre of the galaxy.

  3. Accurate corresponding point search using sphere-attribute-image for statistical bone model generation

    International Nuclear Information System (INIS)

    Saito, Toki; Nakajima, Yoshikazu; Sugita, Naohiko; Mitsuishi, Mamoru; Hashizume, Hiroyuki; Kuramoto, Kouichi; Nakashima, Yosio

    2011-01-01

    Statistical deformable model based two-dimensional/three-dimensional (2-D/3-D) registration is a promising method for estimating the position and shape of patient bone in the surgical space. Since its accuracy depends on the statistical model capacity, we propose a method for accurately generating a statistical bone model from a CT volume. Our method employs the Sphere-Attribute-Image (SAI) and improves the accuracy of corresponding point search in statistical model generation. First, target bone surfaces are extracted as SAIs from the CT volume. Then the textures of the SAIs are classified into regions using the Maximally Stable Extremal Regions (MSER) method. Next, corresponding regions are determined using normalized cross-correlation (NCC). Finally, corresponding points in each corresponding region are determined using NCC. We applied the method to femur bone models, and it worked well in the experiments. (author)
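
    The matching primitive used above, normalized cross-correlation between texture patches, is simple to sketch; the patches below are random arrays standing in for SAI textures.

      import numpy as np

      def ncc(a: np.ndarray, b: np.ndarray) -> float:
          """Normalized cross-correlation of two equally sized patches (1.0 = identical up to gain/offset)."""
          a = a.astype(float).ravel() - a.mean()
          b = b.astype(float).ravel() - b.mean()
          denom = np.linalg.norm(a) * np.linalg.norm(b)
          return float(a @ b / denom) if denom > 0 else 0.0

      rng = np.random.default_rng(2)
      template = rng.normal(size=(11, 11))                                # patch around a point in the reference SAI
      candidates = [rng.normal(size=(11, 11)) for _ in range(20)]
      candidates[7] = 0.9 * template + 0.1 * rng.normal(size=(11, 11))    # plant a true correspondence

      scores = [ncc(template, c) for c in candidates]
      best = int(np.argmax(scores))
      print("best matching candidate:", best, "NCC =", round(scores[best], 3))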

  4. Hierarchical model generation for architecture reconstruction using laser-scanned point clouds

    Science.gov (United States)

    Ning, Xiaojuan; Wang, Yinghui; Zhang, Xiaopeng

    2014-06-01

    Architecture reconstruction using terrestrial laser scanner is a prevalent and challenging research topic. We introduce an automatic, hierarchical architecture generation framework to produce full geometry of architecture based on a novel combination of facade structures detection, detailed windows propagation, and hierarchical model consolidation. Our method highlights the generation of geometric models automatically fitting the design information of the architecture from sparse, incomplete, and noisy point clouds. First, the planar regions detected in raw point clouds are interpreted as three-dimensional clusters. Then, the boundary of each region extracted by projecting the points into its corresponding two-dimensional plane is classified to obtain detailed shape structure elements (e.g., windows and doors). Finally, a polyhedron model is generated by calculating the proposed local structure model, consolidated structure model, and detailed window model. Experiments on modeling the scanned real-life buildings demonstrate the advantages of our method, in which the reconstructed models not only correspond to the information of architectural design accurately, but also satisfy the requirements for visualization and analysis.

  5. A Spatio-Temporal Enhanced Metadata Model for Interdisciplinary Instant Point Observations in Smart Cities

    Directory of Open Access Journals (Sweden)

    Nengcheng Chen

    2017-02-01

    Due to the incomprehensive and inconsistent description of spatial and temporal information for city data observed by sensors in various fields, it is a great challenge to share the massive, multi-source and heterogeneous interdisciplinary instant point observation data resources. In this paper, a spatio-temporal enhanced metadata model for point observation data sharing is proposed. The proposed Data Meta-Model (DMM) focuses on the spatio-temporal characteristics and formulates a ten-tuple information description structure to provide a unified and spatio-temporal enhanced description of the point observation data. To verify the feasibility of point observation data sharing based on DMM, a prototype system was established, and the performance improvement of the Sensor Observation Service (SOS) for the instant access and insertion of point observation data was realized through the proposed MongoSOS, which is a Not Only SQL (NoSQL) SOS based on the MongoDB database and has the capability of distributed storage. For example, the response time of access and insertion for navigation and positioning data can be realized at the millisecond level. Case studies were conducted, including gas concentration monitoring for gas leak emergency response and smart city public vehicle monitoring based on the BeiDou Navigation Satellite System (BDS) used for recording the dynamic observation information. The results demonstrated the versatility and extensibility of the DMM, and the spatio-temporal enhanced sharing for interdisciplinary instant point observations in smart cities.
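
    To illustrate the storage side of such a system, a minimal pymongo sketch that inserts one point-observation document with spatio-temporal fields and queries it back by location; the field names are guesses loosely inspired by the ten-tuple description (not the actual DMM schema), and the connection string, database and collection names are placeholders that assume a local MongoDB instance.

      from datetime import datetime, timezone
      from pymongo import MongoClient, GEOSPHERE

      # Hypothetical field layout inspired by the ten-tuple description; not the real DMM schema.
      client = MongoClient("mongodb://localhost:27017")          # placeholder connection string
      coll = client["smartcity"]["point_observations"]
      coll.create_index([("location", GEOSPHERE)])               # 2dsphere index for spatial queries

      doc = {
          "observation_id": "gas-0001",
          "phenomenon": "CH4_concentration",
          "procedure": "sensor-station-42",
          "value": 1.83,
          "unit": "ppm",
          "time": datetime.now(timezone.utc),
          "location": {"type": "Point", "coordinates": [114.305, 30.593]},  # lon, lat
          "feature_of_interest": "district-A",
          "quality": "raw",
          "metadata_version": "1.0",
      }
      coll.insert_one(doc)

      # Query observations within roughly 1 km of a point of interest.
      near = coll.find({"location": {"$near": {
          "$geometry": {"type": "Point", "coordinates": [114.30, 30.59]},
          "$maxDistance": 1000}}})
      print([d["observation_id"] for d in near])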

  6. Boiling points of halogenated ethanes: an explanatory model implicating weak intermolecular hydrogen-halogen bonding.

    Science.gov (United States)

    Beauchamp, Guy

    2008-10-23

    This study explores via structural clues the influence of weak intermolecular hydrogen-halogen bonds on the boiling point of halogenated ethanes. The plot of boiling points of 86 halogenated ethanes versus the molar refraction (linked to polarizability) reveals a series of straight lines, each corresponding to one of nine possible arrangements of hydrogen and halogen atoms on the two-carbon skeleton. A multiple linear regression model of the boiling points could be designed based on molar refraction and subgroup structure as independent variables (R(2) = 0.995, standard error of boiling point 4.2 degrees C). The model is discussed in view of the fact that molar refraction can account for approximately 83.0% of the observed variation in boiling point, while 16.5% could be ascribed to weak C-X...H-C intermolecular interactions. The difference in the observed boiling point of molecules having similar molar refraction values but differing in hydrogen-halogen intermolecular bonds can reach as much as 90 degrees C.
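
    The two-variable model described (molar refraction plus a categorical subgroup term) can be sketched with ordinary least squares; the data below are fabricated placeholders, not the 86 halogenated ethanes of the study.

      import numpy as np

      # Fabricated example: bp = slope * molar_refraction + offset(subgroup) + noise.
      rng = np.random.default_rng(3)
      n_per_group = 10
      groups = ["CH3-CX3", "CH2X-CH2X", "CHX2-CHX2"]       # hypothetical H/X arrangements
      offsets = {"CH3-CX3": -30.0, "CH2X-CH2X": 0.0, "CHX2-CHX2": 25.0}

      rows, bps = [], []
      for g in groups:
          mr = rng.uniform(10.0, 35.0, n_per_group)        # molar refraction
          bp = 3.1 * mr + offsets[g] + rng.normal(0.0, 3.0, n_per_group)
          for m, b in zip(mr, bp):
              rows.append((m, g))
              bps.append(b)

      # Design matrix: molar refraction + one-hot subgroup indicators (no global intercept).
      X = np.array([[m] + [1.0 if g == gg else 0.0 for gg in groups] for m, g in rows])
      y = np.array(bps)
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      pred = X @ coef
      r2 = 1.0 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
      print("slope on molar refraction:", round(coef[0], 2), "R^2 =", round(r2, 3))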

  7. High brightness semiconductor lasers with reduced filamentation

    DEFF Research Database (Denmark)

    McInerney, John; O'Brien, Peter.; Skovgaard, Peter M. W.

    1999-01-01

    High brightness semiconductor lasers have applications in spectroscopy, fiber lasers, manufacturing and materials processing, medicine and free space communication or energy transfer. The main difficulty associated with high brightness is that, because of COD, high power requires a large aperture...

  8. A new statistical scission-point model fed with microscopic ingredients to predict fission fragments distributions

    International Nuclear Information System (INIS)

    Heinrich, S.

    2006-01-01

    The nuclear fission process is a very complex phenomenon and, even nowadays, no realistic models describing the overall process are available. The work presented here deals with a theoretical description of fission fragment distributions in mass, charge, energy and deformation. We have reconsidered and updated the B.D. Wilkins scission-point model. Our purpose was to test whether this statistical model, applied at the scission point and fed with new results of modern microscopic calculations, allows a quantitative description of the fission fragment distributions. We calculate the surface energy available at the scission point as a function of the fragment deformations. This surface is obtained from a Hartree-Fock-Bogoliubov microscopic calculation, which guarantees a realistic description of the potential dependence on the deformation of each fragment. The statistical balance is described by the level densities of the fragments. We have tried to avoid as much as possible the input of empirical parameters in the model. Our only parameter, the distance between the fragments at the scission point, is discussed by comparison with scission configurations obtained from full dynamical microscopic calculations. The comparison between our results and experimental data is very satisfying and allows us to discuss the successes and limitations of our approach. We finally propose ideas to improve the model, in particular by applying dynamical corrections. (author)

  9. Detection and localization of change points in temporal networks with the aid of stochastic block models

    Science.gov (United States)

    De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan

    2016-11-01

    A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We use five different techniques for change point detection on prototypical temporal networks, including empirical and synthetic ones. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to the precision and the recall of the results of the change points, we find that the method based on a degree-corrected SBM has better recall properties than other dedicated methods, especially for sparse networks and smaller sliding time window widths.
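
    As a stripped-down illustration of windowed change-point detection on a temporal network (deliberately simpler than the GHRG/SBM machinery of the paper), the sketch below compares, for each candidate split of a window of snapshots, the likelihood of a single Erdős–Rényi edge density against two separate densities before and after the split.

      import numpy as np

      def bernoulli_loglik(edges, possible):
          """Log-likelihood of `edges` successes out of `possible` trials at the MLE density."""
          p = edges / possible if possible else 0.0
          if p in (0.0, 1.0):
              return 0.0
          return edges * np.log(p) + (possible - edges) * np.log(1.0 - p)

      def split_score(edge_counts, n_pairs):
          """Best log-likelihood ratio over candidate change points inside a window of snapshots."""
          T = len(edge_counts)
          total = bernoulli_loglik(sum(edge_counts), T * n_pairs)
          best, best_t = -np.inf, None
          for t in range(1, T):
              before = bernoulli_loglik(sum(edge_counts[:t]), t * n_pairs)
              after = bernoulli_loglik(sum(edge_counts[t:]), (T - t) * n_pairs)
              if before + after - total > best:
                  best, best_t = before + after - total, t
          return best_t, best

      # Synthetic sequence of network snapshots: edge density jumps at snapshot 10.
      rng = np.random.default_rng(4)
      n_nodes = 50
      n_pairs = n_nodes * (n_nodes - 1) // 2
      edge_counts = ([int(rng.binomial(n_pairs, 0.05)) for _ in range(10)]
                     + [int(rng.binomial(n_pairs, 0.12)) for _ in range(10)])
      t_hat, score = split_score(edge_counts, n_pairs)
      print("estimated change point:", t_hat, "log-likelihood ratio:", round(score, 1))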

  10. Two- and three-point functions in the D=1 matrix model

    International Nuclear Information System (INIS)

    Ben-Menahem, S.

    1991-01-01

    The critical behavior of the genus-zero two-point function in the D=1 matrix model is carefully analyzed for arbitrary embedding-space momentum. Kostov's result is recovered for momenta below a certain value P_0 (which is 1/√α' in the continuum language), with a non-universal form factor which is expressed simply in terms of the critical fermion trajectory. For momenta above P_0, the Kostov scaling term is found to be subdominant. We then extend the large-N WKB treatment to calculate the genus-zero three-point function, and elucidate its critical behavior when all momenta are below P_0. The resulting universal scaling behavior, as well as the non-universal form factor for the three-point function, are related to the two-point functions of the individual external momenta, through the factorization familiar from continuum conformal field theories. (orig.)

  11. Ferrimagnetism and compensation points in a decorated 3D Ising model

    International Nuclear Information System (INIS)

    Oitmaa, J.; Zheng, W.

    2003-01-01

    Ferrimagnets are materials where ions on different sublattices have opposing magnetic moments which do not exactly cancel even at zero temperature. An intriguing possibility then is the existence of a compensation point, below the Curie temperature, where the net moment changes sign. This has obvious technological significance. Most theoretical studies of such systems have used mean-field approaches, making it difficult to distinguish real properties of the model from artefacts of the approximation. For this reason a number of simpler models have been proposed, where treatments beyond mean-field theory are possible. Of particular interest are decorated systems, which can be mapped exactly onto simpler models and, in this way, either solved exactly or to a high degree of numerical precision. We use this approach to study a ferrimagnetic Ising system with spins 1/2 at the sites of a simple cubic lattice and spins S=1 or 3/2 located on the bonds. Our results, which are exact to high numerical precision, show a number of surprising and interesting features: for S=1 the possibility of zero, one or two compensation points, re-entrant behaviour, and up to three critical points; for S=3/2 always a simple critical point and zero or one compensation point

  12. The morphology and surface brightness of extragalactic jets

    International Nuclear Information System (INIS)

    Bicknell, G.V.

    1983-01-01

    The problems associated with laminar flow models are reviewed, and an analogy between laboratory jets and astrophysical jets is given. The relationship between surface brightness and the jet full width half maximum is not in general as predicted by simple magnetohydrodynamic models. An alternative turbulent model is presented

  13. Application of the nudged elastic band method to the point-to-point radio wave ray tracing in IRI modeled ionosphere

    Science.gov (United States)

    Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.

    2017-07-01

    Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods where some trajectory is transformed to an optimal one are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying the Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
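
    A toy version of the chain-of-points relaxation in an isotropic 2-D medium: interior points of the discretized ray are moved down the numerical gradient of the optical path while springs keep them spread along the chain, and the endpoints stay fixed. The refractive-index bump and all numerical settings are invented, and the force projection of the full NEB algorithm is omitted for brevity; the real application uses an IRI-modelled ionosphere.

      import numpy as np

      def n_index(p):
          """Hypothetical smooth refractive-index field n(x, z) > 0."""
          x, z = p
          return 1.0 + 0.3 * np.exp(-((x - 5.0) ** 2 + (z - 2.0) ** 2) / 4.0)

      def optical_path(pts):
          segs = np.diff(pts, axis=0)
          mids = 0.5 * (pts[1:] + pts[:-1])
          return float(sum(n_index(m) * np.linalg.norm(s) for m, s in zip(mids, segs)))

      def relax_ray(p_start, p_end, n_images=15, n_iter=300, step=1e-2, k_spring=1.0):
          pts = np.linspace(p_start, p_end, n_images)          # initial straight chain
          eps = 1e-4
          for _ in range(n_iter):
              grad = np.zeros_like(pts)
              for i in range(1, n_images - 1):                 # central-difference gradient of the optical path
                  for d in range(2):
                      plus, minus = pts.copy(), pts.copy()
                      plus[i, d] += eps
                      minus[i, d] -= eps
                      grad[i, d] = (optical_path(plus) - optical_path(minus)) / (2 * eps)
              spring = np.zeros_like(pts)                      # springs keep images spread along the chain
              spring[1:-1] = k_spring * (pts[2:] - 2.0 * pts[1:-1] + pts[:-2])
              pts[1:-1] += step * (-grad[1:-1] + spring[1:-1])
          return pts

      straight = np.linspace([0.0, 0.0], [10.0, 0.0], 15)
      ray = relax_ray(np.array([0.0, 0.0]), np.array([10.0, 0.0]))
      print("optical path: straight =", round(optical_path(straight), 4),
            " relaxed =", round(optical_path(ray), 4))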

  14. Evaluation of the Agricultural Non-point Source Pollution in Chongqing Based on PSR Model

    Institute of Scientific and Technical Information of China (English)

    Hanwen ZHANG; Xinli MOU; Hui XIE; Hong LU; Xingyun YAN

    2014-01-01

    Through a series of explorations based on the PSR framework model, for the purpose of building a suitable model framework for a Chongqing agricultural non-point source pollution evaluation index system, and combined with the specific agro-environmental issues present in Chongqing, we build an agricultural non-point source pollution assessment index system, study the agricultural system pressure, agro-environmental status and human response in 3 major categories, and develop an agricultural non-point source pollution evaluation index consisting of 3 criteria indicators and 19 indicators. As can be seen from the analysis, pressures and responses tend to increase and decrease linearly, while state and the composite index show large fluctuations, and their fluctuations are similar, mainly due to the elimination of pressures and impact, increasing the impact of agricultural non-point source pollution.

  15. Room acoustics modeling using a point-cloud representation of the room geometry

    DEFF Research Database (Denmark)

    Markovic, Milos; Olesen, Søren Krarup; Hammershøi, Dorte

    2013-01-01

    Room acoustics modeling is usually based on the room geometry that is parametrically described prior to a sound transmission calculation. This is a highly room-specific task and rather time consuming if a complex geometry is to be described. Here, a run time generic method for an arbitrary room...... geometry acquisition is presented. The method exploits a depth sensor of the Kinect device that provides a point based information of a scanned room interior. After post-processing of the Kinect output data, a 3D point-cloud model of the room is obtained. Sound transmission between two selected points...... level of user immersion by a real time acoustical simulation of a dynamic scenes....

  16. Does low surface brightness mean low density?

    NARCIS (Netherlands)

    deBlok, WJG; McGaugh, SS

    1996-01-01

    We compare the dynamical properties of two galaxies at identical positions on the Tully-Fisher relation, but with different surface brightnesses. We find that the low surface brightness galaxy UGC 128 has a higher mass-to-light ratio, and yet has lower mass densities than the high surface brightness

  17. High-brightness rf linear accelerators

    International Nuclear Information System (INIS)

    Jameson, R.A.

    1986-01-01

    The issue of high brightness and its ramifications in linacs driven by radio-frequency fields is discussed. A history of the RF linacs is reviewed briefly. Some current applications are then examined that are driving progress in RF linacs. The physics affecting the brightness of RF linacs is then discussed, followed by the economic feasibility of higher brightness machines

  18. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks

    DEFF Research Database (Denmark)

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L

    2016-01-01

    on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network......With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical...... and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely...

  19. SIMPLE MODELS OF THREE COUPLED PT -SYMMETRIC WAVE GUIDES ALLOWING FOR THIRD-ORDER EXCEPTIONAL POINTS

    Directory of Open Access Journals (Sweden)

    Jan Schnabel

    2017-12-01

    We study theoretical models of three coupled wave guides with a PT-symmetric distribution of gain and loss. A realistic matrix model is developed in terms of a three-mode expansion. By comparing with a previously postulated matrix model it is shown how parameter ranges with good prospects of finding a third-order exceptional point (EP3) in an experimentally feasible arrangement of semiconductors can be determined. In addition it is demonstrated that continuous distributions of exceptional points, which render the discovery of the EP3 difficult, are not only a feature of extended wave guides but appear also in an idealised model of infinitely thin guides shaped by delta functions.
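
    As a toy stand-in for the matrix model discussed, consider three coupled modes with gain +iγ on one outer guide, loss -iγ on the other, and a neutral middle guide, coupled with a real coefficient k. For this standard textbook matrix (not necessarily the one derived in the paper) the eigenvalues are 0 and ±√(2k² − γ²), so all three coalesce at γ = √2·k, a third-order exceptional point.

      import numpy as np

      def modes(gamma, k=1.0):
          """Eigenvalues of a simple PT-symmetric 3x3 three-waveguide matrix."""
          H = np.array([[1j * gamma, k,        0.0],
                        [k,          0.0,      k],
                        [0.0,        k,       -1j * gamma]], dtype=complex)
          return np.linalg.eigvals(H)

      # Eigenvalues are 0 and +/- sqrt(2 k^2 - gamma^2); all three coalesce at gamma = sqrt(2) k.
      for gamma in (1.0, 1.2, 1.4, np.sqrt(2.0), 1.5):
          ev = np.sort_complex(modes(gamma))
          spread = max(abs(a - b) for a in ev for b in ev)
          print(f"gamma = {gamma:.4f}  eigenvalues = {np.round(ev, 4)}  max spread = {spread:.4f}")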

  20. MODELLING AND SIMULATION OF A NEUROPHYSIOLOGICAL EXPERIMENT BY SPATIO-TEMPORAL POINT PROCESSES

    Directory of Open Access Journals (Sweden)

    Viktor Beneš

    2011-05-01

    We present a stochastic model of an experiment monitoring the spiking activity of a place cell of the hippocampus of an experimental animal moving in an arena. A doubly stochastic spatio-temporal point process is used to model and quantify overdispersion. The stochastic intensity is modelled by a Lévy-based random field, while the animal path is simplified to a discrete random walk. In a simulation study, a method suggested previously is used first. It is then shown that a solution of the filtering problem yields the desired inference for the random intensity. Two approaches are suggested, and the new one, based on a finite point process density, is applied. Using Markov chain Monte Carlo we obtain numerical results from the simulated model. The methodology is discussed.

  1. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    Science.gov (United States)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  2. Spacing distribution functions for the one-dimensional point-island model with irreversible attachment

    Science.gov (United States)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2011-07-01

    We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_n^XY(x, y), which represents the probability density to have nucleation at position x within a gap of size y. Our proposed functional form for p_n^XY(x, y) describes excellently the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.
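
    A bare-bones kinetic Monte Carlo style simulation of the 1-D point-island model with irreversible attachment, from which the gap distribution between islands can be extracted; the lattice size, the fixed number of hop attempts per deposition (standing in for the D/F ratio) and the run length are arbitrary choices made only to keep the sketch fast, and it is not the analytical model of the paper.

      import numpy as np

      rng = np.random.default_rng(5)
      L = 10000               # 1-D lattice sites (periodic)
      n_depositions = 400
      hops_between = 2000     # monomer hop events between successive depositions (stands in for D/F)

      monomers, islands = set(), set()
      for _ in range(n_depositions):
          site = int(rng.integers(L))                      # deposit one adatom
          if site in islands or site in monomers:
              islands.add(site); monomers.discard(site)    # landing on an occupied site: attach / nucleate
          else:
              monomers.add(site)
          for _ in range(hops_between):                    # let monomers diffuse
              if not monomers:
                  break
              m = rng.choice(list(monomers))
              new = (m + rng.choice((-1, 1))) % L
              if new in islands:
                  monomers.discard(m)                      # irreversible capture by a point island
              elif new in monomers:
                  monomers.discard(m); monomers.discard(new); islands.add(new)   # two monomers meet: nucleation
              else:
                  monomers.discard(m); monomers.add(new)

      positions = np.array(sorted(islands))
      gaps = np.diff(np.append(positions, positions[0] + L))   # periodic gaps between point islands
      print("islands:", len(positions), "mean gap:", round(float(gaps.mean()), 1),
            "normalized gap variance:", round(float(gaps.var() / gaps.mean() ** 2), 3))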

  3. Numerical Solution of Fractional Neutron Point Kinetics Model in Nuclear Reactor

    Directory of Open Access Journals (Sweden)

    Nowak Tomasz Karol

    2014-06-01

    This paper presents results concerning solutions of the fractional neutron point kinetics model for a nuclear reactor. The proposed model consists of a bilinear system of fractional and ordinary differential equations. Three methods to solve the model are presented and compared. The first one entails application of the discrete Grünwald-Letnikov definition of the fractional derivative in the model. The second involves building an analog scheme in the FOMCON Toolbox in the MATLAB environment. The third is the method proposed by Edwards. The impact of selected parameters on the model's response was examined. The results for typical input were discussed and compared.
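
    The first of the three solution routes, the discrete Grünwald-Letnikov approximation of the fractional derivative, is easy to sketch in isolation; the order α = 0.5, the step size and the test function t² (whose exact α-order derivative is 2·t^(2-α)/Γ(3-α)) are arbitrary choices used only for validation.

      import numpy as np
      from math import gamma

      def gl_fractional_derivative(f_vals, alpha, h):
          """Grünwald-Letnikov approximation of the alpha-order derivative on a uniform grid.

          D^alpha f(t_n) ~ h**(-alpha) * sum_{j=0..n} w_j * f(t_{n-j}),
          with w_0 = 1 and w_j = w_{j-1} * (1 - (alpha + 1)/j).
          """
          n = len(f_vals)
          w = np.empty(n)
          w[0] = 1.0
          for j in range(1, n):
              w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
          out = np.empty(n)
          for k in range(n):
              out[k] = h ** (-alpha) * np.dot(w[:k + 1], f_vals[k::-1])
          return out

      # Test on f(t) = t^2, whose alpha-order derivative is 2 t^(2-alpha) / Gamma(3 - alpha).
      alpha, h = 0.5, 1e-3
      t = np.arange(0.0, 1.0 + h, h)
      numeric = gl_fractional_derivative(t ** 2, alpha, h)
      exact = 2.0 * t ** (2.0 - alpha) / gamma(3.0 - alpha)
      print("max abs error on [0, 1]:", float(np.max(np.abs(numeric - exact))))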

  4. BRITE-Constellation: Nanosatellites for precision photometry of bright stars

    Science.gov (United States)

    Weiss, W. W.; Moffat, A. F. J.; Schwarzenberg-Czerny, A.; Koudelka, O. F.; Grant, C. C.; Zee, R. E.; Kuschnig, R.; Mochnacki, St.; Rucinski, S. M.; Matthews, J. M.; Orleański, P.; Pamyatnykh, A. A.; Pigulski, A.; Alves, J.; Guedel, M.; Handler, G.; Wade, G. A.; Scholtz, A. L.; Scholtz

    2014-02-01

    BRITE-Constellation (where BRITE stands for BRIght Target Explorer) is an international nanosatellite mission to monitor photometrically, in two colours, brightness and temperature variations of stars brighter than V ~ 4, with precision and time coverage not possible from the ground. The current mission design consists of three pairs of 7 kg nanosats (hence ``Constellation'') from Austria, Canada and Poland carrying optical telescopes (3 cm aperture) and CCDs. One instrument in each pair is equipped with a blue filter; the other, a red filter. The first two nanosats (funded by Austria) are UniBRITE, designed and built by UTIAS-SFL (University of Toronto Institute for Aerospace Studies-Space Flight Laboratory) and its twin, BRITE-Austria, built by the Technical University Graz (TUG) with support of UTIAS-SFL. They were launched on 25 February 2013 by the Indian Space Agency, under contract to the Canadian Space Agency. Each BRITE instrument has a wide field of view (~ 24 degrees), so up to 15 bright stars can be observed simultaneously in 32 × 32 sub-rasters. Photometry (with reduced precision but thorough time sampling) of additional fainter targets will be possible through on-board data processing. A critical technical element of the BRITE mission is the three-axis attitude control system to stabilize a nanosat with very low inertia. The pointing stability is better than 1.5 arcminutes rms, a significant advance by UTIAS-SFL over any previous nanosatellite. BRITE-Constellation will primarily measure p- and g-mode pulsations to probe the interiors and ages of stars through asteroseismology. The BRITE sample of many of the brightest stars in the night sky is dominated by the most intrinsically luminous stars: massive stars seen at all evolutionary stages, and evolved medium-mass stars at the very end of their nuclear burning phases (cool giants and AGB stars). The Hertzsprung-Russell diagram for stars brighter than mag V=4 from which the BRITE-Constellation sample

  5. Soft modes at the critical end point in the chiral effective models

    International Nuclear Information System (INIS)

    Fujii, Hirotsugu; Ohtani, Munehisa

    2004-01-01

    At the critical end point in QCD phase diagram, the scalar, vector and entropy susceptibilities are known to diverge. The dynamic origin of this divergence is identified within the chiral effective models as softening of a hydrodynamic mode of the particle-hole-type motion, which is a consequence of the conservation law of the baryon number and the energy. (author)

  6. Kernel integration scatter model for parallel beam gamma camera and SPECT point source response

    International Nuclear Information System (INIS)

    Marinkovic, P.M.

    2001-01-01

    Scatter correction is a prerequisite for quantitative single photon emission computed tomography (SPECT). In this paper a kernel integration scatter model for parallel beam gamma camera and SPECT point source response, based on the Klein-Nishina formula, is proposed. This method models the primary photon distribution as well as first-order Compton scattering. It also includes a correction for multiple scattering by applying a point isotropic single medium buildup factor for the path segment between the point of scatter and the point of detection. Gamma ray attenuation in the object of imaging, based on a known μ-map distribution, is considered too. The intrinsic spatial resolution of the camera is approximated by a simple Gaussian function. The collimator is modeled simply using acceptance angles derived from the physical dimensions of the collimator. Any gamma rays satisfying this angle were passed through the collimator to the crystal. Septal penetration and scatter in the collimator were not included in the model. The method was validated by comparison with a Monte Carlo MCNP-4a numerical phantom simulation and excellent results were obtained. Physical phantom experiments to confirm this method are planned. (author)
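
    The physics kernel named above, the Klein-Nishina differential cross-section for Compton scattering, is compact enough to reproduce; the sketch evaluates dσ/dΩ versus scattering angle for a 140.5 keV photon (the Tc-99m energy typical of SPECT). Only the formula itself is standard; the bare-bones presentation is an illustration, not the paper's kernel integration code.

      import numpy as np

      R_E = 2.8179403262e-15   # classical electron radius [m]
      MEC2 = 510.99895         # electron rest energy [keV]

      def klein_nishina(E_keV, theta):
          """Klein-Nishina differential cross-section dsigma/dOmega [m^2/sr]."""
          k = E_keV / MEC2
          ratio = 1.0 / (1.0 + k * (1.0 - np.cos(theta)))       # scattered-to-incident energy E'/E
          return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta)**2)

      E = 140.5                                  # keV, Tc-99m emission
      angles = np.radians([0, 30, 60, 90, 120, 150, 180])
      for th, ds in zip(np.degrees(angles), klein_nishina(E, angles)):
          print(f"theta = {th:5.1f} deg   dsigma/dOmega = {ds:.3e} m^2/sr")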

  7. Predictive error dependencies when using pilot points and singular value decomposition in groundwater model calibration

    DEFF Research Database (Denmark)

    Christensen, Steen; Doherty, John

    2008-01-01

    super parameters), and that the structural errors caused by using pilot points and super parameters to parameterize the highly heterogeneous log-transmissivity field can be significant. For the test case much effort is put into studying how the calibrated model's ability to make accurate predictions...

  8. Using many pilot points and singular value decomposition in groundwater model calibration

    DEFF Research Database (Denmark)

    Christensen, Steen; Doherty, John

    2008-01-01

    over the model area. Singular value decomposition (SVD) of the normal matrix is used to reduce the large number of pilot point parameters to a smaller number of so-called super parameters that can be estimated by nonlinear regression from the available observations. A number of eigenvectors...

  9. Assessing accuracy of point fire intervals across landscapes with simulation modelling

    Science.gov (United States)

    Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall

    2007-01-01

    We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...

  10. Evaluating Change in Behavioral Preferences: Multidimensional Scaling Single-Ideal Point Model

    Science.gov (United States)

    Ding, Cody

    2016-01-01

    The purpose of the article is to propose a multidimensional scaling single-ideal point model as a method to evaluate changes in individuals' preferences under the explicit methodological framework of behavioral preference assessment. One example is used to illustrate the approach for a clear idea of what this approach can accomplish.

  11. Implementation of the critical points model in a SFM-FDTD code working in oblique incidence

    Energy Technology Data Exchange (ETDEWEB)

    Hamidi, M; Belkhir, A; Lamrous, O [Laboratoire de Physique et Chimie Quantique, Universite Mouloud Mammeri, Tizi-Ouzou (Algeria); Baida, F I, E-mail: omarlamrous@mail.ummto.dz [Departement d' Optique P.M. Duffieux, Institut FEMTO-ST UMR 6174 CNRS Universite de Franche-Comte, 25030 Besancon Cedex (France)

    2011-06-22

    We describe the implementation of the critical points model in a finite-difference-time-domain code working in oblique incidence and dealing with dispersive media through the split field method. Some tests are presented to validate our code in addition to an application devoted to plasmon resonance of a gold nanoparticles grating.

  12. TARDEC FIXED HEEL POINT (FHP): DRIVER CAD ACCOMMODATION MODEL VERIFICATION REPORT

    Science.gov (United States)

    2017-11-09

    Public Release Disclaimer: Reference herein to any specific commercial company, product, process, or service by trade name, trademark, manufacturer, or... not actively engaged HSI until MSB or the Engineering Manufacturing and Development (EMD) Phase, resulting in significant design and cost changes... and shall not be used for advertising or product endorsement purposes. TARDEC Fixed Heel Point (FHP): Driver CAD Accommodation Model Verification

  13. On Lie point symmetry of classical Wess-Zumino-Witten model

    International Nuclear Information System (INIS)

    Maharana, Karmadeva

    2001-06-01

    We perform the group analysis of Witten's equations of motion for a particle moving in the presence of a magnetic monopole, and also when constrained to move on the surface of a sphere, which is the classical example of Wess-Zumino-Witten model. We also consider variations of this model. Our analysis gives the generators of the corresponding Lie point symmetries. The Lie symmetry corresponding to Kepler's third law is obtained in two related examples. (author)

  14. A random point process model for the score in sport matches

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2009-01-01

    Roč. 20, č. 2 (2009), s. 121-131 ISSN 1471-678X R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z10750506 Keywords : sport statistics * scoring intensity * Cox’s regression model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf
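
    The record above is only a citation, but the core idea of a point-process model for match scores can be sketched with a tiny simulation: goal times are drawn from a time-varying scoring intensity (here by Poisson thinning). The intensity function and all numbers below are made-up illustrations, not the model fitted in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def scoring_intensity(t):
            # Hypothetical goals-per-minute rate that rises towards the end of the match
            return 0.01 + 0.0003 * t

        def simulate_goal_times(T=90.0):
            """Simulate one team's goal times on [0, T] by Poisson thinning."""
            lam_max = scoring_intensity(T)           # intensity is increasing, so its maximum is at T
            t, goals = 0.0, []
            while True:
                t += rng.exponential(1.0 / lam_max)  # candidate event of a homogeneous process
                if t > T:
                    return np.array(goals)
                if rng.uniform() < scoring_intensity(t) / lam_max:
                    goals.append(t)                  # keep with probability lambda(t)/lambda_max

        print(simulate_goal_times().round(1))        # simulated goal times (minutes)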

  15. A model for the two-point velocity correlation function in turbulent channel flow

    International Nuclear Information System (INIS)

    Sahay, A.; Sreenivasan, K.R.

    1996-01-01

    A relatively simple analytical expression is presented to approximate the equal-time, two-point, double-velocity correlation function in turbulent channel flow. To assess the accuracy of the model, we perform the spectral decomposition of the integral operator having the model correlation function as its kernel. Comparisons of the empirical eigenvalues and eigenfunctions with those constructed from direct numerical simulations data show good agreement. copyright 1996 American Institute of Physics
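
    A rough numerical analogue of the spectral decomposition mentioned above: discretize a model correlation kernel on a wall-normal grid and take the eigenpairs of the resulting integral operator. The exponential kernel below is a stand-in, not the paper's analytical expression.

        import numpy as np

        n = 200
        y = np.linspace(0.0, 1.0, n)          # wall-normal coordinate (channel half-height)
        dy = y[1] - y[0]

        # Stand-in for the model two-point correlation R(y, y'); the paper's
        # analytical expression would be substituted here.
        L = 0.2
        R = np.exp(-np.abs(y[:, None] - y[None, :]) / L)

        # Discretized integral operator: its eigenpairs approximate the spectral
        # (Karhunen-Loeve / POD) decomposition of the kernel.
        eigvals, eigvecs = np.linalg.eigh(R * dy)
        order = np.argsort(eigvals)[::-1]
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]

        print(eigvals[:5].round(4))           # leading eigenvalues of the model kernel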

  16. Considerations for high-brightness electron sources

    International Nuclear Information System (INIS)

    Jameson, R.A.

    1990-01-01

    Particle accelerators are now used in many areas of physics research and in industrial and medical applications. New uses are being studied to address major societal needs in energy production, materials research, generation of intense beams of radiation at optical and suboptical wavelengths, treatment of various kinds of waste, and so on. Many of these modern applications require a high intensity beam at the desired energy, along with a very good beam quality in terms of the beam confinement, aiming, or focusing. Considerations for ion and electron accelerators are often different, but there are also many commonalities, and in fact, techniques derived for one should perhaps more often be considered for the other as well. We discuss some aspects of high-brightness electron sources here from that point of view. 6 refs

  17. Brightness distribution data on 2918 radio sources at 365 MHz

    International Nuclear Information System (INIS)

    Cotton, W.D.; Owen, F.N.; Ghigo, F.D.

    1975-01-01

    This paper is the second in a series describing the results of a program attempting to fit models of the brightness distribution to radio sources observed at 365 MHz with the Bandwidth Synthesis Interferometer (BSI) operated by the University of Texas Radio Astronomy Observatory. Results for a further 2918 radio sources are given. An unresolved model and three symmetric extended models with angular sizes in the range 10--70 arcsec were attempted for each radio source. In addition, for 348 sources for which other observations of brightness distribution are published, the reference to the observations and a brief description are included

  18. Set points, settling points and some alternative models: theoretical options to understand how genes and environments combine to regulate body adiposity

    Directory of Open Access Journals (Sweden)

    John R. Speakman

    2011-11-01

    Full Text Available The close correspondence between energy intake and expenditure over prolonged time periods, coupled with an apparent protection of the level of body adiposity in the face of perturbations of energy balance, has led to the idea that body fatness is regulated via mechanisms that control intake and energy expenditure. Two models have dominated the discussion of how this regulation might take place. The set point model is rooted in physiology, genetics and molecular biology, and suggests that there is an active feedback mechanism linking adipose tissue (stored energy to intake and expenditure via a set point, presumably encoded in the brain. This model is consistent with many of the biological aspects of energy balance, but struggles to explain the many significant environmental and social influences on obesity, food intake and physical activity. More importantly, the set point model does not effectively explain the ‘obesity epidemic’ – the large increase in body weight and adiposity of a large proportion of individuals in many countries since the 1980s. An alternative model, called the settling point model, is based on the idea that there is passive feedback between the size of the body stores and aspects of expenditure. This model accommodates many of the social and environmental characteristics of energy balance, but struggles to explain some of the biological and genetic aspects. The shortcomings of these two models reflect their failure to address the gene-by-environment interactions that dominate the regulation of body weight. We discuss two additional models – the general intake model and the dual intervention point model – that address this issue and might offer better ways to understand how body fatness is controlled.
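
    The contrast between the two feedback structures can be made concrete with a toy simulation, sketched below under made-up units and parameters (this is only an illustration of the two model ideas, not a model from the article): in the settling point system a higher intake simply shifts the equilibrium, while in the set point system a defended target largely absorbs the change.

        import numpy as np

        def settling_point(intake, k=0.02, fat0=10.0, days=2000):
            """Passive feedback: expenditure grows with fat mass, so adiposity 'settles'
            wherever intake and expenditure happen to balance (arbitrary units)."""
            fat = np.empty(days)
            fat[0] = fat0
            for t in range(1, days):
                fat[t] = fat[t - 1] + intake - k * fat[t - 1]
            return fat

        def set_point(intake, target=12.0, gain=0.5, k=0.02, fat0=10.0, days=2000):
            """Active feedback: intake is corrected towards a defended target level."""
            fat = np.empty(days)
            fat[0] = fat0
            for t in range(1, days):
                corrected = intake - gain * (fat[t - 1] - target)
                fat[t] = fat[t - 1] + corrected - k * fat[t - 1]
            return fat

        # Raising intake moves the settling point a lot, but the set point system very little.
        print(settling_point(0.3)[-1], settling_point(0.4)[-1])   # ~15 -> ~20
        print(set_point(0.3)[-1], set_point(0.4)[-1])             # ~12.1 -> ~12.3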

  19. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.

    Science.gov (United States)

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T

    2016-12-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.

  20. The three-point function as a probe of models for large-scale structure

    International Nuclear Information System (INIS)

    Frieman, J.A.; Gaztanaga, E.

    1993-01-01

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ∼ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q₃ and S₃ can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  1. Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models

    Science.gov (United States)

    Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin

    2017-01-01

    In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid in developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of ANN model in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384
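
    A minimal sketch of the "single time-point" idea with scikit-learn, assuming synthetic data and a simple way of handling censoring (patients censored before a cutoff are dropped for that cutoff); the features, sample sizes and network size are invented for illustration and do not reproduce the study's models.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(2)

        # Hypothetical cohort: prognostic features, follow-up time (months), event indicator
        n = 500
        X = rng.normal(size=(n, 6))
        time = rng.exponential(36.0, size=n)
        event = rng.uniform(size=n) < 0.7          # True = death observed, False = censored

        models = {}
        for year in (1, 2, 3, 4, 5):
            cutoff = 12.0 * year
            # Outcome at this time point is known if death occurred before the cutoff
            # or follow-up continued beyond it; earlier-censored patients are excluded.
            known = event | (time >= cutoff)
            y = (time < cutoff) & event            # 1 = died before the cutoff
            clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
            clf.fit(X[known], y[known])
            models[year] = clf

        # Predicted probability of death by each time point for the first patient
        print({yr: float(m.predict_proba(X[:1])[0, 1]) for yr, m in models.items()})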

  2. Improved cost models for optimizing CO2 pipeline configuration for point-to-point pipelines and simple networks

    NARCIS (Netherlands)

    Knoope, M. M. J.|info:eu-repo/dai/nl/364248149; Guijt, W.; Ramirez, A.|info:eu-repo/dai/nl/284852414; Faaij, A. P. C.

    In this study, a new cost model is developed for CO2 pipeline transport, which starts with the physical properties of CO2 transport and includes different kinds of steel grades and up-to-date material and construction costs. This pipeline cost model is used for a newly developed tool to determine the

  3. Models for mean bonding length, melting point and lattice thermal expansion of nanoparticle materials

    Energy Technology Data Exchange (ETDEWEB)

    Omar, M.S., E-mail: dr_m_s_omar@yahoo.com [Department of Physics, College of Science, University of Salahaddin-Erbil, Arbil, Kurdistan (Iraq)

    2012-11-15

    Graphical abstract: Three models are derived to explain the nanoparticle size dependence of mean bonding length, melting temperature and lattice thermal expansion, applied to Sn, Si and Au. The following figures are shown as an example for Sn nanoparticles and indicate that the models are highly applicable for nanoparticle radii larger than 3 nm. Highlights: ► A model for a size dependent mean bonding length is derived. ► The size dependent melting point of nanoparticles is modified. ► The bulk model for lattice thermal expansion is successfully used on nanoparticles. -- Abstract: A model, based on the ratio of the number of surface atoms to that of the internal atoms, is derived to calculate the size dependence of the lattice volume of nanoscaled materials. The model is applied to Si, Sn and Au nanoparticles. For Si, the lattice volume increases from 20 Å³ for the bulk to 57 Å³ for 2 nm size nanocrystals. A model for calculating the melting point of nanoscaled materials is modified by considering the effect of lattice volume. A good approach to calculating the size-dependent melting point extends from the bulk state down to nanoparticles of about 2 nm diameter. Both values of lattice volume and melting point obtained for nanosized materials are used to calculate the lattice thermal expansion by using a formula applicable to tetrahedral semiconductors. Results for Si change from 3.7 × 10⁻⁶ K⁻¹ for a bulk crystal down to a minimum value of 0.1 × 10⁻⁶ K⁻¹ for a 6 nm diameter nanoparticle.

  4. Models for mean bonding length, melting point and lattice thermal expansion of nanoparticle materials

    International Nuclear Information System (INIS)

    Omar, M.S.

    2012-01-01

    Graphical abstract: Three models are derived to explain the nanoparticle size dependence of mean bonding length, melting temperature and lattice thermal expansion, applied to Sn, Si and Au. The following figures are shown as an example for Sn nanoparticles and indicate that the models are highly applicable for nanoparticle radii larger than 3 nm. Highlights: ► A model for a size dependent mean bonding length is derived. ► The size dependent melting point of nanoparticles is modified. ► The bulk model for lattice thermal expansion is successfully used on nanoparticles. -- Abstract: A model, based on the ratio of the number of surface atoms to that of the internal atoms, is derived to calculate the size dependence of the lattice volume of nanoscaled materials. The model is applied to Si, Sn and Au nanoparticles. For Si, the lattice volume increases from 20 Å³ for the bulk to 57 Å³ for 2 nm size nanocrystals. A model for calculating the melting point of nanoscaled materials is modified by considering the effect of lattice volume. A good approach to calculating the size-dependent melting point extends from the bulk state down to nanoparticles of about 2 nm diameter. Both values of lattice volume and melting point obtained for nanosized materials are used to calculate the lattice thermal expansion by using a formula applicable to tetrahedral semiconductors. Results for Si change from 3.7 × 10⁻⁶ K⁻¹ for a bulk crystal down to a minimum value of 0.1 × 10⁻⁶ K⁻¹ for a 6 nm diameter nanoparticle.

  5. A Corner-Point-Grid-Based Voxelization Method for Complex Geological Structure Model with Folds

    Science.gov (United States)

    Chen, Qiyu; Mariethoz, Gregoire; Liu, Gang

    2017-04-01

    3D voxelization is the foundation of geological property modeling, and is also an effective approach to realize the 3D visualization of the heterogeneous attributes in geological structures. The corner-point grid is a representative data model among all voxel models, and is a structured grid type that is widely applied at present. When subdividing a complex geological structure model with folds, its structural morphology and bedding features should be fully considered so that the generated voxels keep the original morphology; on that basis, they can depict the detailed bedding features and the spatial heterogeneity of the internal attributes. To address the shortcomings of existing technologies, this work puts forward a corner-point-grid-based voxelization method for complex geological structure models with folds. We have realized the fast conversion from the 3D geological structure model to the fine voxel model according to the rule of isocline in Ramsay's fold classification. In addition, the voxel model conforms to the spatial features of folds, pinch-outs and other complex geological structures, and the voxels of the laminas inside a fold accord with the result of geological sedimentation and tectonic movement. This will provide a carrier and model foundation for the subsequent attribute assignment as well as the quantitative analysis and evaluation based on the spatial voxels. Finally, we use examples, and a comparison between the examples and Ramsay's description of isoclines, to discuss the effectiveness and advantages of the proposed method when dealing with the voxelization of 3D geological structure models with folds based on corner-point grids.

  6. Investigating the Bright End of LSST Photometry

    Science.gov (United States)

    Ojala, Elle; Pepper, Joshua; LSST Collaboration

    2018-01-01

    The Large Synoptic Survey Telescope (LSST) will begin operations in 2022, conducting a wide-field, synoptic multiband survey of the southern sky. Some fraction of objects at the bright end of the magnitude regime observed by LSST will overlap with other wide-sky surveys, allowing for calibration and cross-checking between surveys. The LSST is optimized for observations of very faint objects, so much of this data overlap will be comprised of saturated images. This project provides the first in-depth analysis of saturation in LSST images. Using the PhoSim package to create simulated LSST images, we evaluate saturation properties of several types of stars to determine the brightness limitations of LSST. We also collect metadata from many wide-field photometric surveys to provide cross-survey accounting and comparison. Additionally, we evaluate the accuracy of the PhoSim modeling parameters to determine the reliability of the software. These efforts will allow us to determine the expected useable data overlap between bright-end LSST images and faint-end images in other wide-sky surveys. Our next steps are developing methods to extract photometry from saturated images.This material is based upon work supported in part by the National Science Foundation through Cooperative Agreement 1258333 managed by the Association of Universities for Research in Astronomy (AURA), and the Department of Energy under Contract No. DE-AC02-76SF00515 with the SLAC National Accelerator Laboratory. Additional LSST funding comes from private donations, grants to universities, and in-kind support from LSSTC Institutional Members.Thanks to NSF grant PHY-135195 and the 2017 LSSTC Grant Award #2017-UG06 for making this project possible.

  7. Interior Point Methods on GPU with application to Model Predictive Control

    DEFF Research Database (Denmark)

    Gade-Nielsen, Nicolai Fog

    The goal of this thesis is to investigate the application of interior point methods to solve dynamical optimization problems, using a graphical processing unit (GPU), with a focus on problems arising in Model Predictive Control (MPC). Multi-core processors have been available for over ten years now... software package called GPUOPT, available under the non-restrictive MIT license. GPUOPT includes a primal-dual interior-point method, which supports both the CPU and the GPU. It is implemented as multiple components, where the matrix operations and the solver for the Newton directions are separated...

  8. The Investigation of Accuracy of 3 Dimensional Models Generated From Point Clouds with Terrestrial Laser Scanning

    Science.gov (United States)

    Gumus, Kutalmis; Erkaya, Halil

    2013-04-01

    In terrestrial laser scanning (TLS) applications, it is necessary to take into consideration the conditions that affect the scanning process, especially the general characteristics of the laser scanner, the geometric properties of the scanned object (shape, size, etc.), and its spatial location in the environment. Three-dimensional models obtained with TLS allow determining the geometric features and relevant magnitudes of the scanned object in an indirect way. In order to assess the spatial location and geometric accuracy of the 3-dimensional model created by terrestrial laser scanning, it is necessary to use measurement tools that give more precise results than TLS. Geometric comparisons are performed by analyzing the differences between the distances, the angles between surfaces and the values taken from cross-sections of the 3-dimensional model created with TLS and the corresponding values measured by other measurement devices. The performance of the scanners and the size and shape of the scanned objects are tested using reference objects whose sizes are determined with high precision. In this study, the important points to consider when choosing reference objects were highlighted. The steps from processing the point clouds collected by scanning, through regularizing these points, to modeling in 3 dimensions were presented visually. In order to test the geometric correctness of the models obtained by terrestrial laser scanners, sample objects with simple geometric shapes such as cubes, rectangular prisms and cylinders, made of concrete, were used as reference models. Three-dimensional models were generated by scanning these reference models with a Trimble Mensi GS 100. The dimensions of the 3D models created from point clouds were compared with the precisely measured dimensions of the reference objects. For this purpose, horizontal and vertical cross-sections were taken from the reference objects and the generated 3D models and the proximity of

  9. Brightness-normalized Partial Least Squares Regression for hyperspectral data

    International Nuclear Information System (INIS)

    Feilhauer, Hannes; Asner, Gregory P.; Martin, Roberta E.; Schmidtlein, Sebastian

    2010-01-01

    Developed in the field of chemometrics, Partial Least Squares Regression (PLSR) has become an established technique in vegetation remote sensing. PLSR was primarily designed for laboratory analysis of prepared material samples. Under field conditions in vegetation remote sensing, the performance of the technique may be negatively affected by differences in brightness due to amount and orientation of plant tissues in canopies or the observing conditions. To minimize these effects, we introduced brightness normalization to the PLSR approach and tested whether this modification improves the performance under changing canopy and observing conditions. This test was carried out using high-fidelity spectral data (400-2510 nm) to model observed leaf chemistry. The spectral data was combined with a canopy radiative transfer model to simulate effects of varying canopy structure and viewing geometry. Brightness normalization enhanced the performance of PLSR by dampening the effects of canopy shade, thus providing a significant improvement in predictions of leaf chemistry (up to 3.6% additional explained variance in validation) compared to conventional PLSR. Little improvement was made on effects due to variable leaf area index, while minor improvement (mostly not significant) was observed for effects of variable viewing geometry. In general, brightness normalization increased the stability of model fits and regression coefficients for all canopy scenarios. Brightness-normalized PLSR is thus a promising approach for application on airborne and space-based imaging spectrometer data.
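
    The brightness normalization itself is simple to illustrate: each spectrum is scaled to unit vector length before the PLSR fit, so multiplicative brightness differences (e.g. canopy shade) are dampened. The sketch below uses synthetic spectra and scikit-learn's PLSRegression; all numbers are placeholders, not the study's data.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(3)

        # Hypothetical canopy reflectance spectra (rows = samples, columns = bands)
        n_samples, n_bands = 120, 200
        spectra = np.abs(rng.normal(0.3, 0.1, size=(n_samples, n_bands)))
        spectra *= rng.uniform(0.5, 1.5, size=(n_samples, 1))      # simulated brightness variation
        trait = spectra[:, 50] / spectra[:, 150] + rng.normal(0, 0.02, size=n_samples)

        # Brightness normalization: scale every spectrum to unit Euclidean length
        spectra_bn = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)

        pls = PLSRegression(n_components=10)
        pls.fit(spectra_bn, trait)
        print(round(pls.score(spectra_bn, trait), 3))               # R^2 on the synthetic data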

  10. A semi-analytical stationary model of a point-to-plane corona discharge

    International Nuclear Information System (INIS)

    Yanallah, K; Pontiga, F

    2012-01-01

    A semi-analytical model of a dc corona discharge is formulated to determine the spatial distribution of charged particles (electrons, negative ions and positive ions) and the electric field in pure oxygen using a point-to-plane electrode system. A key point in the modeling is the integration of Gauss' law and the continuity equation of charged species along the electric field lines, and the use of Warburg's law and the corona current–voltage characteristics as input data in the boundary conditions. The electric field distribution predicted by the model is compared with the numerical solution obtained using a finite-element technique. The semi-analytical solutions are obtained at a negligible computational cost, and provide useful information to characterize and control the corona discharge in different technological applications. (paper)

  11. An analytical model for predicting dryout point in bilaterally heated vertical narrow annuli

    International Nuclear Information System (INIS)

    Aye Myint; Tian Wenxi; Jia Dounan; Li Zhihui; Li Hao

    2005-02-01

    Based on the droplet-diffusion model by Kirillov and Smogalev (1969, 1972), a new analytical model for dryout point prediction in steam-water flow in a bilaterally and uniformly heated narrow annular gap was developed. Comparison of the present model predictions with experimental results indicated good agreement over the experimental parametric range (pressure from 0.8 to 3.5 MPa, mass flux from 60.39 to 135.6 kg·m⁻²·s⁻¹ and heat flux of 50 kW·m⁻²). The dryout point was experimentally investigated with deionized water flowing upward through a narrow annular channel with 1.0 mm and 1.5 mm gaps heated by an AC power supply. (author)

  12. An IFC schema extension and binary serialization format to efficiently integrate point cloud data into building models

    NARCIS (Netherlands)

    Krijnen, T.F.; Beetz, J.

    2017-01-01

    In this paper we suggest an extension to the Industry Foundation Classes (IFC) model to integrate point cloud datasets. The proposal includes a schema extension to the core model allowing the storage of points, either as Cartesian coordinates, points in parametric space of associated building

  13. Low surface brightness spiral galaxies

    International Nuclear Information System (INIS)

    Romanishin, W.

    1980-01-01

    This dissertation presents an observational overview of a sample of low surface brightness (LSB) spiral galaxies. The sample galaxies were chosen to have low surface brightness disks and indications of spiral structure visible on the Palomar Sky Survey. They are of sufficient angular size (diameter > 2.5 arcmin) to allow detailed surface photometry using Mayall 4-m prime focus plates. The major findings of this dissertation are: (1) The average disk central surface brightness of the LSB galaxies is 22.88 mag/arcsec² in the B passband. (2) From broadband color measurements of the old stellar population, we infer a low average stellar metallicity, on the order of 1/5 solar. (3) The spectra and optical colors of the HII regions in the LSB galaxies indicate a lack of hot ionizing stars compared to HII regions in other late-type galaxies. (4) The average surface mass density, measured within the radius containing half the total mass, is less than half that of a sample of normal late-type spirals. (5) The average LSB galaxy neutral hydrogen mass to blue luminosity ratio is about 0.6, significantly higher than in a sample of normal late-type galaxies. (6) We find no conclusive evidence of an abnormal mass-to-light ratio in the LSB galaxies. (7) Some of the LSB galaxies exhibit well-developed density wave patterns. (8) A very crude calculation shows that the lower metallicity of the LSB galaxies compared with normal late-type spirals might be explained simply by the deficiency of massive stars in the LSB galaxies.

  14. A Model Stitching Architecture for Continuous Full Flight-Envelope Simulation of Fixed-Wing Aircraft and Rotorcraft from Discrete Point Linear Models

    Science.gov (United States)

    2016-04-01

    Eric L. Tobias and Mark B. Tischler, Aviation Development Directorate, Aviation and Missile... The model stitching simulation architecture combines a set of discrete-point linear models and trim data, and is applicable to any aircraft configuration readily...

  15. Tolerance of image enhancement brightness and contrast in lateral cephalometric digital radiography for Steiner analysis

    Science.gov (United States)

    Rianti, R. A.; Priaminiarti, M.; Syahraini, S. I.

    2017-08-01

    Image enhancement brightness and contrast can be adjusted on lateral cephalometric digital radiographs to improve image quality and the visibility of anatomic landmarks for measurement by Steiner analysis. The aim was to determine the limit value for adjustments of image enhancement brightness and contrast in lateral cephalometric digital radiography for Steiner analysis. Image enhancement brightness and contrast were adjusted on 100 lateral cephalometric digital radiographs in 10-point increments (-30, -20, -10, 0, +10, +20, +30). Steiner analysis measurements were then performed by two observers. Reliability was tested by the intraclass correlation coefficient (ICC) and significance by ANOVA or the Kruskal-Wallis test. No significant differences were detected in the lateral cephalometric analysis measurements following adjustment of image enhancement brightness and contrast. Adjusting image enhancement brightness and contrast in 10-point increments over the range (-30, -20, -10, 0, +10, +20, +30) therefore does not affect the results of Steiner analysis.

  16. High brightness beams and applications

    International Nuclear Information System (INIS)

    Sheffield, R.L.

    1995-01-01

    This paper describes the present research on attaining intense bright electron beams. Thermionic systems are briefly covered. Recent and past results from the photoinjector programs are given. The performance advantages and difficulties presently faced by researchers using photoinjectors is discussed. The progress that has been made in photocathode materials, both in lifetime and quantum efficiency, is covered. Finally, a discussion of emittance measurements of photoinjector systems and how the measurement is complicated by the non-thermal nature of the electron beam is presented

  17. High-brightness electron injectors

    International Nuclear Information System (INIS)

    Sheffield, R.L.

    1987-01-01

    Free-electron laser (FEL) oscillators and synchrotron light sources require pulse trains of high peak brightness and, in some applications, high-average power. Recent developments in the technology of photoemissive and thermionic electron sources in rf cavities for electron-linac injector applications offer promising advances over conventional electron injectors. Reduced emittance growth in high peak-current electron injectors may be achieved by using high field strengths and by linearizing the radial component of the cavity electric field at the expense of lower shunt impedance

  18. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    functionality by statistically modelling the distribution of the synaptic vesicles in two groups of rats: a control group subjected to sham stress and a stressed group subjected to a single acute foot-shock (FS)-stress episode. We hypothesize that the synaptic vesicles have different spatial distributions...... in the two groups. The spatial distributions are modelled using spatial point process models with an inhomogeneous conditional intensity and repulsive pairwise interactions. Our results verify the hypothesis that the two groups have different spatial distributions....

  19. DNA denaturation through a model of the partition points on a one-dimensional lattice

    International Nuclear Information System (INIS)

    Mejdani, R.; Huseini, H.

    1994-08-01

    We have shown that, by using a model of a partition-points gas on a one-dimensional lattice, we can study, besides the saturation curves obtained before for enzyme kinetics, also the denaturation process of DNA under heat treatment, i.e. the breaking of the hydrogen bonds connecting the two strands. We think that this model, being very simple and mathematically transparent, can be advantageous for pedagogic purposes or other theoretical investigations in chemistry or modern biology. (author). 29 refs, 4 figs

  20. EBT time-dependent point model code: description and user's guide

    International Nuclear Information System (INIS)

    Roberts, J.F.; Uckan, N.A.

    1977-07-01

    A D-T time-dependent point model has been developed to assess the energy balance in an EBT reactor plasma. Flexibility is retained in the model to permit more recent data to be incorporated as they become available from the theoretical and experimental studies. This report includes the physics models involved, the program logic, and a description of the variables and routines used. All the files necessary for execution are listed, and the code, including a post-execution plotting routine, is discussed

  1. Bayesian Estimation Of Shift Point In Poisson Model Under Asymmetric Loss Functions

    Directory of Open Access Journals (Sweden)

    uma srivastava

    2012-01-01

    Full Text Available The paper deals with estimating a shift point which occurs in a sequence of independent observations from a Poisson model in statistical process control. This shift point occurs in the sequence after m life data are observed. The Bayes estimators of the shift point 'm' and of the process means before and after the shift are derived for symmetric and asymmetric loss functions under informative and non-informative priors. The sensitivity analysis of the Bayes estimators is carried out by simulation and numerical comparisons with R programming. The results show the effect of the shift in the sequence of the Poisson distribution.
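
    A compact sketch of the shift-point posterior for a Poisson sequence, assuming conjugate Gamma priors on the two rates and a uniform prior on the shift point; the paper's estimators additionally depend on the chosen (asymmetric) loss function, which here would only change how the posterior below is summarized. All data and hyperparameters are illustrative.

        import numpy as np
        from scipy.special import gammaln

        rng = np.random.default_rng(4)

        # Simulated Poisson counts whose mean shifts after the 30th observation
        x = np.concatenate([rng.poisson(2.0, 30), rng.poisson(5.0, 20)])
        n = len(x)
        a, b = 1.0, 1.0      # Gamma(a, b) priors on the rates before and after the shift

        log_post = np.full(n, -np.inf)
        for m in range(1, n):                    # shift after the m-th observation
            s1, s2 = x[:m].sum(), x[m:].sum()
            # Log marginal likelihood with both rates integrated out (terms constant in m dropped)
            log_post[m] = (gammaln(a + s1) - (a + s1) * np.log(b + m)
                           + gammaln(a + s2) - (a + s2) * np.log(b + n - m))

        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        print("posterior mode of the shift point:", int(post.argmax()))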

  2. A surface brightness analysis of eight RR Lyrae stars

    International Nuclear Information System (INIS)

    Hawley, S.L.; Barnes, T.G. III; Moffett, T.J.

    1987-01-01

    The authors have used a surface brightness-(V-R) relation to analyze new contemporaneous photometry and radial velocity data for 6 RRab-type stars and to re-analyze previously published data for RR Lyrae and X Arietis. Systematic effects were found in the surface brightness at phases near minimum radius. Excluding these phases, they determine the slope of the surface brightness relation and the mean radius for each star. They also find a zero point which includes both a distance term and the zero point of the surface brightness relation. The sample includes stars with Preston's metallicity indicator ΔS = 0 to 9, with periods ranging from 0.397 days to 0.651 days. Their results indicate a log(R/R_solar) vs. log P relation in the sense that stars with longer periods have larger radii, in agreement with theoretical predictions. Their radii are consistent with bolometric magnitudes in the range 0.2 - 0.8 magnitude, but accurate magnitudes must await a reliable T_e-color calibration.

  3. CADASTER QSPR Models for Predictions of Melting and Boiling Points of Perfluorinated Chemicals.

    Science.gov (United States)

    Bhhatarai, Barun; Teetz, Wolfram; Liu, Tao; Öberg, Tomas; Jeliazkova, Nina; Kochev, Nikolay; Pukalov, Ognyan; Tetko, Igor V; Kovarich, Simona; Papa, Ester; Gramatica, Paola

    2011-03-14

    Quantitative structure property relationship (QSPR) studies of the melting point (MP) and boiling point (BP) of per- and polyfluorinated chemicals (PFCs) are presented. The training and prediction chemicals used for developing and validating the models were selected from the Syracuse PhysProp database and the literature. The available experimental data sets were split in two different ways: a) random selection on response value, and b) structural similarity verified by self-organizing map (SOM), in order to propose reliable predictive models, developed only on the training sets and externally verified on the prediction sets. Individual models based on linear and non-linear approaches, developed by different CADASTER partners using 0D-2D Dragon descriptors, E-state descriptors and fragment-based descriptors, as well as a consensus model and their predictions, are presented. In addition, the predictive performance of the developed models was verified on a blind external validation set (EV-set) prepared using the PERFORCE database, covering 15 MP and 25 BP data respectively. This database contains only long-chain perfluoro-alkylated chemicals, particularly monitored by regulatory agencies like US-EPA and EU-REACH. QSPR models with internal and external validation on two different external prediction/validation sets, and a study of the applicability domain highlighting the robustness and high accuracy of the models, are discussed. Finally, MPs for an additional 303 PFCs and BPs for 271 PFCs were predicted, for which experimental measurements are unknown. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. A mixture model for robust point matching under multi-layer motion.

    Directory of Open Access Journals (Sweden)

    Jiayi Ma

    Full Text Available This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences, or outliers, in addition to the true correspondences (inliers). Next we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We further provide a fast implementation based on sparse approximation which can achieve a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as a publicly available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer/large discontinuous motion.

  5. Improved equivalent magnetic network modeling for analyzing working points of PMs in interior permanent magnet machine

    Science.gov (United States)

    Guo, Liyan; Xia, Changliang; Wang, Huimin; Wang, Zhiqiang; Shi, Tingna

    2018-05-01

    As is well known, the armature current will be ahead of the back electromotive force (back-EMF) under load conditions in an interior permanent magnet (PM) machine. This advanced armature current produces a demagnetizing field, which may easily cause irreversible demagnetization in the PMs. To estimate the working points of the PMs more accurately and to take demagnetization into consideration in the early design stage of a machine, an improved equivalent magnetic network model is established in this paper. Each PM under each magnetic pole is segmented, and the networks in the rotor pole shoe are refined, which makes a more precise model of the flux path in the rotor pole shoe possible. The working point of each PM under each magnetic pole can be calculated accurately by the established improved equivalent magnetic network model. The calculated results are compared with those obtained by FEM, and the effects of the d-axis and q-axis components of the armature current, the air-gap length and the flux barrier size on the working points of the PMs are analyzed with the improved equivalent magnetic network model.

  6. Determination of the Number of Fixture Locating Points for Sheet Metal By Grey Model

    Directory of Open Access Journals (Sweden)

    Yang Bo

    2017-01-01

    Full Text Available In the process of traditional fixture design for sheet metal parts based on the "N-2-1" locating principle, the number of fixture locating points is determined by trial and error or by the experience of the designer. To that end, a new design method based on grey theory is proposed in this paper to determine the number of sheet metal fixture locating points. Firstly, the training sample set is generated by Latin hypercube sampling (LHS) and finite element analysis (FEA). Secondly, the GM(1,1) grey model is constructed from the established training sample set to approximate the mapping relationship between the number of fixture locating points and the maximum deformation of the sheet metal. Thirdly, the final number of fixture locating points for the sheet metal can be inversely calculated under the allowable maximum deformation. Finally, a sheet metal case study is conducted and the results indicate that the proposed approach is effective and efficient in determining the number of fixture locating points for sheet metal.
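
    For reference, a self-contained GM(1,1) fit of the standard textbook form (accumulated generating operation, least-squares estimate of the development coefficient and grey input, then inverse accumulation); the deformation values below are made-up stand-ins for a training set generated by LHS and FEA.

        import numpy as np

        def gm11(x0, n_forecast=1):
            """Fit a GM(1,1) grey model to a short positive series and forecast ahead."""
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                                  # accumulated generating operation
            z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
            B = np.column_stack([-z1, np.ones_like(z1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # development coefficient a, grey input b
            k = np.arange(len(x0) + n_forecast)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response sequence
            return np.diff(x1_hat, prepend=0.0)                 # inverse accumulation: fitted + forecast

        # Hypothetical maximum deformation for an increasing number of locating points
        deformation = [0.82, 0.61, 0.47, 0.38, 0.31]
        print(gm11(deformation, n_forecast=2).round(3))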

  7. Impact of confinement housing on study end-points in the calf model of cryptosporidiosis.

    Science.gov (United States)

    Graef, Geneva; Hurst, Natalie J; Kidder, Lance; Sy, Tracy L; Goodman, Laura B; Preston, Whitney D; Arnold, Samuel L M; Zambriski, Jennifer A

    2018-04-01

    Diarrhea is the second leading cause of death in children... The calf model compares two collection approaches: CFC, which requires confinement housing, and Interval Collection (IC), which permits use of box stalls. CFC mimics human challenge model methodology, but it is unknown whether confinement housing impacts study end-points and whether data gathered via this method are suitable for generalization to human populations. Using a modified crossover study design we compared CFC and IC and evaluated the impact of housing on study end-points. At birth, calves were randomly assigned to confinement (n = 14) or box stall housing (n = 9), were challenged with 5 × 10⁷ C. parvum oocysts, and followed for 10 days. Study end-points included fecal oocyst shedding, severity of diarrhea, degree of dehydration, and plasma cortisol. Calves in confinement had no significant differences in mean log oocysts enumerated per gram of fecal dry matter between CFC and IC samples (P = 0.6), nor were there diurnal variations in oocyst shedding (P = 0.1). Confinement-housed calves shed significantly more oocysts (P = 0.05), had higher plasma cortisol (P = 0.001), and required more supportive care (P = 0.0009) than calves in box stalls. Housing method confounds study end-points in the calf model of cryptosporidiosis. Due to increased stress, data collected from calves in confinement housing may not accurately estimate the efficacy of chemotherapeutics targeting C. parvum.

  8. Study on Meshfree Hermite Radial Point Interpolation Method for Flexural Wave Propagation Modeling and Damage Quantification

    Directory of Open Access Journals (Sweden)

    Hosein Ghaffarzadeh

    Full Text Available This paper investigates the numerical modeling of flexural wave propagation in Euler-Bernoulli beams using the Hermite-type radial point interpolation method (HRPIM) under a damage quantification approach. HRPIM employs radial basis functions (RBFs) and their derivatives for shape function construction as a meshfree technique. The performance of the multiquadric (MQ) RBF for assessing the reflection ratio was evaluated. HRPIM signals were compared with the theoretical and finite element responses. The results show that MQ is a suitable RBF for HRPIM and wave propagation; however, the range of proper shape parameters is notable. The number of field nodes is the main parameter for accurate wave propagation modeling using HRPIM. The size of the support domain should be less than an upper bound in order to prevent high error. With regard to the number of quadrature points, the minimum number of points is adequate for a stable solution, but adding more points in the damage region does not necessarily lead to more accurate responses. It is concluded that the pure HRPIM, without any polynomial terms, is acceptable, but including a few terms will improve the accuracy, even though too many terms make the problem unstable and inaccurate.

  9. Performance Analysis of Several GPS/Galileo Precise Point Positioning Models.

    Science.gov (United States)

    Afifi, Akram; El-Rabbany, Ahmed

    2015-06-19

    This paper examines the performance of several precise point positioning (PPP) models, which combine dual-frequency GPS/Galileo observations in the un-differenced and between-satellite single-difference (BSSD) modes. These include the traditional un-differenced model, the decoupled clock model, the semi-decoupled clock model, and the between-satellite single-difference model. We take advantage of the IGS-MGEX network products to correct for the satellite differential code biases and the orbital and satellite clock errors. Natural Resources Canada's GPSPace PPP software is modified to handle the various GPS/Galileo PPP models. A total of six data sets of GPS and Galileo observations at six IGS stations are processed to examine the performance of the various PPP models. It is shown that the traditional un-differenced GPS/Galileo PPP model, the GPS decoupled clock model, and the semi-decoupled clock GPS/Galileo PPP model improve the convergence time by about 25% in comparison with the un-differenced GPS-only model. In addition, the semi-decoupled GPS/Galileo PPP model improves the solution precision by about 25% compared to the traditional un-differenced GPS/Galileo PPP model. Moreover, the BSSD GPS/Galileo PPP model improves the solution convergence time by about 50%, in comparison with the un-differenced GPS PPP model, regardless of the type of BSSD combination used. As well, the BSSD model improves the precision of the estimated parameters by about 50% and 25% when the loose and the tight combinations are used, respectively, in comparison with the un-differenced GPS-only model. Comparable results are obtained through the tight combination when either a GPS or a Galileo satellite is selected as a reference.

  10. Asymptotic behaviour of two-point functions in multi-species models

    Directory of Open Access Journals (Sweden)

    Karol K. Kozlowski

    2016-05-01

    Full Text Available We extract the long-distance asymptotic behaviour of two-point correlation functions in massless quantum integrable models containing multi-species excitations. For such a purpose, we extend to these models the method of a large-distance regime re-summation of the form factor expansion of correlation functions. The key feature of our analysis is a technical hypothesis on the large-volume behaviour of the form factors of local operators in such models. We check the validity of this hypothesis on the example of the SU(3)-invariant XXX magnet by means of the determinant representations for the form factors of local operators in this model. Our approach confirms the structure of the critical exponents obtained previously for numerous models solvable by the nested Bethe Ansatz.

  11. Publicly available models to predict normal boiling point of organic compounds

    International Nuclear Information System (INIS)

    Oprisiu, Ioana; Marcou, Gilles; Horvath, Dragos; Brunel, Damien Bernard; Rivollet, Fabien; Varnek, Alexandre

    2013-01-01

    Quantitative structure–property models to predict the normal boiling point (T_b) of organic compounds were developed using non-linear ASNNs (associative neural networks) as well as multiple linear regression – ISIDA-MLR and SQS (stochastic QSAR sampler). Models were built on a diverse set of 2098 organic compounds with T_b varying in the range of 185–491 K. In ISIDA-MLR and ASNN calculations, fragment descriptors were used, whereas fragment, FPT (fuzzy pharmacophore triplet), and ChemAxon descriptors were employed in SQS models. Prediction quality of the models has been assessed in 5-fold cross validation. Obtained models were implemented in the on-line ISIDA predictor at http://infochim.u-strasbg.fr/webserv/VSEngine.html

  12. Instantaneous nonlinear assessment of complex cardiovascular dynamics by Laguerre-Volterra point process models.

    Science.gov (United States)

    Valenza, Gaetano; Citi, Luca; Barbieri, Riccardo

    2013-01-01

    We report an exemplary study of instantaneous assessment of cardiovascular dynamics performed using point-process nonlinear models based on Laguerre expansion of the linear and nonlinear Wiener-Volterra kernels. As quantifiers, instantaneous measures such as high order spectral features and Lyapunov exponents can be estimated from a quadratic and cubic autoregressive formulation of the model first order moment, respectively. Here, these measures are evaluated on heartbeat series coming from 16 healthy subjects and 14 patients with Congestive Heart Failure (CHF). Data were gathered from the on-line repository PhysioBank, which has been taken as a landmark for testing nonlinear indices. Results show that the proposed nonlinear Laguerre-Volterra point-process methods are able to track the nonlinear and complex cardiovascular dynamics, distinguishing significantly between CHF and healthy heartbeat series.

  13. Modeling of Aerobrake Ballute Stagnation Point Temperature and Heat Transfer to Inflation Gas

    Science.gov (United States)

    Bahrami, Parviz A.

    2012-01-01

    A trailing Ballute drag device concept for spacecraft aerocapture is considered. A thermal model for calculation of the Ballute membrane temperature and the inflation gas temperature is developed. An algorithm capturing the most salient features of the concept is implemented. In conjunction with the thermal model, trajectory calculations for two candidate missions, Titan Explorer and Neptune Orbiter missions, are used to estimate the stagnation point temperature and the inflation gas temperature. Radiation from both sides of the membrane at the stagnation point and conduction to the inflating gas is included. The results showed that the radiation from the membrane and to a much lesser extent conduction to the inflating gas, are likely to be the controlling heat transfer mechanisms and that the increase in gas temperature due to aerodynamic heating is of secondary importance.
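
    A back-of-the-envelope version of the stagnation-point energy balance described above, assuming that the incoming aerodynamic heat flux is simply re-radiated from both faces of a thin membrane and neglecting conduction to the inflation gas; the heat-flux values and emissivity are illustrative, not mission data.

        SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4

        def membrane_temperature(q_aero, emissivity=0.8):
            """Equilibrium membrane temperature (K) when radiation from both sides
            balances the aerodynamic heat flux q_aero (W m^-2) at the stagnation point."""
            return (q_aero / (2.0 * emissivity * SIGMA)) ** 0.25

        for q in (5e3, 2e4, 5e4):                      # hypothetical stagnation-point heat fluxes
            print(f"{q:.0e} W/m^2 -> {membrane_temperature(q):.0f} K")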

  14. Transition point prediction in a multicomponent lattice Boltzmann model: Forcing scheme dependencies

    Science.gov (United States)

    Küllmer, Knut; Krämer, Andreas; Joppich, Wolfgang; Reith, Dirk; Foysi, Holger

    2018-02-01

    Pseudopotential-based lattice Boltzmann models are widely used for numerical simulations of multiphase flows. In the special case of multicomponent systems, the overall dynamics are characterized by the conservation equations for mass and momentum as well as an additional advection diffusion equation for each component. In the present study, we investigate how the latter is affected by the forcing scheme, i.e., by the way the underlying interparticle forces are incorporated into the lattice Boltzmann equation. By comparing two model formulations for pure multicomponent systems, namely the standard model [X. Shan and G. D. Doolen, J. Stat. Phys. 81, 379 (1995), 10.1007/BF02179985] and the explicit forcing model [M. L. Porter et al., Phys. Rev. E 86, 036701 (2012), 10.1103/PhysRevE.86.036701], we reveal that the diffusion characteristics drastically change. We derive a generalized, potential function-dependent expression for the transition point from the miscible to the immiscible regime and demonstrate that it is shifted between the models. The theoretical predictions for both the transition point and the mutual diffusion coefficient are validated in simulations of static droplets and decaying sinusoidal concentration waves, respectively. To show the universality of our analysis, two common and one new potential function are investigated. As the shift in the diffusion characteristics directly affects the interfacial properties, we additionally show that phenomena related to the interfacial tension such as the modeling of contact angles are influenced as well.

  15. Transition point prediction in a multicomponent lattice Boltzmann model: Forcing scheme dependencies.

    Science.gov (United States)

    Küllmer, Knut; Krämer, Andreas; Joppich, Wolfgang; Reith, Dirk; Foysi, Holger

    2018-02-01

    Pseudopotential-based lattice Boltzmann models are widely used for numerical simulations of multiphase flows. In the special case of multicomponent systems, the overall dynamics are characterized by the conservation equations for mass and momentum as well as an additional advection diffusion equation for each component. In the present study, we investigate how the latter is affected by the forcing scheme, i.e., by the way the underlying interparticle forces are incorporated into the lattice Boltzmann equation. By comparing two model formulations for pure multicomponent systems, namely the standard model [X. Shan and G. D. Doolen, J. Stat. Phys. 81, 379 (1995), 10.1007/BF02179985] and the explicit forcing model [M. L. Porter et al., Phys. Rev. E 86, 036701 (2012), 10.1103/PhysRevE.86.036701], we reveal that the diffusion characteristics drastically change. We derive a generalized, potential function-dependent expression for the transition point from the miscible to the immiscible regime and demonstrate that it is shifted between the models. The theoretical predictions for both the transition point and the mutual diffusion coefficient are validated in simulations of static droplets and decaying sinusoidal concentration waves, respectively. To show the universality of our analysis, two common and one new potential function are investigated. As the shift in the diffusion characteristics directly affects the interfacial properties, we additionally show that phenomena related to the interfacial tension such as the modeling of contact angles are influenced as well.

  16. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    Science.gov (United States)

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework by considering a CO2 industrial point source, located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventorial data were used as reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, markedly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
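
    The mass-balance step can be sketched as a flux integral of the CO2 enhancement over a downwind cross-section of the plume; the grid, the Gaussian-shaped enhancement, the wind speed and the molar air density below are all invented for illustration and are not the flight data of the study.

        import numpy as np

        y = np.linspace(-2000.0, 2000.0, 81)      # crosswind coordinate, m
        z = np.linspace(0.0, 500.0, 26)           # height above ground, m
        Y, Z = np.meshgrid(y, z, indexing="ij")

        background = 420e-6                        # mol CO2 per mol air
        plume = background + 30e-6 * np.exp(-(Y / 600.0) ** 2 - ((Z - 200.0) / 120.0) ** 2)
        wind = 5.0                                 # wind speed normal to the cross-section, m/s
        air_density = 41.6                         # mol air per m^3 near the surface

        flux = (plume - background) * air_density * wind      # mol CO2 m^-2 s^-1
        emission = np.trapz(np.trapz(flux, z, axis=1), y)     # integrate over the cross-section
        print(round(emission, 1), "mol CO2 / s")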

  17. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    Science.gov (United States)

    Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul

    2014-01-01

    Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3-dimensional changes after treatment by superimposition became possible. 4-point plane orientation is one of the simplest ways to achieve superimposition of 3-dimensional images. To find the factors influencing the superimposition error of cephalometric landmarks by the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had a normal skeletal and occlusal relationship and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wings of the sphenoidal bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4-point plane orientation system may produce a reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
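
    To make the "anatomical reference co-ordinate system" step concrete, the sketch below builds an orthonormal frame from three of the reference landmarks and re-expresses another landmark in it; the exact construction used in the study (which also involves the sphenoid midpoint) is not specified in the abstract, and all coordinates are hypothetical.

        import numpy as np

        def reference_frame(nasion, sella, basion):
            """Orthonormal anatomical frame built from three reference landmarks (mm)."""
            origin = sella
            x_axis = nasion - sella
            x_axis = x_axis / np.linalg.norm(x_axis)
            in_plane = basion - sella                      # roughly in the midsagittal plane
            z_axis = np.cross(x_axis, in_plane)
            z_axis = z_axis / np.linalg.norm(z_axis)
            y_axis = np.cross(z_axis, x_axis)
            return origin, np.vstack([x_axis, y_axis, z_axis])

        def to_reference(point, origin, axes):
            """Express a landmark in the anatomical reference coordinate system."""
            return axes @ (point - origin)

        nasion = np.array([1.0, 70.0, 15.0])               # hypothetical CBCT coordinates
        sella = np.array([0.0, 0.0, 0.0])
        basion = np.array([0.0, -30.0, -25.0])

        origin, axes = reference_frame(nasion, sella, basion)
        print(to_reference(np.array([25.0, 60.0, -10.0]), origin, axes).round(2))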

  18. A Deep Learning Prediction Model Based on Extreme-Point Symmetric Mode Decomposition and Cluster Analysis

    OpenAIRE

    Li, Guohui; Zhang, Songling; Yang, Hong

    2017-01-01

    To address the irregularity of nonlinear signals and the difficulty of predicting them, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and cluster analysis is proposed. Firstly, the original data are decomposed by ESMD to obtain a finite number of intrinsic mode functions (IMFs) and a residual. Secondly, fuzzy c-means is used to cluster the decomposed components, and then a deep belief network (DBN) is used to predict them. Finally, the reconstructed ...

  19. PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance

    International Nuclear Information System (INIS)

    Vondy, D.R.

    1979-10-01

    The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for with provision for from one to twenty batches resident. The effect of exposure of each of the batches to the same neutron flux is determined
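
    As an illustration of the kind of point-model bookkeeping such a survey code performs (this is not PREMOR's actual algorithm; all cross sections, the flux value and the lumped-absorber yield are made-up, order-of-magnitude numbers), the sketch below depletes a short actinide chain plus a lumped absorber under a constant one-group flux.

```python
# Illustrative point-model depletion of a short nuclide chain under a constant
# one-group flux; values are placeholders, not PREMOR data.
from scipy.integrate import solve_ivp

def rhs(t, N, phi):
    u235, u236, lump = N
    sig_a_u235 = 680.0e-24   # cm^2, total absorption (illustrative)
    sig_c_u235 = 98.0e-24    # cm^2, capture branch feeding U-236 (illustrative)
    sig_a_u236 = 5.0e-24     # cm^2 (illustrative)
    return [
        -sig_a_u235 * phi * u235,
        sig_c_u235 * phi * u235 - sig_a_u236 * phi * u236,
        0.02 * sig_a_u235 * phi * u235,   # crude lumped-absorber build-up
    ]

phi = 3.0e13                      # one-group flux, n cm^-2 s^-1
N0 = [1.0e21, 0.0, 0.0]           # atom densities, atoms cm^-3
t_end = 3.0 * 365.0 * 24.0 * 3600.0
sol = solve_ivp(rhs, (0.0, t_end), N0, args=(phi,), rtol=1e-8)
print(sol.y[:, -1])               # densities after three years of exposure
```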

  20. PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance

    Energy Technology Data Exchange (ETDEWEB)

    Vondy, D.R.

    1979-10-01

    The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for with provision for from one to twenty batches resident. The effect of exposure of each of the batches to the same neutron flux is determined.

  1. Analysis of divertor asymmetry using a simple five-point model

    International Nuclear Information System (INIS)

    Hayashi, Nobuhiko; Takizuka, Tomonori; Hatayama, Akiyoshi; Ogasawara, Masatada.

    1997-03-01

    A simple five-point model of the scrape-off layer (SOL) plasma outside the separatrix of a diverted tokamak has been developed to study the inside/outside divertor asymmetry. The SOL current, gas pumping/puffing in the divertor region, and divertor plate biasing are included in this model. Gas pumping/puffing and biasing are shown to control divertor asymmetry. In addition, the SOL current is found to form asymmetric solutions without external controls of gas pumping/puffing and biasing. (author)

  2. Model Predictive Control of Z-source Neutral Point Clamped Inverter

    DEFF Research Database (Denmark)

    Mo, Wei; Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of a Z-source Neutral Point Clamped (NPC) inverter. For illustration, current control of a Z-source NPC grid-connected inverter is analyzed and simulated. With MPC's advantage of easily including system constraints, load current, impedance network...... response are obtained at the same time with a formulated Z-source NPC inverter network model. Steady-state and transient-state simulation results of the MPC are presented, showing the good reference-tracking ability of this method. It provides a new control method for the Z-source NPC inverter...

  3. Kinetic model for electric-field induced point defect redistribution near semiconductor surfaces

    Science.gov (United States)

    Gorai, Prashun; Seebauer, Edmund G.

    2014-07-01

    The spatial distribution of point defects near semiconductor surfaces affects the efficiency of devices. Near-surface band bending generates electric fields that influence the spatial redistribution of charged mobile defects that exchange infrequently with the lattice, as recently demonstrated for pile-up of isotopic oxygen near rutile TiO2 (110). The present work derives a mathematical model to describe such redistribution and establishes its temporal dependence on defect injection rate and band bending. The model shows that band bending of only a few meV induces significant redistribution, and that the direction of the electric field governs formation of either a valley or a pile-up.

  4. Kinetic model for electric-field induced point defect redistribution near semiconductor surfaces

    International Nuclear Information System (INIS)

    Gorai, Prashun; Seebauer, Edmund G.

    2014-01-01

    The spatial distribution of point defects near semiconductor surfaces affects the efficiency of devices. Near-surface band bending generates electric fields that influence the spatial redistribution of charged mobile defects that exchange infrequently with the lattice, as recently demonstrated for pile-up of isotopic oxygen near rutile TiO2 (110). The present work derives a mathematical model to describe such redistribution and establishes its temporal dependence on defect injection rate and band bending. The model shows that band bending of only a few meV induces significant redistribution, and that the direction of the electric field governs formation of either a valley or a pile-up.

  5. GPU-accelerated Modeling and Element-free Reverse-time Migration with Gauss Points Partition

    Science.gov (United States)

    Zhen, Z.; Jia, X.

    2014-12-01

    Element-free method (EFM) has been applied to seismic modeling and migration. Compared with the finite element method (FEM) and the finite difference method (FDM), it is much cheaper and more flexible because only the information of the nodes and the boundary of the study area is required in computation. In the EFM, the number of Gauss points should be consistent with the number of model nodes; otherwise the accuracy of the intermediate coefficient matrices would be harmed. Thus when we increase the nodes of the velocity model in order to obtain higher resolution, the size of the computer's memory becomes a bottleneck. The original EFM can deal with at most 81×81 nodes in the case of 2 GB of memory, as tested by Jia and Hu (2006). In order to solve the problem of storage and computation efficiency, we propose the concept of Gauss points partition (GPP), and utilize GPUs to improve the computation efficiency. Considering the characteristics of the Gauss points, the GPP method doesn't influence the propagation of seismic waves in the velocity model. To overcome the time-consuming computation of the stiffness matrix (K) and the mass matrix (M), we also use GPUs in our computation program. We employ the compressed sparse row (CSR) format to compress the intermediate sparse matrices and simplify the operations by solving the linear equations with the CULA Sparse Conjugate Gradient (CG) solver instead of the linear sparse solver 'PARDISO'. It is observed that our strategy can significantly reduce the computational time of K and M compared with the CPU-based algorithm. The model tested is the Marmousi model. The length of the model is 7425 m and the depth is 2990 m. We discretize the model with 595×298 nodes, 300×300 Gauss cells and 3×3 Gauss points in each cell. In contrast to the computational time of the conventional EFM, the GPU-GPP approach can substantially improve the efficiency. The speedup ratio of time consumption of computing K, M is 120 and the
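
    The CSR-plus-CG strategy described above can be illustrated with a toy system; the sketch below uses SciPy's CG on a small CSR matrix as a stand-in for the GPU-side CULA Sparse solver, and the matrix itself is only a placeholder for an actual stiffness matrix K.

```python
# Toy illustration of the storage/solver strategy: assemble a sparse SPD system
# in CSR format and solve it with a Conjugate Gradient solver.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 2000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
K = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")  # placeholder stiffness-like matrix
b = np.ones(n)

x, info = cg(K, b)                      # info == 0 means the solver converged
print(info, np.linalg.norm(K @ x - b))  # residual norm as a sanity check
```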

  6. Schottky’s conjecture, field emitters, and the point charge model

    Directory of Open Access Journals (Sweden)

    Kevin L. Jensen

    2016-06-01

    Full Text Available A Point Charge Model of conical field emitters, in which the emitter is defined by an equipotential surface of judiciously placed charges over a planar conductor, is used to confirm Schottky’s conjecture that field enhancement factors are multiplicative for a small protrusion placed on top of a larger base structure. Importantly, it is shown that Schottky’s conjecture for conical / ellipsoidal field emitters remains unexpectedly valid even when the dimensions of the protrusion begin to approach the dimensions of the base structure. The model is analytic and therefore the methodology is extensible to other configurations.
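
    A stripped-down, single-charge variant of the point charge construction can be sketched numerically (the paper's model uses several judiciously placed charges to shape conical emitters; the charge, height and applied field below are arbitrary example values): a charge above a grounded plane plus its image, embedded in a uniform applied field, defines a V = 0 bump whose apex field divided by the applied field gives the enhancement factor.

```python
# Single-charge sketch of the point-charge idea: locate the apex of the V = 0
# equipotential on the axis above the charge and compare the field there with
# the applied field. All numbers are illustrative.
from scipy.optimize import brentq

K = 8.9875517873681764e9        # Coulomb constant, N m^2 C^-2

def on_axis_potential(z, q, a, E0):
    # valid for z > a: uniform applied field + charge q at height a + image -q at -a
    return -E0 * z + K * q * (1.0 / (z - a) - 1.0 / (z + a))

def field_enhancement(q, a, E0):
    # the apex of the V = 0 "emitter surface" lies on the axis above the charge
    z_tip = brentq(on_axis_potential, a * (1.0 + 1e-9), 1e3 * a, args=(q, a, E0))
    dz = 1e-4 * z_tip
    e_apex = -(on_axis_potential(z_tip + dz, q, a, E0)
               - on_axis_potential(z_tip - dz, q, a, E0)) / (2.0 * dz)
    return abs(e_apex) / E0, z_tip

beta, z_tip = field_enhancement(q=1.0e-12, a=1.0e-6, E0=1.0e7)
print(f"apex height {z_tip:.2e} m, field enhancement ~ {beta:.2f}")
```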

  7. The environmental zero-point problem in evolutionary reaction norm modeling.

    Science.gov (United States)

    Ergon, Rolf

    2018-04-01

    There is a potential problem in present quantitative genetics evolutionary modeling based on reaction norms. Such models are state-space models, where the multivariate breeder's equation in some form is used as the state equation that propagates the population state forward in time. These models use the implicit assumption of a constant reference environment, in many cases set to zero. This zero-point is often the environment a population is adapted to, that is, where the expected geometric mean fitness is maximized. Such environmental reference values follow from the state of the population system, and they are thus population properties. The environment the population is adapted to, is, in other words, an internal population property, independent of the external environment. It is only when the external environment coincides with the internal reference environment, or vice versa, that the population is adapted to the current environment. This is formally a result of state-space modeling theory, which is an important theoretical basis for evolutionary modeling. The potential zero-point problem is present in all types of reaction norm models, parametrized as well as function-valued, and the problem does not disappear when the reference environment is set to zero. As the environmental reference values are population characteristics, they ought to be modeled as such. Whether such characteristics are evolvable is an open question, but considering the complexity of evolutionary processes, such evolvability cannot be excluded without good arguments. As a straightforward solution, I propose to model the reference values as evolvable mean traits in their own right, in addition to other reaction norm traits. However, solutions based on an evolvable G matrix are also possible.

  8. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    Science.gov (United States)

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
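
    A compact sketch of application (1), MAP decoding under a Poisson encoding model with a Gaussian stimulus prior, is given below; the encoding filters, baseline and dimensions are invented for illustration, and the concavity of the log-likelihood is what lets a generic quasi-Newton optimizer find the MAP estimate.

```python
# Hedged sketch of MAP stimulus decoding from spike counts under a
# linear-nonlinear-Poisson encoding model with a standard normal prior.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
dim, n_cells = 20, 8
W = rng.normal(scale=0.3, size=(n_cells, dim))   # hypothetical encoding filters
b = -1.0                                         # baseline log-rate

x_true = rng.normal(size=dim)                    # stimulus drawn from the N(0, I) prior
counts = rng.poisson(np.exp(W @ x_true + b))     # observed spike counts

def neg_log_posterior(x):
    eta = W @ x + b
    # Poisson log-likelihood (up to constants) plus standard normal log-prior
    return -(counts @ eta - np.exp(eta).sum()) + 0.5 * x @ x

def grad(x):
    eta = W @ x + b
    return -W.T @ (counts - np.exp(eta)) + x

x_map = minimize(neg_log_posterior, np.zeros(dim), jac=grad, method="L-BFGS-B").x
print("correlation(true, MAP):", np.corrcoef(x_true, x_map)[0, 1])
```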

  9. Segmenting Bone Parts for Bone Age Assessment using Point Distribution Model and Contour Modelling

    Science.gov (United States)

    Kaur, Amandeep; Singh Mann, Kulwinder, Dr.

    2018-01-01

    Bone age assessment (BAA) is a task performed on radiographs by the pediatricians in hospitals to predict the final adult height, to diagnose growth disorders by monitoring skeletal development. For building an automatic bone age assessment system the step in routine is to do image pre-processing of the bone X-rays so that features row can be constructed. In this research paper, an enhanced point distribution algorithm using contours has been implemented for segmenting bone parts as per well-established procedure of bone age assessment that would be helpful in building feature row and later on; it would be helpful in construction of automatic bone age assessment system. Implementation of the segmentation algorithm shows high degree of accuracy in terms of recall and precision in segmenting bone parts from left hand X-Rays.

  10. Spacing distribution functions for 1D point island model with irreversible attachment

    Science.gov (United States)

    Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto

    2011-03-01

    We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p xy n (x,y), which represents the probability density to have nucleation at position x within a gap of size y. Our proposed functional form for p xy n (x,y) describes excellently the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).

  11. Ideal point error for model assessment in data-driven river flow forecasting

    Directory of Open Access Journals (Sweden)

    C. W. Dawson

    2012-08-01

    Full Text Available When analysing the performance of hydrological models in river forecasting, researchers use a number of diverse statistics. Although some statistics appear to be used more regularly in such analyses than others, there is a distinct lack of consistency in evaluation, making studies undertaken by different authors or performed at different locations difficult to compare in a meaningful manner. Moreover, even within individual reported case studies, substantial contradictions are found to occur between one measure of performance and another. In this paper we examine the ideal point error (IPE) metric – a recently introduced measure of model performance that integrates a number of recognised metrics in a logical way. Having a single, integrated measure of performance is appealing as it should permit more straightforward model inter-comparisons. However, this is reliant on a transferable standardisation of the individual metrics that are combined to form the IPE. This paper examines one potential option for standardisation: the use of naive model benchmarking.
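
    One plausible way to assemble such an integrated measure (a hedged sketch of the general idea, not necessarily the exact IPE formulation of the paper) is to standardise several error metrics against a naive benchmark model and measure the distance from the point where every metric is ideal. All names and series below are placeholders.

```python
# Sketch of an "ideal point" style composite error: metrics scaled by a naive
# benchmark, combined as a distance from the all-ideal point.
import numpy as np

def metrics(obs, sim):
    err = sim - obs
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    bias = np.mean(err)
    return np.array([rmse, mae, abs(bias)])   # ideal value of each is 0

def ideal_point_error(obs, sim, sim_naive):
    m = metrics(obs, sim)
    m_naive = metrics(obs, sim_naive)          # naive benchmark used for scaling
    scaled = m / np.where(m_naive > 0, m_naive, 1.0)
    return np.sqrt(np.mean(scaled ** 2))       # distance from the ideal point (all zeros)

obs = np.sin(np.linspace(0, 10, 200)) + 1.5
sim = obs + np.random.default_rng(1).normal(scale=0.1, size=obs.size)
naive = np.roll(obs, 1)                        # "previous value" naive forecast
print(ideal_point_error(obs, sim, naive))
```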

  12. A Dynamic Model of the Environmental Kuznets Curve. Turning Point and Public Policy

    Energy Technology Data Exchange (ETDEWEB)

    Egli, H.; Steger, T.M. [CER-ETH Center of Economic Research at ETH Zurich, ZUE F 10, CH-8092 Zurich (Switzerland)

    2007-01-15

    We set up a simple dynamic macroeconomic model with (1) polluting consumption and a preference for a clean environment, (2) increasing returns in abatement giving rise to an EKC and (3) sustained growth resulting from a linear final-output technology. There are two sorts of market failures caused by external effects associated with consumption and environmental effort. The model is employed to investigate the determinants of the turning point and the cost effectiveness of different public policies aimed at a reduction of the environmental burden. Moreover, the model offers a potential explanation of an N-shaped pollution-income relation. It is shown that the model is compatible with most empirical regularities on economic growth and the environment.

  13. Adding-point strategy for reduced-order hypersonic aerothermodynamics modeling based on fuzzy clustering

    Science.gov (United States)

    Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang

    2016-09-01

    Reduced order models (ROMs) based on snapshots from high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of ROMs, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, while a large number of initial sampling points reduces the parsimony of the ROMs. A fuzzy-clustering-based adding-point strategy is proposed, in which the fuzzy clustering acts as an indicator of the regions where the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and for a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated mean squared error prediction algorithm while showing the same level of prediction accuracy.

  14. The lowest surface brightness disc galaxy known

    International Nuclear Information System (INIS)

    Davies, J.I.; Phillipps, S.; Disney, M.J.

    1988-01-01

    The discovery of a galaxy with a prominent bulge and a dominant, extremely low surface brightness disc component is reported. The profile of this galaxy is very similar to that of the recently discovered giant low surface brightness galaxy Malin 1. The disc central surface brightness is found to be ~26.4 R mag arcsec^-2, some 1.5 mag fainter than Malin 1 and thus by far the lowest yet observed. (author)

  15. Increasing the Brightness of Light Sources

    OpenAIRE

    Fu, Ling

    2006-01-01

    In modern illumination systems, compact size and high brightness are important features. Light recycling allows an increase of the spectral radiance (brightness) emitted by a light source at the price of reducing the total radiant power. Light recycling means returning part of the emitted light to the source, where part of it will escape absorption. As a result, the output brightness can be increased in a restricted phase space, ...

  16. Modeling a Single SEP Event from Multiple Vantage Points Using the iPATH Model

    Science.gov (United States)

    Hu, Junxiang; Li, Gang; Fu, Shuai; Zank, Gary; Ao, Xianzhi

    2018-02-01

    Using the recently extended 2D improved Particle Acceleration and Transport in the Heliosphere (iPATH) model, we model an example gradual solar energetic particle event as observed at multiple locations. Protons and ions that are energized via the diffusive shock acceleration mechanism are followed at a 2D coronal mass ejection-driven shock where the shock geometry varies across the shock front. The subsequent transport of energetic particles, including cross-field diffusion, is modeled by a Monte Carlo code that is based on a stochastic differential equation method. Time intensity profiles and particle spectra at multiple locations and different radial distances, separated in longitudes, are presented. The results shown here are relevant to the upcoming Parker Solar Probe mission.

  17. Customer Order Decoupling Point Selection Model in Mass Customization Based on MAS

    Institute of Scientific and Technical Information of China (English)

    XU Xuanguo; LI Xiangyang

    2006-01-01

    Mass customization relates to the ability of providing individually designed products or services to customers with high process flexibility or integration. The literature on mass customization has focused on the mechanisms of MC, but little on customer order decoupling point selection. The aim of this paper is to present a model for customer order decoupling point selection based on the domain knowledge interactions between enterprises and customers in mass customization. Building on the achievements of other researchers and combining the demand problems of customer and enterprise, a group decision model for customer order decoupling point selection is constructed based on quality function deployment and a multi-agent system. Treating the decision makers of the independent functional departments as independent decision agents, a decision agent set is added as a third dimension to the house of quality, forming a cubic quality function deployment. The decision-making consists of two procedures: the first is to build a plane house of quality in each functional department to express its opinion; the second is to evaluate and aggregate the foregoing sub-decisions by a new plane quality function deployment. Thus, each department's decision-making can make full use of its domain knowledge through an ontology, and the overall decision-making is kept simple by avoiding an excessive number of customer requirements.

  18. The monodromy property for K3 surfaces allowing a triple-point-free model

    DEFF Research Database (Denmark)

    Jaspers, Annelies Kristien J

    2017-01-01

    The aim of this thesis is to study under which conditions K3 surfaces allowing a triple-point-free model satisfy the monodromy property. This property is a quantitative relation between the geometry of the degeneration of a Calabi-Yau variety X and the monodromy action on the cohomology of...... X: a Calabi- Yau variety X satisfies the monodromy property if poles of the motivic zeta function ZX,ω(T) induce monodromy eigenvalues on the cohomology of X. Let k be an algebraically closed field of characteristic 0, and set K = k((t)). In this thesis, we focus on K3 surfaces over K allowing a triple-point...... is very precise, which allows to use a combination of geometrical and combinatorial techniques to check the monodromy property in practice. The first main result is an explicit computation of the poles of ZX,ω(T) for a K3 surface X allowing a triple-point-free model and a volume form ! on X. We show that...

  19. Modelling aggregation on the large scale and regularity on the small scale in spatial point pattern datasets

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper

    We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties...

  20. Hydraulic modeling of clay ceramic water filters for point-of-use water treatment.

    Science.gov (United States)

    Schweitzer, Ryan W; Cunningham, Jeffrey A; Mihelcic, James R

    2013-01-02

    The acceptability of ceramic filters for point-of-use water treatment depends not only on the quality of the filtered water, but also on the quantity of water the filters can produce. This paper presents two mathematical models for the hydraulic performance of ceramic water filters under typical usage. A model is developed for two common filter geometries: paraboloid- and frustum-shaped. Both models are calibrated and evaluated by comparison to experimental data. The hydraulic models are able to predict the following parameters as functions of time: water level in the filter (h), instantaneous volumetric flow rate of filtrate (Q), and cumulative volume of water produced (V). The models' utility is demonstrated by applying them to estimate how the volume of water produced depends on factors such as the filter shape and the frequency of filling. Both models predict that the volume of water produced can be increased by about 45% if users refill the filter three times per day versus only once per day. Also, the models predict that filter geometry affects the volume of water produced: for two filters with equal volume, equal wall thickness, and equal hydraulic conductivity, a filter that is tall and thin will produce as much as 25% more water than one which is shallow and wide. We suggest that the models can be used as tools to help optimize filter performance.
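
    A much-simplified falling-head version of such a model can be sketched for the paraboloid geometry (illustrative only; the paper's formulations are more detailed, and the geometry, hydraulic conductivity and wall thickness below are invented values): Darcy flux through the wetted wall, integrated over depth, drives the decline of the water level h(t), from which the flow rate and the cumulative volume follow.

```python
# Simplified falling-head sketch for a paraboloid-shaped ceramic filter.
# Thin-wall Darcy flow is assumed; all parameter values are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

R, H = 0.12, 0.25          # rim radius and filter depth (m)
K, wall = 1.0e-7, 0.012    # hydraulic conductivity (m/s) and wall thickness (m)

def radius(z):
    return R * np.sqrt(np.clip(z, 0.0, H) / H)        # paraboloid profile

def outflow(h):
    # integrate the Darcy flux (K * head / thickness) over the wetted wall
    z = np.linspace(0.0, h, 200)
    flux = K * (h - z) / wall                          # m/s through the wall
    return np.trapz(flux * 2.0 * np.pi * radius(z), z) # m^3/s

def dhdt(t, y):
    h = max(y[0], 1e-6)
    area = np.pi * radius(h) ** 2                      # free-surface area
    return [-outflow(h) / area]

sol = solve_ivp(dhdt, (0.0, 12 * 3600.0), [H], max_step=60.0)
produced = np.pi * R ** 2 / (2 * H) * (H ** 2 - sol.y[0] ** 2)   # V(H) - V(h)
print(f"level after 12 h: {sol.y[0, -1]:.3f} m, "
      f"volume produced: {produced[-1] * 1000:.1f} L")
```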

  1. Using a dynamic point-source percolation model to simulate bubble growth

    International Nuclear Information System (INIS)

    Zimmerman, Jonathan A.; Zeigler, David A.; Cowgill, Donald F.

    2004-01-01

    Accurate modeling of nucleation, growth and clustering of helium bubbles within metal tritide alloys is of high scientific and technological importance. Of interest is the ability to predict both the distribution of these bubbles and the manner in which these bubbles interact at a critical concentration of helium-to-metal atoms to produce an accelerated release of helium gas. One technique that has been used in the past to model these materials, and again revisited in this research, is percolation theory. Previous efforts have used classical percolation theory to qualitatively and quantitatively model the behavior of interstitial helium atoms in a metal tritide lattice; however, higher fidelity models are needed to predict the distribution of helium bubbles and include features that capture the underlying physical mechanisms present in these materials. In this work, we enhance classical percolation theory by developing the dynamic point-source percolation model. This model alters the traditionally binary character of site occupation probabilities by enabling them to vary depending on proximity to existing occupied sites, i.e. nucleated bubbles. This revised model produces characteristics for one and two dimensional systems that are extremely comparable with measurements from three dimensional physical samples. Future directions for continued development of the dynamic model are also outlined

  2. Soil hydraulic parameters and surface soil moisture of a tilled bare soil plot inversely derived from l-band brightness temperatures

    KAUST Repository

    Dimitrov, Marin; Vanderborght, Jan P.; Kostov, K. G.; Jadoon, Khan; Weihermü ller, Lutz; Jackson, Thomas J.; Bindlish, Rajat; Pachepsky, Ya A.; Schwank, Mike; Vereecken, Harry

    2014-01-01

    model (CRTM) that accounts for vertical gradients in dielectric permittivity. Brightness temperatures simulated by the CRTM and the 2-cm-layer Fresnel model fitted well to the measured ones. L-band brightness temperatures are therefore related

  3. Analytical solutions of nonlocal Poisson dielectric models with multiple point charges inside a dielectric sphere

    Science.gov (United States)

    Xie, Dexuan; Volkmer, Hans W.; Ying, Jinyong

    2016-04-01

    The nonlocal dielectric approach has led to new models and solvers for predicting electrostatics of proteins (or other biomolecules), but how to validate and compare them remains a challenge. To promote such a study, in this paper, two typical nonlocal dielectric models are revisited. Their analytical solutions are then found as simple series expressions for a dielectric sphere containing any number of point charges. As a special case, the analytical solution of the corresponding Poisson dielectric model is also derived in simple series, which significantly improves the well-known Kirkwood double series expansion. Furthermore, a convolution of one nonlocal dielectric solution with a commonly used nonlocal kernel function is obtained, along with the reaction parts of these local and nonlocal solutions. To turn these new series solutions into a valuable research tool, they are programmed as a free Fortran software package, which can input point charge data directly from a Protein Data Bank file. Consequently, different validation tests can be quickly done on different proteins. Finally, a test example for a protein with 488 atomic charges is reported to demonstrate the differences between the local and nonlocal models as well as the importance of using the reaction parts to develop local and nonlocal dielectric solvers.

  4. AUTOMATED VOXEL MODEL FROM POINT CLOUDS FOR STRUCTURAL ANALYSIS OF CULTURAL HERITAGE

    Directory of Open Access Journals (Sweden)

    G. Bitelli

    2016-06-01

    Full Text Available In the context of cultural heritage, an accurate and comprehensive digital survey of a historical building is today essential in order to measure its geometry in detail for documentation or restoration purposes, for supporting special studies regarding materials and constructive characteristics, and finally for structural analysis. Some proven geomatic techniques, such as photogrammetry and terrestrial laser scanning, are increasingly used to survey buildings of different complexity and dimensions; one typical product is in the form of point clouds. We developed a semi-automatic procedure to convert point clouds, acquired by laser scanning or digital photogrammetry, to a filled volume model of the whole structure. The filled volume model, in a voxel format, can be useful for further analysis and also for the generation of a Finite Element Model (FEM) of the surveyed building. In this paper a new approach is presented with the aim of decreasing operator intervention in the workflow and obtaining a better description of the structure. In order to achieve this result, a voxel model with variable resolution is produced. Different parameters are compared and different steps of the procedure are tested and validated in the case study of the North tower of the San Felice sul Panaro Fortress, a monumental historical building located in San Felice sul Panaro (Modena, Italy) that was hit by an earthquake in 2012.
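
    The basic occupancy step of going from a point cloud to a filled, fixed-resolution voxel grid is easy to sketch (the paper's pipeline is semi-automatic and produces a variable-resolution model, which this does not attempt; the cloud and function names below are placeholders).

```python
# Minimal point-cloud-to-voxel occupancy sketch.
import numpy as np

def voxelize(points, voxel_size):
    """points: (N, 3) array of x, y, z coordinates -> boolean occupancy grid."""
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    shape = idx.max(axis=0) + 1
    grid = np.zeros(shape, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True      # mark occupied cells
    return grid, origin

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 5.0, size=(10000, 3))        # stand-in for a laser-scan cloud
grid, origin = voxelize(cloud, voxel_size=0.25)
print(grid.shape, int(grid.sum()), "occupied voxels")
```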

  5. Exploring the squeezed three-point galaxy correlation function with generalized halo occupation distribution models

    Science.gov (United States)

    Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.

    2018-04-01

    We present the GeneRalized ANd Differentiable Halo Occupation Distribution (GRAND-HOD) routine that generalizes the standard 5 parameter halo occupation distribution model (HOD) with various halo-scale physics and assembly bias. We describe the methodology of 4 different generalizations: satellite distribution generalization, velocity bias, closest approach distance generalization, and assembly bias. We showcase the signatures of these generalizations in the 2-point correlation function (2PCF) and the squeezed 3-point correlation function (squeezed 3PCF). We identify generalized HOD prescriptions that are nearly degenerate in the projected 2PCF and demonstrate that these degeneracies are broken in the redshift-space anisotropic 2PCF and the squeezed 3PCF. We also discuss the possibility of identifying degeneracies in the anisotropic 2PCF and further demonstrate the extra constraining power of the squeezed 3PCF on galaxy-halo connection models. We find that within our current HOD framework, the anisotropic 2PCF can predict the squeezed 3PCF better than its statistical error. This implies that a discordant squeezed 3PCF measurement could falsify the particular HOD model space. Alternatively, it is possible that further generalizations of the HOD model would open opportunities for the squeezed 3PCF to provide novel parameter measurements. The GRAND-HOD Python package is publicly available at https://github.com/SandyYuan/GRAND-HOD.
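
    For reference, the standard 5-parameter HOD that GRAND-HOD generalizes is commonly written with an erf-shaped central occupation and a power-law satellite term (the widely used parametrization often attributed to Zheng et al. 2007); the sketch below uses illustrative parameter values only.

```python
# Standard 5-parameter HOD occupation functions with placeholder parameters.
import numpy as np
from scipy.special import erf

def n_central(M, logMmin, sigma_logM):
    return 0.5 * (1.0 + erf((np.log10(M) - logMmin) / sigma_logM))

def n_satellite(M, logMmin, sigma_logM, M0, M1, alpha):
    base = np.clip(M - M0, 0.0, None) / M1
    return n_central(M, logMmin, sigma_logM) * base ** alpha

M = np.logspace(11, 15, 5)                    # halo masses in Msun/h
print(n_central(M, logMmin=13.0, sigma_logM=0.4))
print(n_satellite(M, logMmin=13.0, sigma_logM=0.4, M0=1e13, M1=1e14, alpha=1.0))
```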

  6. Low dimensional neutron moderators for enhanced source brightness

    DEFF Research Database (Denmark)

    Mezei, Ferenc; Zanini, Luca; Takibayev, Alan

    2014-01-01

    In a recent numerical optimization study we have found that liquid para-hydrogen coupled cold neutron moderators deliver 3–5 times higher cold neutron brightness at a spallation neutron source if they take the form of a flat, quasi 2-dimensional disc, in contrast to the conventional more voluminous...... for cold neutrons. This model leads to the conclusions that the optimal shape for high brightness para-hydrogen neutron moderators is the quasi 1-dimensional tube and these low dimensional moderators can also deliver much enhanced cold neutron brightness in fission reactor neutron sources, compared...... to the much more voluminous liquid D2 or H2 moderators currently used. Neutronic simulation calculations confirm both of these theoretical conclusions....

  7. Once more on the equilibrium-point hypothesis (lambda model) for motor control.

    Science.gov (United States)

    Feldman, A G

    1986-03-01

    The equilibrium control hypothesis (lambda model) is considered with special reference to the following concepts: (a) the length-force invariant characteristic (IC) of the muscle together with central and reflex systems subserving its activity; (b) the tonic stretch reflex threshold (lambda) as an independent measure of central commands descending to alpha and gamma motoneurons; (c) the equilibrium point, defined in terms of lambda, IC and static load characteristics, which is associated with the notion that posture and movement are controlled by a single mechanism; and (d) the muscle activation area (a reformulation of the "size principle")--the area of kinematic and command variables in which a rank-ordered recruitment of motor units takes place. The model is used for the interpretation of various motor phenomena, particularly electromyographic patterns. The stretch reflex in the lambda model has no mechanism to follow-up a certain muscle length prescribed by central commands. Rather, its task is to bring the system to an equilibrium, load-dependent position. Another currently popular version defines the equilibrium point concept in terms of alpha motoneuron activity alone (the alpha model). Although the model imitates (as does the lambda model) spring-like properties of motor performance, it nevertheless is inconsistent with a substantial data base on intact motor control. An analysis of alpha models, including their treatment of motor performance in deafferented animals, reveals that they suffer from grave shortcomings. It is concluded that parameterization of the stretch reflex is a basis for intact motor control. Muscle deafferentation impairs this graceful mechanism though it does not remove the possibility of movement.

  8. The three-point correlation function of the cosmic microwave background in inflationary models

    CERN Document Server

    Gangui, Alejandro; Matarrese, Sabino; Mollerach, Silvia

    1994-01-01

    We analyze the temperature three-point correlation function and the skewness of the Cosmic Microwave Background (CMB), providing general relations in terms of multipole coefficients. We then focus on applications to large angular scale anisotropies, such as those measured by the COBE DMR, calculating the contribution to these quantities from primordial, inflation generated, scalar perturbations, via the Sachs-Wolfe effect. Using the techniques of stochastic inflation we are able to provide a universal expression for the ensemble averaged three-point function and for the corresponding skewness, which accounts for all primordial second-order effects. These general expressions would moreover apply to any situation where the bispectrum of the primordial gravitational potential has a hierarchical form. Our results are then specialized to a number of relevant models: power-law inflation driven by an exponential potential, chaotic inflation with a quartic and quadratic potential and a particular c...

  9. Classical dynamics of the Abelian Higgs model from the critical point and beyond

    Directory of Open Access Journals (Sweden)

    G.C. Katsimiga

    2015-09-01

    Full Text Available We present two different families of solutions of the U(1)-Higgs model in a (1+1)-dimensional setting leading to a localization of the gauge field. First we consider a uniform background (the usual vacuum), which corresponds to the fully higgsed, superconducting phase. Then we study the case of a non-uniform background in the form of a domain wall, which could be relevant close to the critical point of the associated spontaneous symmetry breaking. For both cases we obtain approximate analytical nodeless and nodal solutions for the gauge field, resulting as bound states of an effective Pöschl–Teller potential created by the scalar field. The two scenarios differ only in the scale of the characteristic localization length. Numerical simulations confirm the validity of the obtained analytical solutions. Additionally we demonstrate how a kink may be used as a mediator driving the dynamics from the critical point and beyond.

  10. Singular Spectrum Near a Singular Point of Friedrichs Model Operators of Absolute Type

    International Nuclear Information System (INIS)

    Iakovlev, Serguei I.

    2006-01-01

    In L^2(R) we consider a family of self-adjoint operators of the Friedrichs model: A_m = |t|^m + V. Here |t|^m is the operator of multiplication by the corresponding function of the independent variable t ∈ R, and V (the perturbation) is a trace-class integral operator with a continuous Hermitian kernel ν(t,x) satisfying some smoothness condition. These absolute-type operators have one singular point of order m > 0. Conditions on the kernel ν(t,x) are found guaranteeing the absence of the point spectrum and of the singular continuous spectrum of such operators near the origin. These conditions are actually necessary and sufficient. They depend on the finiteness of the rank of the perturbation operator and on the order of the singularity. The sharpness of these conditions is confirmed by counterexamples.

  11. Kinetic modeling of particle acceleration in a solar null point reconnection region

    DEFF Research Database (Denmark)

    Baumann, Gisela; Haugbølle, Troels; Nordlund, Åke

    2013-01-01

    The primary focus of this paper is on the particle acceleration mechanism in solar coronal 3D reconnection null-point regions. Starting from a potential field extrapolation of a SOHO magnetogram taken on 2002 November 16, we first performed MHD simulations with horizontal motions observed by SOHO...... applied to the photospheric boundary of the computational box. After a build-up of electric current in the fan-plane of the null-point, a sub-section of the evolved MHD data was used as initial and boundary conditions for a kinetic particle-in-cell model of the plasma. We find that sub...... particles and 3.5 billion grid cells of size 17.5 km; these simulations offer a new opportunity to study particle acceleration in solar-like settings....

  12. Linear and quadratic models of point process systems: contributions of patterned input to output.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880's Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940's, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970's, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Merging LIDAR digital terrain model with direct observed elevation points for urban flood numerical simulation

    Science.gov (United States)

    Arrighi, Chiara; Campo, Lorenzo

    2017-04-01

    In recent years, concern about the economic losses and loss of life due to urban floods has grown hand in hand with the numerical capability to simulate such events. The large amount of computational power needed to address the problem (simulating a flood over complex terrain such as a medium-large city) is only one of the issues. Others include the general lack of exhaustive observations during the event (exact extent, dynamics, water levels reached in different parts of the involved area), needed for calibration and validation of the model, the need to consider sewer effects, and the availability of a correct and precise description of the geometry of the problem. In large cities, topographic surveys are generally available as a set of measured points, but a complete hydraulic simulation needs a detailed description of the terrain over the whole computational domain. LIDAR surveys can achieve this goal, providing a comprehensive description of the terrain, although they often lack precision. In this work an optimal merging of these two sources of geometrical information, measured elevation points and a LIDAR survey, is proposed, taking into account the error variance of both. The procedure is applied to a flood-prone city over an area of approximately 35 square km, starting with a DTM from LIDAR with a spatial resolution of 1 m and 13000 measured points. The spatial pattern of the error (LIDAR vs. points) is analysed, and the merging method is tested with a series of jackknife procedures that take into account different densities of the available points. A discussion of the results is provided.
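
    One simple way to realise such a variance-aware merge (a sketch of the general idea, not the paper's exact procedure; the error variances and arrays below are assumed numbers) is an inverse-variance weighted combination wherever a surveyed point falls inside a DTM cell.

```python
# Inverse-variance weighted merge of a LIDAR DTM with sparse surveyed heights.
import numpy as np

def merge_elevation(z_lidar, var_lidar, z_points, var_points):
    """z_points may contain NaN where no surveyed point falls in a cell."""
    w_l = 1.0 / var_lidar
    w_p = np.where(np.isnan(z_points), 0.0, 1.0 / var_points)
    z_p = np.nan_to_num(z_points, nan=0.0)
    merged = (w_l * z_lidar + w_p * z_p) / (w_l + w_p)
    merged_var = 1.0 / (w_l + w_p)          # variance of the merged estimate
    return merged, merged_var

z_lidar = np.array([[10.2, 10.4], [10.8, 11.1]])
z_points = np.array([[10.0, np.nan], [np.nan, 11.3]])    # sparse surveyed heights
merged, var = merge_elevation(z_lidar, var_lidar=0.09,
                              z_points=z_points, var_points=0.01)
print(merged)
```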

  14. Bright solitons in coupled defocusing NLS equation supported by coupling: Application to Bose-Einstein condensation

    International Nuclear Information System (INIS)

    Adhikari, Sadhan K.

    2005-01-01

    We demonstrate the formation of bright solitons in coupled self-defocusing nonlinear Schroedinger (NLS) equation supported by attractive coupling. As an application we use a time-dependent dynamical mean-field model to study the formation of stable bright solitons in two-component repulsive Bose-Einstein condensates (BECs) supported by interspecies attraction in a quasi one-dimensional geometry. When all interactions are repulsive, there cannot be bright solitons. However, bright solitons can be formed in two-component repulsive BECs for a sufficiently attractive interspecies interaction, which induces an attractive effective interaction among bosons of same type

  15. Scission-point model of nuclear fission based on deformed-shell effects

    International Nuclear Information System (INIS)

    Wilkins, B.D.; Steinberg, E.P.; Chasman, R.R.

    1976-01-01

    A static model of nuclear fission is proposed based on the assumption of statistical equilibrium among collective degrees of freedom at the scission point. The relative probabilities of formation of complementary fission fragment pairs are determined from the relative potential energies of a system of two nearly touching, coaxial spheroids with quadrupole deformations. The total potential energy of the system at the scission point is calculated as the sum of liquid-drop and shell- and pairing-correction terms for each spheroid, and Coulomb and nuclear potential terms describing the interaction between them. The fissioning system at the scission point is characterized by three parameters: the distance between the tips of the spheroids (d), the intrinsic excitation energy of the fragments (tau/sub int/), and a collective temperature (T/sub coll/). No attempt is made to adjust these parameters to give optimum fits to experimental data, but rather, a single choice of values for d, tau/sub int/, and T/sub coll/ is used in the calculations for all fissioning systems. The general trends of the distributions of mass, nuclear charge, and kinetic energy in the fission of a wide range of nuclides from Po to Fm are well reproduced in the calculations. The major influence of the deformed-shell corrections for neutrons is indicated and provides a convenient framework for the interpretation of observed trends in the data and for the prediction of new results. The scission-point configurations derived from the model provide an interpretation of the ''saw-tooth'' neutron emission curve as well as previously unexplained observations on the variation of TKE for isotopes of U, Pu, Cm, and Cf; structure in the width of total kinetic energy release as a function of fragment mass ratio; and a difference in threshold energies for symmetric and asymmetric mass splits in the fission of Ra and Ac isotopes

  16. ON THE INFLUENTIAL POINTS IN THE FUNCTIONAL CIRCULAR RELATIONSHIP MODELS WITH AN APPLICATION ON WIND DATA

    Directory of Open Access Journals (Sweden)

    ALi Hassan Abuzaid

    2013-12-01

    Full Text Available If the interest is to calibrate two instruments, then the functional relationship model is more appropriate than regression models. Fitting a straight line when both variables are circular and subject to errors has not received much attention. In this paper, we consider the problem of detecting influential points in two functional relationship models for circular variables. The first is based on the simple circular regression (SC), while the second is derived from the complex linear regression (CL). The covariance matrices are derived and then the COVRATIO statistics are formulated for both models. The cut-off points are obtained and the power of performance is assessed via simulation studies. The performance of the COVRATIO statistics depends on the concentration of the error, the sample size and the level of contamination. In the case of a linear relationship between two circular variables, the COVRATIO statistic of the SC model performs better than that of the CL model. In addition, a novel diagram, the so-called spoke plot, is utilized to detect possible influential points. For illustration purposes, the proposed procedures are applied to real data of wind directions measured by two different instruments. The COVRATIO statistics and the spoke plot were able to identify two observations as influential points.
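
    The logic of the COVRATIO statistic, a leave-one-out ratio of covariance-matrix determinants, can be sketched on an ordinary linear regression (the circular functional-relationship versions in the paper use different covariance matrices, but the structure of the diagnostic is the same; the data below are simulated placeholders).

```python
# COVRATIO-style diagnostic on a simple linear regression: determinant of the
# leave-one-out coefficient covariance over the full-data one.
import numpy as np

def covratio(X, y):
    n, p = X.shape
    def cov_det(Xs, ys):
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        resid = ys - Xs @ beta
        s2 = resid @ resid / (Xs.shape[0] - p)
        return np.linalg.det(s2 * np.linalg.inv(Xs.T @ Xs))
    full = cov_det(X, y)
    ratios = []
    for i in range(n):
        keep = np.arange(n) != i
        ratios.append(cov_det(X[keep], y[keep]) / full)
    return np.array(ratios)   # values far from 1 flag influential points

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 40)
y = 1.5 + 0.8 * x + rng.normal(scale=0.5, size=40)
y[-1] += 6.0                                   # plant one outlier
X = np.column_stack([np.ones_like(x), x])
print(covratio(X, y)[-5:])
```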

  17. Simulation of agricultural non-point source pollution in Xichuan by using SWAT model

    Science.gov (United States)

    Xing, Linan; Zuo, Jiane; Liu, Fenglin; Zhang, Xiaohui; Cao, Qiguang

    2018-02-01

    This paper evaluated the applicability of using SWAT to assess agricultural non-point source pollution in the Xichuan area. To build the model, a DEM, soil type and land use maps, and climate monitoring data were collected as the basic database. Calibration and validation of the SWAT model were carried out using streamflow, suspended solids, total phosphorus and total nitrogen records from 2009 to 2011. Errors, the coefficient of determination and the Nash-Sutcliffe coefficient were considered to evaluate the applicability. The coefficients of determination were 0.96, 0.66, 0.55 and 0.66 for streamflow, SS, TN and TP, respectively. The Nash-Sutcliffe coefficients were 0.93, 0.5, 0.52 and 0.63, respectively. All results meet the requirements, suggesting that the SWAT model can adequately simulate the study area.
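
    The coefficient of determination and the Nash-Sutcliffe coefficient quoted above are straightforward to compute; a short reference sketch follows, with placeholder observed and simulated series.

```python
# Reference implementation of the two goodness-of-fit statistics.
import numpy as np

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    return np.corrcoef(obs, sim)[0, 1] ** 2

obs = np.array([12.0, 30.0, 55.0, 41.0, 18.0, 9.0])   # e.g. monthly streamflow
sim = np.array([14.0, 27.0, 50.0, 45.0, 20.0, 8.0])
print(nash_sutcliffe(obs, sim), r_squared(obs, sim))
```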

  18. Leading bureaucracies to the tipping point: An alternative model of multiple stable equilibrium levels of corruption

    Science.gov (United States)

    Caulkins, Jonathan P.; Feichtinger, Gustav; Grass, Dieter; Hartl, Richard F.; Kort, Peter M.; Novak, Andreas J.; Seidl, Andrea

    2013-01-01

    We present a novel model of corruption dynamics in the form of a nonlinear optimal dynamic control problem. It has a tipping point, but one whose origins and character are distinct from that in the classic Schelling (1978) model. The decision maker choosing a level of corruption is the chief or some other kind of authority figure who presides over a bureaucracy whose state of corruption is influenced by the authority figure’s actions, and whose state in turn influences the pay-off for the authority figure. The policy interpretation is somewhat more optimistic than in other tipping models, and there are some surprising implications, notably that reforming the bureaucracy may be of limited value if the bureaucracy takes its cues from a corrupt leader. PMID:23565027

  19. Leading bureaucracies to the tipping point: An alternative model of multiple stable equilibrium levels of corruption.

    Science.gov (United States)

    Caulkins, Jonathan P; Feichtinger, Gustav; Grass, Dieter; Hartl, Richard F; Kort, Peter M; Novak, Andreas J; Seidl, Andrea

    2013-03-16

    We present a novel model of corruption dynamics in the form of a nonlinear optimal dynamic control problem. It has a tipping point, but one whose origins and character are distinct from that in the classic Schelling (1978) model. The decision maker choosing a level of corruption is the chief or some other kind of authority figure who presides over a bureaucracy whose state of corruption is influenced by the authority figure's actions, and whose state in turn influences the pay-off for the authority figure. The policy interpretation is somewhat more optimistic than in other tipping models, and there are some surprising implications, notably that reforming the bureaucracy may be of limited value if the bureaucracy takes its cues from a corrupt leader.

  20. Geometrical origin of tricritical points of various U(1) lattice models

    International Nuclear Information System (INIS)

    Janke, W.; Kleiert, H.

    1989-01-01

    The authors review the dual relationship between various compact U(1) lattice models and Abelian Higgs models, the latter being the disorder field theories of line-like topological excitations in the system. The authors point out that the predicted first-order transitions in the Abelian Higgs models (Coleman-Weinberg mechanism) are, in three dimensions, in contradiction with direct numerical investigations in the compact U(1) formulation since these yield continuous transitions in the major part of the phase diagram. In four dimensions, there are indications from Monte Carlo data for a similar situation. Concentrating on the strong-coupling expansion in terms of geometrical objects, surfaces or lines, with certain statistical weights, the authors present semi-quantitative arguments explaining the observed cross-over from first-order to continuous transitions by the balance between the lowest two weights (2:1 ratio) of these geometrical objects

  1. Surrogate runner model for draft tube losses computation within a wide range of operating points

    International Nuclear Information System (INIS)

    Susan-Resiga, R; Ciocan, T; Muntean, S; De Colombel, T; Leroy, P

    2014-01-01

    We introduce a quasi two-dimensional (Q2D) methodology for assessing the swirling flow exiting the runner of hydraulic turbines at arbitrary operating points, within a wide operating range. The Q2D model does not need actual runner computations, and as a result it represents a surrogate runner model for a-priori assessment of the swirling flow ingested by the draft tube. The axial, radial and circumferential velocity components are computed on a conical section located immediately downstream the runner blades trailing edge, then used as inlet conditions for regular draft tube computations. The main advantage of our model is that it allows the determination of the draft tube losses within the intended turbine operating range in the early design stages of a new or refurbished runner, thus providing a robust and systematic methodology to meet the optimal requirements for the flow at the runner outlet

  2. Finite size scaling of the Higgs-Yukawa model near the Gaussian fixed point

    Energy Technology Data Exchange (ETDEWEB)

    Chu, David Y.J.; Lin, C.J. David [National Chiao-Tung Univ., Hsinchu, Taiwan (China); Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Knippschild, Bastian [HISKP, Bonn (Germany); Nagy, Attila [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Univ. Berlin (Germany)

    2016-12-15

    We study the scaling properties of Higgs-Yukawa models. Using the technique of Finite-Size Scaling, we are able to derive scaling functions that describe the observables of the model in the vicinity of a Gaussian fixed point. A feasibility study of our strategy is performed for the pure scalar theory in the weak-coupling regime. Choosing the on-shell renormalisation scheme gives us an advantage to fit the scaling functions against lattice data with only a small number of fit parameters. These formulae can be used to determine the universality of the observed phase transitions, and thus play an essential role in future investigations of Higgs-Yukawa models, in particular in the strong Yukawa coupling region.

  3. SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties

    Energy Technology Data Exchange (ETDEWEB)

    Panebianco, Stefano; Lemaître, Jean-Francois; Sida, Jean-Luc [CEA Centre de Saclay, Gif-sur-Ivette (France); Dubray, Noëel [CEA, DAM, DIF, Arpajon (France); Goriely, Stephane [Institut d' Astronomie et d' Astrophisique, Universite Libre de Bruxelles, Brussels (Belgium)

    2014-07-01

    Despite the difficulty in describing the whole fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculations of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists in performing a static energy balance at scission, where the two fragments are supposed to be completely separated so that their macroscopic properties (mass and charge) can be considered as fixed. Given the knowledge of the system state density, averaged quantities such as mass and charge yields, mean kinetic and excitation energy can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state density. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring an overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between the SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implication in stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed. (author)

  4. Birth-death models and coalescent point processes: the shape and probability of reconstructed phylogenies.

    Science.gov (United States)

    Lambert, Amaury; Stadler, Tanja

    2013-12-01

    Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
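
    As an illustration of the CPP construction described above, the following Python sketch draws i.i.d. node depths to build a reconstructed tree "horizontally" and counts lineages through time. The exponential depth distribution used in the example is only a placeholder assumption; the paper derives the appropriate common distribution from the underlying diversification model.

      import numpy as np

      def simulate_cpp_node_depths(n_tips, depth_sampler, rng=None):
          # A CPP reconstructed tree on n_tips extant species is built "horizontally":
          # each of the n_tips - 1 internal node depths is drawn i.i.d. from a common
          # distribution; together with their order they determine the ranked tree.
          rng = np.random.default_rng(rng)
          return depth_sampler(rng, n_tips - 1)

      def lineages_through_time(node_depths, times):
          # Number of reconstructed lineages at each time before the present:
          # one root lineage plus one lineage per node deeper than that time.
          node_depths = np.asarray(node_depths)
          return np.array([1 + int((node_depths > t).sum()) for t in times])

      # Example: exponential node depths (placeholder distribution), 20 extant tips
      depths = simulate_cpp_node_depths(20, lambda rng, k: rng.exponential(1.0, k), rng=1)
      grid = np.linspace(0.0, depths.max(), 6)
      ltt = lineages_through_time(depths, grid)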

  5. High cortisol awakening response is associated with an impairment of the effect of bright light therapy

    DEFF Research Database (Denmark)

    Martiny, Klaus Per Juul; Lunde, Marianne Anita; Undén, M

    2009-01-01

    OBJECTIVE: We investigated the predictive validity of the cortisol awakening response (CAR) in patients with non-seasonal major depression. METHOD: Patients were treated with sertraline in combination with bright or dim light therapy for a 5-week period. Saliva cortisol levels were measured in 63...... patients, as an awakening profile, before medication and light therapy started. The CAR was calculated by using three time-points: awakening and 20 and 60 min after awakening. RESULTS: Patients with low CAR had a very substantial effect of bright light therapy compared with dim light therapy, whereas...... patients with a high CAR had no effect of bright light therapy compared with dim light therapy. CONCLUSION: High CAR was associated with an impairment of the effect of bright light therapy. This result raises the question of whether bright light acts through a mechanism different from...

  6. The bright-bright and bright-dark mode coupling-based planar metamaterial for plasmonic EIT-like effect

    Science.gov (United States)

    Yu, Wei; Meng, Hongyun; Chen, Zhangjie; Li, Xianping; Zhang, Xing; Wang, Faqiang; Wei, Zhongchao; Tan, Chunhua; Huang, Xuguang; Li, Shuti

    2018-05-01

    In this paper, we propose a novel planar metamaterial structure for the electromagnetically induced transparency (EIT)-like effect, which consists of a split-ring resonator (SRR) and a pair of metal strips. The simulated results indicate that a single transparency window can be realized in the symmetric configuration, originating from bright-bright mode coupling. Furthermore, a dual-band EIT-like effect can be achieved in the asymmetric configuration, arising from bright-bright mode coupling and bright-dark mode coupling, respectively. Different EIT-like effects can thus be obtained in the proposed structure depending on the configuration, which is of significance for the study of EIT-like effects.

  7. Comparisons of Satellite Soil Moisture, an Energy Balance Model Driven by LST Data and Point Measurements

    Science.gov (United States)

    Laiolo, Paola; Gabellani, Simone; Rudari, Roberto; Boni, Giorgio; Puca, Silvia

    2013-04-01

    Soil moisture plays a fundamental role in the partitioning of mass and energy fluxes between the land surface and the atmosphere, thereby influencing climate and weather, and it is important in determining the rainfall-runoff response of catchments; moreover, in hydrological modelling and flood forecasting, a correct definition of moisture conditions is a key factor for accurate predictions. Different sources of information for the estimation of the soil moisture state are currently available: satellite data, point measurements and model predictions. All are affected by intrinsic uncertainty. Among the satellite sensors that can be used for soil moisture estimation, three major groups can be distinguished: passive microwave sensors (e.g. SSMI), active sensors (e.g. SAR, scatterometers), and optical sensors (e.g. spectroradiometers). The last two families, mainly because of their temporal and spatial resolution, seem the most suitable for hydrological applications. In this work, soil moisture point measurements from 10 sensors on the Italian territory are compared with the satellite product SM-OBS-2 from the HSAF project, derived from the ASCAT scatterometer, and with ACHAB, an operational energy balance model that assimilates LST data derived from MSG and furnishes daily an evaporative fraction index related to soil moisture content for the whole Italian territory. Distributed comparisons of ACHAB and SM-OBS-2 over the whole Italian territory are also performed.

  8. METHOD OF GREEN FUNCTIONS IN MATHEMATICAL MODELLING FOR TWO-POINT BOUNDARY-VALUE PROBLEMS

    Directory of Open Access Journals (Sweden)

    E. V. Dikareva

    2015-01-01

    Full Text Available Summary. In many applied problems of control, optimization, system theory, theoretical and construction mechanics, for problems with string and rod structures, oscillation theory, the theory of elasticity and plasticity, and mechanical problems connected with fracture dynamics and shock waves, the main instrument of study is the theory of high-order ordinary differential equations. This methodology is also applied to mathematical models in graph theory with different partitionings based on differential equations. Such equations are used not only for the theoretical foundation of mathematical models but also for constructing numerical methods and computer algorithms. These models are studied with the Green function method. The paper first presents the necessary theoretical background on the Green function method for multi-point boundary-value problems. The main equation is discussed, and the notions of multi-point boundary conditions, boundary functionals, degenerate and non-degenerate problems, and the fundamental matrix of solutions are introduced. In the main part, the problem under study is formulated in terms of shocks and deformations in the boundary conditions, after which the main results are stated. Theorem 1 proves conditions for the existence and uniqueness of solutions. Theorem 2 proves conditions for strict positivity and equal measurability of a pair of solutions. Theorem 3 establishes existence of, and estimates for, the least eigenvalue, together with spectral properties and the positivity of eigenfunctions. Theorem 4 proves the weighted positivity of the Green function. Possible applications to signal theory and transmutation operators are considered.

  9. A Low-Cost Maximum Power Point Tracking System Based on Neural Network Inverse Model Controller

    Directory of Open Access Journals (Sweden)

    Carlos Robles Algarín

    2018-01-01

    Full Text Available This work presents the design, modeling, and implementation of a neural network inverse model controller for tracking the maximum power point of a photovoltaic (PV) module. A nonlinear autoregressive network with exogenous inputs (NARX) was implemented in a serial-parallel architecture. The PV module mathematical model was developed, a buck converter was designed to operate in continuous conduction mode with a switching frequency of 20 kHz, and the dynamic neural controller was designed using the Neural Network Toolbox from Matlab/Simulink (MathWorks, Natick, MA, USA) and implemented on an open-hardware Arduino Mega board. To obtain the reference signals for the NARX and determine the behavior of the 65 W PV module, a system made of a 0.8 W PV cell, a temperature sensor, a voltage sensor and a static neural network was used. To evaluate performance, a comparison with the traditional P&O algorithm was carried out in terms of response time and oscillations around the operating point. Simulation results demonstrated the superiority of the neural controller over P&O. Implementation results showed that approximately the same power is obtained with both controllers, but the P&O controller presents oscillations between 7 W and 10 W, in contrast to the inverse controller, which had oscillations between 1 W and 2 W.
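
    As a point of reference for the comparison reported above, the Python sketch below implements the conventional perturb-and-observe (P&O) baseline named in the abstract. It is a generic textbook version, not the paper's NARX inverse controller, and the voltage step size is an illustrative assumption.

      def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
          # One iteration of the classic perturb-and-observe (P&O) MPPT baseline.
          # v, p are the present PV voltage and power samples; v_prev, p_prev are
          # the previous ones; step is the perturbation size in volts (illustrative).
          dp, dv = p - p_prev, v - v_prev
          if dp == 0:
              return v                      # on a flat spot: hold the reference
          if (dp > 0) == (dv > 0):
              return v + step               # last move increased power: keep going
          return v - step                   # last move decreased power: reverse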

  10. SIMO optical wireless links with nonzero boresight pointing errors over M modeled turbulence channels

    Science.gov (United States)

    Varotsos, G. K.; Nistazakis, H. E.; Petkovic, M. I.; Djordjevic, G. T.; Tombras, G. S.

    2017-11-01

    Over the last years, terrestrial free-space optical (FSO) communication systems have attracted increasing scientific and commercial interest in response to the growing demand for ultra-high-bandwidth, cost-effective and secure wireless data transmission. However, due to the signal propagation through the atmosphere, the performance of such links depends strongly on the atmospheric conditions, such as weather phenomena and the turbulence effect. Additionally, their operation is affected significantly by the pointing-errors effect, which is caused by the misalignment of the optical beam between the transmitter and the receiver. In order to address this significant performance degradation, several statistical models have been proposed, while particular attention has also been given to diversity methods. Here, the turbulence-induced fading of the received optical signal irradiance is studied through the M (Málaga) distribution, which is an accurate model suitable for weak to strong turbulence conditions and unifies most of the well-known, previously proposed models. Thus, taking into account the atmospheric turbulence conditions along with the pointing-errors effect with nonzero boresight and the modulation technique that is used, we derive mathematical expressions for the estimation of the average bit error rate performance of SIMO FSO links. Finally, numerical results are given to verify the derived expressions, and Monte Carlo simulations are also provided to further validate the accuracy of the proposed analysis and the obtained mathematical expressions.

  11. The scalar-scalar-tensor inflationary three-point function in the axion monodromy model

    Science.gov (United States)

    Chowdhury, Debika; Sreenath, V.; Sriramkumar, L.

    2016-11-01

    The axion monodromy model involves a canonical scalar field that is governed by a linear potential with superimposed modulations. The modulations in the potential are responsible for a resonant behavior which gives rise to persisting oscillations in the scalar and, to a smaller extent, in the tensor power spectra. Interestingly, such spectra have been shown to lead to an improved fit to the cosmological data than the more conventional, nearly scale invariant, primordial power spectra. The scalar bi-spectrum in the model too exhibits continued modulations and the resonance is known to boost the amplitude of the scalar non-Gaussianity parameter to rather large values. An analytical expression for the scalar bi-spectrum had been arrived at earlier which, in fact, has been used to compare the model with the cosmic microwave background anisotropies at the level of three-point functions involving scalars. In this work, with future applications in mind, we arrive at a similar analytical template for the scalar-scalar-tensor cross-correlation. We also analytically establish the consistency relation (in the squeezed limit) for this three-point function. We conclude with a summary of the main results obtained.

  12. The scalar-scalar-tensor inflationary three-point function in the axion monodromy model

    International Nuclear Information System (INIS)

    Chowdhury, Debika; Sriramkumar, L.; Sreenath, V.

    2016-01-01

    The axion monodromy model involves a canonical scalar field that is governed by a linear potential with superimposed modulations. The modulations in the potential are responsible for a resonant behavior which gives rise to persisting oscillations in the scalar and, to a smaller extent, in the tensor power spectra. Interestingly, such spectra have been shown to lead to an improved fit to the cosmological data than the more conventional, nearly scale invariant, primordial power spectra. The scalar bi-spectrum in the model too exhibits continued modulations and the resonance is known to boost the amplitude of the scalar non-Gaussianity parameter to rather large values. An analytical expression for the scalar bi-spectrum had been arrived at earlier which, in fact, has been used to compare the model with the cosmic microwave background anisotropies at the level of three-point functions involving scalars. In this work, with future applications in mind, we arrive at a similar analytical template for the scalar-scalar-tensor cross-correlation. We also analytically establish the consistency relation (in the squeezed limit) for this three-point function. We conclude with a summary of the main results obtained.

  13. Detection of bursts in extracellular spike trains using hidden semi-Markov point process models.

    Science.gov (United States)

    Tokdar, Surya; Xi, Peiyi; Kelly, Ryan C; Kass, Robert E

    2010-08-01

    Neurons in vitro and in vivo have epochs of bursting or "up state" activity during which firing rates are dramatically elevated. Various methods of detecting bursts in extracellular spike trains have appeared in the literature, the most widely used apparently being Poisson Surprise (PS). A natural description of the phenomenon assumes (1) there are two hidden states, which we label "burst" and "non-burst," (2) the neuron evolves stochastically, switching at random between these two states, and (3) within each state the spike train follows a time-homogeneous point process. If in (2) the transitions from non-burst to burst and burst to non-burst states are memoryless, this becomes a hidden Markov model (HMM). For HMMs, the state durations follow exponential distributions and are highly irregular. Because observed bursting may in some cases be fairly regular, exhibiting inter-burst intervals with small variation, we relaxed this assumption. When more general probability distributions are used to describe the state transitions, the two-state point process model becomes a hidden semi-Markov model (HSMM). We developed an efficient Bayesian computational scheme to fit HSMMs to spike train data. Numerical simulations indicate the method can perform well, sometimes yielding very different results than those based on PS.
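
    To make the two-state description above concrete, the following sketch simulates a hidden semi-Markov point process: burst and non-burst epochs with gamma-distributed durations (more regular than the exponential durations of a plain HMM) and state-dependent Poisson firing. All rates and duration parameters are illustrative assumptions, and the paper's Bayesian fitting scheme is not reproduced here.

      import numpy as np

      def simulate_two_state_spike_train(t_max, rates=(5.0, 50.0),
                                         dur_shapes=(3.0, 3.0), dur_means=(2.0, 0.3),
                                         rng=None):
          # Two-state (non-burst / burst) semi-Markov point process: state durations
          # are gamma-distributed (shape > 1 gives more regular switching than the
          # exponential durations of a plain HMM); within a state, spikes follow a
          # homogeneous Poisson process with the state's firing rate in Hz.
          rng = np.random.default_rng(rng)
          t, state, spikes = 0.0, 0, []
          while t < t_max:
              shape, mean = dur_shapes[state], dur_means[state]
              end = min(t + rng.gamma(shape, mean / shape), t_max)
              n_spikes = rng.poisson(rates[state] * (end - t))
              spikes.extend(rng.uniform(t, end, n_spikes))
              t, state = end, 1 - state
          return np.sort(np.array(spikes))

      spike_times = simulate_two_state_spike_train(60.0, rng=0)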

  14. Indoor Navigation from Point Clouds: 3d Modelling and Obstacle Detection

    Science.gov (United States)

    Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H.; Mahdjoubi, L.

    2016-06-01

    In recent years, indoor modelling and navigation has become a research topic of interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair-bound people, building crisis management such as fire protection, augmented reality for gaming, tourism, or the training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts, and the real state of indoor spaces, including the position and geometry of openings such as windows and doors and the presence of obstacles, is commonly ignored. In this work, a real indoor path-planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles in the route planning and using these for re-adapting the routes according to the real state of the indoor environment depicted by the laser scanner.

  15. INDOOR NAVIGATION FROM POINT CLOUDS: 3D MODELLING AND OBSTACLE DETECTION

    Directory of Open Access Journals (Sweden)

    L. Díaz-Vilariño

    2016-06-01

    Full Text Available In recent years, indoor modelling and navigation has become a research topic of interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair-bound people, building crisis management such as fire protection, augmented reality for gaming, tourism, or the training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts, and the real state of indoor spaces, including the position and geometry of openings such as windows and doors and the presence of obstacles, is commonly ignored. In this work, a real indoor path-planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles in the route planning and using these for re-adapting the routes according to the real state of the indoor environment depicted by the laser scanner.

  16. Fixed point and anomaly mediation in partial N = 2 supersymmetric standard models

    Science.gov (United States)

    Yin, Wen

    2018-01-01

    Motivated by the simple toroidal compactification of extra-dimensional SUSY theories, we investigate a partial N = 2 supersymmetric (SUSY) extension of the standard model which has an N = 2 SUSY sector and an N = 1 SUSY sector. We point out that below the scale of the partial breaking of N = 2 to N = 1, the ratio of the Yukawa to gauge couplings embedded in the original N = 2 gauge interaction in the N = 2 sector becomes greater due to a fixed point. Since at the partial breaking scale the sfermion masses in the N = 2 sector are suppressed due to the N = 2 non-renormalization theorem, the anomaly mediation effect becomes important. If dominant, the anomaly-induced masses for the sfermions in the N = 2 sector are almost UV-insensitive due to the fixed point. Interestingly, these masses are always positive, i.e. there is no tachyonic slepton problem. With an example model, we show interesting phenomena differing from the ordinary MSSM. In particular, the dark matter particle can be a sbino, i.e. the scalar component of the N = 2 vector multiplet of U(1)_Y. To obtain the correct dark matter abundance, the mass of the sbino, as well as of the MSSM sparticles in the N = 2 sector which have a typical mass pattern of anomaly mediation, is required to be small. Therefore, this scenario can be tested at the LHC and may be further confirmed by the measurement of the N = 2 Yukawa couplings in future colliders. This model can explain dark matter, the muon g-2 anomaly, and gauge coupling unification, and relaxes some ordinary problems within the MSSM. It is also compatible with thermal leptogenesis.

  17. New models to compute solar global hourly irradiation from point cloudiness

    International Nuclear Information System (INIS)

    Badescu, Viorel; Dumitrescu, Alexandru

    2013-01-01

    Highlights: ► Kasten–Czeplak cloudy sky model is tested under the climate of South-Eastern Europe. ► Very simple cloudy sky models based on atmospheric transmission factors. ► Transmission factors are nonlinear functions of the cosine of the zenith angle. ► The new models perform well for low and intermediate cloudy skies. ► The models perform well when applied in stations other than the origin station. - Abstract: The Kasten–Czeplak (KC) model [16] is tested against data measured at five meteorological stations covering the latitudes and longitudes of Romania (South-Eastern Europe). Generally, the KC cloudy sky model underestimates the measured values. Its performance is (marginally) good enough for point cloudiness C = 0–1. The performance is good for skies with few clouds (C < 0.3), good enough for skies with a medium amount of clouds (C = 0.3–0.7) and poor for very cloudy and overcast skies. New, very simple empirical cloudy sky models are proposed. They bring two novelties with respect to the KC model. First, new basic clear sky models are used, which evaluate the direct and diffuse radiation separately. Second, some of the new models assume the atmospheric transmission factor is a nonlinear function of the cosine of the zenith angle Z. The performance of the new models is generally better than that of the KC model, for all cloudiness classes. One class of models (called S4) has been tested further. The sub-model S4TOT has been obtained by fitting the generic model S4 to all available data, for all stations. Generally, S4TOT has good accuracy in all stations for low and intermediate cloudy skies (C < 0.7). The accuracy of S4TOT is good or good enough at intermediate zenith angles (Z = 30–70°) but worse for small and large zenith angles (Z = 0–30° and Z = 70–85°, respectively). Several S4 sub-models were tested in stations different from the origin station. Almost all sub-models have good or good enough performance for skies
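
    For orientation, a commonly quoted form of the Kasten–Czeplak (KC) cloudy-sky model referred to above combines a simple clear-sky term with a cloud-cover attenuation factor. The sketch below uses the textbook coefficients, which should be treated as assumptions here, since the paper fits its own (partly nonlinear) transmission factors to the Romanian data.

      import numpy as np

      def kasten_czeplak_global(sun_elevation_deg, cloudiness):
          # Global horizontal irradiance in W/m^2 from solar elevation (degrees) and
          # fractional point cloudiness C in [0, 1]. The clear-sky term
          # (910 sin h - 30) and the cloud factor (1 - 0.75 C^3.4) are the commonly
          # quoted textbook coefficients, taken here as assumptions.
          h = np.radians(sun_elevation_deg)
          g_clear = np.maximum(910.0 * np.sin(h) - 30.0, 0.0)
          return g_clear * (1.0 - 0.75 * np.power(cloudiness, 3.4))

      example = kasten_czeplak_global(45.0, cloudiness=0.5)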

  18. Primordial blackholes and gravitational waves for an inflection-point model of inflation

    Energy Technology Data Exchange (ETDEWEB)

    Choudhury, Sayantan [Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 B.T. Road, Kolkata 700 108 (India); Mazumdar, Anupam [Consortium for Fundamental Physics, Physics Department, Lancaster University, LA1 4YB (United Kingdom)

    2014-06-02

    In this article we provide a new closed relationship between the cosmic abundance of primordial gravitational waves and primordial black holes originating from the initial inflationary perturbations, for inflection-point models of inflation where inflation occurs below the Planck scale. From the current Planck constraints on the tensor-to-scalar ratio and the running of the spectral tilt, and from the abundance of dark matter in the universe, we deduce a strict bound on the current abundance of primordial black holes, 9.99712×10^{-3} < Ω_PBH h^2 < 9.99736×10^{-3}.

  19. A business process model as a starting point for tight cooperation among organizations

    Directory of Open Access Journals (Sweden)

    O. Mysliveček

    2006-01-01

    Full Text Available Outsourcing and other kinds of tight cooperation among organizations are more and more necessary for success in all markets (markets for high-technology products are particularly affected). Thus it is important for companies to be able to set up all kinds of cooperation effectively. A business process model (BPM) is a suitable starting point for this future cooperation. In this paper the process of setting up such cooperation is outlined, as well as why it is important for business success.

  20. Zero-point energies in the two-center shell model. II

    International Nuclear Information System (INIS)

    Reinhard, P.-G.

    1978-01-01

    The zero-point energy (ZPE) contained in the potential-energy surface of a two-center shell model (TCSM) is evaluated. Extending previous work, the author here uses the full TCSM with l·s force, smoothing and asymmetry. The results show a critical dependence on the height of the potential barrier between the centers. The ZPE turns out to be non-negligible along the fission path for 236U, and even more so for lighter systems. It is negligible for surface quadrupole motion and it is just on the fringe of being negligible for motion along the asymmetry coordinate. (Auth.)

  1. Zero-point energies in the two-center shell model

    International Nuclear Information System (INIS)

    Reinhard, P.G.

    1975-01-01

    The zero-point energies (ZPE) contained in the potential-energy surfaces (PES) of a two-center shell model are evaluated. For the c.m. motion of the system as a whole the kinetic ZPE was found to be negligible, whereas it varies appreciably for the rotational and oscillation modes (about 5-9 MeV). For the latter two modes the ZPE also depends sensitively on the changing pairing structure, which can induce strong local fluctuations, particularly in light nuclei. The potential ZPE is very small for heavy nuclei, but might just become important in light nuclei. (Auth.)

  2. Dynamical simulation of a linear sigma model near the critical point

    Energy Technology Data Exchange (ETDEWEB)

    Wesp, Christian; Meistrenko, Alex; Greiner, Carsten [Institut fuer Theoretische Physik, Goethe-Universitaet Frankfurt, Max-von-Laue-Strasse 1, D-60438 Frankfurt (Germany); Hees, Hendrik van [Frankfurt Institute for Advanced Studies, Ruth-Moufang-Strasse 1, D-60438 Frankfurt (Germany)

    2014-07-01

    The intention of this study is the search for signatures of the chiral phase transition. To investigate the impact of fluctuations, e.g. of the baryon number, on the transition or a critical point, the linear sigma model is treated in a dynamical 3+1D numerical simulation. Chiral fields are approximated as classical fields, and quarks are described by quasi-particles in a Vlasov equation. Additional dynamics are implemented through quark-quark and quark-sigma-field interactions. For a consistent description of field-particle interactions, a new Monte-Carlo-Langevin-like formalism has been developed and is discussed.

  3. A note on a boundary sine-Gordon model at the free-Fermion point

    Science.gov (United States)

    Murgan, Rajan

    2018-02-01

    We investigate the free-Fermion point of a boundary sine-Gordon model with nondiagonal boundary interactions for the ground state using auxiliary functions obtained from T-Q equations of a corresponding inhomogeneous open spin-1/2 XXZ chain with nondiagonal boundary terms. In particular, we obtain the Casimir energy. Our result for the Casimir energy is shown to agree with the result from the TBA approach. The analytical result for the effective central charge in the ultraviolet (UV) limit is also verified from the plots of effective central charge for intermediate values of volume.

  4. Colour computer-generated holography for point clouds utilizing the Phong illumination model.

    Science.gov (United States)

    Symeonidou, Athanasia; Blinder, David; Schelkens, Peter

    2018-04-16

    A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm provides realistic looking objects without any noteworthy increase to the computational cost.
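
    The Phong term integrated into the CGH pipeline above combines ambient, diffuse and specular contributions per point. A minimal sketch of that illumination model is given below with illustrative coefficients; the hologram-synthesis part of the method is not shown.

      import numpy as np

      def phong_intensity(normal, light_dir, view_dir,
                          ka=0.1, kd=0.7, ks=0.2, shininess=16.0):
          # Phong illumination at a single point: ambient + diffuse + specular.
          # light_dir points from the surface towards the light source and view_dir
          # towards the viewer; all coefficients are illustrative placeholders.
          n, l, v = (np.asarray(x, dtype=float) / np.linalg.norm(x)
                     for x in (normal, light_dir, view_dir))
          n_dot_l = max(float(np.dot(n, l)), 0.0)
          r = 2.0 * np.dot(n, l) * n - l            # reflected light direction
          specular = max(float(np.dot(r, v)), 0.0) ** shininess if n_dot_l > 0 else 0.0
          return ka + kd * n_dot_l + ks * specular

      brightness = phong_intensity([0, 0, 1], [0, 1, 1], [0, 0, 1])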

  5. Theory of fluctuations and parametric noise in a point nuclear reactor model

    International Nuclear Information System (INIS)

    Rodriguez, M.A.; San Miguel, M.; Sancho, J.M.

    1984-01-01

    We present a joint description of internal fluctuations and parametric noise in a point nuclear reactor model in which delayed neutrons and a detector are considered. We obtain kinetic equations for the first moments and define effective kinetic parameters which take into account the effect of parametric Gaussian white noise. We comment on the validity of Langevin approximations for this problem. We propose a general method to deal with weak but otherwise arbitrary non-white parametric noise. Exact kinetic equations are derived for Gaussian non-white noise. (author)

  6. A sliding point contact model for the finite element structures code EURDYN

    International Nuclear Information System (INIS)

    Smith, B.L.

    1986-01-01

    A method is developed by which sliding point contact between two moving deformable structures may be incorporated within a lumped-mass finite element formulation based on displacements. The method relies on a simple mechanical interpretation of the contact constraint in terms of equivalent nodal forces and avoids the use of nodal connectivity via a master-slave arrangement or a pseudo contact element. The methodology has been implemented into the EURDYN finite element program for the (2D axisymmetric) version coupled to the hydro code SEURBNUK. Sample calculations are presented illustrating the use of the model in various contact situations. Effects due to separation and impact of structures are also included. (author)

  7. Fragmentation approach to the point-island model with hindered aggregation: Accessing the barrier energy

    Science.gov (United States)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2017-07-01

    We study the effect of hindered aggregation on the island formation process in one- (1D) and two-dimensional (2D) point-island models for epitaxial growth with arbitrary critical nucleus size i. In our model, the attachment of monomers to preexisting islands is hindered by an additional attachment barrier, characterized by a length l_a. For l_a = 0 the islands behave as perfect sinks, while for l_a → ∞ they behave as reflecting boundaries. For intermediate values of l_a, the system exhibits a crossover between two different kinds of processes, diffusion-limited aggregation and attachment-limited aggregation. We calculate the growth exponents of the density of islands and monomers for the low-coverage and aggregation regimes. The capture-zone (CZ) distributions are also calculated for different values of i and l_a. In order to obtain a good spatial description of the nucleation process, we propose a fragmentation model, which is based on an approximate description of nucleation inside the gaps for 1D and the CZs for 2D. In both cases, the nucleation is described by using two different physically rooted probabilities, which are related to the microscopic parameters of the model (i and l_a). We test our analytical model with extensive numerical simulations and previously established results. The proposed model describes excellently the statistical behavior of the system for arbitrary values of l_a and i = 1, 2, and 3.

  8. Point, surface and volumetric heat sources in the thermal modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder-based additive manufacturing technique suitable for producing high-precision metal parts. However, distortions and residual stresses arise within products during SLM because of the high temperature gradients created by the laser heating. Residual stresses limit the load resistance of the product and may even lead to fracture during the build process. It is therefore of paramount importance to predict the level of part distortion and residual stress as a function of the SLM process parameters, which requires reliable thermal modelling of the SLM process. Consequently, a key question is how to describe the laser source appropriately. Reasonable simplification of the laser representation is crucial for the computational efficiency of the thermal model of the SLM process. In this paper, a semi-analytical thermal modelling approach is first described. Subsequently, the laser heating is modelled using point, surface and volumetric sources, in order to compare the influence of different laser source geometries on the thermal history predicted by the model. The present work provides guidelines on the appropriate representation of the laser source in the thermal modelling of the SLM process.
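
    One classical reference solution for the point-source idealisation discussed above is Rosenthal's quasi-steady temperature field for a heat source moving over a semi-infinite solid. The sketch below implements that textbook formula as a point of comparison only; the paper's semi-analytical model, and its surface and volumetric sources, are different, and all numerical values in the example are illustrative.

      import numpy as np

      def rosenthal_point_source(xi, y, z, power, speed, conductivity,
                                 diffusivity, t_ambient=293.0):
          # Quasi-steady temperature field of a point heat source moving at `speed`
          # over a semi-infinite solid (Rosenthal solution). xi = x - v*t is the
          # coordinate along the scan direction in the frame of the source.
          # Classical reference solution only; not the paper's thermal model.
          r = np.sqrt(xi**2 + y**2 + z**2)
          return t_ambient + power / (2.0 * np.pi * conductivity * r) * np.exp(
              -speed * (r + xi) / (2.0 * diffusivity))

      # Illustrative steel-like values: 200 W absorbed power, 1 m/s scan speed,
      # evaluated 0.5 mm behind the source on the surface
      t = rosenthal_point_source(xi=-0.5e-3, y=0.0, z=0.0, power=200.0,
                                 speed=1.0, conductivity=30.0, diffusivity=8e-6)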

  9. Spatial dispersion modeling of 90Sr by point cumulative semivariogram at Keban Dam Lake, Turkey

    International Nuclear Information System (INIS)

    Kuelahci, Fatih; Sen, Zekai

    2007-01-01

    Spatial analysis of the artificial radionuclide 90Sr, present as a consequence of global fallout and the Chernobyl nuclear accident, has been carried out using the point cumulative semivariogram (PCSV) technique, based on measurements at 40 surface water stations in Keban Dam Lake during March, April, and May 2006. This technique is a convenient tool for obtaining the regional variability features around each sampling point, which also yields the structural effects in the vicinity of that point. It presents the regional effect of all the other sites within the study area on the site concerned. In order to examine the change of 90Sr, five models are constructed. Additionally, the technique provides a measure of the cumulative similarity of the regional variable, 90Sr, around any measurement site, and hence it is possible to draw regional similarity maps at any desired distance around each station. In this paper, such similarity maps are drawn for a set of distances. The 90Sr activities in the lake show the maximum similarity at distances of approximately 4.5 km from the stations.

  10. Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    Science.gov (United States)

    Sun, Shaohui

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though a huge volume of work has been done, many problems still remain unsolved. Automation is one of the key focus areas in this research. In this work, a fast, completely automated method to create 3D watertight building models from airborne LiDAR (Light Detection and Ranging) point clouds is presented. The developed method analyzes the scene content and produces multi-layer rooftops, with complex rigorous boundaries and vertical walls that connect rooftops to the ground. The graph cuts algorithm is used to separate vegetative elements from the rest of the scene content, based on a local analysis of the properties of the local implicit surface patch. The ground terrain and building rooftop footprints are then extracted utilizing the developed strategy, a two-step hierarchical Euclidean clustering. The method presented here adopts a "divide-and-conquer" scheme. Once the building footprints are segmented from the terrain and vegetative areas, the whole scene is divided into individual independent processing units which represent potential points on the rooftop. For each individual building region, significant features on the rooftop are further detected using a specifically designed region-growing algorithm with surface smoothness constraints. The principal orientation of each building rooftop feature is calculated using a minimum bounding box fitting technique, and is used to guide the refinement of shapes and boundaries of the rooftop parts. Boundaries for all of these features are refined for the purpose of producing a strict description. Once the description of the rooftops is achieved, polygonal mesh models are generated by creating surface patches with outlines defined by detected

  11. Point process models for localization and interdependence of punctate cellular structures.

    Science.gov (United States)

    Li, Ying; Majarian, Timothy D; Naik, Armaghan W; Johnson, Gregory R; Murphy, Robert F

    2016-07-01

    Accurate representations of cellular organization for multiple eukaryotic cell types are required for creating predictive models of dynamic cellular function. To this end, we have previously developed the CellOrganizer platform, an open source system for generative modeling of cellular components from microscopy images. CellOrganizer models capture the inherent heterogeneity in the spatial distribution, size, and quantity of different components among a cell population. Furthermore, CellOrganizer can generate quantitatively realistic synthetic images that reflect the underlying cell population. A current focus of the project is to model the complex, interdependent nature of organelle localization. We built upon previous work on developing multiple non-parametric models of organelles or structures that show punctate patterns. The previous models described the relationships between the subcellular localization of puncta and the positions of cell and nuclear membranes and microtubules. We extend these models to consider the relationship to the endoplasmic reticulum (ER), and to consider the relationship between the positions of different puncta of the same type. Our results do not suggest that the punctate patterns we examined are dependent on ER position or inter- and intra-class proximity. With these results, we built classifiers to update previous assignments of proteins to one of 11 patterns in three distinct cell lines. Our generative models demonstrate the ability to construct statistically accurate representations of puncta localization from simple cellular markers in distinct cell types, capturing the complex phenomena of cellular structure interaction with little human input. This protocol represents a novel approach to vesicular protein annotation, a field that is often neglected in high-throughput microscopy. These results suggest that spatial point process models provide useful insight with respect to the spatial dependence between cellular structures.

  12. Brightness and darkness as perceptual dimensions

    NARCIS (Netherlands)

    Vladusich, T.; Lucassen, M.P.; Cornelissen, F.W.

    2007-01-01

    A common-sense assumption concerning visual perception states that brightness and darkness cannot coexist at a given spatial location. One corollary of this assumption is that achromatic colors, or perceived grey shades, are contained in a one-dimensional (1-D) space varying from bright to dark. The

  13. SURFACE PHOTOMETRY OF LOW SURFACE BRIGHTNESS GALAXIES

    NARCIS (Netherlands)

    DEBLOK, WJG; VANDERHULST, JM; BOTHUN, GD

    1995-01-01

    Low surface brightness (LSB) galaxies are galaxies dominated by an exponential disc whose central surface brightness is much fainter than the value of mu(B)(0) = 21.65 +/- 0.30 mag arcsec(-2) found by Freeman. In this paper we present broadband photometry of a sample of 21 late-type LSB galaxies.

  14. Unidentified point sources in the IRAS minisurvey

    Science.gov (United States)

    Houck, J. R.; Soifer, B. T.; Neugebauer, G.; Beichman, C. A.; Aumann, H. H.; Clegg, P. E.; Gillett, F. C.; Habing, H. J.; Hauser, M. G.; Low, F. J.

    1984-01-01

    Nine bright, point-like 60 micron sources have been selected from the sample of 8709 sources in the IRAS minisurvey. These sources have no counterparts in a variety of catalogs of nonstellar objects. Four objects have no visible counterparts, while five have faint stellar objects visible in the error ellipse. These sources do not resemble objects previously known to be bright infrared sources.

  15. Modeling non-point source pollutants in the vadose zone: Back to the basics

    Science.gov (United States)

    Corwin, Dennis L.; Letey, John, Jr.; Carrillo, Marcia L. K.

    More than ever before in the history of scientific investigation, modeling is viewed as a fundamental component of the scientific method because of the relatively recent development of the computer. No longer must the scientific investigator be confined to artificially isolated studies of individual processes that can lead to oversimplified and sometimes erroneous conceptions of larger phenomena. Computer models now enable scientists to attack problems related to open systems such as climatic change, and the assessment of environmental impacts, where the whole of the interactive processes are greater than the sum of their isolated components. Environmental assessment involves the determination of change of some constituent over time. This change can be measured in real time or predicted with a model. The advantage of prediction, like preventative medicine, is that it can be used to alter the occurrence of potentially detrimental conditions before they are manifest. The much greater efficiency of preventative, rather than remedial, efforts strongly justifies the need for an ability to accurately model environmental contaminants such as non-point source (NPS) pollutants. However, the environmental modeling advances that have accompanied computer technological development are a mixed blessing. Where once we had a plethora of discordant data without a holistic theory, now the pendulum has swung so that we suffer from a growing stockpile of models of which a significant number have never been confirmed or even attempts made to confirm them. Modeling has become an end in itself rather than a means because of limited research funding, the high cost of field studies, limitations in time and patience, difficulty in cooperative research and pressure to publish papers as quickly as possible. Modeling and experimentation should be ongoing processes that reciprocally enhance one another with sound, comprehensive experiments serving as the building blocks of models and models

  16. Brightness Alteration with Interweaving Contours

    Directory of Open Access Journals (Sweden)

    Sergio Roncato

    2012-12-01

    Full Text Available Chromatic induction is observed whenever the perceived colour of a target surface shifts towards the hue of a neighbouring surface. Some vivid manifestations may be seen on a white background where thin coloured lines have been drawn (assimilation) or when lines of different colours are collinear (neon effect) or adjacent (watercolour) to each other. This study examines a particular colour induction that manifests in concomitance with an opposite effect of colour saturation (or anti-spread). The two phenomena can be observed when a repetitive pattern is drawn in which thin outline contours intercept wider contours or surfaces: colour spreading appears to fill the surface occupied by the surfaces or thick lines, whereas the background traversed by thin lines is seen as brighter, or filled with a saturated white. These phenomena were first observed by Bozzi (1975) and Kanizsa (1979) in figural conditions that did not allow them to document their conjunction. Here we illustrate various manifestations of this twofold phenomenon and compare its effects with the known effects of brightness and colour induction. Some conjectures on the nature of these effects are discussed.

  17. Point-Mass Model for Nano-Patterning Using Dip-Pen Nanolithography (DPN

    Directory of Open Access Journals (Sweden)

    Seok-Won Kang

    2011-04-01

    Full Text Available Micro-cantilevers are frequently used as scanning probes and sensors in micro-electromechanical systems (MEMS). Usually, micro-cantilever based sensors operate by detecting changes in the cantilever vibration modes (e.g., bending or torsional vibration frequency) or in surface stresses when a target analyte is adsorbed on the surface. The catalyst for chemical reactions (i.e., for a specific analyte) can be deposited on micro-cantilevers by using the Dip-Pen Nanolithography (DPN) technique. In this study, we simulate the vibration mode in nano-patterning processes by using a Point-Mass Model (or Lumped Parameter Model). The results from the simulations are used to derive the stability of the writing and reading modes for a particular driving frequency during the DPN process. In addition, we analyze the sensitivity of the tip-sample interaction forces in fluid (ink solution) by utilizing the Derjaguin-Muller-Toporov (DMT) contact theory.
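
    A minimal version of the point-mass (lumped-parameter) cantilever model with a DMT tip-sample force, as invoked above, can be written as a driven, damped oscillator. The sketch below uses illustrative cantilever and material parameters, not those of the DPN experiments, and omits the fluid (ink) environment.

      import numpy as np
      from scipy.integrate import solve_ivp

      def dmt_force(gap, hamaker=1e-19, radius=20e-9, a0=0.3e-9, e_star=1.0e9):
          # DMT tip-sample force: van der Waals attraction outside contact,
          # Hertzian repulsion once the separation falls below a0 (values illustrative).
          if gap > a0:
              return -hamaker * radius / (6.0 * gap**2)
          return (-hamaker * radius / (6.0 * a0**2)
                  + (4.0 / 3.0) * e_star * np.sqrt(radius) * (a0 - gap)**1.5)

      def rhs(t, y, m, c, k, f_drive, omega, rest_gap):
          # Point-mass model: m z'' + c z' + k z = F_drive cos(w t) + F_ts(gap)
          z, zdot = y
          f_ts = dmt_force(rest_gap + z)          # separation = rest gap + deflection
          return [zdot, (f_drive * np.cos(omega * t) - c * zdot - k * z + f_ts) / m]

      # Illustrative cantilever: k = 40 N/m, f0 = 300 kHz, Q = 300, 15 nm rest gap
      k, f0, Q = 40.0, 300e3, 300.0
      omega0 = 2.0 * np.pi * f0
      m, c = k / omega0**2, k / (omega0 * Q)
      sol = solve_ivp(rhs, (0.0, 2e-4), [0.0, 0.0],
                      args=(m, c, k, 2.5e-9, omega0, 15e-9),
                      max_step=1.0 / (50.0 * f0))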

  18. Photovoltaic System Modeling with Fuzzy Logic Based Maximum Power Point Tracking Algorithm

    Directory of Open Access Journals (Sweden)

    Hasan Mahamudul

    2013-01-01

    Full Text Available This paper presents a novel modeling technique for a PV module with a fuzzy logic based MPPT algorithm and a boost converter in the Simulink environment. The prime contributions of this work are the simplification of the PV modeling technique and the implementation of a fuzzy based MPPT system to track the maximum power efficiently. The main points highlighted in this paper are the demonstration of precise control of the duty cycle under various atmospheric conditions, the illustration of the PV characteristic curves, and the operational analysis of the converter. The proposed system has been applied to three different PV modules: SOLKAR 36 W, BP MSX 60 W, and KC85T 87 W. Finally, the resulting data have been compared with the theoretical predictions and the manufacturer-specified values to ensure the validity of the system.
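
    For context, the widely used single-diode (five-parameter) PV module model that such simplified modeling techniques approximate is sketched below; the implicit I-V relation is solved pointwise with a root finder. All numeric values are illustrative for a generic ~36-cell module rather than the parameters of the SOLKAR, BP or KC85T modules listed above.

      import numpy as np
      from scipy.optimize import brentq

      def pv_current(v, i_ph=8.5, i_0=1.5e-7, n=1.3, n_s=36,
                     r_s=0.2, r_sh=200.0, t=298.15):
          # Single-diode PV module model: solve I = Iph - Id - Ish at voltage v.
          # Standard five-parameter model with illustrative values, shown only to
          # indicate the kind of characteristic a simplified PV model reproduces.
          k_b, q = 1.380649e-23, 1.602176634e-19
          vt = n * n_s * k_b * t / q                    # modified thermal voltage
          def residual(i):
              return (i_ph - i_0 * np.expm1((v + i * r_s) / vt)
                      - (v + i * r_s) / r_sh - i)
          return brentq(residual, -2.0, i_ph + 1.0)     # current is the root here

      voltages = np.linspace(0.0, 21.0, 100)
      currents = np.array([pv_current(v) for v in voltages])
      v_mpp = voltages[np.argmax(voltages * currents)]  # maximum power point voltage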

  19. Investigation and modeling of the anomalous yield point phenomenon in pure tantalum

    Energy Technology Data Exchange (ETDEWEB)

    Colas, D. [Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR 5209 CNRS, Université de Bourgogne, 9 avenue Alain Savary, BP 17870, 21078 Dijon Cedex (France); CEA Valduc, 21120 Is-sur-Tille (France); Mines ParisTech, Centre des Matériaux, CNRS, UMR 7633, BP 87, 91003 Evry Cedex (France); Finot, E. [Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR 5209 CNRS, Université de Bourgogne, 9 avenue Alain Savary, BP 17870, 21078 Dijon Cedex (France); Flouriot, S. [CEA Valduc, 21120 Is-sur-Tille (France); Forest, S. [Mines ParisTech, Centre des Matériaux, CNRS, UMR 7633, BP 87, 91003 Evry Cedex (France); Mazière, M., E-mail: matthieu.maziere@mines-paristech.fr [Mines ParisTech, Centre des Matériaux, CNRS, UMR 7633, BP 87, 91003 Evry Cedex (France); Paris, T. [CEA Valduc, 21120 Is-sur-Tille (France)

    2014-10-06

    The monotonic and cyclic behavior of commercially pure tantalum has been investigated at room temperature, in order to capture and understand the occurrence of the anomalous yield point phenomenon. Interrupted tests have been performed, with strain reversals (tensile or compressive loading) after an aging period. The stress drop is attributed to the interactions between dislocations and solute atoms (oxygen), and its macroscopic occurrence is not systematically observed. InfraRed Thermography (IRT) measurements, supported by Scanning Electron Microscopy (SEM) pictures of the polished gauge length of a specimen during an interrupted tensile test, reveal the nucleation and propagation of a strain localization band. The KEMC (Kubin–Estrin–McCormick) phenomenological model accounting for strain aging has been identified for several loadings and strain rates at room temperature. Simulations on the full specimen using the KEMC model do not show strain localization, because of the competition between viscosity and strain localization. However, a slight misalignment of the sample can promote strain localization.

  20. Discrete Model Predictive Control-Based Maximum Power Point Tracking for PV Systems: Overview and Evaluation

    DEFF Research Database (Denmark)

    Lashab, Abderezak; Sera, Dezso; Guerrero, Josep M.

    2018-01-01

    The main objective of this work is to provide an overview and evaluation of discrete model predictive control-based maximum power point tracking (MPPT) for PV systems. A large number of MPC based MPPT methods have been recently introduced in the literature with very promising performance, however......, an in-depth investigation and comparison of these methods have not been carried out yet. Therefore, this paper has set out to provide an in-depth analysis and evaluation of MPC based MPPT methods applied to various common power converter topologies. The performance of MPC based MPPT is directly linked...... with the converter topology, and it is also affected by the accurate determination of the converter parameters; sensitivity to converter parameter variations is also investigated. The static and dynamic performance of the trackers are assessed according to the EN 50530 standard, using detailed simulation models...

  1. Bohr model description of the critical point for the first order shape phase transition

    Science.gov (United States)

    Budaca, R.; Buganu, P.; Budaca, A. I.

    2018-01-01

    The critical point of the shape phase transition between spherical and axially deformed nuclei is described by a collective Bohr Hamiltonian with a sextic potential having simultaneous spherical and deformed minima of the same depth. The particular choice of the potential as well as the scaled and decoupled nature of the total Hamiltonian leads to a model with a single free parameter connected to the height of the barrier which separates the two minima. The solutions are found through the diagonalization in a basis of Bessel functions. The basis is optimized for each value of the free parameter by means of a boundary deformation which assures the convergence of the solutions for a fixed basis dimension. Analyzing the spectral properties of the model, as a function of the barrier height, revealed instances with shape coexisting features which are considered for detailed numerical applications.

  2. Bohr model description of the critical point for the first order shape phase transition

    Directory of Open Access Journals (Sweden)

    R. Budaca

    2018-01-01

    Full Text Available The critical point of the shape phase transition between spherical and axially deformed nuclei is described by a collective Bohr Hamiltonian with a sextic potential having simultaneous spherical and deformed minima of the same depth. The particular choice of the potential as well as the scaled and decoupled nature of the total Hamiltonian leads to a model with a single free parameter connected to the height of the barrier which separates the two minima. The solutions are found through the diagonalization in a basis of Bessel functions. The basis is optimized for each value of the free parameter by means of a boundary deformation which assures the convergence of the solutions for a fixed basis dimension. Analyzing the spectral properties of the model, as a function of the barrier height, revealed instances with shape coexisting features which are considered for detailed numerical applications.

  3. Mutual information as a two-point correlation function in stochastic lattice models

    International Nuclear Information System (INIS)

    Müller, Ulrich; Hinrichsen, Haye

    2013-01-01

    In statistical physics entropy is usually introduced as a global quantity which expresses the amount of information that would be needed to specify the microscopic configuration of a system. However, for lattice models with infinitely many possible configurations per lattice site it is also meaningful to introduce entropy as a local observable that describes the information content of a single lattice site. Likewise, the mutual information between two sites can be interpreted as a two-point correlation function which quantifies how much information a lattice site has about the state of another one and vice versa. Studying a particular growth model we demonstrate that the mutual information exhibits scaling properties that are consistent with the established phenomenological scaling picture. (paper)
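
    The mutual information between two lattice sites discussed above can be estimated directly from sampled configurations with a plug-in estimator of I(A;B) = Σ p(a,b) ln[p(a,b)/(p(a)p(b))]. A minimal sketch follows; the specific growth model studied in the paper is not simulated here, and the toy example at the end is purely illustrative.

      import numpy as np

      def mutual_information(site_a, site_b):
          # Plug-in estimate (in nats) of the mutual information between the local
          # states observed at two lattice sites over many sampled configurations.
          a_vals, a_idx = np.unique(site_a, return_inverse=True)
          b_vals, b_idx = np.unique(site_b, return_inverse=True)
          joint = np.zeros((a_vals.size, b_vals.size))
          np.add.at(joint, (a_idx, b_idx), 1.0)
          joint /= joint.sum()
          pa = joint.sum(axis=1, keepdims=True)
          pb = joint.sum(axis=0, keepdims=True)
          nz = joint > 0
          return float(np.sum(joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])))

      # Toy example: two correlated binary sites (site b copies site a 80% of the time)
      rng = np.random.default_rng(0)
      a = rng.integers(0, 2, 10000)
      b = np.where(rng.random(10000) < 0.8, a, 1 - a)
      mi = mutual_information(a, b)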

  4. EVALUATION MODEL FOR PAVEMENT SURFACE DISTRESS ON 3D POINT CLOUDS FROM MOBILE MAPPING SYSTEM

    Directory of Open Access Journals (Sweden)

    K. Aoki

    2012-07-01

    Full Text Available This paper proposes a methodology to evaluate pavement surface distress for maintenance planning of road pavement using 3D point clouds from a Mobile Mapping System (MMS). Maintenance planning of road pavement requires scheduled rehabilitation activities for damaged pavement sections to keep a high level of service. The importance of this performance-based infrastructure asset management based on actual inspection data is globally recognized. Inspection methodologies for the road pavement surface, namely semi-automatic measurement systems utilizing inspection vehicles to measure surface deterioration indexes such as cracking, rutting and IRI, have already been introduced and are capable of continuously archiving pavement performance data. However, scheduled inspection using automatic measurement vehicles is costly, depending on the instruments' specifications and the inspection interval. Therefore, the implementation of road maintenance work, especially for local governments, is difficult considering cost-effectiveness. Against this background, this research proposes methodologies for a simplified evaluation of the pavement surface and an assessment of damaged pavement sections, using 3D point cloud data acquired to build urban 3D models. The simplified evaluation results of the road surface were able to provide useful information for road administrators to identify pavement sections requiring a detailed examination or immediate repair work. In particular, the regularity of the 3D point cloud sequence was evaluated using Chow-test and F-test models, extracting the sections where a structural change in the coordinate values was detected. Finally, the validity of the methodology was investigated by conducting a case study dealing with actual inspection data of local roads.
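
    The Chow test mentioned above checks whether a linear trend fitted to a data series changes structurally at a candidate split point. A generic sketch is given below; how the paper maps the test onto road-surface point-cloud sections is not reproduced, and the synthetic example data are purely illustrative.

      import numpy as np
      from scipy.stats import f as f_dist

      def chow_test(x, y, split):
          # Chow test for a structural change in a simple linear trend at index `split`.
          # Returns the F statistic and its p-value for the two-regime comparison.
          def rss(xs, ys):
              design = np.column_stack([np.ones_like(xs), xs])
              coef, *_ = np.linalg.lstsq(design, ys, rcond=None)
              resid = ys - design @ coef
              return float(resid @ resid)
          k = 2                                   # parameters per regime (intercept, slope)
          rss_pooled = rss(x, y)
          rss_split = rss(x[:split], y[:split]) + rss(x[split:], y[split:])
          dof = len(x) - 2 * k
          f_stat = ((rss_pooled - rss_split) / k) / (rss_split / dof)
          return f_stat, float(f_dist.sf(f_stat, k, dof))

      # Synthetic longitudinal profile with a step change at index 120
      x = np.arange(200, dtype=float)
      y = 0.01 * x + np.where(x >= 120, 0.5, 0.0) \
          + np.random.default_rng(0).normal(0.0, 0.05, 200)
      f_stat, p_value = chow_test(x, y, split=120)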

  5. A new calibration model for pointing a radio telescope that considers nonlinear errors in the azimuth axis

    International Nuclear Information System (INIS)

    Kong De-Qing; Wang Song-Gen; Zhang Hong-Bo; Wang Jin-Qing; Wang Min

    2014-01-01

    A new pointing calibration model for a radio telescope is presented, which considers nonlinear errors in the azimuth axis. For a large radio telescope, in particular one with a turntable, it is difficult to correct pointing errors using a traditional linear calibration model, because errors produced by the wheel-on-rail or center-bearing structures are generally nonlinear. A Fourier expansion is made for the oblique error and for the parameters describing the inclination direction along the azimuth axis, based on the linear calibration model, and a new calibration model for pointing is derived. The new pointing model is applied to the 40 m radio telescope administered by Yunnan Observatories, which is a telescope that uses a turntable. The results show that this model can significantly reduce the residual systematic errors due to nonlinearity in the azimuth axis compared with the linear model.
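
    The idea of augmenting a linear pointing model with a Fourier expansion of the azimuth-dependent error, as described above, can be sketched as an ordinary least-squares fit. The term set below is a simplified assumption and does not reproduce the paper's full calibration model; the synthetic data at the end are illustrative.

      import numpy as np

      def fit_pointing_terms(az, el, d_el, n_harmonics=3):
          # Least-squares fit of an elevation pointing-error model whose azimuth-axis
          # tilt contribution is expanded in sin/cos harmonics of azimuth, alongside
          # a constant offset and a cos(el) gravity-like term (assumed term set).
          az, el, d_el = (np.asarray(a, dtype=float) for a in (az, el, d_el))
          cols = [np.ones_like(az), np.cos(el)]
          for k in range(1, n_harmonics + 1):
              cols += [np.sin(k * az), np.cos(k * az)]
          design = np.column_stack(cols)
          coeffs, *_ = np.linalg.lstsq(design, d_el, rcond=None)
          residuals = d_el - design @ coeffs
          return coeffs, residuals

      # Synthetic offsets containing a second-harmonic azimuth error plus noise
      az = np.linspace(0.0, 2.0 * np.pi, 100)
      el = np.linspace(np.pi / 6, np.pi / 3, 100)
      d_el = 5e-5 * np.cos(2 * az) + np.random.default_rng(1).normal(0.0, 1e-5, az.size)
      coeffs, residuals = fit_pointing_terms(az, el, d_el)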

  6. Numerical simulation of a lattice polymer model at its integrable point

    International Nuclear Information System (INIS)

    Bedini, A; Owczarek, A L; Prellberg, T

    2013-01-01

    We revisit an integrable lattice model of polymer collapse using numerical simulations. This model was first studied by Blöte and Nienhuis (1989 J. Phys. A: Math. Gen. 22 1415) and it describes polymers with some attraction, providing thus a model for the polymer collapse transition. At a particular set of Boltzmann weights the model is integrable and the exponents ν = 12/23 ≈ 0.522 and γ = 53/46 ≈ 1.152 have been computed via identification of the scaling dimensions x_t = 1/12 and x_h = −5/48. We directly investigate the polymer scaling exponents via Monte Carlo simulations using the pruned-enriched Rosenbluth method algorithm. By simulating this polymer model for walks up to length 4096 we find ν = 0.576(6) and γ = 1.045(5), which are clearly different from the predicted values. Our estimate for the exponent ν is compatible with the known θ-point value of 4/7 and in agreement with very recent numerical evaluation by Foster and Pinettes (2012 J. Phys. A: Math. Theor. 45 505003). (paper)

  7. Liquid-liquid critical point in a simple analytical model of water

    Science.gov (United States)

    Urbic, Tomaz

    2016-10-01

    A statistical model for a simple three-dimensional Mercedes-Benz model of water was used to study phase diagrams. At a simple level, this model describes the thermal and volumetric properties of waterlike molecules. A molecule is represented as a soft sphere with four directions in which hydrogen bonds can be formed. Two neighboring waters can interact through a van der Waals interaction or an orientation-dependent hydrogen-bonding interaction. For pure water, we explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility and found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations. The model also exhibits two critical points: one for the liquid-gas transition and one for the transition between the low-density and high-density fluid. Coexistence curves and a Widom line for the maximum and minimum in the thermal expansion coefficient divide the phase space of the model into three parts: one part is the gas region, the second a high-density liquid, and the third a low-density liquid.

  8. 3DVEM Software Modules for Efficient Management of Point Clouds and Photorealistic 3d Models

    Science.gov (United States)

    Fabado, S.; Seguí, A. E.; Cabrelles, M.; Navarro, S.; García-De-San-Miguel, D.; Lerma, J. L.

    2013-07-01

    Cultural heritage managers in general, and information users in particular, are not usually accustomed to dealing with advanced hardware and software. On the contrary, providers of metric surveys mostly apply the latest developments to real-life conservation and restoration projects. This paper addresses the software issue of handling and managing either 3D point clouds or (photorealistic) 3D models to bridge the gap between information users and information providers as regards the management of information which both share as a tool for decision-making, analysis, visualization and management. There are not many viewers specifically designed to handle, manage and easily create animations of architectural and/or archaeological 3D objects, monuments and sites, among others. The 3DVEM - 3D Viewer, Editor & Meter software will be introduced to the scientific community, as well as 3DVEM - Live and 3DVEM - Register. The advantages of managing projects with both sets of data, 3D point clouds and photorealistic 3D models, will be introduced. Different visualizations of real documentation projects in the fields of architecture, archaeology and industry will be presented. Emphasis will be placed on highlighting the features of new user-friendly software for managing virtual projects. Furthermore, the ease of creating controlled interactive animations (both walk-through and fly-through), either on the fly or as a traditional movie file, will be demonstrated through 3DVEM - Live.

  9. Is zero-point energy physical? A toy model for Casimir-like effect

    International Nuclear Information System (INIS)

    Nikolić, Hrvoje

    2017-01-01

    Zero-point energy is generally known to be unphysical. Casimir effect, however, is often presented as a counterexample, giving rise to a conceptual confusion. To resolve the confusion we study foundational aspects of Casimir effect at a qualitative level, but also at a quantitative level within a simple toy model with only 3 degrees of freedom. In particular, we point out that Casimir vacuum is not a state without photons, and not a ground state for a Hamiltonian that can describe Casimir force. Instead, Casimir vacuum can be related to the photon vacuum by a non-trivial Bogoliubov transformation, and it is a ground state only for an effective Hamiltonian describing Casimir plates at a fixed distance. At the fundamental microscopic level, Casimir force is best viewed as a manifestation of van der Waals forces. - Highlights: • A toy model for Casimir-like effect with only 3 degrees of freedom is constructed. • Casimir vacuum can be related to the photon vacuum by a non-trivial Bogoliubov transformation. • Casimir vacuum is a ground state only for an effective Hamiltonian describing Casimir plates at a fixed distance. • At the fundamental microscopic level, Casimir force is best viewed as a manifestation of van der Waals forces.

  10. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    International Nuclear Information System (INIS)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-01-01

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  11. A GLOBAL SOLUTION TO TOPOLOGICAL RECONSTRUCTION OF BUILDING ROOF MODELS FROM AIRBORNE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    J. Yan

    2016-06-01

    This paper presents a global solution to building roof topological reconstruction from LiDAR point clouds. Starting with segmented roof planes from building LiDAR points, a BSP (binary space partitioning) algorithm is used to partition the bounding box of the building into volumetric cells, whose geometric features and topology are simultaneously determined. To resolve the inside/outside labelling problem of cells, a global energy function considering surface visibility and spatial regularization between adjacent cells is constructed and minimized via graph cuts. As a result, the cells are labelled as either inside or outside, where the planar surfaces between the inside and outside form the reconstructed building model. Two LiDAR data sets, of Yangjiang (China) and Wuhan University (China), are used in the study. Experimental results show that the completeness of reconstructed roof planes is 87.5%. Compared with existing data-driven approaches, the proposed approach is global: roof faces and edges as well as their topology can be determined at one time via minimization of an energy function. Moreover, this approach is robust to the partial absence of roof planes and tends to reconstruct roof models with visibility-consistent surfaces.

  12. FUNDAMENTAL ASPECTS OF EPISODIC ACCRETION CHEMISTRY EXPLORED WITH SINGLE-POINT MODELS

    International Nuclear Information System (INIS)

    Visser, Ruud; Bergin, Edwin A.

    2012-01-01

    We explore a set of single-point chemical models to study the fundamental chemical aspects of episodic accretion in low-mass embedded protostars. Our goal is twofold: (1) to understand how the repeated heating and cooling of the envelope affects the abundances of CO and related species; and (2) to identify chemical tracers that can be used as a novel probe of the timescales and other physical aspects of episodic accretion. We develop a set of single-point models that serve as a general prescription for how the chemical composition of a protostellar envelope is altered by episodic accretion. The main effect of each accretion burst is to drive CO ice off the grains in part of the envelope. The duration of the subsequent quiescent stage (before the next burst hits) is similar to or shorter than the freeze-out timescale of CO, allowing the chemical effects of a burst to linger long after the burst has ended. We predict that the resulting excess of gas-phase CO can be observed with single-dish or interferometer facilities as evidence of an accretion burst in the past 10^3-10^4 yr.

  13. 3D Modeling of Building Indoor Spaces and Closed Doors from Imagery and Point Clouds

    Directory of Open Access Journals (Sweden)

    Lucía Díaz-Vilariño

    2015-02-01

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoor models from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful, whether for understanding the environment structure, performing efficient navigation or planning appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes into depth on door candidate detection. The presented approach is tested on real data sets, showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction.

  14. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    Science.gov (United States)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-07-01

    An extension of the point kinetics model is developed to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. The spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  15. TIME-RESOLVED EMISSION FROM BRIGHT HOT PIXELS OF AN ACTIVE REGION OBSERVED IN THE EUV BAND WITH SDO/AIA AND MULTI-STRANDED LOOP MODELING

    Energy Technology Data Exchange (ETDEWEB)

    Tajfirouze, E.; Reale, F.; Petralia, A. [Dipartimento di Fisica e Chimica, Università di Palermo, Piazza del Parlamento 1, I-90134 (Italy); Testa, P., E-mail: aastex-help@aas.org [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2016-01-01

    Evidence of small amounts of very hot plasma has been found in active regions and might be an indication of impulsive heating released at spatial scales smaller than the cross-section of a single loop. We investigate the heating and substructure of coronal loops in the core of one such active region by analyzing the light curves in the smallest resolution elements of solar observations in two EUV channels (94 and 335 Å) from the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory. We model the evolution of a bundle of strands heated by a storm of nanoflares by means of a hydrodynamic 0D loop model (EBTEL). The light curves obtained from a random combination of those of single strands are compared to the observed light curves either in a single pixel or in a row of pixels, simultaneously in the two channels, and using two independent methods: an artificial intelligence system (a probabilistic neural network) and a simple cross-correlation technique. We explore the parameter space to constrain the distribution of the heat pulses, their duration, their spatial size, and, as feedback on the data, their signatures on the light curves. From both methods the best agreement is obtained for a relatively large population of events (1000) with a short duration (less than 1 minute) and a relatively shallow distribution (power law with index 1.5) in a limited energy range (1.5 decades). The feedback on the data indicates that bumps in the light curves, especially in the 94 Å channel, are signatures of a heating excess that occurred a few minutes before.

  16. Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data

    Science.gov (United States)

    Deng, Xinyi

    2016-08-01

    A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such a physical system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn about the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical systems are driven by the dynamics of some stochastic state variables, and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decisions in real time (for example, to stimulate the neurons or not) based on various sources of information present in

  17. Near-real-time regional troposphere models for the GNSS precise point positioning technique

    International Nuclear Information System (INIS)

    Hadas, T; Kaplon, J; Bosy, J; Sierny, J; Wilgan, K

    2013-01-01

    The GNSS precise point positioning (PPP) technique requires the application of high-quality products (orbits and clocks), since their errors directly affect the quality of positioning. For real-time purposes it is possible to utilize ultra-rapid precise orbits and clocks, which are disseminated through the Internet. In order to eliminate as many unknown parameters as possible, one may introduce external information on the zenith troposphere delay (ZTD). It is desirable that the a priori model be accurate and reliable, especially for real-time application. One of the open problems in GNSS positioning is troposphere delay modelling on the basis of ground meteorological observations. The Institute of Geodesy and Geoinformatics of Wroclaw University of Environmental and Life Sciences (IGG WUELS) has developed two independent regional troposphere models for the territory of Poland. The first is estimated in a near-real-time regime using GNSS data from a Polish ground-based augmentation system named ASG-EUPOS, established by the Polish Head Office of Geodesy and Cartography (GUGiK) in 2008. The second is based on meteorological parameters (temperature, pressure and humidity) gathered from various meteorological networks operating over the area of Poland and the surrounding countries. This paper describes the methodology of both model calculation and verification. It also presents the results of applying various ZTD models to kinematic PPP in post-processing mode using the Bernese GPS Software. Positioning results were used to assess the quality of the developed models during changing weather conditions. Finally, the impact of model application on the precision, accuracy and convergence time of simulated real-time PPP is discussed. (paper)
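
    The meteorology-based branch of such models typically starts from a hydrostatic delay computed from surface pressure. A minimal sketch of that step, using a commonly quoted form of the Saastamoinen zenith hydrostatic delay (the constants and the example station values here are illustrative assumptions, not taken from the paper):

```python
import math

def zhd_saastamoinen(pressure_hpa, lat_deg, height_km):
    """Zenith hydrostatic delay (meters) from surface pressure,
    using a commonly quoted form of the Saastamoinen model."""
    f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 0.00028 * height_km
    return 0.0022768 * pressure_hpa / f

# Example: a mid-latitude station near sea level
print(f"ZHD ~ {zhd_saastamoinen(1013.25, 51.1, 0.15):.3f} m")  # roughly 2.3 m
```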

  18. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Energy Technology Data Exchange (ETDEWEB)

    Murray, S. G.; Trott, C. M.; Jordan, C. H. [ARC Centre of Excellence for All-sky Astrophysics (CAASTRO) (Australia)

    2017-08-10

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  19. APPROACH TO SYNTHESIS OF PASSIVE INFRARED DETECTORS BASED ON QUASI-POINT MODEL OF QUALIFIED INTRUDER

    Directory of Open Access Journals (Sweden)

    I. V. Bilizhenko

    2017-01-01

    Subject of Research. The paper deals with the synthesis of passive infrared (PIR) detectors with enhanced detection capability against a qualified intruder who uses different types of detection countermeasures: the choice of a specific movement direction and disguise in the infrared band. Methods. We propose an approach based on a quasi-point model of the qualified intruder. It includes: separation of the model's priority parameters, formation of partial detection patterns adapted to those parameters, and multi-channel signal processing. Main Results. A quasi-point model of the qualified intruder consisting of different fragments was suggested. The power density difference was used for model parameter estimation. Criteria were formulated for the choice of detection pattern parameters on the basis of the model parameters. A pyroelectric sensor with nine sensitive elements was used to increase the information content of the signal. Multi-channel processing with multiple partial detection patterns, optimized for detecting an intruder's specific movement direction, was proposed. Practical Relevance. The developed functional device diagram can be realized in both hardware and software and is applicable as one of the detection channels in dual-technology passive infrared and microwave detectors.

  20. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Science.gov (United States)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  1. Insights into mortality patterns and causes of death through a process point of view model.

    Science.gov (United States)

    Anderson, James J; Li, Ting; Sharrow, David J

    2017-02-01

    Process point of view (POV) models of mortality, such as the Strehler-Mildvan and stochastic vitality models, represent death in terms of the loss of survival capacity through challenges and dissipation. Drawing on hallmarks of aging, we link these concepts to candidate biological mechanisms through a framework that defines death as challenges to vitality, where distal factors define the age-evolution of vitality and proximal factors define the probability distribution of challenges. To illustrate the process POV, we hypothesize that the immune system is a mortality nexus, characterized by two vitality streams: increasing vitality representing immune system development and immunosenescence representing vitality dissipation. Proximal challenges define three mortality partitions: juvenile and adult extrinsic mortalities and intrinsic adult mortality. Model parameters, generated from Swedish mortality data (1751-2010), exhibit biologically meaningful correspondences to economic, health and cause-of-death patterns. The model characterizes the twentieth-century epidemiological transition mainly as a reduction in extrinsic mortality resulting from a shift from high-magnitude disease challenges on individuals at all vitality levels to low-magnitude stress challenges on low-vitality individuals. Of secondary importance, intrinsic mortality was described by a gradual reduction in the rate of loss of vitality, presumably resulting from a reduction in the rate of immunosenescence. Extensions and limitations of a distal/proximal framework for characterizing more explicit causes of death, e.g. the young-adult mortality hump or cancer in old age, are discussed.

  2. Process-based coastal erosion modeling for Drew Point (North Slope, Alaska)

    Science.gov (United States)

    Ravens, Thomas M.; Jones, Benjamin M.; Zhang, Jinlin; Arp, Christopher D.; Schmutz, Joel A.

    2012-01-01

    A predictive coastal erosion/shoreline change model has been developed for a small coastal segment near Drew Point, Beaufort Sea, Alaska. This coastal setting has experienced a dramatic increase in erosion since the early 2000s. The bluffs at this site are 3-4 m tall and consist of ice-wedge-bounded blocks of fine-grained sediments cemented by ice-rich permafrost and capped with a thin organic layer. The bluffs are typically fronted by a narrow (∼5 m wide) beach or none at all. During a storm surge, the sea contacts the base of the bluff and a niche is formed through thermal and mechanical erosion. The niche grows both vertically and laterally and eventually undermines the bluff, leading to block failure or collapse. The fallen block is then eroded both thermally and mechanically by waves and currents, which must occur before a new niche-forming episode may begin. The erosion model explicitly accounts for and integrates a number of these processes, including: (1) storm surge generation resulting from wind and atmospheric forcing, (2) erosional niche growth resulting from wave-induced turbulent heat transfer and sediment transport (using the Kobayashi niche erosion model), and (3) thermal and mechanical erosion of the fallen block. The model was calibrated with historic shoreline change data for one time period (1979-2002), and validated with a later time period (2002-2007).

  3. Radial Basis Functional Model of Multi-Point Dieless Forming Process for Springback Reduction and Compensation

    Directory of Open Access Journals (Sweden)

    Misganaw Abebe

    2017-11-01

    Springback in multi-point dieless forming (MDF) is a common problem because of the small deformation involved and the blank-holder-free boundary condition. Numerical simulations are widely used in sheet metal forming to predict springback; however, finding the optimal process parameter values with numerical tools is computationally costly. This study proposes a radial basis function (RBF) model to replace the numerical simulation model, using statistical analyses based on a design of experiments (DOE). Punch holding time, blank thickness, and curvature radius are chosen as effective process parameters for determining the springback. The Latin hypercube DOE method facilitates the statistical analyses and the extraction of a prediction model over the experimental process parameter domain. A finite element (FE) simulation model is built in the commercial software ABAQUS to generate the springback responses of the training and testing samples. A genetic algorithm is applied to the developed RBF prediction model to find the optimal parameter values for reducing and compensating the induced springback for different blank thicknesses. Finally, the RBF result is verified by comparison with the FE simulation at the optimal process parameters, and both results show that the deviation from the target shape due to springback is almost negligible.
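
    A minimal sketch of the surrogate-plus-optimizer idea, with a synthetic function standing in for the FE springback response and SciPy's differential evolution (an evolutionary algorithm) standing in for the genetic algorithm; the parameter ranges and the response function are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

# Hypothetical DOE samples: (holding time [s], thickness [mm], curvature radius [mm])
rng = np.random.default_rng(2)
X = rng.uniform([5, 0.5, 400], [60, 2.0, 1200], size=(40, 3))

def fe_springback(x):
    """Stand-in for the FE-computed springback response (purely synthetic)."""
    t_hold, thick, radius = x.T
    return 2.0 / thick + 0.002 * radius - 0.01 * t_hold

y = fe_springback(X)

# RBF surrogate replacing the expensive FE simulation
surrogate = RBFInterpolator(X, y, kernel='thin_plate_spline')

# Evolutionary search for process parameters minimizing the predicted springback magnitude
bounds = [(5, 60), (0.5, 2.0), (400, 1200)]
result = differential_evolution(lambda x: abs(surrogate(x.reshape(1, -1))[0]), bounds, seed=0)
print("optimal (t_hold, thickness, radius):", np.round(result.x, 2))
print("predicted springback:", round(float(result.fun), 3))
```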

  4. A new stochastic model considering satellite clock interpolation errors in precise point positioning

    Science.gov (United States)

    Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong

    2018-03-01

    Precise clock products are typically interpolated based on the sampling interval of the observational data when they are used in precise point positioning. However, due to the occurrence of white noise in atomic clocks, a residual component of such noise will inevitably reside within the observations when clock errors are interpolated, and this noise will affect the resolution of the positioning results. In this paper, based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of the atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was constructed that considers the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 stations worldwide from the IGS showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were shortened by 4.8% and 4.0%, respectively, when the IGR and IGS 300-s-interval clock products were used, and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively reduced.

  5. Bright Sparks of Our Future!

    Science.gov (United States)

    Riordan, Naoimh

    2016-04-01

    My name is Naoimh Riordan and I am the Vice Principal of Rockboro Primary School in Cork City, South of Ireland. I am a full time class primary teacher and I teach 4th class, my students are aged between 9-10 years. My passion for education has developed over the years and grown towards STEM (Science, Technology, Engineering and Mathematics) subjects. I believe these subjects are the way forward for our future. My passion and beliefs are driven by the unique after school programme that I have developed. It is titled "Sparks" coming from the term Bright Sparks. "Sparks" is an after school programme with a difference where the STEM subjects are concentrated on through lessons such as Science, Veterinary Science Computer Animation /Coding, Eco engineering, Robotics, Magical Maths, Chess and Creative Writing. All these subjects are taught through activity based learning and are one-hour long each week for a ten-week term. "Sparks" is fully inclusive and non-selective which gives all students of any level of ability an opportunity to engage into these subjects. "Sparks" is open to all primary students in County Cork. The "Sparks" after school programme is taught by tutors from the different Universities and Colleges in Cork City. It works very well because the tutor brings their knowledge, skills and specialised equipment from their respective universities and in turn the tutor gains invaluable teaching practise, can trial a pilot programme in a chosen STEM subject and gain an insight into what works in the physical classroom.

  6. Designers predict a bright future

    International Nuclear Information System (INIS)

    Statton, T.D.

    1996-01-01

    As power plant designers and builders, there is a bright future for the industry. The demand for electricity will continue to grow, and the need for new plants will increase accordingly. But companies that develop and supply these plants must adapt to new ways of doing business if they expect to see the dawn of this new age. Several factors will have a profound effect on the generation and use of electricity in future years. Instant communications now reach all corners of the globe, making people everywhere aspire to a higher standard of living. The economic surge needed to satisfy these appetites will, in turn, be fed by a network of suppliers who are themselves restructuring to serve global markets, unimpeded by past nationalistic barriers to trade. The strong correlation between economic progress and the growing demand for electricity is well recognized. A ready supply of affordable electricity is a necessary underpinning for any economic expansion. As economies advance and jobs increase, electric demand grows geometrically, fueled by an ever-improving quality of life. Coupled with increasing demand is the worldwide trend toward privatization of the generation industry. The reasons may vary in different parts of the world, but the effect is the same--companies are battling intensely for the right to build or purchase generating facilities. Those companies, like the industry they serve, are themselves in a period of transition. Once a closed, monopolistic group of owners in a predominantly services-based market, they are, thanks to competitive forces, being driven steadily toward a product-based structure

  7. SKY BRIGHTNESS AND TRANSPARENCY IN THE i-BAND AT DOME A, ANTARCTICA

    International Nuclear Information System (INIS)

    Zou Hu; Zhou Xu; Jiang Zhaoji; Hu Jingyao; Ma Jun; Ashley, M. C. B.; Luong-Van, D. M.; Storey, J. W. V.; Cui Xiangqun; Feng Longlong; Gong Xuefei; Kulesa, C. A.; Lawrence, J. S.; Liu Genrong; Moore, A. M.; Pennypacker, C. R.; Travouillon, T.; Qin Weijia; Sun Bo; Shang Zhaohui

    2010-01-01

    The i-band observing conditions at Dome A on the Antarctic plateau have been investigated using data acquired during 2008 with the Chinese Small Telescope Array. The sky brightness, variations in atmospheric transparency, cloud cover, and the presence of aurorae are obtained from these images. The median sky brightness of moonless clear nights is 20.5 mag arcsec^-2 in the SDSS i band at the south celestial pole (which includes a contribution of about 0.06 mag from diffuse Galactic light). The median over all Moon phases in the Antarctic winter is about 19.8 mag arcsec^-2. There were no thick clouds in 2008. We model contributions of the Sun and the Moon to the sky background to obtain the relationship between the sky brightness and transparency. Aurorae are identified by comparing the observed sky brightness to the sky brightness expected from this model. About 2% of the images are affected by relatively strong aurorae.

  8. A global reference model of Curie-point depths based on EMAG2

    Science.gov (United States)

    Li, Chun-Feng; Lu, Yu; Wang, Jian

    2017-03-01

    In this paper, we use a robust inversion algorithm, which we have tested in many regional studies, to obtain the first global model of Curie-point depth (GCDM) from magnetic anomaly inversion based on fractal magnetization. Statistically, the oceanic Curie depth mean is smaller than the continental one, but continental Curie depths are almost bimodal, showing shallow Curie points in some old cratons. Oceanic Curie depths show modifications by hydrothermal circulations in young oceanic lithosphere and thermal perturbations in old oceanic lithosphere. Oceanic Curie depths also show strong dependence on the spreading rate along active spreading centers. Curie depths and heat flow are correlated, following optimal theoretical curves of average thermal conductivities K ≈ 2.0 W (m °C)^-1 for the ocean and K ≈ 2.5 W (m °C)^-1 for the continent. The calculated heat flow from Curie depths and large-interval gridding of measured heat flow all indicate that the global heat flow average is about 70.0 mW/m^2, leading to a global heat loss ranging from ~34.6 to 36.6 TW.
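
    The link between Curie depth and heat flow follows from conductive steady state, roughly q = k (T_curie - T_surface) / z_curie. A minimal sketch, assuming a linear geotherm with no internal heat production and a magnetite Curie temperature of about 580 °C (a simplification of the optimal curves used in the paper):

```python
def heat_flow_from_curie_depth(z_curie_km, k_w_per_m_c=2.0, t_curie_c=580.0, t_surface_c=0.0):
    """Surface heat flow (mW/m^2) for a purely conductive, steady-state geotherm
    with no internal heat production -- a deliberate simplification."""
    gradient = (t_curie_c - t_surface_c) / (z_curie_km * 1000.0)  # degC per meter
    return k_w_per_m_c * gradient * 1000.0                        # W/m^2 -> mW/m^2

# Example: a 20 km Curie depth with oceanic conductivity ~2.0 W/(m degC)
print(f"q ~ {heat_flow_from_curie_depth(20.0):.1f} mW/m^2")  # about 58 mW/m^2
```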

  9. Double point source W-phase inversion: Real-time implementation and automated model selection

    Science.gov (United States)

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
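
    A minimal sketch of the model-selection step, using the common least-squares form of the AIC; the residuals and parameter counts below are synthetic placeholders, not W-phase inversion output.

```python
import numpy as np

def aic_from_residuals(residuals, n_params):
    """AIC under an i.i.d. Gaussian error assumption:
    AIC = n * ln(RSS / n) + 2k (a common least-squares form)."""
    n = residuals.size
    rss = np.sum(residuals**2)
    return n * np.log(rss / n) + 2 * n_params

# Hypothetical waveform misfits from the two inversions
rng = np.random.default_rng(3)
resid_single = rng.normal(0, 1.0, 500)   # single point source, k1 free parameters
resid_double = rng.normal(0, 0.8, 500)   # double point source, k2 free parameters
k1, k2 = 6, 12                           # e.g. moment-tensor elements per source (assumed)

aic_single = aic_from_residuals(resid_single, k1)
aic_double = aic_from_residuals(resid_double, k2)
print("preferred model:", "double" if aic_double < aic_single else "single")
```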

  10. Glue detection based on teaching points constraint and tracking model of pixel convolution

    Science.gov (United States)

    Geng, Lei; Ma, Xiao; Xiao, Zhitao; Wang, Wen

    2018-01-01

    On-line glue detection based on machine vision is significant for rust protection and strengthening in car production. Shadow stripes caused by reflected light and the unevenness of the inside front cover of the car reduce the accuracy of glue detection. In this paper, we propose an effective algorithm to distinguish the edges of the glue from the shadow stripes. Teaching points are utilized to calculate the slope between two adjacent points. Then a tracking model based on pixel convolution along the motion direction is designed to segment several local rectangular regions, whose height is set by this distance. The pixel convolution along the motion direction is proposed to extract the edges of the glue in each local rectangular region. A dataset with different illumination conditions and stripe shapes of varying complexity, comprising 500 thousand images captured from the camera of the glue gun machine, is used to evaluate the proposed method. Experimental results demonstrate that the proposed method can detect the edges of the glue accurately. The shadow stripes are distinguished and removed effectively. Our method achieves 99.9% accuracy on the image dataset.

  11. Fast Outage Probability Simulation for FSO Links with a Generalized Pointing Error Model

    KAUST Repository

    Ben Issaid, Chaouki

    2017-02-07

    Over the past few years, free-space optical (FSO) communication has gained significant attention. In fact, FSO can provide cost-effective and unlicensed links, with high-bandwidth capacity and low error rate, making it an exciting alternative to traditional wireless radio-frequency communication systems. However, the system performance is affected not only by the presence of atmospheric turbulence, which occurs due to random fluctuations in the air refractive index, but also by the existence of pointing errors. Metrics such as the outage probability, which quantifies the probability that the instantaneous signal-to-noise ratio is smaller than a given threshold, can be used to analyze the performance of this system. In this work, we consider weak and strong turbulence regimes, and we study the outage probability of an FSO communication system under a generalized pointing error model with both a nonzero boresight component and different horizontal and vertical jitter effects. More specifically, we use an importance sampling approach, based on the exponential twisting technique, to offer fast and accurate results.
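
    A minimal sketch of the importance-sampling idea on a toy weak-turbulence channel: for a Gaussian log-amplitude, exponential twisting reduces to shifting the sampling mean toward the outage threshold and reweighting each sample by the likelihood ratio. The channel parameters are illustrative and this is not the generalized pointing-error model of the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Toy channel: log-amplitude X ~ N(mu, sigma^2); an outage is declared when X
# falls below a threshold x_th (all values are illustrative).
mu, sigma, x_th = 0.0, 0.25, -1.2
n = 100_000

# Naive Monte Carlo: almost no samples land in the rare outage region.
x_naive = rng.normal(mu, sigma, n)
p_naive = np.mean(x_naive < x_th)

# Importance sampling: exponentially twisting a Gaussian is equivalent to shifting
# its mean toward the threshold; samples are reweighted by f_original(x) / f_shifted(x).
x_is = rng.normal(x_th, sigma, n)
log_w = ((x_is - x_th)**2 - (x_is - mu)**2) / (2.0 * sigma**2)
p_is = np.mean((x_is < x_th) * np.exp(log_w))

print(f"naive MC: {p_naive:.2e}  IS: {p_is:.2e}  exact: {norm.cdf(x_th, mu, sigma):.2e}")
```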

  12. Reliable four-point flexion test and model for die-to-wafer direct bonding

    Energy Technology Data Exchange (ETDEWEB)

    Tabata, T., E-mail: toshiyuki.tabata@cea.fr; Sanchez, L.; Fournel, F.; Moriceau, H. [Univ. Grenoble Alpes, F-38000 Grenoble, France and CEA, LETI, MINATEC Campus, F-38054 Grenoble (France)

    2015-07-07

    For many years, wafer-to-wafer (W2W) direct bonding has been extensively developed, particularly in terms of bonding energy measurement and understanding of the bonding mechanism. Nowadays, die-to-wafer (D2W) direct bonding has gained significant attention, for instance in photonics and microelectromechanics, which presupposes controlled and reliable fabrication processes. Whatever the bonded materials may be, it is not obvious whether bonded D2W structures have the same bonding strength as bonded W2W ones, because of possible edge effects of the dies. For that reason, there is a strong need for a bonding energy measurement technique suitable for D2W structures. In this paper, both D2W- and W2W-type standard SiO2-to-SiO2 direct bonding samples are fabricated from the same full-wafer bonding. Modifications of the four-point flexion test (4PT) technique and its application to measuring D2W direct bonding energies are reported. A comparison between the modified 4PT and double-cantilever beam techniques is drawn, also considering possible impacts of the measurement conditions, such as water stress corrosion at the debonding interface and friction error at the loading contact points. Finally, the reliability of the modified technique and of a new model established for measuring D2W direct bonding energies is demonstrated.

  13. Establishing the long-term fuel management scheme using point reactivity model

    International Nuclear Information System (INIS)

    Park, Yong-Soo; Kim, Jae-Hak; Lee, Young-Ouk; Song, Jae-Woong; Zee, Sung-Kyun

    1994-01-01

    A new approach to establishing a long-term fuel management scheme is presented in this paper. The point reactivity model is used to predict the core-average reactivity. Batchwise power fractions are calculated with a two-dimensional nodal power algorithm based on the modified one-group diffusion equation and the number of fuel assemblies on the core periphery. An empirical formula that reflects accumulated core design experience is suggested to estimate the radial leakage reactivity. This approach predicts the cycle lengths and the discharge burnups of individual fuel batches up to an equilibrium core when the proper input data, such as batch enrichment, batch size, type and content of burnable poison, and reloading strategy, are given. Eight benchmark calculations demonstrate that the new approach is reasonably accurate and highly efficient for scoping calculations when compared with design code predictions. (author)

  14. Signals for the QCD phase transition and critical point in a Langevin dynamical model

    International Nuclear Information System (INIS)

    Herold, Christoph; Bleicher, Marcus; Yan, Yu-Peng

    2013-01-01

    The search for the critical point is one of the central issues that will be investigated in the upcoming FAIR project. For a profound theoretical understanding of the expected signals we go beyond thermodynamic studies and present a fully dynamical model for the chiral and deconfinement phase transition in heavy ion collisions. The corresponding order parameters are propagated by Langevin equations of motion on a thermal background provided by a fluid-dynamically expanding quark plasma. In this way we are able to describe nonequilibrium effects occurring during the rapid expansion of a hot fireball. For an evolution through the phase transition, the formation of a supercooled phase and its subsequent decay crucially influence the trajectories in the phase diagram and lead to a significant reheating of the quark medium at the highest baryon densities. Furthermore, we find inhomogeneous structures with high-density domains along the first-order transition line within single events.

  15. Multivariate analysis and extraction of parameters in resistive RAMs using the Quantum Point Contact model

    Science.gov (United States)

    Roldán, J. B.; Miranda, E.; González-Cordero, G.; García-Fernández, P.; Romero-Zaliz, R.; González-Rodelas, P.; Aguilera, A. M.; González, M. B.; Jiménez-Molinos, F.

    2018-01-01

    A multivariate analysis of the parameters that characterize the reset process in Resistive Random Access Memory (RRAM) has been performed. The different correlations obtained can help to shed light on the current components that contribute in the Low Resistance State (LRS) of the technology considered. In addition, a screening method for the Quantum Point Contact (QPC) current component is presented. For this purpose, the second derivative of the current has been obtained using a novel numerical method which allows determining the QPC model parameters. Once the procedure is completed, a whole Resistive Switching (RS) series of thousands of curves is studied by means of a genetic algorithm. The extracted QPC parameter distributions are characterized in depth to get information about the filamentary pathways associated with LRS in the low voltage conduction regime.

  16. Logarithmic two-point correlation functions from a z=2 Lifshitz model

    International Nuclear Information System (INIS)

    Zingg, T.

    2014-01-01

    The Einstein-Proca action is known to have asymptotically locally Lifshitz spacetimes as classical solutions. For dynamical exponent z=2, two-point correlation functions for fluctuations around such a geometry are derived analytically. It is found that the retarded correlators are stable in the sense that all quasinormal modes are situated in the lower half-plane of complex frequencies. Correlators in the longitudinal channel exhibit features that are reminiscent of a structure usually obtained in field theories that are logarithmic, i.e. contain an indecomposable but non-diagonalizable highest weight representation. This provides further evidence for conjecturing the model at hand as a candidate for a gravity dual of a logarithmic field theory with anisotropic scaling symmetry

  17. Ab-initio modelling of thermodynamics and kinetics of point defects in indium oxide

    International Nuclear Information System (INIS)

    Agoston, Peter; Klein, Andreas; Albe, Karsten; Erhart, Paul

    2008-01-01

    The electrical and optical properties of indium oxide films strongly vary with the processing parameters. In particular, the oxygen partial pressure and temperature determine properties like electrical conductivity, composition and transparency. Since this material owes its remarkable properties, like the intrinsic n-type conductivity, to its defect chemistry, it is important to understand both the equilibrium defect thermodynamics and the kinetics of the intrinsic point defects. In this contribution we present a defect model based on DFT total energy calculations using the GGA+U method. Further, the nudged elastic band method is employed in order to obtain a set of migration barriers for each defect species. Due to the complicated crystal structure of indium oxide, a kinetic Monte Carlo algorithm was implemented, which allows diffusion coefficients to be determined. The bulk tracer diffusion constant is predicted as a function of oxygen partial pressure, Fermi level and temperature for the pure material

  18. Point Defects in 3D and 1D Nanomaterials: The Model Case of Titanium Dioxide

    International Nuclear Information System (INIS)

    Knauth, Philippe

    2010-01-01

    Titanium dioxide is one of the most important oxides for applications in energy and environment, such as solar cells, photocatalysis, lithium-ion batteries. In recent years, new forms of titanium dioxide with unusual structure and/or morphology have been developed, including nanocrystals, nanotubes or nanowires. We have studied in detail the point defect chemistry in nanocrystalline TiO2 powders and ceramics. There can be a change from predominant Frenkel to Schottky disorder, depending on the experimental conditions, e.g. temperature and oxygen partial pressure. We have also studied the local environment of various dopants with similar ion radius, but different ion charge (Zn2+, Y3+, Sn4+, Zr4+, Nb5+) in TiO2 nanopowders and nanoceramics by Extended X-Ray Absorption Fine Structure (EXAFS) Spectroscopy. Interfacial segregation of acceptors was demonstrated, but donors and isovalent ions do not segregate. An electrostatic 'space charge' segregation model is applied, which explains well the observed phenomena.

  19. Effects of positive potential in the catastrophe theory study of the point model for bumpy tori

    Energy Technology Data Exchange (ETDEWEB)

    Punjabi, A; Vahala, G [College of William and Mary, Williamsburg, VA (USA). Dept. of Physics

    1985-02-01

    With positive ambipolar potential, ion non-resonant neoclassical transport leads to increased particle confinement times. In certain regimes of filling pressure, microwave powers (ECRH and ICRH) and positive potential, new folds can now emerge from previously degenerate equilibrium surfaces allowing for distinct C, T, and M modes of operation. A comparison in the equilibrium fold structure is also made between (i) equal particle and energy confinement times, and (ii) particle confinement times enhanced over the energy confinement time. The nonlinear time evolution of these point model equations is considered and confirms the delay convention occurrences at the fold edges. It is clearly seen that the time-asymptotic equilibrium state is very sensitive, not only to the values of the control parameters (neutral density, ambipolar electrostatic potential, electron and ion cyclotron power densities) but also to the initial conditions on the plasma density, and electron and ion temperatures.

  20. Mass effects in three-point chronological current correlators in n-dimensional multifermion models

    International Nuclear Information System (INIS)

    Kucheryavyj, V.I.

    1991-01-01

    Three types of quantities associated with three-point chronological fermion-current correlators having arbitrary Lorentz and internal structure are calculated in n-dimensional multifermion models with different masses. The analysis of vector and axial-vector Ward identities for regular (finite) and dimensionally regularized values of these quantities is carried out. Quantum corrections to the canonical Ward identities are obtained. These corrections are generally homogeneous functions of zeroth order in the masses, and under certain conditions they reduce to the known axial-vector anomalies. The structure and properties of the quantum corrections to the AVV and AAA correlators in four-dimensional space-time are investigated in detail.

  1. On the realism of the re-engineered simple point charge water model

    International Nuclear Information System (INIS)

    Chialvo, A.A.

    1996-01-01

    The realism of the recently proposed high-temperature reparameterization of the simple point charge (SPC) water model [C. D. Berweger, W. F. van Gunsteren, and F. Mueller-Plathe, Chem. Phys. Lett. 232, 429 (1995)] is tested by comparing the simulated microstructure and dielectric properties to the available experimental data. The test indicates that the new parameterization fails dramatically to describe the microstructural and dielectric properties of water at high temperature; it predicts rather strong short-range site–site pair correlations, even stronger than those for water at ambient conditions, and a threefold smaller dielectric constant. Moreover, the resulting microstructure suggests that the high-temperature force-field parameters would predict a twofold higher critical density. The failure of the high-temperature parameterization is analyzed and some suggestions on alternative choices of the target properties for the weak coupling are discussed. copyright 1996 American Institute of Physics

  2. The bright and choked gamma-ray burst contribution to the IceCube and ANTARES low-energy excess

    Science.gov (United States)

    Denton, Peter B.; Tamborra, Irene

    2018-04-01

    The increasing statistics of the high-energy neutrino flux observed by the IceCube Observatory points towards an excess of events above the atmospheric neutrino background in the 30–400 TeV energy range. Such an excess is compatible with the findings of the ANTARES Telescope and it would naturally imply the possibility that more than one source class contributes to the observed flux. Electromagnetically hidden sources have been invoked to interpret this excess of events at low energies. By adopting a unified model for the electromagnetically bright and choked gamma-ray bursts and taking into account particle acceleration at the internal and collimation shock radii, we discuss whether bright and choked bursts are viable candidates. Our findings suggest that, although producing a copious neutrino flux, choked and bright astrophysical jets cannot be the dominant sources of the excess of neutrino events. A fine tuning of the model parameters or distinct scenarios for choked jets should be invoked in order to explain the low-energy neutrino data of IceCube and ANTARES.

  3. Bright boys the making of information technology

    CERN Document Server

    Green, Tom

    2010-01-01

    Everything has a beginning. None was more profound-and quite as unexpected-than Information Technology. Here for the first time is the untold story of how our new age came to be and the bright boys who made it happen. What began on the bare floor of an old laundry building eventually grew to rival in size the Manhattan Project. The unexpected consequence of that journey was huge---what we now know as Information Technology. For sixty years the bright boys have been totally anonymous while their achievements have become a way of life for all of us. "Bright Boys" brings them home. By 1950 they'd

  4. An Extension of the Miller Equilibrium Model into the X-Point Region

    Science.gov (United States)

    Hill, M. D.; King, R. W.; Stacey, W. M.

    2017-10-01

    The Miller equilibrium model has been extended to better model the flux surfaces in the outer region of the plasma and scrape-off layer, including the poloidally non-uniform flux surface expansion that occurs in the X-point region(s) of diverted tokamaks. Equations for elongation and triangularity are modified to include a poloidally varying component, and ∇r, which is used in the calculation of the poloidal magnetic field, is rederived. Initial results suggest that strong quantitative agreement with experimental flux surface reconstructions and strong qualitative agreement with poloidal magnetic fields can be obtained using this model. Applications are discussed. A major new application is the automatic generation of the computational mesh in the plasma edge, scrape-off layer, plenum and divertor regions for use in the GTNEUT neutral particle transport code, enabling this powerful analysis code to be routinely run in experimental analyses. Work supported by US DOE under DE-FC02-04ER54698.
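
    For reference, the baseline Miller parameterization is R = R0 + r cos(theta + arcsin(delta) sin(theta)), Z = kappa r sin(theta). The sketch below plots that surface next to one illustrative way of letting the elongation vary poloidally toward a lower X-point; the functional form of that variation is an assumption for illustration, not the extension derived in the paper.

```python
import numpy as np

def miller_surface(r, R0=1.7, kappa=1.8, delta=0.4, n=256):
    """Standard Miller parameterization of a flux surface."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    R = R0 + r * np.cos(theta + np.arcsin(delta) * np.sin(theta))
    Z = kappa * r * np.sin(theta)
    return R, Z

def miller_surface_extended(r, R0=1.7, kappa0=1.8, delta=0.4, eps_kappa=0.4, n=256):
    """Illustrative poloidally varying elongation (assumed form): kappa grows
    toward the bottom of the surface, mimicking flux expansion near a lower X-point."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    kappa = kappa0 * (1.0 + eps_kappa * (1.0 - np.sin(theta)) / 2.0)
    R = R0 + r * np.cos(theta + np.arcsin(delta) * np.sin(theta))
    Z = kappa * r * np.sin(theta)
    return R, Z

R, Z = miller_surface_extended(0.6)
print(f"extended surface spans Z = {Z.min():.2f} m to Z = {Z.max():.2f} m")
```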

  5. Neural Modeling of Fuzzy Controllers for Maximum Power Point Tracking in Photovoltaic Energy Systems

    Science.gov (United States)

    Lopez-Guede, Jose Manuel; Ramos-Hernanz, Josean; Altın, Necmi; Ozdemir, Saban; Kurt, Erol; Azkune, Gorka

    2018-06-01

    One field in which electronic materials have an important role is energy generation, especially within the scope of photovoltaic energy. This paper deals with one of the most relevant enabling technologies within that scope, i.e., the algorithms for maximum power point tracking implemented in direct-current-to-direct-current converters, and their modeling through artificial neural networks (ANNs). More specifically, as a proof of concept, we have addressed the problem of modeling a fuzzy logic controller that has demonstrated its performance in previous works, and in particular the dimensionless duty cycle signal that controls a quadratic boost converter. We achieved a very accurate model: the obtained mean squared error is 3.47 × 10^-6, the maximum error is 16.32 × 10^-3 and the regression coefficient R is 0.99992, all for the test dataset. This neural implementation has obvious advantages, such as higher fault tolerance and a simpler implementation, dispensing with all the complex elements needed to run a fuzzy controller (fuzzifier, defuzzifier, inference engine and knowledge base) because, ultimately, ANNs are sums and products.
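
    A minimal sketch of the modeling step: training a small multilayer perceptron to reproduce a controller-like duty-cycle surface. The input features, the synthetic target surface, and the network size are assumptions for illustration, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic training set standing in for samples of the fuzzy controller:
# two inputs (e.g. PV voltage error and its change), output a dimensionless duty cycle.
rng = np.random.default_rng(5)
X = rng.uniform(-1.0, 1.0, size=(5000, 2))
d = 0.5 + 0.2 * np.tanh(2.0 * X[:, 0]) - 0.1 * X[:, 1]   # stand-in controller surface
d += rng.normal(0, 0.002, d.size)                        # small measurement noise

X_tr, X_te, d_tr, d_te = train_test_split(X, d, test_size=0.2, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
ann.fit(X_tr, d_tr)

pred = ann.predict(X_te)
r = np.corrcoef(d_te, pred)[0, 1]                        # regression coefficient R
print(f"test MSE = {mean_squared_error(d_te, pred):.2e}, R = {r:.5f}")
```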

  6. A location-based multiple point statistics method: modelling the reservoir with non-stationary characteristics

    Directory of Open Access Journals (Sweden)

    Yin Yanshu

    2017-12-01

    In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.

  7. Economic-environmental modeling of point source pollution in Jefferson County, Alabama, USA.

    Science.gov (United States)

    Kebede, Ellene; Schreiner, Dean F; Huluka, Gobena

    2002-05-01

    This paper uses an integrated economic-environmental model to assess the point source pollution from major industries in Jefferson County, Northern Alabama. Industrial expansion generates employment, income, and tax revenue for the public sector; however, it is also often associated with the discharge of chemical pollutants. Jefferson County is one of the largest industrial counties in Alabama and experienced smog warnings and elevated ambient ozone concentrations during 1996-1999. Past studies of chemical discharge from industries have used models to assess the pollution impact of individual plants. This study, however, uses an extended Input-Output (I-O) economic model with pollution emission coefficients to assess direct and indirect pollutant emissions for several major industries in Jefferson County. The major findings of the study are: (a) the principal emissions from the selected industries are volatile organic compounds (VOCs), which contribute to the ambient ozone concentration; (b) the combined direct and indirect emissions are significantly higher than the direct emissions alone for some industries, indicating that an isolated analysis will underestimate an industry's emissions; (c) while industries with low emission coefficients may seem the obvious choice, they may also emit the most hazardous chemicals. This study is limited by the assumptions made and by data availability; however, it provides a useful analytical tool for direct and cumulative emission estimation and offers insights into the complexity of industry choice.
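
    The direct-plus-indirect accounting described above can be illustrated with an emission-extended Leontief calculation; the sketch below uses made-up coefficients (not Jefferson County data) and simply propagates sectoral emission coefficients through the Leontief inverse.

      # Illustrative sketch (hypothetical numbers) of an emission-extended
      # input-output calculation: indirect emissions are picked up through
      # the Leontief inverse.
      import numpy as np

      A = np.array([[0.10, 0.20, 0.05],    # technical (inter-industry) coefficients
                    [0.15, 0.05, 0.10],
                    [0.05, 0.10, 0.15]])
      E = np.array([0.8, 2.5, 0.3])        # VOC emissions per unit output (hypothetical)
      y = np.array([100.0, 50.0, 80.0])    # final demand by industry (hypothetical)

      L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^-1
      x = L @ y                            # total output needed to meet final demand

      direct = E * y                       # emissions if only direct production counted
      total = E * x                        # direct + indirect emissions
      print("direct  :", direct)
      print("total   :", total)
      print("indirect:", total - direct)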

  8. Different faces of chaos in FRW models with scalar fields-geometrical point of view

    International Nuclear Information System (INIS)

    Hrycyna, Orest; Szydlowski, Marek

    2006-01-01

    FRW cosmologies with conformally coupled scalar fields are investigated in a geometrical way by means of geodesics of the Jacobi metric. In this model of dynamics, trajectories in the configuration space are represented by geodesics. Because of the singular nature of the Jacobi metric on the boundary set ∂D of the domain of admissible motion, the geodesics change the cone sectors several times (or an infinite number of times) in the neighborhood of the singular set ∂D. We show that this singular set contains interesting information about the dynamical complexity of the model. Firstly, this set can be used as a Poincare surface for the construction of Poincare sections, and the trajectories then have the recurrence property. We also investigate the distribution of the intersection points. Secondly, the full classification of periodic orbits in the configuration space is performed and the existence of unstable periodic orbits (UPOs) is demonstrated. Our general conclusion is that, although the presented model leads to several complications, like divergence of curvature invariants as a measure of sensitive dependence on initial conditions, some global results can be obtained and some additional physical insight is gained from using the conformal Jacobi metric. We also study the complex behavior of trajectories in terms of symbolic dynamics.

  9. Plasmon point spread functions: How do we model plasmon-mediated emission processes?

    Science.gov (United States)

    Willets, Katherine A.

    2014-02-01

    A major challenge with studying plasmon-mediated emission events is the small size of plasmonic nanoparticles relative to the wavelength of light. Objects smaller than roughly half the wavelength of light will appear as diffraction-limited spots in far-field optical images, presenting a significant experimental challenge for studying plasmonic processes on the nanoscale. Super-resolution imaging has recently been applied to plasmonic nanosystems and allows plasmon-mediated emission to be resolved on the order of ˜5 nm. In super-resolution imaging, a diffraction-limited spot is fit to some model function in order to calculate the position of the emission centroid, which represents the location of the emitter. However, the accuracy of the centroid position strongly depends on how well the fitting function describes the data. This Perspective discusses the commonly used two-dimensional Gaussian fitting function applied to super-resolution imaging of plasmon-mediated emission, then introduces an alternative model based on dipole point spread functions. The two fitting models are compared and contrasted for super-resolution imaging of nanoparticle scattering/luminescence, surface-enhanced Raman scattering, and surface-enhanced fluorescence.
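
    As a minimal sketch of the Gaussian-fitting step discussed above (a dipole point spread function model would replace the model function), the following code fits a 2D Gaussian to a synthetic diffraction-limited spot with scipy and reports the fitted centroid; all image parameters are invented.

      # Minimal sketch of centroid localization by fitting a 2D Gaussian to a
      # diffraction-limited spot, as commonly done in super-resolution imaging.
      import numpy as np
      from scipy.optimize import curve_fit

      def gauss2d(coords, amp, x0, y0, sx, sy, offset):
          x, y = coords
          return (amp * np.exp(-((x - x0)**2 / (2 * sx**2) + (y - y0)**2 / (2 * sy**2)))
                  + offset).ravel()

      # Synthetic spot with Poisson-like noise.
      x, y = np.meshgrid(np.arange(64), np.arange(64))
      rng = np.random.default_rng(1)
      img = gauss2d((x, y), 500, 30.3, 33.7, 4.0, 4.0, 10).reshape(64, 64)
      img = rng.poisson(img).astype(float)

      p0 = (img.max(), 32, 32, 3, 3, img.min())
      popt, pcov = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
      print("centroid (x, y):", popt[1], popt[2])
      print("1-sigma uncertainty:", np.sqrt(np.diag(pcov))[1:3])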

  10. From Particles and Point Clouds to Voxel Models: High Resolution Modeling of Dynamic Landscapes in Open Source GIS

    Science.gov (United States)

    Mitasova, H.; Hardin, E. J.; Kratochvilova, A.; Landa, M.

    2012-12-01

    Multitemporal data acquired by modern mapping technologies provide unique insights into processes driving land surface dynamics. These high resolution data also offer an opportunity to improve the theoretical foundations and accuracy of process-based simulations of evolving landforms. We discuss the development of a new generation of visualization and analytics tools for GRASS GIS designed for 3D multitemporal data from repeated lidar surveys and from landscape process simulations. We focus on data and simulation methods that are based on point sampling of continuous fields and lead to the representation of evolving surfaces as series of raster map layers or voxel models. For multitemporal lidar data we present workflows that combine open source point cloud processing tools with GRASS GIS and custom python scripts to model and analyze the dynamics of coastal topography (Figure 1), and we outline the development of a coastal analysis toolbox. The simulations focus on a particle sampling method for solving continuity equations and its application to geospatial modeling of landscape processes. In addition to water and sediment transport models, already implemented in GIS, the new capabilities under development combine OpenFOAM for wind shear stress simulation with a new module for aeolian sand transport and dune evolution simulations. Comparison of observed dynamics with the results of simulations is supported by a new, integrated 2D and 3D visualization interface that provides highly interactive and intuitive access to the redesigned and enhanced visualization tools. Several case studies will be used to illustrate the presented methods and tools, demonstrate the power of workflows built with FOSS and highlight their interoperability. Figure 1. Isosurfaces representing the evolution of the shoreline and a z=4.5 m contour between the years 1997-2011 at Cape Hatteras, NC, extracted from a voxel model derived from a series of lidar-based DEMs.

  11. Bright to dim oscillatory response of the Neurospora circadian oscillator.

    Science.gov (United States)

    Gooch, Van D; Johnson, Alicia E; Larrondo, Luis F; Loros, Jennifer J; Dunlap, Jay C

    2014-02-01

    The fungus Neurospora crassa constitutes an important model system extensively used in chronobiology. Several studies have addressed how environmental cues, such as light, can reset or synchronize a circadian system. By means of an optimized firefly luciferase reporter gene and a controllable lighting system, we show that Neurospora can display molecular circadian rhythms in dim light when cultures receive bright light prior to entering dim light conditions. We refer to this behavior as the "bright to dim oscillatory response" (BDOR). The bright light treatment can be applied up to 76 h prior to dim exposure, and it can be as short as 15 min in duration. We have characterized this response with respect to the duration of the light pulse, the time of the light pulse before dim, the intensity of dim light, and the oscillation dynamics in dim light. Although the molecular mechanism that drives the BDOR remains obscure, these findings suggest that a long-term memory of bright light exists as part of the circadian molecular components. It is important to consider the ecological significance of such dim light responses with respect to how organisms naturally maintain their timing mechanism in moonlight.

  12. Multiple-point statistical simulation for hydrogeological models: 3-D training image development and conditioning strategies

    Science.gov (United States)

    Høyer, Anne-Sophie; Vignoli, Giulio; Mejer Hansen, Thomas; Thanh Vu, Le; Keefer, Donald A.; Jørgensen, Flemming

    2017-12-01

    Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical workflow to build the training image and

  13. The Atacama Cosmology Telescope: Beam Measurements and the Microwave Brightness Temperatures of Uranus and Saturn

    Science.gov (United States)

    Hasselfield, Matthew; Moodley, Kavilan; Bond, J. Richard; Das, Sudeep; Devlin, Mark J.; Dunkley, Joanna; Dunner, Rolando; Fowler, Joseph W.; Gallardo, Patricio; Gralla, Megan B.; et al.

    2013-01-01

    We describe the measurement of the beam profiles and window functions for the Atacama Cosmology Telescope (ACT), which operated from 2007 to 2010 with kilopixel bolometer arrays centered at 148, 218, and 277 GHz. Maps of Saturn are used to measure the beam shape in each array and for each season of observations. Radial profiles are transformed to Fourier space in a way that preserves the spatial correlations in the beam uncertainty to derive window functions relevant for angular power spectrum analysis. Several corrections are applied to the resulting beam transforms, including an empirical correction measured from the final cosmic microwave background (CMB) survey maps to account for the effects of mild pointing variation and alignment errors. Observations of Uranus made regularly throughout each observing season are used to measure the effects of atmospheric opacity and to monitor deviations in telescope focus over the season. Using the WMAP-based calibration of the ACT maps to the CMB blackbody, we obtain precise measurements of the brightness temperatures of the Uranus and Saturn disks at effective frequencies of 149 and 219 GHz. For Uranus we obtain thermodynamic brightness temperatures T(149/U) = 106.7 +/- 2.2 K and T(219/U) = 100.1 +/- 3.1 K. For Saturn, we model the effects of the ring opacity and emission using a simple model and obtain resulting (unobscured) disk temperatures of T(149/S) = 137.3 +/- 3.2 K and T(219/S) = 137.3 +/- 4.7 K.

  14. Relationship between ammonia stomatal compensation point and nitrogen metabolism in arable crops: Current status of knowledge and potential modelling approaches

    Energy Technology Data Exchange (ETDEWEB)

    Massad, Raia Silvia [Institut National de la Recherche Agronomique (INRA), Environnement et Grandes Cultures, 78850 Thiverval-Grignon (France)], E-mail: massad@grignon.inra.fr; Loubet, Benjamin; Tuzet, Andree; Cellier, Pierre [Institut National de la Recherche Agronomique (INRA), Environnement et Grandes Cultures, 78850 Thiverval-Grignon (France)

    2008-08-15

    The ammonia stomatal compensation point of plants is determined by leaf temperature, the ammonium concentration ([NH4+]apo) and the pH of the apoplastic solution. The latter two depend on the metabolism of the adjacent cells and on leaf inputs and outputs through the xylem and phloem. Until now only empirical models have been designed to model the ammonia stomatal compensation point, except the model of Riedo et al. (2002. Coupling soil-plant-atmosphere exchange of ammonia with ecosystem functioning in grasslands. Ecological Modelling 158, 83-110), which represents the exchanges between the plant's nitrogen pools. The first step to model the ammonia stomatal compensation point is to adequately model [NH4+]apo. This [NH4+]apo has been studied experimentally, but there are currently no process-based quantitative models describing its relation to plant metabolism and environmental conditions. This study summarizes the processes involved in determining the ammonia stomatal compensation point at the leaf scale and qualitatively evaluates the ability of existing whole plant N and C models to include a model for [NH4+]apo. - A model for the ammonia stomatal compensation point at the leaf scale was developed.
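
    For orientation, a commonly used thermodynamic parameterization of the stomatal compensation point (not taken from this paper, quoted here only as a sketch) expresses χ_s in terms of the apoplastic emission potential Γ_s:

      \Gamma_s = \frac{[\mathrm{NH_4^+}]_{\mathrm{apo}}}{[\mathrm{H^+}]_{\mathrm{apo}}}, \qquad
      \chi_s(T) \approx \frac{161500}{T}\,\exp\!\left(-\frac{10378}{T}\right)\Gamma_s,

    with T the leaf temperature in K and χ_s in µg NH3 m⁻³.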

  15. Relationship between ammonia stomatal compensation point and nitrogen metabolism in arable crops: Current status of knowledge and potential modelling approaches

    International Nuclear Information System (INIS)

    Massad, Raia Silvia; Loubet, Benjamin; Tuzet, Andree; Cellier, Pierre

    2008-01-01

    The ammonia stomatal compensation point of plants is determined by leaf temperature, the ammonium concentration ([NH4+]apo) and the pH of the apoplastic solution. The latter two depend on the metabolism of the adjacent cells and on leaf inputs and outputs through the xylem and phloem. Until now only empirical models have been designed to model the ammonia stomatal compensation point, except the model of Riedo et al. (2002. Coupling soil-plant-atmosphere exchange of ammonia with ecosystem functioning in grasslands. Ecological Modelling 158, 83-110), which represents the exchanges between the plant's nitrogen pools. The first step to model the ammonia stomatal compensation point is to adequately model [NH4+]apo. This [NH4+]apo has been studied experimentally, but there are currently no process-based quantitative models describing its relation to plant metabolism and environmental conditions. This study summarizes the processes involved in determining the ammonia stomatal compensation point at the leaf scale and qualitatively evaluates the ability of existing whole plant N and C models to include a model for [NH4+]apo. - A model for the ammonia stomatal compensation point at the leaf scale was developed.

  16. Bright Idea: Solar Energy Primer.

    Science.gov (United States)

    Missouri State Dept. of Natural Resources, Jefferson City.

    This booklet is intended to address questions most frequently asked about solar energy. It provides basic information and a starting point for prospective solar energy users. Information includes discussion of solar space heating, solar water heating, and solar greenhouses. (Author/RE)

  17. Development, validation and application of multi-point kinetics model in RELAP5 for analysis of asymmetric nuclear transients

    Energy Technology Data Exchange (ETDEWEB)

    Pradhan, Santosh K., E-mail: santosh@aerb.gov.in [Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai 400094 (India); Obaidurrahman, K. [Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai 400094 (India); Iyer, Kannan N. [Department of Mechanical Engineering, IIT Bombay, Mumbai 400076 (India); Gaikwad, Avinash J. [Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai 400094 (India)

    2016-04-15

    Highlights: • A multi-point kinetics model is developed for the RELAP5 system thermal hydraulics code. • The model is validated against an extensive 3D kinetics code. • The RELAP5 multi-point kinetics formulation is used to investigate the critical break for LOCA in a PHWR. - Abstract: The point kinetics approach in the system code RELAP5 limits its use for many reactivity-induced transients, which involve asymmetric core behaviour. Development of a fully coupled 3D core kinetics code with system thermal-hydraulics is the ultimate requirement in this regard; however, coupling and validation of a 3D kinetics module with a system code is cumbersome and also requires access to the source code. An intermediate approach with multi-point kinetics is appropriate and relatively easy to implement for the analysis of several asymmetric transients for large cores. The multi-point kinetics formulation is based on dividing the entire core into several regions and solving ODEs describing the kinetics in each region. These regions are interconnected by spatial coupling coefficients which are estimated from a diffusion theory approximation. This model offers the advantage that the associated ordinary differential equations (ODEs) governing the multi-point kinetics formulation can be solved using numerical methods to the desired level of accuracy, and thus allows a formulation based on user-defined control variables, i.e., without disturbing the source code and hence also avoiding the associated coupling issues. Euler's method has been used in the present formulation to solve the several coupled ODEs internally at each time step. The results have been verified against the in-built point-kinetics models of RELAP5 and validated against the 3D kinetics code TRIKIN. The model was used to identify the critical break in the reactor inlet header (RIH) of a typical large PHWR core. The neutronic asymmetry produced in the core due to the system-induced transient was effectively handled by the multi-point kinetics model, overcoming the limitation of the in-built point kinetics model.
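
    A hedged sketch of the general scheme described above: two coupled regions, one delayed-neutron group, spatial coupling coefficients, and an explicit Euler time step. All kinetics parameters and coupling coefficients below are illustrative values, not PHWR data, and the layout is a guess at the general formulation rather than the RELAP5 implementation.

      # Illustrative two-region multi-point kinetics sketch (one delayed-neutron
      # group), integrated with explicit Euler as described in the abstract.
      import numpy as np

      beta, lam, Lam = 0.0065, 0.08, 1.0e-3     # delayed fraction, decay const, generation time
      alpha = np.array([[0.0, 50.0],            # alpha[i, j]: coupling from region j to region i (1/s)
                        [50.0, 0.0]])
      rho = np.array([0.0, 0.001])              # reactivity inserted only in region 1 (asymmetric)

      n = np.array([1.0, 1.0])                  # region powers (normalized)
      C = beta / (lam * Lam) * n                # delayed precursor equilibrium

      dt, t_end = 1.0e-5, 0.5
      for _ in range(int(t_end / dt)):
          leak_in  = alpha @ n                  # neutrons arriving from other regions
          leak_out = alpha.sum(axis=0) * n      # neutrons leaving each region
          dn = ((rho - beta) / Lam) * n + lam * C + leak_in - leak_out
          dC = (beta / Lam) * n - lam * C
          n, C = n + dt * dn, C + dt * dC       # explicit Euler step

      print("region powers after %.2f s:" % t_end, n)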

  18. Modeling hydrology, groundwater recharge and non-point nitrate loadings in the Himalayan Upper Yamuna basin

    International Nuclear Information System (INIS)

    Narula, Kapil K.; Gosain, A.K.

    2013-01-01

    The mountainous Himalayan watersheds are important hydrologic systems responsible for much of the water supply in the Indian sub-continent. These watersheds are increasingly facing anthropogenic and climate-related pressures that impact spatial and temporal distribution of water availability. This study evaluates temporal and spatial distribution of water availability including groundwater recharge and quality (non-point nitrate loadings) for a Himalayan watershed, namely, the Upper Yamuna watershed (part of the Ganga River basin). The watershed has an area of 11,600 km² with elevation ranging from 6300 to 600 m above mean sea level. Soil and Water Assessment Tool (SWAT), a physically-based, time-continuous model, has been used to simulate the land phase of the hydrological cycle, to obtain streamflows, groundwater recharge, and nitrate (NO3) load distributions in various components of runoff. The hydrological SWAT model is integrated with the MODular finite difference groundwater FLOW model (MODFLOW), and Modular 3-Dimensional Multi-Species Transport model (MT3DMS), to obtain groundwater flow and NO3 transport. Validation of various modules of this integrated model has been done for sub-basins of the Upper Yamuna watershed. Results on surface runoff and groundwater levels obtained as outputs from simulation show a good comparison with the observed streamflows and groundwater levels (Nash–Sutcliffe and R² correlations greater than +0.7). Nitrate loading obtained after nitrification, denitrification, and NO3 removal from unsaturated and shallow aquifer zones is combined with groundwater recharge. Results for nitrate modeling in groundwater aquifers are compared with observed NO3 concentration and are found to be in good agreement. The study further evaluates the sensitivity of water availability to climate change. Simulations have been made with the weather inputs of climate change scenarios of A2, B2, and A1B for end of the century. Water yield estimates

  19. Time-resolved brightness measurements by streaking

    Science.gov (United States)

    Torrance, Joshua S.; Speirs, Rory W.; McCulloch, Andrew J.; Scholten, Robert E.

    2018-03-01

    Brightness is a key figure of merit for charged particle beams, and time-resolved brightness measurements can elucidate the processes involved in beam creation and manipulation. Here we report on a simple, robust, and widely applicable method for the measurement of beam brightness with temporal resolution by streaking one-dimensional pepperpots, and demonstrate the technique to characterize electron bunches produced from a cold-atom electron source. We demonstrate brightness measurements with 145 ps temporal resolution and a minimum resolvable emittance of 40 nm rad. This technique provides an efficient method of exploring source parameters and will prove useful for examining the efficacy of techniques to counter space-charge expansion, a critical hurdle to achieving single-shot imaging of atomic scale targets.

  20. BrightStat.com: free statistics online.

    Science.gov (United States)

    Stricker, Daniel

    2008-10-01

    Powerful software for statistical analysis is expensive. Here I present BrightStat, a statistical software package running on the Internet that is free of charge. BrightStat's goals, its main capabilities and functionalities are outlined. Three different sample runs (a Friedman test, a chi-square test, and a step-wise multiple regression) are presented. The results obtained by BrightStat are compared with results computed by SPSS, one of the global leaders in statistical software, and VassarStats, a collection of scripts for data analysis running on the Internet. Elementary statistics is an inherent part of academic education and BrightStat is an alternative to commercial products.

  1. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

    In order to move beyond simplified covariance based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...
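
    The pattern-frequency idea behind the frequency matching method can be illustrated with a toy sketch (not the authors' implementation): count all small patterns in a binary training image and in a candidate model, and compare the two frequency distributions.

      # Sketch of the pattern-frequency idea behind frequency matching: count all
      # 2x2 patterns in a binary training image and in a candidate model, then
      # compare the two frequency distributions. Illustrative only.
      import numpy as np
      from collections import Counter

      def pattern_counts(img, n=2):
          counts = Counter()
          for i in range(img.shape[0] - n + 1):
              for j in range(img.shape[1] - n + 1):
                  counts[tuple(img[i:i+n, j:j+n].ravel())] += 1
          return counts

      def freq_distance(c1, c2):
          keys = set(c1) | set(c2)
          n1, n2 = sum(c1.values()), sum(c2.values())
          return sum((c1[k] / n1 - c2[k] / n2) ** 2 for k in keys)

      rng = np.random.default_rng(0)
      training = (rng.random((50, 50)) < 0.3).astype(int)   # stand-in training image
      model = (rng.random((50, 50)) < 0.3).astype(int)      # candidate subsurface model

      d = freq_distance(pattern_counts(training), pattern_counts(model))
      print("pattern-frequency mismatch:", d)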

  2. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh; Genton, Marc G.

    2014-01-01

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte

  3. Application of catastrophe theory to a point model for bumpy torus with neoclassical non-resonant electrons

    Energy Technology Data Exchange (ETDEWEB)

    Punjabi, A; Vahala, G [College of William and Mary, Williamsburg, VA (USA). Dept. of Physics

    1983-12-01

    The point model for the toroidal core plasma in the ELMO Bumpy Torus (with neoclassical non-resonant electrons) is examined in the light of catastrophe theory. Even though the point model equations do not constitute a gradient dynamic system, the equilibrium surfaces are similar to those of the canonical cusp catastrophe. The point model is then extended to incorporate ion cyclotron resonance heating. A detailed parametric study of the equilibria is presented. Further, the nonlinear time evolution of these equilibria is studied, and it is observed that the point model obeys the delay convention (and hence hysteresis) and shows catastrophes at the fold edges of the equilibrium surfaces. Tentative applications are made to experimental results.

  4. The OPALS Plan for Operations: Use of ISS Trajectory and Attitude Models in the OPALS Pointing Strategy

    Science.gov (United States)

    Abrahamson, Matthew J.; Oaida, Bogdan; Erkmen, Baris

    2013-01-01

    This paper will discuss the OPALS pointing strategy, focusing on incorporation of ISS trajectory and attitude models to build pointing predictions. Methods to extrapolate an ISS prediction based on past data will be discussed and will be compared to periodically published ISS predictions and Two-Line Element (TLE) predictions. The prediction performance will also be measured against GPS states available in telemetry. The performance of the pointing products will be compared to the allocated values in the OPALS pointing budget to assess compliance with requirements.

  5. Modelling Digital Knowledge Transfer: Nurse Supervisors Transforming Learning at Point of Care to Advance Nursing Practice

    Directory of Open Access Journals (Sweden)

    Carey Mather

    2017-05-01

    Full Text Available Limited adoption of mobile technology for informal learning and continuing professional development within Australian healthcare environments has been explained primarily as an issue of insufficient digital and ehealth literacy of healthcare professionals. This study explores nurse supervisors’ use of mobile technology for informal learning and continuing professional development both for their own professional practice, and in their role in modelling digital knowledge transfer, by facilitating the learning and teaching of nursing students in the workplace. A convenience sample of 27 nurse supervisors involved with guiding and supporting undergraduate nurses participated in one of six focus groups held in two states of Australia. Expanding knowledge emerged as the key theme of importance to this group of clinicians. Although nurse supervisors regularly browsed Internet sources for learning and teaching purposes, a mixed understanding of the mobile learning activities that could be included as informal learning or part of formal continuing professional development was detected. Participants need educational preparation and access to mobile learning opportunities to improve and maintain their digital and ehealth literacy to appropriately model digital professionalism with students. Implementation of mobile learning at point of care to enable digital knowledge transfer, augment informal learning for students and patients, and support continuing professional development opportunities is necessary. Embedding digital and ehealth literacy within nursing curricula will promote mobile learning as a legitimate nursing function and advance nursing practice.

  6. Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model

    Science.gov (United States)

    Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef

    2016-10-01

    We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many training images and what training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second method relies on a rate of change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.
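
    A hedged sketch of the first, distance-based selection strategy (synthetic data; the actual study works with overhead imagery and also uses the demon-algorithm change measure): cluster the snapshots and keep the snapshot nearest each cluster centre as a candidate training image.

      # Hedged sketch of distance-based selection of representative snapshots.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      snapshots = rng.random((200, 64 * 64))          # 200 flattened overhead images (synthetic)

      k = 3                                           # desired number of training images
      km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(snapshots)

      # Index of the snapshot closest to each cluster centre.
      reps = [int(np.argmin(np.linalg.norm(snapshots - c, axis=1)))
              for c in km.cluster_centers_]
      print("selected training-image snapshots:", reps)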

  7. Gaussian mixed model in support of semiglobal matching leveraged by ground control points

    Science.gov (United States)

    Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li

    2017-04-01

    Semiglobal matching (SGM) has been widely applied to large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term, based on GCPs, is formulated as a Gaussian mixture model, which strengthens the relation between the GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. Another term depends on pixel-wise confidence, and we further design a confidence updating equation based on three rules. With this confidence-based term, the assignment of disparity can be heuristically selected among disparity search ranges during the iteration process. Several iterations are sufficient to produce satisfactory results according to our experiments. Experimental results validate that the proposed method outperforms surface reconstruction, a representative variant of SGM that performs excellently on aerial images.
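
    An illustrative form of such a GCP-based data term (not the paper's exact formulation) is sketched below: the cost of a candidate disparity at a pixel is the negative log-likelihood under a Gaussian mixture whose components are the disparities of nearby GCPs, weighted by image-space distance. All parameter values are invented.

      # Illustrative GCP-based data term built from a Gaussian mixture of
      # nearby GCP disparities (not the paper's exact formulation).
      import numpy as np

      def gcp_data_term(pixel_xy, disparities, gcp_xy, gcp_disp, sigma_d=1.5, sigma_s=25.0):
          # Mixture weights fall off with distance from the pixel to each GCP.
          dist = np.linalg.norm(gcp_xy - pixel_xy, axis=1)
          w = np.exp(-0.5 * (dist / sigma_s) ** 2)
          w /= w.sum()
          # Likelihood of each candidate disparity under the mixture.
          diff = disparities[:, None] - gcp_disp[None, :]
          lik = (w[None, :] * np.exp(-0.5 * (diff / sigma_d) ** 2)).sum(axis=1)
          return -np.log(lik + 1e-12)      # cost per candidate disparity

      gcp_xy = np.array([[100.0, 120.0], [110.0, 118.0], [400.0, 90.0]])
      gcp_disp = np.array([32.0, 33.0, 70.0])
      costs = gcp_data_term(np.array([105.0, 119.0]), np.arange(20.0, 50.0), gcp_xy, gcp_disp)
      print("best disparity:", 20 + int(np.argmin(costs)))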

  8. Quench dynamics near a quantum critical point: Application to the sine-Gordon model

    International Nuclear Information System (INIS)

    De Grandi, C.; Polkovnikov, A.; Gritsev, V.

    2010-01-01

    We discuss the quench dynamics near a quantum critical point focusing on the sine-Gordon model as a primary example. We suggest a unified approach to sudden and slow quenches, where the tuning parameter λ(t) changes in time as λ(t) ∼ υt^r, based on the adiabatic expansion of the excitation probability in powers of υ. We show that the universal scaling of the excitation probability can be understood through the singularity of the generalized adiabatic susceptibility χ_{2r+2}(λ), which for sudden quenches (r=0) reduces to the fidelity susceptibility. In turn this class of susceptibilities is expressed through the moments of the connected correlation function of the quench operator. We analyze the excitations created after a sudden quench of the cosine potential using a combined approach of form-factors expansion and conformal perturbation theory for the low-energy and high-energy sector, respectively. We find the general scaling laws for the probability of exciting the system, the density of excited quasiparticles, the entropy and the heat generated after the quench. In the two limits where the sine-Gordon model maps to hard-core bosons and free massive fermions we provide the exact solutions for the quench dynamics and discuss the finite temperature generalizations.
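
    Schematically, and hedged as a sketch of the adiabatic perturbation theory referred to above rather than a result quoted from the paper, the leading-order excitation probability and the r = 0 (fidelity) susceptibility take the form

      P_{\mathrm{ex}} \approx \upsilon^{2}\,\chi_{2r+2}(\lambda), \qquad
      \chi_f(\lambda) \equiv \chi_{2}(\lambda) = \sum_{n \neq 0} \frac{|\langle n|\partial_\lambda H|0\rangle|^{2}}{(E_n - E_0)^{2}}.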

  9. The case for an internal dynamics model versus equilibrium point control in human movement.

    Science.gov (United States)

    Hinder, Mark R; Milner, Theodore E

    2003-06-15

    The equilibrium point hypothesis (EPH) was conceived as a means whereby the central nervous system could control limb movements by a relatively simple shift in equilibrium position without the need to explicitly compensate for task dynamics. Many recent studies have questioned this view with results that suggest the formation of an internal dynamics model of the specific task. However, supporters of the EPH have argued that these results are not incompatible with the EPH and that there is no reason to abandon it. In this study, we have tested one of the fundamental predictions of the EPH, namely, equifinality. Subjects learned to perform goal-directed wrist flexion movements while a motor provided assistance in proportion to the instantaneous velocity. It was found that the subjects stopped short of the target on the trials where the magnitude of the assistance was randomly decreased, compared to the preceding control trials (P = 0.003), i.e. equifinality was not achieved. This is contrary to the EPH, which predicts that final position should not be affected by external loads that depend purely on velocity. However, such effects are entirely consistent with predictions based on the formation of an internal dynamics model.

  10. Intrinsic point defects in zinc oxide. Modeling of structural, electronic, thermodynamic and kinetic properties

    Energy Technology Data Exchange (ETDEWEB)

    Erhart, P.

    2006-07-01

    The present dissertation deals with the modeling of zinc oxide on the atomic scale employing both quantum mechanical and atomistic methods. The first part describes quantum mechanical calculations, based on density functional theory, of intrinsic point defects in ZnO. To begin with, the geometric and electronic structure of vacancies and oxygen interstitials is explored. In equilibrium, oxygen interstitials are found to adopt dumbbell and split interstitial configurations in positive and negative charge states, respectively. Semi-empirical self-interaction corrections make it possible to significantly improve the agreement between the experimental and the calculated band structure; errors due to the limited size of the supercells can be corrected by employing finite-size scaling. The effect of both band structure corrections and finite-size scaling on defect formation enthalpies and transition levels is explored. Finally, transition paths and barriers for the migration of zinc as well as oxygen vacancies and interstitials are determined. The results make it possible to interpret diffusion experiments and provide a consistent basis for developing models for device simulation. In the second part an interatomic potential for zinc oxide is derived. To this end, the Pontifix computer code is developed, which allows analytic bond-order potentials to be fitted. The code is subsequently employed to obtain interatomic potentials for Zn-O, Zn-Zn, and O-O interactions. To demonstrate the applicability of the potentials, simulations of defect production by ion irradiation are carried out. (orig.)
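
    For context, the standard supercell expression for the formation enthalpy of a defect D in charge state q, which underlies such calculations (given here as a generic sketch, not with the thesis' specific corrections), reads

      \Delta H_f(D, q) = E_{\mathrm{tot}}(D, q) - E_{\mathrm{tot}}(\mathrm{bulk}) - \sum_i n_i \mu_i + q\,(\varepsilon_{\mathrm{VBM}} + \varepsilon_F) + E_{\mathrm{corr}},

    where n_i atoms of species i with chemical potential μ_i are added (n_i > 0) or removed (n_i < 0), ε_F is the Fermi level measured from the valence-band maximum ε_VBM, and E_corr collects finite-size corrections.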

  11. Point spread function modeling and image restoration for cone-beam CT

    International Nuclear Information System (INIS)

    Zhang Hua; Shi Yikai; Huang Kuidong; Xu Zhe

    2015-01-01

    X-ray cone-beam computed tomography (CT) has such notable features as high efficiency and precision, and is widely used in the fields of medical imaging and industrial non-destructive testing, but inherent imaging degradation reduces the quality of CT images. To address the problems of projection image degradation and restoration in cone-beam CT, a point spread function (PSF) modeling method is first proposed. A general PSF model of cone-beam CT is established and, based on it, the PSF under arbitrary scanning conditions can be calculated directly for projection image restoration without additional measurements, which greatly improves the practical convenience of cone-beam CT. Secondly, a projection image restoration algorithm based on pre-filtering and pre-segmentation is proposed, which makes the edge contours in projection images and slice images clearer after restoration while keeping the noise at a level equivalent to that of the original images. Finally, experiments verified the feasibility and effectiveness of the proposed methods. (authors)
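
    The restoration algorithm above is described only at a high level, so the following is a generic illustration (not the authors' method) of how a modeled PSF can be used to restore a projection image: a hand-rolled Richardson-Lucy deconvolution with a Gaussian stand-in for the cone-beam CT PSF; all sizes and noise levels are invented.

      # Generic illustration of PSF-based restoration: Richardson-Lucy
      # deconvolution with a Gaussian stand-in for the modeled PSF.
      import numpy as np
      from scipy.signal import fftconvolve

      def gaussian_psf(size=15, sigma=2.0):
          ax = np.arange(size) - size // 2
          xx, yy = np.meshgrid(ax, ax)
          psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
          return psf / psf.sum()

      def richardson_lucy(blurred, psf, iters=30):
          est = np.full_like(blurred, blurred.mean())
          psf_flip = psf[::-1, ::-1]
          for _ in range(iters):
              conv = fftconvolve(est, psf, mode="same") + 1e-12
              est *= fftconvolve(blurred / conv, psf_flip, mode="same")
          return est

      # Synthetic test: a sharp-edged object blurred by the PSF plus noise.
      rng = np.random.default_rng(0)
      truth = np.zeros((128, 128)); truth[40:90, 50:100] = 1.0
      psf = gaussian_psf()
      blurred = fftconvolve(truth, psf, mode="same") + 0.01 * rng.standard_normal(truth.shape)

      restored = richardson_lucy(np.clip(blurred, 1e-6, None), psf)
      print("edge sharpness (gradient max) before/after:",
            np.abs(np.gradient(blurred)[1]).max(), np.abs(np.gradient(restored)[1]).max())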

  12. CP^(N-1) models with a θ term and fixed point action

    International Nuclear Information System (INIS)

    Burkhalter, Rudolf; Imachi, Masahiro; Shinno, Yasuhiko; Yoneyama, Hiroshi

    2001-01-01

    The topological charge distribution P(Q) is calculated for lattice CP^(N-1) models. In order to suppress lattice cutoff effects, we employ a fixed point (FP) action. Through transformation of P(Q), we calculate the free energy F(θ) as a function of the θ parameter. For N=4, scaling behavior is observed for P(Q) and F(θ), as well as the correlation lengths ξ(Q). For N=2, however, scaling behavior is not observed, as expected. For comparison, we also make a calculation for the CP^3 model with a standard action. We furthermore pay special attention to the behavior of P(Q) in order to investigate the dynamics of instantons. For this purpose, we carefully consider the behavior of γ_eff, which is an effective power of P(Q) (∼ exp(−CQ^γ_eff)), and reflects the local behavior of P(Q) as a function of Q. We study γ_eff for two cases, the dilute gas approximation based on the Poisson distribution of instantons and the Debye-Hueckel approximation of instanton quarks. In both cases, we find behavior similar to that observed in numerical simulations. (author)
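
    The transformation from P(Q) to F(θ) mentioned above is, schematically and up to normalization conventions (a sketch, not a formula quoted from the paper),

      e^{-V\,F(\theta)} \;=\; \frac{Z(\theta)}{Z(0)} \;=\; \sum_{Q} P(Q)\, e^{i\theta Q},

    i.e. the free energy follows from a Fourier-type transform of the topological charge distribution, with V the lattice volume.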

  13. Points of Economic and Innovative Growth: a Model for Organizing the Effective Functioning of the Region

    Directory of Open Access Journals (Sweden)

    D. D. Burkaltseva

    2017-01-01

    Full Text Available Purpose: the main goal of the article is to build a conceptual model for the organization of effective functioning of the points of economic and innovative growth of the region in modern conditions, taking into account regional and municipal limitations of an internal and external nature, with the aim of ensuring economic security and effective interaction of the subjects of the "business-power" system, taking into account the influence of institutional factors. Methods: the methodological basis of the research is the dialectical method of scientific cognition and the systemic and institutional approach to studying and building an organization for the effective functioning of the regional economy in order to ensure its economic security against internal and external threats. Results: the existing mechanism of "business-power" interaction is considered. The financial stability of economic entities of the Republic of Crimea is determined. The financial independence of the regional budget of the Republic of Crimea has been determined. The dynamics of financing of the Federal Target Program "Social and Economic Development of the Republic of Crimea and Sevastopol until 2020" has been revealed. The regional and municipal restrictions of an internal and external nature, which constitute a threat to social and economic development, are indicated. Points of economic and innovative growth at the present stage, their advantages, and the stages of the technical organization of their implementation have been determined. A conceptual model for building effective interaction between the subjects of the "business-power" system is proposed, taking into account the influence of institutional factors. A conceptual model for the organization of the effective functioning of points of economic and innovative growth of the region, as a territorial socio-economic system, under modern conditions is constructed. Conclusions and Relevance: we propose to define four

  14. Measuring brightness temperature distributions of plasma bunches

    International Nuclear Information System (INIS)

    Kirko, V.I.; Stadnichenko, I.A.

    1981-01-01

    The possibility of reconstructing the brightness temperature distribution along a plasma jet on the basis of simple ultra-high-speed photography and subsequent photometric processing is shown. The developed technique has been applied to determine the spectral radiation intensity and brightness temperature of the plasma jets of a tubular gas-cumulative charge and of an explosive plasma compressor. The problem of the shock wave front has been successfully resolved, and thus the distribution of the above parameters, beginning from the region preceding the shock wave, has been obtained [ru]

  15. Bright THz Instrument and Nonlinear THz Science

    Science.gov (United States)

    2017-10-30

    Report: Bright THz Instrument and Nonlinear THz Science. The views, opinions and/or findings contained in this report are those of the author(s) and... Number: W911NF-16-1-0436. Organization: University of Rochester. Title: Bright THz Instrument and Nonlinear THz Science. Report Term: 0-Other. Email: xi... ...exploring new cutting-edge research and broader applications, following the significant development of THz science and technology in the late 80's, is the

  16. The surface brightness of spiral galaxies

    International Nuclear Information System (INIS)

    Phillipps, S.; Disney, M.

    1983-01-01

    It is proposed that Freeman's discovery that the extrapolated central surface brightness of spiral galaxies is approximately constant can be simply explained if the galaxies contain a spheroidal component which dominates the light in their outer isophotes. Calculations of an effective central surface brightness indicate a wide spread of values. This requires either a wide spread in disc properties or significant spheroidal components or, most probably, both. (author)

  17. The surface brightness of spiral galaxies

    International Nuclear Information System (INIS)

    Disney, M.; Phillipps, S.

    1985-01-01

    The intrinsic surface brightness S_e of 500 disc galaxies (0 ≤ T ≤ 9) drawn from the Second Reference Catalogue is computed, and it is shown that S_e does not correlate significantly with M_B, (B-V) or type. This is consistent with the notion that there is a heavy selection bias in favour of disc galaxies with that particular surface brightness which allows inclusion in the catalogue over the largest volume of space. (author)

  18. A spectral k-means approach to bright-field cell image segmentation.

    Science.gov (United States)

    Bradbury, Laura; Wan, Justin W L

    2010-01-01

    Automatic segmentation of bright-field cell images is important to cell biologists, but difficult to complete due to the complex nature of the cells in bright-field images (poor contrast, broken halo, missing boundaries). Standard approaches such as level set segmentation and active contours work well for fluorescent images, where cells appear as round shapes, but become less effective when optical artifacts such as halos exist in bright-field images. In this paper, we present a robust segmentation method which combines the spectral and k-means clustering techniques to locate cells in bright-field images. This approach models an image as a matrix graph and segments different regions of the image by computing the appropriate eigenvectors of the matrix graph and applying the k-means algorithm. We illustrate the effectiveness of the method with segmentation results for C2C12 (muscle) cells in bright-field images.
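
    A compact, hedged illustration of the graph-spectral plus k-means pipeline (not the authors' implementation): the synthetic blobs below merely stand in for low-contrast bright-field cells, and scikit-learn's img_to_graph and spectral_clustering are assumed to be available.

      # Compact illustration of graph-based spectral clustering with k-means
      # label assignment on a synthetic, low-contrast test image.
      import numpy as np
      from sklearn.feature_extraction.image import img_to_graph
      from sklearn.cluster import spectral_clustering

      rng = np.random.default_rng(0)
      img = np.zeros((60, 60))
      yy, xx = np.mgrid[:60, :60]
      img[(yy - 20)**2 + (xx - 20)**2 < 100] = 0.30    # faint "cell" 1
      img[(yy - 40)**2 + (xx - 42)**2 < 120] = 0.35    # faint "cell" 2
      img += 0.05 * rng.standard_normal(img.shape)     # noise, poor contrast

      # Build the pixel adjacency graph; edge weights decay with intensity gradient.
      graph = img_to_graph(img)
      graph.data = np.exp(-graph.data / (graph.data.std() + 1e-12))

      labels = spectral_clustering(graph, n_clusters=3, assign_labels="kmeans",
                                   random_state=0)
      segmentation = labels.reshape(img.shape)
      print("pixels per segment:", np.bincount(labels))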

  19. An Evaluation of Recent Pick-up Point Experiments in European Cities: the Rise of two Competing Models?

    OpenAIRE

    AUGEREAU, V; DABLANC, L

    2007-01-01

    In this paper, we present an analysis of recent collection point/lockerbank experiments in Europe, including the history of some of the most notable experiments. Two 'models' are currently quite successful (Kiala relay points in France and Packstation locker banks in Germany), although they are quite different. As a first interpretation of these results, we propose that these two models be considered as complementary to one another.

  20. Modeling hydrology, groundwater recharge and non-point nitrate loadings in the Himalayan Upper Yamuna basin.

    Science.gov (United States)

    Narula, Kapil K; Gosain, A K

    2013-12-01

    The mountainous Himalayan watersheds are important hydrologic systems responsible for much of the water supply in the Indian sub-continent. These watersheds are increasingly facing anthropogenic and climate-related pressures that impact spatial and temporal distribution of water availability. This study evaluates temporal and spatial distribution of water availability including groundwater recharge and quality (non-point nitrate loadings) for a Himalayan watershed, namely, the Upper Yamuna watershed (part of the Ganga River basin). The watershed has an area of 11,600 km(2) with elevation ranging from 6300 to 600 m above mean sea level. Soil and Water Assessment Tool (SWAT), a physically-based, time-continuous model, has been used to simulate the land phase of the hydrological cycle, to obtain streamflows, groundwater recharge, and nitrate (NO3) load distributions in various components of runoff. The hydrological SWAT model is integrated with the MODular finite difference groundwater FLOW model (MODFLOW), and Modular 3-Dimensional Multi-Species Transport model (MT3DMS), to obtain groundwater flow and NO3 transport. Validation of various modules of this integrated model has been done for sub-basins of the Upper Yamuna watershed. Results on surface runoff and groundwater levels obtained as outputs from simulation show a good comparison with the observed streamflows and groundwater levels (Nash-Sutcliffe and R(2) correlations greater than +0.7). Nitrate loading obtained after nitrification, denitrification, and NO3 removal from unsaturated and shallow aquifer zones is combined with groundwater recharge. Results for nitrate modeling in groundwater aquifers are compared with observed NO3 concentration and are found to be in good agreement. The study further evaluates the sensitivity of water availability to climate change. Simulations have been made with the weather inputs of climate change scenarios of A2, B2, and A1B for end of the century. Water yield estimates under