A MARKED POINT PROCESS MODEL FOR VEHICLE DETECTION IN AERIAL LIDAR POINT CLOUDS
Directory of Open Access Journals (Sweden)
A. Börcs
2012-07-01
In this paper we present an automated method for vehicle detection in LiDAR point clouds of crowded urban areas collected from an aerial platform. We assume that the input cloud is unordered but contains additional intensity and return-number information, which are jointly exploited by the proposed solution. First, the 3-D point set is segmented into ground, vehicle, building roof, vegetation and clutter classes. Then the points, with their class labels and intensity values, are projected to the ground plane, where the optimal vehicle configuration is described by a Marked Point Process (MPP) model of 2-D rectangles. Finally, the Multiple Birth and Death algorithm is utilized to find the configuration with the highest confidence.
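The multiple birth and death idea can be illustrated in miniature. The following sketch is not the authors' implementation: the rectangle score function, the 100x100 scene size, the cooling schedule and the logistic death rule are all illustrative assumptions; a real detector would derive the data term from the projected LiDAR intensities.

```python
import math
import random

def overlap(r1, r2):
    # intersection area of two axis-aligned rectangles (x, y, w, h)
    ix = max(0.0, min(r1[0] + r1[2], r2[0] + r2[2]) - max(r1[0], r2[0]))
    iy = max(0.0, min(r1[1] + r1[3], r2[1] + r2[3]) - max(r1[1], r2[1]))
    return ix * iy

def energy(config, score):
    # data term rewards well-fitting rectangles; prior term penalizes overlap
    e = sum(1.0 - score(r) for r in config)
    e += sum(overlap(a, b) for i, a in enumerate(config) for b in config[i + 1:])
    return e

def multiple_birth_death(score, steps=50, birth_rate=2, t0=1.0, cool=0.97, seed=0):
    rng = random.Random(seed)
    config, temp = [], t0
    for _ in range(steps):
        # birth: propose a few random 5x3 rectangles in a 100x100 scene
        for _ in range(birth_rate):
            config.append((rng.uniform(0, 95), rng.uniform(0, 97), 5.0, 3.0))
        # death: remove each object with a probability that grows with the
        # energy saved by removing it (illustrative logistic death rule)
        for r in list(config):
            rest = [q for q in config if q is not r]
            de = energy(config, score) - energy(rest, score)
            z = max(min(de / temp, 50.0), -50.0)
            if rng.random() < 1.0 / (1.0 + math.exp(-z)):
                config = rest
        temp *= cool
    return config

cfg = multiple_birth_death(lambda r: 0.5)
```

The alternating birth and death phases, with a temperature that cools over time, are what distinguish this scheme from a plain Metropolis-Hastings sampler.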
Marked point process for modelling seismic activity (case study in Sumatra and Java)
Pratiwi, Hasih; Sulistya Rini, Lia; Wayan Mangku, I.
2018-05-01
An earthquake is a natural phenomenon that is random and irregular in space and time. Forecasting earthquake occurrence at a given location remains difficult, so the development of forecasting methodology continues from both the seismological and the stochastic perspective. To describe phenomena that are random in both space and time, a point process approach can be used. There are two types of point processes: temporal and spatial. A temporal point process describes events observed as a sequence in time, whereas a spatial point process describes the locations of objects in two- or three-dimensional space. The points of a point process can be labelled with additional information called marks. A marked point process can be considered as a pair (x, m), where x is the location of a point and m is the mark attached to it. This study models a time-indexed marked point process on earthquake data from Sumatra Island and Java Island. The model can be used to analyse seismic activity through its intensity function, conditioning on the history of the process up to time t. Based on data obtained from the U.S. Geological Survey from 1973 to 2017 with magnitude threshold 5, we obtained maximum likelihood estimates for the parameters of the intensity function. The estimated model parameters show that seismic activity in Sumatra Island is greater than in Java Island.
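The kind of conditional intensity function described here can be made concrete with a small, hedged example. The exponential kernel, the mark coefficient `delta`, and the grid search over the background rate `mu` below are illustrative choices (the ETAS-style models used in seismology are richer); they only show how the intensity and its log-likelihood fit together.

```python
from math import exp, log

def intensity(t, events, mu, alpha, beta, delta, m0):
    """Conditional intensity of a marked self-exciting process:
    background rate mu plus mark-weighted, exponentially decaying
    contributions from past events (ti, mi)."""
    return mu + sum(alpha * exp(delta * (m - m0)) * exp(-beta * (t - ti))
                    for ti, m in events if ti < t)

def log_likelihood(events, T, mu, alpha, beta, delta, m0):
    # sum of log-intensities at event times minus the integrated intensity
    ll = sum(log(intensity(ti, events, mu, alpha, beta, delta, m0))
             for ti, _ in events)
    integral = mu * T + sum(alpha / beta * exp(delta * (m - m0))
                            * (1.0 - exp(-beta * (T - ti)))
                            for ti, m in events)
    return ll - integral

def fit_mu(events, T, alpha, beta, delta, m0, grid):
    # crude 1-D maximum-likelihood search over the background rate mu
    return max(grid, key=lambda mu: log_likelihood(events, T, mu, alpha,
                                                   beta, delta, m0))

quakes = [(1.0, 5.0), (2.0, 5.0), (3.0, 5.0), (4.0, 5.0)]
mu_hat = fit_mu(quakes, 10.0, 0.0, 1.0, 1.0, 5.0,
                [0.1 * k for k in range(1, 11)])
```

With `alpha=0` the model collapses to a homogeneous Poisson process, for which the grid search recovers the usual estimate n/T = 0.4.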
DEFF Research Database (Denmark)
Møller, Jesper; Ghorbani, Mohammad; Rubak, Ege Holger
We show how a spatial point process, where to each point there is associated a random quantitative mark, can be identified with a spatio-temporal point process specified by a conditional intensity function. For instance, the points can be tree locations, the marks can express the size of trees......, and the conditional intensity function can describe the distribution of a tree (i.e., its location and size) conditionally on the larger trees. This enables us to construct parametric statistical models which are easily interpretable and where likelihood-based inference is tractable. In particular, we consider maximum...
Fingerprint Analysis with Marked Point Processes
DEFF Research Database (Denmark)
Forbes, Peter G. M.; Lauritzen, Steffen; Møller, Jesper
We present a framework for fingerprint matching based on marked point process models. An efficient Monte Carlo algorithm is developed to calculate the marginal likelihood ratio for the hypothesis that two observed prints originate from the same finger against the hypothesis that they originate from...... different fingers. Our model achieves good performance on an NIST-FBI fingerprint database of 258 matched fingerprint pairs....
Energy risk management through self-exciting marked point process
International Nuclear Information System (INIS)
Herrera, Rodrigo
2013-01-01
Crude oil is a dynamically traded commodity that affects many economies. We propose a collection of marked self-exciting point processes with dependent arrival rates for extreme events in oil markets, and related risk measures. The models treat the time among extreme events in oil markets as a stochastic process. The main advantage of this approach is its capability to capture the short-, medium- and long-term behavior of extremes without involving an arbitrary stochastic volatility model or a prefiltration of the data, as is common in extreme value theory applications. We use the proposed model to obtain an improved estimate for the Value at Risk in oil markets. Empirical findings suggest that the reliability and stability of Value at Risk estimates improve as a result of the finer modeling approach. This is supported by an empirical application in the representative West Texas Intermediate (WTI) and Brent crude oil markets. - Highlights: • We propose marked self-exciting point processes for extreme events in oil markets. • This approach captures the short and long-term behavior of extremes. • We improve the estimates for the VaR in the WTI and Brent crude oil markets
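A minimal way to see the "dependent arrival rates" mechanism is to simulate an unmarked self-exciting (Hawkes) process by Ogata's thinning algorithm. The parameter values below are arbitrary; the paper's mark distributions and risk measures are not reproduced here.

```python
import random
from math import exp

def simulate_hawkes(mu, alpha, beta, T, seed=1):
    """Simulate event times of a self-exciting process on [0, T] by Ogata's
    thinning: between events the intensity only decays, so its value at the
    current time is a valid upper bound for the next candidate."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while t < T:
        lam_bar = mu + sum(alpha * exp(-beta * (t - ti)) for ti in times)
        t += rng.expovariate(lam_bar)      # candidate from the bounding rate
        if t >= T:
            break
        lam_t = mu + sum(alpha * exp(-beta * (t - ti)) for ti in times)
        if rng.random() <= lam_t / lam_bar:  # accept with ratio of intensities
            times.append(t)
    return times

# branching ratio alpha/beta = 0.8 < 1 keeps the process stable
times = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.0, T=50.0)
```

Each accepted event raises the intensity, so events cluster: exactly the behavior used above to model bursts of extreme price moves.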
A Marked Point Process Framework for Extracellular Electrical Potentials
Directory of Open Access Journals (Sweden)
Carlos A. Loza
2017-12-01
Neuromodulations are an important component of extracellular electrical potentials (EEP), such as the Electroencephalogram (EEG), Electrocorticogram (ECoG) and Local Field Potentials (LFP). This temporally organized, multi-frequency transient (phasic) activity reflects the multiscale spatiotemporal synchronization of neuronal populations in response to external stimuli or internal physiological processes. We propose a novel generative statistical model of a single EEP channel, where the collected signal is regarded as the noisy addition of reoccurring multi-frequency phasic events over time. One of the main advantages of the proposed framework is the exceptional temporal resolution in locating the EEP phasic events, up to the sampling period utilized in the data collection. This allows, for the first time, a description of neuromodulation in EEPs as a Marked Point Process (MPP), with events represented by their amplitude, center frequency, duration, and time of occurrence. The generative model for the multi-frequency phasic events exploits sparseness and involves a shift-invariant implementation of the clustering technique known as k-means. The cost function incorporates a robust estimation component based on correntropy to mitigate the outliers caused by the inherent noise in the EEP. Lastly, the background EEP activity is explicitly modeled as the non-sparse component of the collected signal to further improve the delineation of the multi-frequency phasic events in time. The framework is validated using two publicly available datasets: the DREAMS sleep spindles database and one of the Brain-Computer Interface (BCI) competition datasets. The results achieve benchmark performance and provide novel quantitative descriptions based on power, event rates and timing in order to assess behavioral correlates beyond the classical power spectrum-based analysis. This opens the possibility for a unifying point process framework of
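The role of correntropy as a robust cost can be isolated in a few lines. The sketch below is not the paper's shift-invariant k-means; it only demonstrates, under an assumed kernel width `sigma`, how a correntropy-style fixed-point estimate discounts outliers that would dominate an ordinary mean.

```python
from math import exp

def correntropy_mean(xs, sigma=1.0, iters=20):
    """Fixed-point iteration for a correntropy-based robust location
    estimate: samples far from the current estimate get exponentially
    small Gaussian-kernel weights instead of dominating the average."""
    m = sorted(xs)[len(xs) // 2]          # start from the median
    for _ in range(iters):
        w = [exp(-(x - m) ** 2 / (2 * sigma ** 2)) for x in xs]
        m = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return m

inliers = [0.1, -0.2, 0.05, 0.0, -0.1]
data = inliers + [100.0]                  # one gross outlier
robust = correntropy_mean(data)
plain = sum(data) / len(data)
```

The plain mean is dragged far from zero by the single outlier, while the correntropy-weighted estimate stays with the inliers; the same effect protects the learned phasic-event templates from noisy snippets.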
RAPID INSPECTION OF PAVEMENT MARKINGS USING MOBILE LIDAR POINT CLOUDS
Directory of Open Access Journals (Sweden)
H. Zhang
2016-06-01
This study aims at building a robust semi-automated pavement marking extraction workflow based on the use of mobile LiDAR point clouds. The proposed workflow consists of three components: preprocessing, extraction, and classification. In preprocessing, the mobile LiDAR point clouds are converted into radiometrically corrected intensity imagery of the road surface. Then the pavement markings are automatically extracted from the intensity imagery using a set of algorithms, including Otsu's thresholding, neighbor-counting filtering, and region growing. Finally, the extracted pavement markings are classified using geometric parameters and a manually defined decision tree. Case studies are conducted using mobile LiDAR datasets with different road environments acquired in Xiamen (Fujian, China) by the RIEGL VMX-450 system. The results demonstrate that the proposed workflow and our software tool achieve 93% completeness, 95% correctness, and a 94% F-score on the Xiamen dataset.
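Otsu's thresholding, the first extraction step named above, is easy to state concretely. This is a generic textbook implementation, not the authors' code; the "road" and "marking" intensity values are made up for illustration.

```python
def otsu_threshold(values, bins=256):
    """Otsu's method: scan all thresholds and keep the one maximizing the
    between-class variance w0*w1*(m0-m1)^2 of the two resulting classes."""
    hist = [0] * bins
    for v in values:
        hist[int(v)] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(bins):
        w0 += hist[t]                     # class 0: values <= t
        if w0 == 0:
            continue
        w1 = total - w0                   # class 1: values > t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

road = [20, 22, 25, 24, 21] * 20          # dark asphalt intensities (invented)
marking = [200, 210, 205, 198] * 5        # bright paint returns (invented)
t = otsu_threshold(road + marking)
```

On this bimodal toy histogram the returned threshold falls between the asphalt and paint clusters, which is exactly what makes it a reasonable first cut before neighbor-counting filtering and region growing.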
Directory of Open Access Journals (Sweden)
Adam J Weis
To evaluate myofibroblast differentiation as an etiology of haze at the graft-host interface in a cat model of Descemet's Stripping Automated Endothelial Keratoplasty (DSAEK). DSAEK was performed on 10 eyes of 5 adult domestic short-hair cats. In vivo corneal imaging with slit lamp, confocal microscopy, and optical coherence tomography (OCT) was performed twice weekly. Cats were sacrificed and corneas harvested 4 hours, and 2, 4, 6, and 9 days post-DSAEK. Corneal sections were stained with the TUNEL method, and immunohistochemistry was performed for α-smooth muscle actin (α-SMA) and fibronectin with DAPI counterstain. At all in vivo imaging time-points, corneal OCT revealed an increase in backscatter of light and confocal imaging revealed an acellular zone at the graft-host interface. At all post-mortem time-points, immunohistochemistry revealed a complete absence of α-SMA staining at the graft-host interface. At 4 hours, extracellular fibronectin staining was identified along the graft-host interface, and both fibronectin and TUNEL assay were positive within adjacent cells extending into the host stroma. By day 2, fibronectin and TUNEL staining diminished and a distinct acellular zone was present in the region of previously TUNEL-positive cells. OCT imaging consistently showed increased reflectivity at the graft-host interface in cat corneas in the days post-DSAEK. This was not associated with myofibroblast differentiation at the graft-host interface, but rather with apoptosis and the development of a subsequent acellular zone. The roles of extracellular matrix changes and keratocyte cell death and repopulation should be investigated further as potential contributors to the interface optical changes.
SINGLE TREE DETECTION FROM AIRBORNE LASER SCANNING DATA USING A MARKED POINT PROCESS BASED METHOD
Directory of Open Access Journals (Sweden)
J. Zhang
2013-05-01
Tree detection and reconstruction are of great interest in large-scale city modelling. In this paper, we present a marked point process model to detect single trees from airborne laser scanning (ALS) data. We consider single trees in the canopy height model (CHM) recovered from ALS as a realization of a point process of circles. Unlike a traditional marked point process, we sample the model in a constrained configuration space by making use of image processing techniques. A Gibbs energy is defined on the model, containing a data term which judges the fitness of the model with respect to the data, and a prior term which incorporates prior knowledge of object layouts. We search for the optimal configuration through a steepest gradient descent algorithm. The presented hybrid framework was tested on three forest plots, and experiments show the effectiveness of the proposed method.
THE MARK I BUSINESS SYSTEM SIMULATION MODEL
of a large-scale business simulation model as a vehicle for doing research in management controls. The major results of the program were the development of the Mark I business simulation model and the Simulation Package (SIMPAC). SIMPAC is a method and set of programs facilitating the construction of large simulation models. The object of this document is to describe the Mark I Corporation model, state why parts of the business were modeled as they were, and indicate the research applications of the model. (Author)
Witnesses to the truth: Mark's point of view
African Journals Online (AJOL)
2016-08-12
Aug 12, 2016 ... given to the role, function and rhetorical impact of point of view. It is argued that ... his point of view. Because the voice from heaven addressed Jesus directly, ... τῶν Φαρισαίων) see that Jesus is sitting with the tax collectors.
Tournaire, O.; Paparoditis, N.
Road detection has been a topic of great interest in the photogrammetric and remote sensing communities since the end of the 1970s. Many approaches have been presented, dealing with various sensor resolutions, scene types and the desired accuracy of the extracted objects. The topic remains challenging today as the need for accurate and up-to-date data becomes more and more important. In this context, we study the road network from a particular point of view, focusing on road marks, and in particular dashed lines. Indeed, these are very useful clues, both as evidence of a road and for higher-level tasks. For instance, they can be used to enhance quality and to improve road databases. It is also possible to delineate the different circulation lanes, their width and functionality (speed limit, special lanes for buses or bicycles...). In this paper, we propose a new robust and accurate top-down approach for dashed line detection based on stochastic geometry. Our approach is automatic in the sense that no intervention from a human operator is necessary to initialise the algorithm or to track errors during the process. The core of our approach relies on defining geometric, radiometric and relational models for dashed-line objects. The model also has to deal with the interactions between the different objects making up a line, meaning that it introduces external knowledge taken from specifications. Our strategy is based on a stochastic method, in particular marked point processes. Our goal is to find the object configuration minimising an energy function made up of a data attachment term, measuring the consistency of the image with respect to the objects, and a regularising term, managing the relationships between neighbouring objects. To sample the energy function, we use Green's algorithm coupled with simulated annealing to find its minimum. Results from aerial images at various resolutions are presented showing that our
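The energy-minimisation strategy (a data attachment term plus a regularising term, optimised stochastically with simulated annealing) can be sketched generically. The toy below uses plain simulated annealing over a fixed number of objects rather than Green's reversible-jump algorithm, and the dash observations, the gap prior of 2.0 and the proposal scale are invented for illustration.

```python
import random
from math import exp

def simulated_annealing(init, energy, propose, t0=1.0, cool=0.995,
                        steps=2000, seed=0):
    """Generic simulated annealing: accept worse configurations with
    probability exp(-dE/T) and return the best state ever visited."""
    rng = random.Random(seed)
    state, e = init, energy(init)
    best, best_e, temp = state, e, t0
    for _ in range(steps):
        cand = propose(state, rng)
        de = energy(cand) - e
        if de <= 0 or rng.random() < exp(-de / temp):
            state, e = cand, energy(cand)
            if e < best_e:
                best, best_e = state, e
        temp *= cool
    return best, best_e

# Toy "dashed line": recover regularly spaced dash centres from noisy data.
obs = [0.1, 2.05, 3.9, 6.1, 7.95]

def energy(xs):
    # data attachment term + regularity prior on consecutive gaps (~2.0)
    data = sum((x - o) ** 2 for x, o in zip(xs, obs))
    reg = sum((b - a - 2.0) ** 2 for a, b in zip(xs, xs[1:]))
    return data + reg

def propose(xs, rng):
    # perturb one dash centre by a small Gaussian step
    i = rng.randrange(len(xs))
    return [x + (0.1 * rng.gauss(0, 1) if j == i else 0.0)
            for j, x in enumerate(xs)]

best, best_e = simulated_annealing([0.0, 2.0, 4.0, 6.0, 8.0], energy, propose)
```

Tracking the best-so-far state means the returned energy can never exceed that of the initial configuration, a cheap safeguard when the acceptance rule occasionally takes uphill moves.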
Definition of distance for nonlinear time series analysis of marked point process data
Energy Technology Data Exchange (ETDEWEB)
Iwayama, Koji, E-mail: koji@sat.t.u-tokyo.ac.jp [Research Institute for Food and Agriculture, Ryukoku University, 1-5 Yokotani, Seta Oe-cho, Otsu-Shi, Shiga 520-2194 (Japan); Hirata, Yoshito; Aihara, Kazuyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan)
2017-01-30
Marked point process data are time series of discrete events accompanied by some values, such as economic trades, earthquakes, and lightning strikes. A distance for marked point process data allows us to apply nonlinear time series analysis to such data. We propose a distance for marked point process data which can be calculated much faster than the existing distance when the number of marks is small. Furthermore, under some assumptions, the Kullback–Leibler divergences between posterior distributions for neighbors defined by this distance are small. We performed numerical simulations showing that analysis based on the proposed distance is effective. - Highlights: • A new distance for marked point process data is proposed. • The distance can be computed fast enough for a small number of marks. • A method to optimize the parameter values of the distance is also proposed. • Numerical simulations indicate that the analysis based on the distance is effective.
Edit distance for marked point processes revisited: An implementation by binary integer programming
Energy Technology Data Exchange (ETDEWEB)
Hirata, Yoshito; Aihara, Kazuyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan)
2015-12-15
We implement the edit distance for marked point processes [Suzuki et al., Int. J. Bifurcation Chaos 20, 3699–3708 (2010)] as a binary integer program. Compared with the previous implementation using minimum cost perfect matching, the proposed implementation has two advantages: first, it lets us apply a wide variety of software and hardware, even spin glasses and coherent Ising machines, to calculate the edit distance for marked point processes; second, it runs faster than the previous implementation when the difference between the numbers of events in the two time windows of a marked point process is large.
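For intuition, a simplified edit distance between marked event sequences can be computed by dynamic programming, in the spirit of Victor-Purpura spike-train distances. This is neither the binary-integer-program formulation nor the exact cost structure of Suzuki et al.; the unit insert/delete cost and the shift and mark penalties below are illustrative.

```python
def edit_distance(a, b, shift_cost=1.0, mark_cost=1.0):
    """Edit distance between two marked point processes given as sorted
    lists of (time, mark): insertion/deletion costs 1, and matching two
    events costs shift_cost*|dt| plus mark_cost if the marks differ."""
    n, m = len(a), len(b)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = float(i)            # delete all remaining events of a
    for j in range(1, m + 1):
        D[0][j] = float(j)            # insert all remaining events of b
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            ti, mi = a[i - 1]
            tj, mj = b[j - 1]
            match = shift_cost * abs(ti - tj) + (mark_cost if mi != mj else 0.0)
            D[i][j] = min(D[i - 1][j] + 1,      # delete a's event
                          D[i][j - 1] + 1,      # insert b's event
                          D[i - 1][j - 1] + match)  # match the pair
    return D[n][m]
```

Matching an event pair is preferred whenever the shift-plus-mark cost is below the cost of one deletion plus one insertion, which is what makes the distance sensitive to small timing jitter without over-penalising it.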
Aydin, Orhun; Caers, Jef Karel
2017-08-01
Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled in based on tectonic knowledge and the interpreter's intuition about fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods exist that address specific sources of fault network uncertainty and specific complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough and Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed
International Nuclear Information System (INIS)
Kimpland, R.H.
1996-01-01
A normalized form of the point kinetics equations, a prompt jump approximation, and the Nordheim-Fuchs model are used to model nuclear systems. Reactivity feedback mechanisms considered include volumetric expansion, thermal neutron temperature effect, Doppler effect and void formation. A sample problem of an excursion occurring in a plutonium solution accidentally formed in a glovebox is presented
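The Nordheim-Fuchs excursion model mentioned here can be integrated directly. The sketch below uses explicit Euler with purely illustrative parameter values (reactivity insertion `rho0`, energy feedback coefficient `gamma`, generation time `gen_time`); it only shows the characteristic rise, peak and feedback-driven shutdown of a power excursion, not a safety-grade calculation.

```python
def nordheim_fuchs(rho0=0.002, gamma=1e-5, gen_time=1e-4,
                   p0=1.0, dt=1e-4, t_end=2.0):
    """Explicit-Euler integration of a Nordheim-Fuchs-type excursion:
        dP/dt = (rho0 - gamma*E) / Lambda * P,   dE/dt = P,
    i.e. deposited energy E feeds reactivity back down (e.g. thermal
    expansion of a fissile solution). All parameter values are illustrative.
    Returns the power history P(t)."""
    P, E, t, history = p0, 0.0, 0.0, []
    while t < t_end:
        dP = (rho0 - gamma * E) / gen_time * P
        P += dP * dt
        E += P * dt
        t += dt
        history.append(P)
    return history

power = nordheim_fuchs()
peak = max(power)
```

Analytically, dP/dE = (rho0 - gamma*E)/Lambda, so the peak occurs when gamma*E = rho0 and the excursion self-terminates near E = 2*rho0/gamma; the numerical trajectory reproduces that rise-and-fall shape.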
Marked point process framework for living probabilistic safety assessment and risk follow-up
International Nuclear Information System (INIS)
Arjas, Elja; Holmberg, Jan
1995-01-01
We construct a model for living probabilistic safety assessment (PSA) by applying the general framework of marked point processes. The framework provides a theoretically rigorous approach for considering risk follow-up of posterior hazards. In risk follow-up, the hazard of core damage is evaluated synthetically at time points in the past, by using some observed events as logged history and combining it with re-evaluated potential hazards. There are several alternatives for doing this, of which we consider three here, calling them initiating event approach, hazard rate approach, and safety system approach. In addition, for a comparison, we consider a core damage hazard arising in risk monitoring. Each of these four definitions draws attention to a particular aspect in risk assessment, and this is reflected in the behaviour of the consequent risk importance measures. Several alternative measures are again considered. The concepts and definitions are illustrated by a numerical example
A marked point process approach for identifying neural correlates of tics in Tourette Syndrome.
Loza, Carlos A; Shute, Jonathan B; Principe, Jose C; Okun, Michael S; Gunduz, Aysegul
2017-07-01
We propose a novel interpretation of local field potentials (LFP) based on a marked point process (MPP) framework that models relevant neuromodulations as shifted, weighted versions of prototypical temporal patterns. In particular, the MPP samples are categorized according to the well-known oscillatory rhythms of the brain in an effort to elucidate spectrally specific behavioral correlates. The result is a transient model for LFP. We exploit data-driven techniques to fully estimate the model parameters, with the added feature of exceptional temporal resolution of the resulting events. We utilize the learned features in the alpha and beta bands to assess correlations to tic events in patients with Tourette Syndrome (TS). The final results show stronger coupling between LFP recorded from the centromedian-parafascicular complex of the thalamus and the tic marks, in comparison to electrocorticogram (ECoG) recordings from the hand area of the primary motor cortex (M1), in terms of the area under the receiver operating characteristic (ROC) curve (AUC).
Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data
Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.
2018-04-01
With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. First, the road surface is segmented through edge detection on scan lines. Then an intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted using adaptive threshold segmentation based on an integral image, without intensity calibration; noise is reduced by removing small plaque-like pixel groups from the binary image. Finally, the point cloud mapped from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms including template matching and feature-attribute filtering is used to classify linear markings, arrow markings and guidelines. Processing the point cloud data collected by a RIEGL VUX-1 in the case area shows that the F-score of marking extraction is 0.83 and the average classification rate is 0.90.
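The inverse distance weighted interpolation used to build the intensity image can be sketched in a few lines; the sample points, the distance power of 2 and the coincidence tolerance `eps` below are illustrative assumptions.

```python
def idw(x, y, samples, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered intensity
    samples (xi, yi, value) at a query point (x, y): nearer samples get
    weights 1/d^power."""
    num = den = 0.0
    for xi, yi, v in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 < eps:
            return v                      # query coincides with a sample
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

pts = [(0.0, 0.0, 10.0), (1.0, 0.0, 30.0)]   # invented intensity samples
mid = idw(0.5, 0.0, pts)
```

Evaluating `idw` on a regular grid over the segmented road surface yields exactly the kind of raster intensity image to which the adaptive thresholding is then applied.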
Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR
International Nuclear Information System (INIS)
Gao, Yang; Zhong, Ruofei; Liu, Xianlin; Tang, Tao; Wang, Liuzhao
2017-01-01
Pavement markings provide an important foundation as they help to keep road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement marking information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates of an object, thus defining spatial data and the intensity of 3D objects in a fast and efficient way. The RGB attribute information of data points can be obtained from the panoramic camera in the system. In this paper, we present a novel method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. The method utilizes a differential grayscale of RGB color, laser pulse reflection intensity, and the differential intensity to identify and extract pavement markings. We utilized point cloud density to remove noise and used morphological operations to eliminate errors. In the application, we tested our method on different sections of roads in Beijing, China, and Buffalo, NY, USA. The results indicated that both correctness (p) and completeness (r) were higher than 90%. The method can be applied to extract pavement markings from the huge point clouds produced by mobile LiDAR. (paper)
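The morphological clean-up step mentioned above is typically an opening, i.e. erosion followed by dilation with a small structuring element, which removes isolated speckle while preserving larger marking regions. The 3x3 element and the toy image below are illustrative, not the authors' configuration.

```python
def erode(img):
    """3x3 binary erosion: a pixel survives only if its full 3x3
    neighbourhood is set (border pixels are dropped)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r][c] = int(all(img[r + dr][c + dc]
                                for dr in (-1, 0, 1) for dc in (-1, 0, 1)))
    return out

def dilate(img):
    """3x3 binary dilation: a pixel is set if any in-bounds neighbour is set."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            out[r][c] = int(any(img[r + dr][c + dc]
                                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                                if 0 <= r + dr < h and 0 <= c + dc < w))
    return out

def open_binary(img):
    # morphological opening = erosion then dilation; removes speckle noise
    return dilate(erode(img))

noisy = [[0] * 7 for _ in range(7)]
noisy[1][1] = 1                      # isolated noise pixel
for r in range(3, 6):
    for c in range(3, 6):
        noisy[r][c] = 1              # 3x3 "marking" blob
opened = open_binary(noisy)
```

The isolated pixel disappears under opening while the 3x3 blob survives, mirroring how small errors are eliminated from the binary marking image.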
Kruse, Christian; Rottensteiner, Franz; Hoberg, Thorsten; Ziems, Marcel; Rebke, Julia; Heipke, Christian
2018-04-01
The aftermath of wartime attacks is often felt long after the war has ended, as numerous unexploded bombs may still exist in the ground. Typically, such areas are documented in so-called impact maps, which are based on the detection of bomb craters. This paper proposes a method for the automatic detection of bomb craters in aerial wartime images taken during the Second World War. The object model for the bomb craters is represented by ellipses. A probabilistic approach based on marked point processes determines the most likely configuration of objects within the scene. New object configurations are created by randomly adding objects to and removing them from the current configuration, changing their positions, and modifying the ellipse parameters. Each configuration is evaluated using an energy function: high gradient magnitudes along the border of an ellipse are favored, and overlapping ellipses are penalized. Reversible Jump Markov Chain Monte Carlo sampling in combination with simulated annealing provides the global energy optimum, which describes the conformance with a predefined model. To generate the impact map, a probability map is created from the automatic detections via kernel density estimation. By setting a threshold, areas around the detections are classified as contaminated or uncontaminated sites. Our results show the general potential of the method for the automatic detection of bomb craters and the automated generation of an impact map in a heterogeneous image stock.
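The impact-map step (a probability map from detections via kernel density estimation, then a threshold) can be sketched directly. The Gaussian bandwidth, the detection coordinates and the threshold value below are invented for illustration.

```python
from math import exp, pi

def kde(query, detections, bw=10.0):
    """Gaussian kernel density estimate over 2-D detection coordinates."""
    k = 1.0 / (2 * pi * bw * bw * max(len(detections), 1))
    qx, qy = query
    return k * sum(exp(-((qx - x) ** 2 + (qy - y) ** 2) / (2 * bw * bw))
                   for x, y in detections)

craters = [(100, 100), (105, 95), (98, 110)]   # invented detections
near = kde((100, 100), craters)                 # inside the cluster
far = kde((400, 400), craters)                  # far from any detection
contaminated = near > 1e-6                      # threshold is illustrative
```

Evaluating `kde` on a raster grid and thresholding the result is all it takes to turn point detections into contaminated/uncontaminated regions.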
Degradation Modeling of Polyurea Pavement Markings
2011-03-01
Clusterless Decoding of Position From Multiunit Activity Using A Marked Point Process Filter
Deng, Xinyi; Liu, Daniel F.; Kay, Kenneth; Frank, Loren M.; Eden, Uri T.
2016-01-01
Point process filters have been applied successfully to decode neural signals and track neural dynamics. Traditionally, these methods assume that multiunit spiking activity has already been correctly spike-sorted. As a result, these methods are not appropriate for situations where sorting cannot be performed with high precision such as real-time decoding for brain-computer interfaces. As the unsupervised spike-sorting problem remains unsolved, we took an alternative approach that takes advantage of recent insights about clusterless decoding. Here we present a new point process decoding algorithm that does not require multiunit signals to be sorted into individual units. We use the theory of marked point processes to construct a function that characterizes the relationship between a covariate of interest (in this case, the location of a rat on a track) and features of the spike waveforms. In our example, we use tetrode recordings, and the marks represent a four-dimensional vector of the maximum amplitudes of the spike waveform on each of the four electrodes. In general, the marks may represent any features of the spike waveform. We then use Bayes’ rule to estimate spatial location from hippocampal neural activity. We validate our approach with a simulation study and with experimental data recorded in the hippocampus of a rat moving through a linear environment. Our decoding algorithm accurately reconstructs the rat’s position from unsorted multiunit spiking activity. We then compare the quality of our decoding algorithm to that of a traditional spike-sorting and decoding algorithm. Our analyses show that the proposed decoding algorithm performs equivalently or better than algorithms based on sorted single-unit activity. These results provide a path toward accurate real-time decoding of spiking patterns that could be used to carry out content-specific manipulations of population activity in hippocampus or elsewhere in the brain. PMID:25973549
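The core of clusterless decoding, applying Bayes' rule to waveform marks without spike sorting, can be miniaturised as follows. This sketch keeps only the mark-density part of the likelihood (one-dimensional marks, two candidate positions, a flat prior, an assumed kernel width `sigma`); the full algorithm also models position-dependent firing rates and the no-spike term.

```python
from math import exp, sqrt, pi

def gauss(x, mu, sigma):
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def mark_likelihood(pos, marks, training, sigma=0.5):
    """Likelihood of observed spike marks given a position, using a kernel
    estimate of the mark density built from training spikes at that position."""
    train = training[pos]
    like = 1.0
    for m in marks:
        like *= sum(gauss(m, t, sigma) for t in train) / len(train)
    return like

def decode(marks, training, positions=(0, 1)):
    # flat prior over positions; posterior proportional to mark likelihood
    post = {p: mark_likelihood(p, marks, training) for p in positions}
    z = sum(post.values())
    return {p: v / z for p, v in post.items()}

# invented training marks: low amplitudes at position 0, high at position 1
training = {0: [0.9, 1.0, 1.1, 1.05], 1: [4.8, 5.0, 5.2, 5.1]}
posterior = decode([1.1, 0.95], training)
```

Because the observed marks sit near the amplitudes seen at position 0 during training, the posterior concentrates there with no spike sorting ever performed.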
Marked QTc prolongation and Torsades de Pointes in patients with chronic inflammatory arthritis
Directory of Open Access Journals (Sweden)
Pietro Enea Lazzerini
2016-09-01
Mounting evidence indicates that in chronic inflammatory arthritis (CIA), QTc prolongation is frequent and correlates with systemic inflammatory activation. Notably, basic studies demonstrated that inflammatory cytokines induce profound changes in potassium and calcium channels, resulting in a prolonging effect on cardiomyocyte action potential duration (APD), and thus on the QT interval on the electrocardiogram. Moreover, it has been demonstrated that in rheumatoid arthritis (RA) patients the risk of sudden cardiac death (SCD) is significantly increased when compared to non-RA subjects. Conversely, to date no data are available on the prevalence of Torsades de Pointes (TdP) in CIA, and the few cases reported considered CIA only an incidental concomitant disease, not a contributing factor to TdP development. We report three patients with active CIA developing marked QTc prolongation, in two cases complicated by TdP degenerating to cardiac arrest. In these patients, a blood sample was obtained within 24 h of the TdP/marked QTc prolongation occurrence, and levels of IL-6, TNF-alpha and IL-1 were evaluated. In all three cases, IL-6 was markedly elevated, approximately 10 to 100 times the reference values. Moreover, one patient also showed high circulating levels of TNF-alpha and IL-1. In conclusion, active CIA may represent a currently overlooked QT-prolonging risk factor, potentially contributing, in the presence of other classical risk factors, to TdP occurrence. In particular, a relevant role may be played by elevated circulating IL-6 levels via direct electrophysiological effects on the heart. This observation should be carefully kept in mind, particularly when recognizable risk factors are already present and/or the addition of QT-prolonging drugs is required.
EcoMark: Evaluating Models of Vehicular Environmental Impact
DEFF Research Database (Denmark)
Guo, Chenjuan; Ma, Mike; Yang, Bin
2012-01-01
The reduction of greenhouse gas (GHG) emissions from transportation is essential for achieving politically agreed upon emissions reduction targets that aim to combat global climate change. So-called eco-routing and eco-driving are able to substantially reduce GHG emissions caused by vehicular...... the vehicle travels in. We develop an evaluation framework, called EcoMark, for such environmental impact models. In addition, we survey all eleven state-of-the-art impact models known to us. To gain insight into the capabilities of the models and to understand the effectiveness of the EcoMark, we apply
Model plant Key Measurement Points
International Nuclear Information System (INIS)
Schneider, R.A.
1984-01-01
For IAEA safeguards a Key Measurement Point is defined as the location where nuclear material appears in such a form that it may be measured to determine material flow or inventory. This presentation describes in an introductory manner the key measurement points and associated measurements for the model plant used in this training course.
Weak convergence of marked point processes generated by crossings of multivariate jump processes
DEFF Research Database (Denmark)
Tamborrino, Massimiliano; Sacerdote, Laura; Jacobsen, Martin
2014-01-01
We consider the multivariate point process determined by the crossing times of the components of a multivariate jump process through a multivariate boundary, assuming to reset each component to an initial value after its boundary crossing. We prove that this point process converges weakly...... process converging to a multivariate Ornstein–Uhlenbeck process is discussed as a guideline for applying diffusion limits for jump processes. We apply our theoretical findings to neural network modeling. The proposed model gives a mathematical foundation to the generalization of the class of Leaky...
Hyperglycemia and subsequent torsades de pointes with marked QT prolongation during refeeding.
Nakashima, Takashi; Kubota, Tomoki; Takasugi, Nobuhiro; Kitagawa, Yuichiro; Yoshida, Takahiro; Ushikoshi, Hiroaki; Kawasaki, Masanori; Nishigaki, Kazuhiko; Ogura, Shinji; Minatoguchi, Shinya
2017-01-01
A fatal cardiac complication can occasionally present in malnourished patients during refeeding; this is known as refeeding syndrome. However, to our knowledge, hyperglycemia preceding torsades de pointes with QT prolongation during refeeding has not been reported. In the present study, we present a case in which hyperglycemia preceded torsades de pointes with QT prolongation during refeeding. The aim of this study was to determine the possible mechanism underlying QT prolongation during refeeding and indicate how to prevent it. A 32-y-old severely malnourished woman (body mass index 14.57 kg/m²) was admitted to the intensive care unit of our institution after resuscitation from cardiopulmonary arrest due to ventricular fibrillation. She was diagnosed with anorexia nervosa. Although no obvious electrolyte abnormalities were observed, her blood glucose level was 11 mg/dL. A 12-lead electrocardiogram at admission showed sinus rhythm with normal QT interval (QTc 0.448). Forty mL of 50% glucose (containing 20 g of glucose) was intravenously injected, followed by a drip infusion of glucose to maintain blood glucose level within normal range. After 9 h, the patient's blood glucose level increased to 569 mg/dL. However, after 38 h, an episode of marked QT prolongation (QTc 0.931) followed by torsades de pointes developed. Hyperglycemia during refeeding can present with QT prolongation; consequently, monitoring blood glucose levels may be useful in avoiding hyperglycemia, which can result in QT prolongation. Furthermore, additional monitoring of QT intervals using a 12-lead electrocardiogram should allow the early detection of QT prolongation when glucose solution is administered to a malnourished patient with (severe) hypoglycemia. Copyright © 2016 Elsevier Inc. All rights reserved.
Multi-Regge kinematics and the moduli space of Riemann spheres with marked points
Energy Technology Data Exchange (ETDEWEB)
Duca, Vittorio Del [Institute for Theoretical Physics, ETH Zürich,Hönggerberg, 8093 Zürich (Switzerland); Druc, Stefan; Drummond, James [School of Physics & Astronomy, University of Southampton,Highfield, Southampton, SO17 1BJ (United Kingdom); Duhr, Claude [Theoretical Physics Department, CERN,Route de Meyrin, CH-1211 Geneva 23 (Switzerland); Center for Cosmology, Particle Physics and Phenomenology (CP3),Université catholique de Louvain,Chemin du Cyclotron 2, 1348 Louvain-La-Neuve (Belgium); Dulat, Falko [SLAC National Accelerator Laboratory, Stanford University,Stanford, CA 94309 (United States); Marzucca, Robin [Center for Cosmology, Particle Physics and Phenomenology (CP3),Université catholique de Louvain,Chemin du Cyclotron 2, 1348 Louvain-La-Neuve (Belgium); Papathanasiou, Georgios [SLAC National Accelerator Laboratory, Stanford University,Stanford, CA 94309 (United States); Verbeek, Bram [Center for Cosmology, Particle Physics and Phenomenology (CP3),Université catholique de Louvain,Chemin du Cyclotron 2, 1348 Louvain-La-Neuve (Belgium)
2016-08-25
We show that scattering amplitudes in planar N=4 Super Yang-Mills in multi-Regge kinematics can naturally be expressed in terms of single-valued iterated integrals on the moduli space of Riemann spheres with marked points. As a consequence, scattering amplitudes in this limit can be expressed as convolutions that can easily be computed using Stokes’ theorem. We apply this framework to MHV amplitudes to leading-logarithmic accuracy (LLA), and we prove that at L loops all MHV amplitudes are determined by amplitudes with up to L+4 external legs. We also investigate non-MHV amplitudes, and we show that they can be obtained by convoluting the MHV results with a certain helicity flip kernel. We classify all leading singularities that appear at LLA in the Regge limit for arbitrary helicity configurations and any number of external legs. Finally, we use our new framework to obtain explicit analytic results at LLA for all MHV amplitudes up to five loops and all non-MHV amplitudes with up to eight external legs and four loops.
Smooth random change point models.
van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E
2011-03-15
Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. The Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
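The broken-stick model described above can be illustrated with a short sketch — a least-squares grid search over the change point, not the mixed-effects or Bayesian machinery the paper actually uses, and with entirely invented toy data:

```python
import numpy as np

def fit_broken_stick(t, y, grid):
    """Least-squares fit of a two-piece linear (broken-stick) model.

    For each candidate change point tau, regress y on [1, t, (t - tau)_+]
    and keep the tau with the smallest residual sum of squares.
    """
    best = None
    for tau in grid:
        X = np.column_stack([np.ones_like(t), t, np.clip(t - tau, 0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, tau, beta)
    return best[1], best[2]

# Toy data: slope changes at t = 6 (e.g., accelerated decline before death)
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
y = 30 - 0.2 * t - 1.5 * np.clip(t - 6, 0, None) + rng.normal(0, 0.3, t.size)
tau_hat, beta_hat = fit_broken_stick(t, y, grid=np.linspace(1, 9, 81))
print(tau_hat)  # close to the true change point at t = 6
```

A smooth change point model replaces the hinge term `(t - tau)_+` with a smooth transition function; random effects would then let `tau` and the slopes vary by subject.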
Energy Technology Data Exchange (ETDEWEB)
Holmberg, J [VTT Automation, Espoo (Finland)
1997-04-01
The thesis models risk management as an optimal control problem for a stochastic process. The approach classes the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used for optimization of plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant. 62 refs. The thesis also includes five previous publications by the author.
Aftershock identification problem via the nearest-neighbor analysis for marked point processes
Gabrielov, A.; Zaliapin, I.; Wong, H.; Keilis-Borok, V.
2007-12-01
A century of observations of world seismicity has revealed a wide variety of clustering phenomena that unfold in the space-time-energy domain and provide the most reliable information about earthquake dynamics. However, there is neither a unifying theory nor a convenient statistical apparatus that would naturally account for the different types of seismic clustering. In this talk we present a theoretical framework for nearest-neighbor analysis of marked processes and obtain new results on the hierarchical approach to studying seismic clustering introduced by Baiesi and Paczuski (2004). Recall that under this approach one defines an asymmetric distance D in the space-time-energy domain such that the nearest-neighbor spanning graph with respect to D becomes a time-oriented tree. We demonstrate how this approach can be used to detect earthquake clustering. We apply our analysis to the observed seismicity of California and synthetic catalogs from the ETAS model and show that the earthquake clustering part is statistically different from the homogeneous part. This finding may serve as a basis for an objective aftershock identification procedure.
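The Baiesi-Paczuski construction referenced above admits a compact sketch. The following is an illustrative implementation under simplifying assumptions (planar coordinates, invented exponents `b` and `d`, and a tiny synthetic catalog), not the authors' analysis code:

```python
import numpy as np

def nearest_neighbor_parents(times, xs, ys, mags, b=1.0, d=1.6):
    """Baiesi-Paczuski asymmetric distance and nearest-neighbor links.

    eta_ij = t_ij * r_ij**d * 10**(-b * m_i) for events j occurring after i;
    each event j links to the earlier event i minimizing eta_ij, so the
    nearest-neighbor spanning graph is a time-oriented tree.
    """
    n = len(times)
    parents = np.full(n, -1)                   # first event has no parent
    for j in range(1, n):
        dt = times[j] - times[:j]
        r = np.hypot(xs[j] - xs[:j], ys[j] - ys[:j])
        eta = dt * np.maximum(r, 1e-6) ** d * 10.0 ** (-b * mags[:j])
        parents[j] = int(np.argmin(eta))
    return parents

# Toy catalog: a large event at t=0 followed by nearby smaller events,
# plus one distant late event
times = np.array([0.0, 1.0, 2.0, 50.0])
xs = np.array([0.0, 0.1, 0.2, 30.0])
ys = np.zeros(4)
mags = np.array([6.0, 3.0, 3.0, 5.0])
print(nearest_neighbor_parents(times, xs, ys, mags))  # -> [-1  0  0  0]
```

Cluster identification then proceeds by thresholding the nearest-neighbor distances: links with small eta are declared aftershock links, large eta marks background events.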
Spatial Stochastic Point Models for Reservoir Characterization
Energy Technology Data Exchange (ETDEWEB)
Syversveen, Anne Randi
1997-12-31
The main part of this thesis discusses stochastic modelling of geology in petroleum reservoirs. A marked point model is defined for objects against a background in a two-dimensional vertical cross section of the reservoir. The model handles conditioning on observations from more than one well for each object and contains interaction between objects, and the objects have the correct length distribution when penetrated by wells. The model is developed in a Bayesian setting. The model and the simulation algorithm are demonstrated by means of an example with simulated data. The thesis also deals with object recognition in image analysis, in a Bayesian framework, and with a special type of spatial Cox processes called log-Gaussian Cox processes. In these processes, the logarithm of the intensity function is a Gaussian process. The class of log-Gaussian Cox processes provides flexible models for clustering. The distribution of such a process is completely characterized by the intensity and the pair correlation function of the Cox process. 170 refs., 37 figs., 5 tabs.
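The log-Gaussian Cox process mentioned in this abstract can be simulated in a few lines. This is a generic one-dimensional sketch with invented parameter values, not the thesis's reservoir model:

```python
import numpy as np

def simulate_lgcp_1d(n_grid=200, length=1.0, mu=3.0, sigma=1.0, scale=0.1, seed=1):
    """Simulate a 1-D log-Gaussian Cox process on [0, length].

    The log-intensity is a Gaussian process with squared-exponential
    covariance; point counts are drawn per grid cell from
    Poisson(lambda(x) * cell_width).
    """
    rng = np.random.default_rng(seed)
    x = np.linspace(0, length, n_grid)
    cov = sigma**2 * np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * scale**2))
    # small jitter on the diagonal keeps the covariance numerically PSD
    log_lam = mu + rng.multivariate_normal(np.zeros(n_grid),
                                           cov + 1e-8 * np.eye(n_grid))
    lam = np.exp(log_lam)
    cell = length / n_grid
    counts = rng.poisson(lam * cell)
    points = np.repeat(x, counts)              # point locations (one per count)
    return points, lam

points, lam = simulate_lgcp_1d()
print(len(points))
```

Because the intensity is the exponential of a correlated Gaussian field, realizations naturally show the clustering the abstract describes; the pair correlation function is determined by the field's covariance.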
Modelling point patterns with linear structures
DEFF Research Database (Denmark)
Møller, Jesper; Rasmussen, Jakob Gulddahl
2009-01-01
processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line shaped structures. We...... consider simulations of this model and compare with real data....
A marked correlation function for constraining modified gravity models
White, Martin
2016-11-01
Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a `generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.
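The marked correlation function advocated here is straightforward to estimate. The following sketch uses a brute-force pair count on a small invented 2-D point set (real survey analyses would use weighted, tree-based pair counters), purely to show the statistic's definition:

```python
import numpy as np

def marked_correlation(pos, marks, r_bins):
    """Estimate the marked correlation function M(r) for a point set.

    M(r) = <m_i m_j : |x_i - x_j| in bin> / mean(m)^2; M(r) = 1 means the
    marks carry no extra spatial information at separation r.
    """
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(pos), k=1)        # each pair counted once
    sep, mm = d[iu], (marks[:, None] * marks[None, :])[iu]
    mbar2 = marks.mean() ** 2
    M = np.empty(len(r_bins) - 1)
    for k in range(len(r_bins) - 1):
        sel = (sep >= r_bins[k]) & (sep < r_bins[k + 1])
        M[k] = mm[sel].mean() / mbar2 if sel.any() else np.nan
    return M

rng = np.random.default_rng(2)
pos = rng.uniform(0, 1, size=(500, 2))
marks = rng.uniform(0.5, 1.5, 500)            # marks independent of position
M = marked_correlation(pos, marks, np.linspace(0.05, 0.5, 6))
print(M)                                       # near 1 when marks are independent
```

For the modified-gravity application, the mark would be a function of local density, so departures of M(r) from unity would signal the density-dependent clustering imprinted by a screening mechanism.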
NSGIC State | GIS Inventory — Geodetic Control Points dataset current as of 1995. Benchmarks; Vertical elevation bench marks for monumented geodetic survey control points for which mean sea level...
MODEL FOR SEMANTICALLY RICH POINT CLOUD DATA
Directory of Open Access Journals (Sweden)
F. Poux
2017-10-01
Full Text Available This paper proposes an interoperable model for managing high dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing the 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge to reason from information extraction rather than interpretation. The enhanced Smart Point Cloud model developed here brings intelligence to point clouds via three connected meta-models, while linking available knowledge and classification procedures to permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype, implemented in Python and a PostgreSQL database, allows combining semantic and spatial concepts for basic hybrid queries on different point clouds.
Spatially explicit models for inference about density in unmarked or partially marked populations
Chandler, Richard B.; Royle, J. Andrew
2013-01-01
Recently developed spatial capture–recapture (SCR) models represent a major advance over traditional capture–recapture (CR) models because they yield explicit estimates of animal density instead of population size within an unknown area. Furthermore, unlike nonspatial CR methods, SCR models account for heterogeneity in capture probability arising from the juxtaposition of animal activity centers and sample locations. Although the utility of SCR methods is gaining recognition, the requirement that all individuals can be uniquely identified excludes their use in many contexts. In this paper, we develop models for situations in which individual recognition is not possible, thereby allowing SCR concepts to be applied in studies of unmarked or partially marked populations. The data required for our model are spatially referenced counts made on one or more sample occasions at a collection of closely spaced sample units such that individuals can be encountered at multiple locations. Our approach includes a spatial point process for the animal activity centers and uses the spatial correlation in counts as information about the number and location of the activity centers. Camera-traps, hair snares, track plates, sound recordings, and even point counts can yield spatially correlated count data, and thus our model is widely applicable. A simulation study demonstrated that while the posterior mean exhibits frequentist bias on the order of 5–10% in small samples, the posterior mode is an accurate point estimator as long as adequate spatial correlation is present. Marking a subset of the population substantially increases posterior precision and is recommended whenever possible. We applied our model to avian point count data collected on an unmarked population of the northern parula (Parula americana) and obtained a density estimate (posterior mode) of 0.38 (95% CI: 0.19–1.64) birds/ha. Our paper challenges sampling and analytical conventions in ecology by demonstrating
Developing population models with data from marked individuals
Hae Yeong Ryu,; Kevin T. Shoemaker,; Eva Kneip,; Anna Pidgeon,; Patricia Heglund,; Brooke Bateman,; Thogmartin, Wayne E.; Reşit Akçakaya,
2016-01-01
Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. This method
Ash accumulation effects using bench marked 0-D model
International Nuclear Information System (INIS)
Hu, S.C.; Guo, J.P.; Miley, G.H.
1990-01-01
Ash accumulation is a key issue relative to our ability to achieve D-³He ARIES-III burn conditions. 1-1/2-D transport simulations using the BALDUR code have been used to examine the correlation between the global ash particle confinement time and the edge exhaust (or recycling) efficiency. This provides a way to benchmark the widely used 0-D model. The burn conditions for an ARIES-III plasma with various ash edge recycling coefficients are examined.
Two-point model for divertor transport
International Nuclear Information System (INIS)
Galambos, J.D.; Peng, Y.K.M.
1984-04-01
Plasma transport along divertor field lines was investigated using a two-point model. This treatment requires considerably less effort to find solutions to the transport equations than previously used one-dimensional (1-D) models and is useful for studying general trends. It also can be a valuable tool for benchmarking more sophisticated models. The model was used to investigate the possibility of operating in the so-called high density, low temperature regime
Modelling occupants’ heating set-point preferences
DEFF Research Database (Denmark)
Andersen, Rune Vinther; Olesen, Bjarne W.; Toftum, Jørn
2011-01-01
consumption. Simultaneous measurement of the set-point of thermostatic radiator valves (TRVs), and indoor and outdoor environment characteristics was carried out in 15 dwellings in Denmark in 2008. Linear regression was used to infer a model of occupants’ interactions with TRVs. This model could easily...... be implemented in most simulation software packages to increase the validity of the simulation outcomes....
Comparison of sparse point distribution models
DEFF Research Database (Denmark)
Erbou, Søren Gylling Hemmingsen; Vester-Christensen, Martin; Larsen, Rasmus
2010-01-01
This paper compares several methods for obtaining sparse and compact point distribution models suited for data sets containing many variables. These are evaluated on a database consisting of 3D surfaces of a section of the pelvic bone obtained from CT scans of 33 porcine carcasses. The superior m...
Determinantal point process models on the sphere
DEFF Research Database (Denmark)
Møller, Jesper; Nielsen, Morten; Porcu, Emilio
defined on S^d × S^d. We review the appealing properties of such processes, including their specific moment properties, density expressions and simulation procedures. Particularly, we characterize and construct isotropic DPP models on S^d, where it becomes essential to specify the eigenvalues......We consider determinantal point processes on the d-dimensional unit sphere S^d. These are finite point processes exhibiting repulsiveness and with moment properties determined by a certain determinant whose entries are specified by a so-called kernel which we assume is a complex covariance function...... and eigenfunctions in a spectral representation for the kernel, and we figure out how repulsive isotropic DPPs can be. Moreover, we discuss the shortcomings of adapting existing models for isotropic covariance functions and consider strategies for developing new models, including a useful spectral approach....
2016-04-01
A Model Stitching Architecture for Continuous Full Flight-Envelope Simulation of Fixed-Wing Aircraft and Rotorcraft from Discrete-Point Linear Models. Eric L. Tobias and Mark B. Tischler, Aviation Development Directorate, Aviation and Missile... The simulation is assembled from a set of discrete-point linear models and trim data. The model stitching simulation architecture is applicable to any aircraft configuration readily
2011-02-07
... Airworthiness Directives; Fokker Services B.V. Model F.28 Mark 0100, 1000, 2000, 3000, and 4000 Airplanes AGENCY.... Applicability (c) This AD applies to Fokker Services B.V. Model F.28 Mark 1000, 2000, 3000, and 4000 airplanes... April 20, 2010 (for Model F.28 Mark 1000, 2000, 3000, and 4000 airplanes); or SBF100-28-063, dated April...
2010-11-19
....V. Model F.28 Mark 0100, 1000, 2000, 3000, and 4000 Airplanes AGENCY: Federal Aviation... Fokker Services B.V. Model F28 Mark 1000, 2000, 3000, and 4000 airplanes, all serial numbers, equipped... Instructions of Fokker Service Bulletin SBF28-28-052, dated April 20, 2010 (for Model F28 Mark 1000, 2000, 3000...
Zero-point energy in bag models
International Nuclear Information System (INIS)
Milton, K.A.
1979-01-01
The zero-point (Casimir) energy of free vector (gluon) fields confined to a spherical cavity (bag) is computed. With a suitable renormalization the result for eight gluons is E = + 0.51/a. This result is substantially larger than that for a spherical shell (where both interior and exterior modes are present), and so affects Johnson's model of the QCD vacuum. It is also smaller than, and of opposite sign to, the value used in bag model phenomenology, so it will have important implications there. 1 figure
Venner, Mary Ann; Keshmiripour, Seti
2016-01-01
This article will describe how merging service points in an academic library is an opportunity to improve customer service and utilize staffing resources more efficiently. Combining service points provides libraries with the ability to create a more positive library experience for patrons by minimizing the ping-pong effect for assistance. The…
Cooper, Michael Townsend; Searing, Rapha A; Thompson, David M; Bard, David; Carabin, Hélène; Gonzales, Carlos; Zavala, Carmen; Woodson, Kyle; Naifeh, Monique
2017-01-01
Objectives: The World Health Organization's (WHO) recommendations list Peru as potentially needing prevention of soil-transmitted helminthiasis (STH). Prevalence of STH varies regionally and remains understudied in the newest informal settlements of the capital city, Lima. The purpose of this study was to evaluate the need for Mass Drug Administration (MDA) of antiparasitic drugs in the newest informal settlements of Lima. The aim of this study was to estimate the season-specific prevalence of STH to determine if these prevalence estimates met the WHO threshold for MDA in 3 informal settlements. Methods: A two-time-point cohort study was conducted among a sample of 140 children aged 1 to 10 years living in 3 purposively sampled informal settlements of Lima, Peru. Children were asked to provide 2 stool samples that were analyzed with the spontaneous sedimentation in tube technique. The season-specific prevalence proportions of MDA-targeted STH were estimated using a hidden (latent) Markov modeling approach to adjust for repeated measurements over the 2 seasons and the imperfect validity of the screening tests. Results: The prevalence of MDA-targeted STH was low at 2.2% (95% confidence interval = 0.3% to 6%) and 3.8% (95% confidence interval = 0.7% to 9.3%) among children sampled in the summer and winter months, respectively, when using the most conservative estimate of test sensitivity. These estimates were below the WHO threshold for MDA (20%). Conclusions: Empiric treatment for STH by organizations active in the newest informal settlements is not supported by the data and could contribute to unnecessary medication exposures and poor allocation of resources.
MATHEMATICAL MODELING OF AC ELECTRIC POINT MOTOR
Directory of Open Access Journals (Sweden)
S. YU. Buryak
2014-03-01
Full Text Available Purpose. In order to ensure reliability, security and, most importantly, the continuity of the transportation process, it is necessary to develop, implement and then improve automated methods for diagnosing the mechanisms, devices and systems of rail transport. Only systems that operate in real-time mode and transmit data on the instantaneous state of the controlled objects can detect faults in a timely manner and thus provide additional time for their correction by railway employees. Turnouts are among the most important and safety-critical components, and therefore require the development and implementation of such a diagnostic system. Methodology. Achieving the goal of monitoring and controlling railway automation objects in real time is possible only with an automated process for diagnosing the objects' state. For this we need to know the diagnostic features of a controlled object, which determine its state at any given time. The most rational way of remote diagnostics is analysis of the shape and spectrum of the current that flows in the power circuits of railway automatics. Turnouts include electric motors, which are powered by electric circuits, and the shape of the current curve depends both on the condition of the electric motor and on the conditions of turnout maintenance. Findings. For the research and analysis of the AC electric point motor, a mathematical model of it was developed. The calculation of parameters and interdependencies between the main factors affecting the operation of the asynchronous machine was conducted. The results of the model's operation were obtained in the form of time dependences of the current waveform on the load on the motor shaft. Originality. A simulation model of the AC electric point motor satisfying the conditions of adequacy was built. Practical value. On the basis of the constructed model we can study the AC motor in various modes of operation, and record and analyze the current curve as a response to various changes
Optimal time points sampling in pathway modelling.
Hu, Shiyan
2004-01-01
Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or becoming stuck in local optima that usually accompanies conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
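The core idea of variance-minimizing time point selection can be sketched for a scalar parameter. This toy example uses exhaustive subset search on an invented exponential-decay model rather than the paper's quantum-inspired evolutionary algorithm, purely to show the objective being optimized:

```python
from itertools import combinations

import numpy as np

def best_time_points(model_grad, k, candidates, n_pick):
    """Pick sampling times minimizing the parameter-estimate variance.

    For a scalar parameter with i.i.d. Gaussian noise,
    Var(k_hat) ~ sigma^2 / sum_t (dy/dk)(t)^2, so we maximize the summed
    squared sensitivity over candidate subsets. (For a scalar parameter
    this reduces to picking the top-n sensitivities; the subset framing
    carries over to multi-parameter Fisher information matrices.)
    """
    sens = np.array([model_grad(t, k) ** 2 for t in candidates])
    best = max(combinations(range(len(candidates)), n_pick),
               key=lambda idx: sens[list(idx)].sum())
    return [candidates[i] for i in best]

# Exponential decay y = exp(-k t); dy/dk = -t exp(-k t), |dy/dk| maximal at t = 1/k
grad = lambda t, k: -t * np.exp(-k * t)
times = best_time_points(grad, k=0.5,
                         candidates=list(np.linspace(0.25, 8, 32)), n_pick=3)
print(times)  # clustered around t = 1/k = 2
```

The evolutionary algorithm in the paper replaces the exhaustive search, which becomes infeasible once the candidate grid and parameter vector grow.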
Hierarchical mark-recapture models: a framework for inference about demographic processes
Link, W.A.; Barker, R.J.
2004-01-01
The development of sophisticated mark-recapture models over the last four decades has provided fundamental tools for the study of wildlife populations, allowing reliable inference about population sizes and demographic rates based on clearly formulated models for the sampling processes. Mark-recapture models are now routinely described by large numbers of parameters. These large models provide the next challenge to wildlife modelers: the extraction of signal from noise in large collections of parameters. Pattern among parameters can be described by strong, deterministic relations (as in ultrastructural models) but is more flexibly and credibly modeled using weaker, stochastic relations. Trend in survival rates is not likely to be manifest by a sequence of values falling precisely on a given parametric curve; rather, if we could somehow know the true values, we might anticipate a regression relation between parameters and explanatory variables, in which true value equals signal plus noise. Hierarchical models provide a useful framework for inference about collections of related parameters. Instead of regarding parameters as fixed but unknown quantities, we regard them as realizations of stochastic processes governed by hyperparameters. Inference about demographic processes is based on investigation of these hyperparameters. We advocate the Bayesian paradigm as a natural, mathematically and scientifically sound basis for inference about hierarchical models. We describe analysis of capture-recapture data from an open population based on hierarchical extensions of the Cormack-Jolly-Seber model. In addition to recaptures of marked animals, we model first captures of animals and losses on capture, and are thus able to estimate survival probabilities w (i.e., the complement of death or permanent emigration) and per capita growth rates f (i.e., the sum of recruitment and immigration rates). Covariation in these rates, a feature of demographic interest, is explicitly modelled.
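The "parameters as realizations of a stochastic process" idea can be illustrated with a toy simulation: yearly survival probabilities drawn from a hyperdistribution, estimated year by year, then pulled toward the common mean. The empirical-Bayes shrinkage below is a crude stand-in for a full Bayesian hierarchical fit, and all numbers are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_marked = 20, 200
# hyperparameters: yearly survival varies around a common mean on the logit scale
mu, tau = 1.0, 0.3
phi = 1 / (1 + np.exp(-rng.normal(mu, tau, n_years)))   # true yearly survival
survivors = rng.binomial(n_marked, phi)
raw = survivors / n_marked                               # year-by-year MLEs

# Empirical-Bayes shrinkage toward the grand mean, with the weight set by
# the estimated variance components (between-year vs sampling variance).
grand = raw.mean()
s2_within = grand * (1 - grand) / n_marked               # sampling variance
s2_between = max(raw.var() - s2_within, 0.0)             # between-year variance
w = s2_between / (s2_between + s2_within)                # weight on the raw MLE
shrunk = w * raw + (1 - w) * grand

mse_raw = np.mean((raw - phi) ** 2)
mse_shrunk = np.mean((shrunk - phi) ** 2)
```

Inference about `mu` and `tau` (the hyperparameters) is what the abstract means by "inference about demographic processes".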
Exact 2-point function in Hermitian matrix model
International Nuclear Information System (INIS)
Morozov, A.; Shakirov, Sh.
2009-01-01
J. Harer and D. Zagier have found a strikingly simple generating function [1,2] for exact (all-genera) 1-point correlators in the Gaussian Hermitian matrix model. In this paper we generalize their result to 2-point correlators, using Toda integrability of the model. Remarkably, this exact 2-point correlation function turns out to be an elementary function - arctangent. The relation to the standard 2-point resolvents is pointed out. Some attempts at generalization to 3-point and higher functions are described.
BWR Mark III containment analyses using a GOTHIC 8.0 3D model
International Nuclear Information System (INIS)
Jimenez, Gonzalo; Serrano, César; Lopez-Alonso, Emma; Molina, M del Carmen; Calvo, Daniel; García, Javier; Queral, César; Zuriaga, J. Vicente; González, Montserrat
2015-01-01
Highlights: • The development of a 3D GOTHIC code model of BWR Mark-III containment is described. • Suppression pool modelling based on the POOLEX STB-20 and STB-16 experimental tests. • LOCA and SBO transients simulated to verify the behaviour of the 3D GOTHIC model. • Comparison between the 3D GOTHIC model and the MAAP4.07 model is conducted. • Accurate reproduction of pre-severe-accident conditions with the 3D GOTHIC model. - Abstract: The purpose of this study is to establish a detailed three-dimensional model of the Cofrentes NPP BWR/6 Mark III containment building using the containment code GOTHIC 8.0. This paper presents the model construction, the phenomenology tests conducted and the transients selected for the model evaluation. In order to find the proper settings for the model of the suppression pool, two experiments conducted in the POOLEX experimental installation were simulated, obtaining proper behaviour of the model under different suppression pool phenomenology. In the transient analyses, a Loss of Coolant Accident (LOCA) and a Station Blackout (SBO) transient were simulated. The main results of these simulations were qualitatively compared with the results obtained from simulations with the MAAP 4.07 Cofrentes NPP model, used by the plant for simulating severe accidents. From this comparison, a verification of the model in terms of pressurization, asymmetric discharges and high-pressure release was obtained. This model has proved to adequately simulate the thermal-hydraulic phenomena which occur in the containment during accident sequences
Diffraction-based overlay measurement on dedicated mark using rigorous modeling method
Lu, Hailiang; Wang, Fan; Zhang, Qingyun; Chen, Yonghui; Zhou, Chang
2012-03-01
Diffraction Based Overlay (DBO) has been widely evaluated by numerous authors, and results show that DBO can provide better performance than Imaging Based Overlay (IBO). However, DBO has its own problems. As is well known, model-based DBO (mDBO) faces the challenges of low measurement sensitivity and crosstalk between the various structure parameters, which may result in poor accuracy and precision. Meanwhile, the main obstacle for empirical DBO (eDBO) is that several pads must be employed to gain sufficient information on overlay-induced diffraction signature variations, which consumes more wafer space and measuring time. eDBO may also suffer from mark profile asymmetry caused by processing. In this paper, we propose an alternative DBO technology that employs a dedicated overlay mark and takes a rigorous modeling approach. This technology needs only two or three pads for each direction, which is economical and time-saving. While reducing the overlay measurement error induced by mark profile asymmetry, this technology is expected to be as accurate and precise as scatterometry technologies.
Modeling fixation locations using spatial point processes.
Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix
2013-10-01
Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they did. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular, we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
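An inhomogeneous spatial Poisson process with a fixation-density intensity can be simulated by Lewis-Shedler thinning. The Gaussian "central bias" intensity below is an assumption for illustration, not a model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def intensity(x, y):
    # assumed saliency map: fixations denser near the image centre (central bias)
    return 200 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.05)

lam_max = 200.0  # upper bound on the intensity over the unit square
# Lewis-Shedler thinning: generate homogeneous candidates at rate lam_max,
# then keep each with probability intensity / lam_max.
n = rng.poisson(lam_max)
xy = rng.random((n, 2))
keep = rng.random(n) < intensity(xy[:, 0], xy[:, 1]) / lam_max
fixations = xy[keep]   # a realization of the inhomogeneous Poisson process
```

Fitting goes the other way: given observed fixations, one estimates the intensity function, e.g. by relating it to image features through a log-linear model.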
Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters
Song, S. G.; Dalguer, L. A.; Mai, Paul Martin
2013-01-01
We develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models.
Polanco, Michael A.; Kellas, Sotiris; Jackson, Karen
2009-01-01
The performance of material models to simulate a novel composite honeycomb Deployable Energy Absorber (DEA) was evaluated using the nonlinear explicit dynamic finite element code LS-DYNA (registered trademark). Prototypes of the DEA concept were manufactured using a Kevlar/Epoxy composite material in which the fibers are oriented at +/-45 degrees with respect to the loading axis. The development of the DEA has included laboratory tests at subcomponent and component levels, such as three-point bend testing of single hexagonal cells, dynamic crush testing of single multi-cell components, and impact testing of a full-scale fuselage section fitted with a system of DEA components onto multi-terrain environments. Due to the thin nature of the cell walls, the DEA was modeled using shell elements. In an attempt to simulate the dynamic response of the DEA, it was first represented using *MAT_LAMINATED_COMPOSITE_FABRIC, or *MAT_58, in LS-DYNA. Values for each parameter within the material model were generated such that an in-plane isotropic configuration for the DEA material was assumed. Analytical predictions showed that the load-deflection behavior of a single cell during three-point bending was within the range of test data, but the predicted DEA crush response was overly stiff. In addition, a *MAT_PIECEWISE_LINEAR_PLASTICITY, or *MAT_24, material model in LS-DYNA was developed, which represented the Kevlar/Epoxy composite as an isotropic elastic-plastic material with input from +/-45 degrees tensile coupon data. The predicted crush response matched that of the test and localized folding patterns of the DEA were captured under compression, but the model failed to predict the single-cell three-point bending response.
Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters
Song, S. G.
2013-12-24
Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
Sand Point, Alaska Coastal Digital Elevation Model
National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Geophysical Data Center (NGDC) is building high-resolution digital elevation models (DEMs) for select U.S. coastal regions. These integrated...
Sand Point, Alaska Tsunami Forecast Grids for MOST Model
National Oceanic and Atmospheric Administration, Department of Commerce — The Sand Point, Alaska Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model....
Toke Point, Washington Tsunami Forecast Grids for MOST Model
National Oceanic and Atmospheric Administration, Department of Commerce — The Toke Point, Washington Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model....
A classification of marked hijaiyah letters' pronunciation using hidden Markov model
Wisesty, Untari N.; Mubarok, M. Syahrul; Adiwijaya
2017-08-01
Hijaiyah letters are the 28 letters that form the words in the Qur'an; they symbolize the consonant sounds, while the vowel sounds are symbolized by harakat (marks). A speech recognition system is a system that processes a sound signal into data that can be recognized by a computer. Building such a system requires several stages, i.e. feature extraction and classification. In this research, LPC and MFCC feature extraction, K-means vector quantization, and Hidden Markov Model classification are used. The data are the 28 letters and 6 harakat, for a total of 168 classes. After several tests, it can be concluded that the system recognizes the pronunciation pattern of marked hijaiyah letters very well on the training data, with a highest accuracy of 96.1% using LPC features and 94% using MFCC features. Meanwhile, on the testing data, the accuracy drops to 41%.
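The decoding step of HMM classification can be sketched with the Viterbi algorithm on a toy discrete-emission model. The two-state model below is an assumption for illustration, not the 168-class system of the paper:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state path for a discrete-emission HMM (log domain)."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))          # best log-probability ending in each state
    psi = np.zeros((T, N), dtype=int) # backpointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)  # scores[i, j]: i -> j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):     # backtrack
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# toy 2-state model with sticky transitions and near-deterministic emissions
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
print(viterbi([0, 0, 1, 1, 1], pi, A, B))  # [0, 0, 1, 1, 1]
```

In a classifier like the one described, one HMM is trained per class and a test utterance is assigned to the model with the highest likelihood; Viterbi gives the state alignment within a model.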
Chan, Stephen C Y; Karczmarski, Leszek
2017-01-01
Indo-Pacific humpback dolphins (Sousa chinensis) inhabiting Hong Kong waters are thought to be among the world's most anthropogenically impacted coastal delphinids. We conducted a 5-year (2010-2014) photo-ID study and performed the first comprehensive mark-recapture analysis in this region, applying a suite of open population models and robust design models. Cormack-Jolly-Seber (CJS) models suggested a significant transient effect and seasonal variation in apparent survival probabilities as a result of fluid movement beyond the study area. Given the spatial restrictions of our study, limited by an administrative border, if emigration is considered negligible the estimated survival rate of adults was 0.980. Super-population estimates indicated that at least 368 dolphins used Hong Kong waters as part of their range. Closed robust design models suggested an influx of dolphins from winter to summer and increased site fidelity in summer, and an outflux, although less prominent, during summer-winter intervals. Abundance estimates in summer (N = 144-231) were higher than those in winter (N = 87-111), corresponding to the availability of prey resources, which in Hong Kong waters peaks during the summer months. We point out that the current population monitoring strategy used by the Hong Kong authorities is ill-suited for timely detection of a population change and should be revised.
Self-exciting point process in modeling earthquake occurrences
International Nuclear Information System (INIS)
Pratiwi, H.; Slamet, I.; Respatiwulan; Saputro, D. R. S.
2017-01-01
In this paper, we present a procedure for modeling earthquakes based on a spatial-temporal point process. The magnitude distribution is expressed as a truncated exponential, and the event frequency is modeled with a spatial-temporal point process that is characterized uniquely by its associated conditional intensity process. Earthquakes can be regarded as point patterns with a temporal clustering feature, so we use a self-exciting point process to model the conditional intensity function. Main shocks are selected via the window algorithm of Gardner and Knopoff, and the model can be fitted by the maximum likelihood method for three random variables. (paper)
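The self-exciting (Hawkes) conditional intensity and its simulation by Ogata's thinning algorithm can be sketched as follows. The exponential kernel and all parameter values are illustrative assumptions, not those fitted in the study:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, alpha, beta = 0.5, 0.8, 1.2  # background rate, excitation, decay (assumed; alpha/beta < 1)

def lam(t, events):
    """Conditional intensity lambda(t) = mu + alpha * sum exp(-beta (t - t_i))."""
    past = events[events < t]
    return mu + alpha * np.sum(np.exp(-beta * (t - past)))

# Ogata's thinning algorithm on [0, T]: propose with an upper-bound rate,
# accept with probability lambda(t) / bound.
events, t, T = np.array([]), 0.0, 100.0
while t < T:
    lam_bar = lam(t, events) + alpha      # safe upper bound (intensity decays between events)
    t += rng.exponential(1 / lam_bar)
    if t < T and rng.random() < lam(t, events) / lam_bar:
        events = np.append(events, t)
```

Each accepted event raises the intensity for subsequent times, producing the temporal clustering (aftershock-like bursts) that motivates the self-exciting model.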
The naming of gender-marked pronouns supports interactivity in models of lexical access
Directory of Open Access Journals (Sweden)
Albert Costa
2009-01-01
Full Text Available When a speaker names an object using a gender-marked pronominal form, the referent word corresponding to the target object has to be selected in order to access its grammatical gender. By contrast, the phonological content of the referent word is not needed. In two picture-naming interference experiments we explored whether the lexical selection of a referent word is affected by its phonological properties. In Experiment 1, Spanish participants named pictures using a sentence with a noun or a pronoun while ignoring semantically or phonologically related words. The results showed a semantic interference effect and a Phonological Facilitation Effect (PFE) in both types of utterances. In Experiment 2 the PFE was replicated with Italian participants in a different pronominal utterance. The PFE suggests that the lexical selection of the referent word is facilitated by the presentation of a phonologically related distractor word. These findings are consistent with the predictions of interactive models of lexical access.
Dew Point modelling using GEP based multi objective optimization
Shroff, Siddharth; Dabhi, Vipul
2013-01-01
Different techniques are used to model the relationship between temperature, dew point, and relative humidity. Gene expression programming (GEP) is capable of modelling complex realities with great accuracy, allowing at the same time the extraction of knowledge from the evolved models, compared to other learning algorithms. We aim to use gene expression programming for modelling the dew point. Generally, accuracy of the model is the only objective used by the selection mechanism of GEP. This will evolve...
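For reference, the relationship being learned has a well-known closed-form approximation, the Magnus formula, which an evolved GEP model can be benchmarked against. The coefficients below are the standard Magnus constants for water over roughly -45 to 60 degrees C:

```python
from math import log

def dew_point(temp_c, rh):
    """Magnus approximation of the dew point (deg C) from air temperature (deg C)
    and relative humidity (%)."""
    b, c = 17.62, 243.12  # standard Magnus coefficients for water
    gamma = log(rh / 100.0) + b * temp_c / (c + temp_c)
    return c * gamma / (b - gamma)

print(round(dew_point(20.0, 50.0), 1))  # 9.3
```

At 100% relative humidity the dew point equals the air temperature, a useful sanity check for any fitted model.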
Development of a hydrogen diffusion gothic model of MARK III-containment
Energy Technology Data Exchange (ETDEWEB)
Hung, Zhen-Yu [National Tsing Hua Univ., Dept. of Engineering and System Science, Hsinchu, Taiwan (China); Huang, Yu-Kai; Pei, Bau-Shei [National Tsing Hua Univ., Inst. of Nuclear Engineering Science, Hsinchu, Taiwan (China); Hsu, Wen-Sheng [National Tsing Hua Univ., Nuclear Science and Technology Development Center, Hsinchu, Taiwan (China); Chen, Yen-Shu [Institute of Nuclear Energy Research, Nuclear Engineering Div., Taiyuan County, Taiwan (China)
2015-07-15
The accident that occurred at the Fukushima Daiichi Nuclear Power Plant is a reminder of the danger of hydrogen explosion within a reactor building. Sufficiently high hydrogen concentration may cause an explosion that could damage the structure, resulting in the release of radioisotopes into the environment. In the first part of this study, a gas diffusion experiment was performed, in which helium was used as the working fluid. An analytical model was also developed using the GOTHIC code, and the model predictions of the helium distribution were found to be in good agreement with the experimentally measured data. In the second part of the study, a model of the Mark III containment of the Kuosheng Plant in Taiwan was developed, and was applied to a long-term station blackout (SBO) accident similar to that of the Fukushima plant. The hydrogen generation was calculated using the Modular Accident Analysis Program and was used as the boundary condition for the GOTHIC containment model. The simulation results revealed that the hydrogen concentration at the first floor of the wetwell in the containment reached 4% at 9.7 h after the accident. This indicated the possibility of dangerous conditions inside the containment. Although active hydrogen igniters are already installed in the Kuosheng plant, the findings of this study indicate that it may be necessary to add passive recombiners to prolong an SBO event.
Point Reyes, California Tsunami Forecast Grids for MOST Model
National Oceanic and Atmospheric Administration, Department of Commerce — The Point Reyes, California Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)...
Development and validation of a model TRIGA Mark III reactor with code MCNP5
International Nuclear Information System (INIS)
Galicia A, J.; Francois L, J. L.; Aguilar H, F.
2015-09-01
The main purpose of this paper is to obtain a model of the TRIGA Mark III reactor core that accurately represents the real operating conditions at 1 MWth, using the Monte Carlo code MCNP5. To provide a more detailed analysis, different models of the reactor core were created by simulating the control rods extracted and inserted in cold conditions (293 K), including an analysis of the shutdown margin, so that the Operational Technical Specifications were satisfied. The positions the control rods must have to reach a power of 1 MWth were obtained from the exercise entitled Operation in Manual Mode performed at the Instituto Nacional de Investigaciones Nucleares (ININ). Later, the behaviour of the k-eff was analyzed considering different temperatures in the fuel elements, subsequently calculating the values that best represent actual reactor operation. Finally, calculations with the developed model to obtain the distribution of the average flux of thermal, epithermal and fast neutrons in the six new experimental facilities are presented. (Author)
A hierarchical model exhibiting the Kosterlitz-Thouless fixed point
International Nuclear Information System (INIS)
Marchetti, D.H.U.; Perez, J.F.
1985-01-01
A hierarchical model for 2-d Coulomb gases displaying a stable line of fixed points describing the Kosterlitz-Thouless phase transition is constructed. For Coulomb gases corresponding to Z_N models these fixed points are stable for an intermediate temperature interval. (Author) [pt
Mental Models Theory and Military Decision-Making: A Pilot Experimental Model
National Research Council Canada - National Science Library
Sparkes, Jason
2003-01-01
...) and in the military (e.g., the USS Vincennes incident). In particular, construction of the mental models used when making critical decisions is vulnerable to both problem complexity and logically conflicting (false) information...
Yoshizaki, J.; Pollock, K.H.; Brownie, C.; Webster, R.A.
2009-01-01
Misidentification of animals is potentially important when naturally existing features (natural tags) are used to identify individual animals in a capture-recapture study. Photographic identification (photoID) typically uses photographic images of animals' naturally existing features as tags (photographic tags) and is subject to two main causes of identification errors: those related to quality of photographs (non-evolving natural tags) and those related to changes in natural marks (evolving natural tags). The conventional methods for analysis of capture-recapture data do not account for identification errors, and to do so requires a detailed understanding of the misidentification mechanism. Focusing on the situation where errors are due to evolving natural tags, we propose a misidentification mechanism and outline a framework for modeling the effect of misidentification in closed population studies. We introduce methods for estimating population size based on this model. Using a simulation study, we show that conventional estimators can seriously overestimate population size when errors due to misidentification are ignored, and that, in comparison, our new estimators have better properties except in cases with low capture probabilities (<0.2) or low misidentification rates (<2.5%). © 2009 by the Ecological Society of America.
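The overestimation effect can be reproduced with a toy two-sample simulation: a misread evolving mark makes a true recapture look like a never-marked animal, shrinking the recognized-recapture count in the naive Lincoln-Petersen estimator. The population size, capture probability, and misidentification rate below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
N, p, mis = 500, 0.3, 0.15   # true size, capture probability, misID rate (assumed)
reps = 200
lp = []
for _ in range(reps):
    s1 = rng.random(N) < p            # sample 1: animals caught and "marked" (photographed)
    s2 = rng.random(N) < p            # sample 2
    recaps = s1 & s2
    # a mark that evolved between samples is misread, so the recapture is not recognised
    missed = rng.random(N) < mis
    m = np.sum(recaps & ~missed)      # recognised recaptures only
    lp.append(s1.sum() * s2.sum() / max(m, 1))
est = np.mean(lp)   # naive Lincoln-Petersen estimate, biased high (~ N / (1 - mis))
```

With a 15% misidentification rate the naive estimate runs roughly 18% above the true size of 500, in line with the abstract's warning about conventional estimators.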
IMAGE TO POINT CLOUD METHOD OF 3D-MODELING
Directory of Open Access Journals (Sweden)
A. G. Chibunichev
2012-07-01
Full Text Available This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of the digital image. To do this, we have to find corresponding points between the image and the point cloud. Before searching for corresponding points, a quasi-image of the point cloud is generated; the SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is the construction of the vector object model. Vectorization is performed by a PC operator in an interactive mode using a single image; the spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available: edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
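The quasi-image generation step can be sketched as a pinhole projection of the point cloud into an intensity raster, on which a feature detector such as SIFT could then run. The synthetic facade cloud, focal length, and raster size are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic point cloud: 1000 points on a facade plane ~5 m from the scanner,
# each with a recorded intensity value
pts = np.column_stack([rng.uniform(-2, 2, 1000),
                       rng.uniform(-1, 1, 1000),
                       rng.normal(5.0, 0.01, 1000)])
intens = rng.random(1000)

# pinhole projection with an assumed focal length (pixels) onto a 64x64 raster
f, size = 100.0, 64
u = (f * pts[:, 0] / pts[:, 2]).astype(int) + size // 2
v = (f * pts[:, 1] / pts[:, 2]).astype(int) + size // 2
quasi = np.zeros((size, size))
inside = (u >= 0) & (u < size) & (v >= 0) & (v < size)
quasi[v[inside], u[inside]] = intens[inside]   # intensity raster: the "quasi-image"
```

Because each quasi-image pixel keeps a link back to its 3D point, matches found between the quasi-image and the real photograph directly yield 2D-3D correspondences for exterior orientation.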
Identification of Influential Points in a Linear Regression Model
Directory of Open Access Journals (Sweden)
Jan Grosz
2011-03-01
Full Text Available The article deals with the detection and identification of influential points in the linear regression model. Three methods for the detection of outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. The paper also briefly describes theoretical aspects of several robust methods. Robust statistics is a powerful tool for increasing the reliability and accuracy of statistical modelling and data analysis. A simulation model of simple linear regression is presented.
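Two standard influence diagnostics, leverage (the diagonal of the hat matrix) and Cook's distance, can be sketched on a simulated simple regression with one planted outlier. The data-generating values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, 30)
y[10] += 5.0  # plant one outlier

X = np.column_stack([np.ones_like(x), x])    # design matrix with intercept
H = X @ np.linalg.inv(X.T @ X) @ X.T         # hat matrix; its diagonal = leverage
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
p, n = X.shape[1], len(y)
s2 = resid @ resid / (n - p)                 # residual variance estimate
h = np.diag(H)
# Cook's distance: influence of deleting each observation on the fitted coefficients
cooks = resid**2 * h / (p * s2 * (1 - h)**2)
print(int(np.argmax(cooks)))  # index of the planted outlier: 10
```

High leverage flags unusual x-values, a large residual flags unusual y-values, and Cook's distance combines the two, which is why it singles out the planted point.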
DECOVALEX I - Bench-Mark Test 3: Thermo-hydro-mechanical modelling
International Nuclear Information System (INIS)
Israelsson, J.
1995-12-01
The bench-mark test concerns the excavation of a tunnel, located 500 m below the ground surface, and the establishment of mechanical equilibrium and steady-state fluid flow. Following this, thermal heating due to the nuclear waste, stored in a borehole below the tunnel, was simulated. The results are reported at (1) 30 days after tunnel excavation, (2) steady state, (3) one year after thermal loading, and (4) the time of maximum temperature. The problem specification included the excavation and waste geometry, material properties for intact rock and joints, the locations of more than 6500 joints observed in the 50 by 50 m area, and calculated hydraulic conductivities. However, due to the large number of joints and the lack of dominating orientations, it was decided to treat the problem as a continuum using the computer code FLAC. The problem was modeled using a vertical symmetry plane through the tunnel and the borehole. Flow equilibrium was obtained approx. 40 days after the opening of the tunnel. Since the hydraulic conductivity was set to be stress dependent, a noticeable difference between the horizontal and vertical conductivity and flow was observed. After 40 days, an oedometer-type consolidation of the model was observed. Approx. 4 years after the initiation of the heat source, a maximum temperature of 171 °C was obtained. The stress-dependent hydraulic conductivity and the temperature-dependent dynamic viscosity caused minor changes to the flow pattern. The specified mechanical boundary conditions imply that the tunnel is part of a system of parallel tunnels. However, the fixed temperature at the top boundary maintains the temperature below that anticipated for an equivalent repository. The combination of mechanical and hydraulic boundary conditions causes the model to behave like an oedometer test in which the consolidation rate goes asymptotically to zero. 17 refs, 55 figs, 22 tabs
Four point functions in the SL(2,R) WZW model
Energy Technology Data Exchange (ETDEWEB)
Minces, Pablo [Instituto de Astronomia y Fisica del Espacio (IAFE), C.C. 67 Suc. 28, 1428 Buenos Aires (Argentina)]. E-mail: minces@iafe.uba.ar; Nunez, Carmen [Instituto de Astronomia y Fisica del Espacio (IAFE), C.C. 67 Suc. 28, 1428 Buenos Aires (Argentina) and Physics Department, University of Buenos Aires, Ciudad Universitaria, Pab. I, 1428 Buenos Aires (Argentina)]. E-mail: carmen@iafe.uba.ar
2007-04-19
We consider winding conserving four point functions in the SL(2,R) WZW model for states in arbitrary spectral flow sectors. We compute the leading order contribution to the expansion of the amplitudes in powers of the cross ratio of the four points on the worldsheet, both in the m- and x-basis, with at least one state in the spectral flow image of the highest weight discrete representation. We also perform certain consistency checks on the winding conserving three point functions.
A two-point kinetic model for the PROTEUS reactor
International Nuclear Information System (INIS)
Dam, H. van.
1995-03-01
A two-point reactor kinetic model for the PROTEUS-reactor is developed and the results are described in terms of frequency dependent reactivity transfer functions for the core and the reflector. It is shown that at higher frequencies space-dependent effects occur which imply failure of the one-point kinetic model. In the modulus of the transfer functions these effects become apparent above a radian frequency of about 100 s⁻¹, whereas for the phase behaviour the deviation from a point model already starts at a radian frequency of 10 s⁻¹. (orig.)
TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL
Directory of Open Access Journals (Sweden)
N. Zhu
2016-06-01
Full Text Available The large number of bolts and screws attached to the subway shield ring plates, along with the many metal-stent accessories and electrical equipment mounted on the tunnel walls, cause the laser point cloud data to include many points that do not belong to the tunnel section (hereinafter referred to as non-points), which affects the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud is first projected onto a horizontal plane, and a search algorithm extracts the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented into regions and iteratively fitted with a smooth elliptic cylindrical surface, enabling the automatic filtering of the inner-wall non-points. Two groups of experiments gave consistent results: the elliptic cylindrical model-based method effectively filters out the non-points and meets the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic all-around deformation monitoring of tunnel sections in routine subway operation and maintenance.
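As a toy illustration of the residual-based filtering step, the sketch below fits an axis-aligned ellipse to a single, already centred 2-D cross-section and separates wall points from clutter by their algebraic residual. The geometry, tolerance and noise levels are invented, and the paper's axis fitting and along-axis segmentation are omitted.

```python
import numpy as np

def filter_section(points, tol=0.05):
    """Fit an axis-aligned ellipse x^2/a^2 + y^2/b^2 = 1 to a centred
    cross-section and separate wall points from 'non-points'
    (bolts, brackets, cables) by the algebraic residual."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x ** 2, y ** 2])        # linear in (1/a^2, 1/b^2)
    p, *_ = np.linalg.lstsq(A, np.ones(len(points)), rcond=None)
    resid = np.abs(A @ p - 1.0)
    return points[resid <= tol], points[resid > tol]

rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 500)
wall = np.column_stack([2.7 * np.cos(theta), 2.5 * np.sin(theta)])
wall += rng.normal(0.0, 0.005, wall.shape)       # scanner noise on the wall
clutter = rng.uniform(-1.5, 1.5, (20, 2))        # interior equipment points
cloud = np.vstack([wall, clutter])
kept, removed = filter_section(cloud)
```

Because the clutter lies well inside the fitted ellipse, its algebraic residual is far above the tolerance, so all interior points are flagged while the noisy wall points survive.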
Statistical properties of several models of fractional random point processes
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
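The reduced-variance criterion mentioned above is the Fano factor of the counting statistics; a minimal sketch, using an ordinary homogeneous Poisson process as the reference case (for which F ≈ 1, while F < 1 would signal nonclassical, sub-Poissonian behavior):

```python
import numpy as np

def fano_factor(event_times, window, t_max):
    """Reduced variance of counts, Var[N]/E[N], over fixed windows.
    F = 1 for a Poisson process, F < 1 is sub-Poissonian (nonclassical),
    F > 1 indicates clustering."""
    edges = np.arange(0.0, t_max + window, window)
    counts, _ = np.histogram(event_times, bins=edges)
    return counts.var() / counts.mean()

rng = np.random.default_rng(0)
t_max = 10_000.0
arrivals = np.cumsum(rng.exponential(1.0, size=12_000))  # unit-rate Poisson
arrivals = arrivals[arrivals < t_max]
F = fano_factor(arrivals, window=10.0, t_max=t_max)
```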
Accurate modeling and maximum power point detection of ...
African Journals Online (AJOL)
Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.
Two point function for a simple general relativistic quantum model
Colosi, Daniele
2007-01-01
We study the quantum theory of a simple general relativistic quantum model of two coupled harmonic oscillators and compute the two-point function following a proposal first introduced in the context of loop quantum gravity.
Modeling of Landslides with the Material Point Method
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
2008-01-01
A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...
Modelling of Landslides with the Material-point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2009-01-01
A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...
Modeling of the positioning system and visual mark-up of historical cadastral maps
Directory of Open Access Journals (Sweden)
Tomislav Jakopec
2013-03-01
Full Text Available The aim of the paper is to present the possibilities of positioning and visual mark-up of historical cadastral maps onto Google maps using open source software. The corpus is stored in the Croatian State Archives in Zagreb, in the Maps Archive for Croatia and Slavonia. It is part of the cadastral documentation that consists of cadastral material from the first cadastral survey conducted in the Kingdom of Croatia and Slavonia from 1847 to 1877, and which is used extensively according to the data provided by the customer service of the Croatian State Archives. User needs on the one side and the possibilities of innovative implementation of ICT on the other have motivated the development of a system which would use digital copies of original cadastral maps and connect them with systems like Google maps, and thus both protect the original materials and open up new avenues of research related to the use of the originals. With this aim in mind, two cadastral map presentation models have been created. Firstly, there is a detailed display of the original, which enables its viewing using dynamic zooming. Secondly, an interactive display is facilitated through blending the cadastral maps with Google maps, achieved by establishing links between the coordinates of the digital and original plans through transformation. The transparency of the original can be changed, and the user can intensify the visibility of the underlying layer (Google map) or the top layer (cadastral map), which enables direct insight into parcel dynamics over a longer time-span. The system also allows for the mark-up of cadastral maps, which can lead to the development of a cumulative index of all terms found on cadastral maps. The paper is an example of the implementation of ICT for providing new services, strengthening cooperation with the interested public and related institutions, familiarizing the public with the archival material, and offering new possibilities for
Accuracy limit of rigid 3-point water models
Izadi, Saeed; Onufriev, Alexey V.
2016-08-01
Classical 3-point rigid water models are most widely used due to their computational efficiency. Recently, we introduced a new approach to constructing classical rigid water models [S. Izadi et al., J. Phys. Chem. Lett. 5, 3863 (2014)], which permits a virtually exhaustive search for globally optimal model parameters in the sub-space that is most relevant to the electrostatic properties of the water molecule in the liquid phase. Here we apply the approach to develop a 3-point Optimal Point Charge (OPC3) water model. OPC3 is significantly more accurate than the commonly used water models of the same class (TIP3P and SPCE) in reproducing a comprehensive set of liquid bulk properties, over a wide range of temperatures. Beyond bulk properties, we show that OPC3 predicts the intrinsic charge hydration asymmetry (CHA) of water, a characteristic dependence of the hydration free energy on the sign of the solute charge, in very close agreement with experiment. Two other recent 3-point rigid water models, TIP3PFB and H2ODC, each developed by its own, completely different optimization method, approach the global accuracy optimum represented by OPC3 in both the parameter space and the accuracy of bulk properties. Thus, we argue that an accuracy limit of practical 3-point rigid non-polarizable models has effectively been reached; remaining accuracy issues are discussed.
A model for transmission of the H3K27me3 epigenetic mark
DEFF Research Database (Denmark)
Hansen, Klaus H; Bracken, Adrian P; Pasini, Diego
2008-01-01
Organization of chromatin by epigenetic mechanisms is essential for establishing and maintaining cellular identity in developing and adult organisms. A key question that remains unresolved about this process is how epigenetic marks are transmitted to the next cell generation during cell division...... during incorporation of newly synthesized histones. This mechanism ensures maintenance of the H3K27me3 epigenetic mark in proliferating cells, not only during DNA replication when histones synthesized de novo are incorporated, but also outside S phase, thereby preserving chromatin structure...
Modeling hard clinical end-point data in economic analyses.
Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V
2013-11-01
The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are
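A minimal sketch of the simplest approach in this taxonomy, a Markov cohort model with constant annual event rates; all transition probabilities, costs and utilities below are invented for illustration, not values from any trial:

```python
import numpy as np

# Three-state annual-cycle Markov cohort model (Well / Post-event / Dead).
# A real model would derive these inputs from trial end-point data.
P = np.array([
    [0.94, 0.05, 0.01],   # Well: stay, CV event, die
    [0.00, 0.90, 0.10],   # Post-event: stay, die
    [0.00, 0.00, 1.00],   # Dead is absorbing
])
utility = np.array([0.85, 0.65, 0.00])   # QALYs accrued per cycle in each state
cost = np.array([500.0, 4000.0, 0.0])    # cost accrued per cycle in each state

state = np.array([1.0, 0.0, 0.0])        # whole cohort starts in Well
total_qalys = total_cost = 0.0
for _ in range(20):                      # 20 one-year cycles
    total_qalys += state @ utility
    total_cost += state @ cost
    state = state @ P                    # advance the cohort one cycle
```

Time-dependent risks would replace the constant matrix P with a cycle-indexed P(t); individual simulation would sample patient-level trajectories instead of propagating the cohort vector.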
Shape Modelling Using Markov Random Field Restoration of Point Correspondences
DEFF Research Database (Denmark)
Paulsen, Rasmus Reinhold; Hilger, Klaus Baggesen
2003-01-01
A method for building statistical point distribution models is proposed. The novelty in this paper is the adaption of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized sh...
From Point Cloud to Textured Model, the Zamani Laser Scanning ...
African Journals Online (AJOL)
roshan
meshed models based on dense points has received mixed reaction from the wide range of potential end users of the final ... data, can be subdivided into the stages of data acquisition, registration, data cleaning, modelling, hole filling ..... provide management tools for site management at local and regional level. The project ...
FINDING CUBOID-BASED BUILDING MODELS IN POINT CLOUDS
Directory of Open Access Journals (Sweden)
W. Nguatem
2012-07-01
Full Text Available In this paper, we present an automatic approach for the derivation of 3D building models of level-of-detail 1 (LOD 1) from point clouds obtained from (dense) image matching or, for comparison only, from LIDAR. Our approach makes use of the predominance of vertical structures and orthogonal intersections in architectural scenes. After robustly determining the scene's vertical direction based on the 3D points, we use it as a constraint for a RANSAC-based search for vertical planes in the point cloud. The planes are further analyzed to segment reliable outlines of rectangular surfaces within these planes, which are connected to construct cuboid-based building models. We demonstrate that our approach is robust and effective over a range of real-world input data sets with varying point density, amount of noise, and outliers.
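A sketch of the core step, a RANSAC search for planes constrained to contain a known vertical direction; the two-point sampling scheme and the synthetic test cloud are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def ransac_vertical_plane(pts, up, n_iter=500, tol=0.05, seed=0):
    """RANSAC search for a vertical plane: each candidate plane is spanned
    by the vertical direction `up` and the chord between two sampled
    points, so its normal is perpendicular to `up`."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        n = np.cross(pts[j] - pts[i], up)       # normal is ⊥ up by construction
        norm = np.linalg.norm(n)
        if norm < 1e-9:                         # degenerate (vertical) chord
            continue
        n /= norm
        d = np.abs((pts - pts[i]) @ n)          # point-to-plane distances
        inliers = d < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

rng = np.random.default_rng(42)
wall = np.column_stack([rng.normal(0.0, 0.01, 300),   # a wall near x = 0
                        rng.uniform(0.0, 5.0, 300),
                        rng.uniform(0.0, 3.0, 300)])
noise = rng.uniform(0.2, 5.0, (100, 3))               # off-wall clutter
cloud = np.vstack([wall, noise])
inliers = ransac_vertical_plane(cloud, up=np.array([0.0, 0.0, 1.0]))
```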
Fixed Points in Discrete Models for Regulatory Genetic Networks
Directory of Open Access Journals (Sweden)
Orozco Edusmildo
2007-01-01
Full Text Available It is desirable to have efficient mathematical methods to extract information about regulatory interactions between genes from repeated measurements of gene transcript concentrations. One piece of information of interest is when the dynamics reaches a steady state. In this paper we develop tools that enable the detection of steady states that are modeled by fixed points in discrete finite dynamical systems. We discuss two algebraic models, a univariate model and a multivariate model. We show that these two models are equivalent and that one can be converted to the other by means of a discrete Fourier transform. We give a new, more general definition of a linear finite dynamical system and we give a necessary and sufficient condition for such a system to be a fixed point system, that is, one in which all cycles are of length one. We show how this result for generalized linear systems can be used to determine when certain nonlinear systems (monomial dynamical systems over finite fields) are fixed point systems. We also show how it is possible to determine in polynomial time when an ordinary linear system (defined over a finite field) is a fixed point system. We conclude with a necessary condition for a univariate finite dynamical system to be a fixed point system.
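As a concrete toy instance, the fixed points of a small linear system over GF(2) can simply be enumerated; the matrix below is an arbitrary example, not one from the paper, and brute force stands in for the algebraic criteria the authors develop:

```python
import itertools
import numpy as np

# Fixed points (steady states) of the linear finite dynamical system
# x -> A x over GF(2). Here A x = x reduces to (A - I) x = 0 (mod 2),
# i.e. x2 = 0, so exactly 4 of the 8 states are fixed.
A = np.array([[1, 1, 0],
              [0, 1, 0],
              [0, 0, 1]])

def step(x):
    return tuple((A @ np.array(x)) % 2)

fixed = [x for x in itertools.product((0, 1), repeat=3) if step(x) == x]
```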
New analytically solvable models of relativistic point interactions
International Nuclear Information System (INIS)
Gesztesy, F.; Seba, P.
1987-01-01
Two new analytically solvable models of relativistic point interactions in one dimension (being natural extensions of the nonrelativistic δ- resp. δ'-interaction) are considered. Their spectral properties in the case of finitely many point interactions as well as in the periodic case are fully analyzed. Moreover, the spectrum is explicitly determined in the case of independent, identically distributed random coupling constants, and the analog of the Saxon and Hutner conjecture concerning gaps in the energy spectrum of such systems is derived
Modeling the contribution of point sources and non-point sources to Thachin River water pollution.
Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth
2009-08-15
Major rivers in developing and emerging countries suffer increasingly from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results at the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm, oxygen-deficient waters are certainly partly responsible, but wetlands along the river banks could also play an important role as nutrient sinks.
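The bookkeeping behind such a material-flow analysis can be sketched in a few lines; all source loads and the retention factor below are invented numbers for illustration, not values from the Thachin study:

```python
# Minimal material-flow sketch: annual nitrogen loads from point and
# non-point sources, with a single lumped in-river retention factor
# standing in for sedimentation and volatilization losses.
sources_t_per_yr = {                 # tonnes N per year entering the network
    'aquaculture (point)': 900.0,
    'rice farming (non-point)': 700.0,
    'pig farms (point)': 200.0,
    'households (point)': 150.0,
}
retention = 0.35                     # fraction retained in the river system

gross = sum(sources_t_per_yr.values())
delivered = gross * (1.0 - retention)            # load reaching the river mouth
shares = {k: v / gross for k, v in sources_t_per_yr.items()}
```

A full MMFA would resolve these flows per sub-basin and trace each source through its individual flow path rather than applying one lumped retention factor.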
A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD
J. Tang; Y. Wang; Y. Zhao; Y. Zhao; W. Hao; X. Ning; K. Lv; Z. Shi; M. Zhao
2017-01-01
Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point clouds in this paper. According to the shape, the weight of leaves and the wind speed, three basic trajectories of leaves falling are defined, which ar...
A point particle model of lightly bound skyrmions
Directory of Open Access Journals (Sweden)
Mike Gillard
2017-04-01
Full Text Available A simple model of the dynamics of lightly bound skyrmions is developed in which skyrmions are replaced by point particles, each carrying an internal orientation. The model accounts well for the static energy minimizers of baryon number 1≤B≤8 obtained by numerical simulation of the full field theory. For 9≤B≤23, a large number of static solutions of the point particle model are found, all closely resembling size B subsets of a face centred cubic lattice, with the particle orientations dictated by a simple colouring rule. Rigid body quantization of these solutions is performed, and the spin and isospin of the corresponding ground states extracted. As part of the quantization scheme, an algorithm to compute the symmetry group of an oriented point cloud, and to determine its corresponding Finkelstein–Rubinstein constraints, is devised.
Predicting acid dew point with a semi-empirical model
International Nuclear Information System (INIS)
Xiang, Baixiang; Tang, Bin; Wu, Yuxin; Yang, Hairui; Zhang, Man; Lu, Junfu
2016-01-01
Highlights: • The previous semi-empirical models are systematically studied. • An improved thermodynamic correlation is derived. • A semi-empirical prediction model is proposed. • The proposed semi-empirical model is validated. - Abstract: Decreasing the temperature of the exhaust flue gas in boilers is one of the most effective ways to further improve the thermal efficiency and electrostatic precipitator efficiency and to decrease the water consumption of the desulfurization tower; however, when this temperature falls below the acid dew point, fouling and corrosion occur on the heating surfaces in the second pass of boilers. Accurate prediction of the acid dew point is therefore essential. By investigating the previous models of acid dew point prediction, an improved thermodynamic correlation between the acid dew point and its influencing factors is first derived. A semi-empirical prediction model is then proposed, which is validated with both field-test and experimental data and compared with the previous models.
An Improved Nonlinear Five-Point Model for Photovoltaic Modules
Directory of Open Access Journals (Sweden)
Sakaros Bogning Dongue
2013-01-01
Full Text Available This paper presents an improved nonlinear five-point model capable of analytically describing the electrical behavior of a photovoltaic module for each generic operating condition of temperature and solar irradiance. The models used to replicate the electrical behavior of operating PV modules are usually based on simplified assumptions which provide a convenient mathematical model that can be used in conventional simulation tools. Unfortunately, these assumptions cause some inaccuracies, and hence unrealistic economic returns are predicted. As an alternative, we used the advantages of a nonlinear analytical five-point model to take into account the non-ideal diode effects and the nonlinear effects, generally ignored, on which PV module operation depends. To verify the capability of our method to fit PV panel characteristics, the procedure was tested on three different panels. Results were compared with the data issued by manufacturers and with the results obtained using the five-parameter model proposed by other authors.
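For context, a common way to evaluate an implicit diode-based module model at a given voltage is Newton's method; the sketch below uses the standard single-diode equation with invented parameters, not the authors' five-point formulation:

```python
import numpy as np

def pv_current(V, Iph=8.0, I0=1e-9, n=1.3, Rs=0.2, Rsh=300.0,
               nsVt=0.0257 * 60, iters=50):
    """Solve the implicit single-diode equation
        I = Iph - I0*(exp((V + I*Rs)/(n*nsVt)) - 1) - (V + I*Rs)/Rsh
    for the module current I by Newton's method. All parameters are
    illustrative (nsVt assumes 60 series cells at ~25 °C), not taken
    from any datasheet."""
    I = Iph                                   # photocurrent is a good start
    for _ in range(iters):
        e = np.exp((V + I * Rs) / (n * nsVt))
        f = Iph - I0 * (e - 1.0) - (V + I * Rs) / Rsh - I
        df = -I0 * e * Rs / (n * nsVt) - Rs / Rsh - 1.0
        I -= f / df                           # Newton update
    return I

Isc = pv_current(0.0)   # short-circuit current, slightly below Iph
```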
Multi-Valued Modal Fixed Point Logics for Model Checking
Nishizawa, Koki
In this paper, I will show how multi-valued logics are used for model checking. Model checking is an automatic technique to analyze correctness of hardware and software systems. A model checker is based on a temporal logic or a modal fixed point logic. That is to say, a system to be checked is formalized as a Kripke model, a property to be satisfied by the system is formalized as a temporal formula or a modal formula, and the model checker checks that the Kripke model satisfies the formula. Although most existing model checkers are based on 2-valued logics, recently new attempts have been made to extend the underlying logics of model checkers to multi-valued logics. I will summarize these new results.
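The core of such a checker is a fixed point iteration over a lattice of truth values; below is a minimal sketch for the reachability property EF p over the chain 0 < 0.5 < 1, with a hypothetical three-state model (state names, transition degrees and labels are invented):

```python
# Least-fixed-point computation of EF p ("p is reachable") over a
# Kripke model with multi-valued transitions. Conjunction is min,
# disjunction is max, as in a chain-valued semantics.
trans = {                       # transition degree between states
    ('s0', 's1'): 1.0,
    ('s1', 's2'): 0.5,          # an uncertain transition
}
label_p = {'s0': 0.0, 's1': 0.0, 's2': 1.0}   # value of p in each state
states = ['s0', 's1', 's2']

ef = dict(label_p)              # start from [[p]] and iterate to the fixpoint
changed = True
while changed:
    changed = False
    for s in states:
        reach = max(
            [min(v, ef[t]) for (u, t), v in trans.items() if u == s],
            default=0.0,
        )
        new = max(ef[s], reach)   # EF p = p OR EX EF p
        if new != ef[s]:
            ef[s] = new
            changed = True
```

The uncertain transition caps the value of EF p at 0.5 in s0 and s1, exactly the kind of graded answer a 2-valued checker cannot give.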
A case study on point process modelling in disease mapping
DEFF Research Database (Denmark)
Møller, Jesper; Waagepetersen, Rasmus Plenge; Benes, Viktor
2005-01-01
of the risk on the covariates. Instead of using the common areal level approaches we base the analysis on a Bayesian approach for a log Gaussian Cox point process with covariates. Posterior characteristics for a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo...... methods. A particular problem which is thoroughly discussed is to determine a model for the background population density. The risk map shows a clear dependency with the population intensity models and the basic model which is adopted for the population intensity determines what covariates influence...... the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics....
Multivariate Product-Shot-noise Cox Point Process Models
DEFF Research Database (Denmark)
Jalilian, Abdollah; Guan, Yongtao; Mateu, Jorge
We introduce a new multivariate product-shot-noise Cox process which is useful for modeling multi-species spatial point patterns with clustering intra-specific interactions and neutral, negative or positive inter-specific interactions. The auto and cross pair correlation functions of the process...... can be obtained in closed analytical forms and approximate simulation of the process is straightforward. We use the proposed process to model interactions within and among five tree species in the Barro Colorado Island plot....
The quantum nonlinear Schroedinger model with point-like defect
International Nuclear Information System (INIS)
Caudrelier, V; Mintchev, M; Ragoucy, E
2004-01-01
We establish a family of point-like impurities which preserve the quantum integrability of the nonlinear Schroedinger model in 1+1 spacetime dimensions. We briefly describe the construction of the exact second quantized solution of this model in terms of an appropriate reflection-transmission algebra. The basic physical properties of the solution, including the spacetime symmetry of the bulk scattering matrix, are also discussed. (letter to the editor)
Czech Academy of Sciences Publication Activity Database
Low, P. A.; Sam, Kateřina; McArthur, C.; Posa, M. R. C.; Hochuli, D. F.
2014-01-01
Roč. 152, č. 2 (2014), s. 120-126 ISSN 0013-8703 R&D Projects: GA ČR GA206/09/0115; GA ČR GD206/08/H044 Grant - others:University of Sydney Animal Ethics(AU) L04/6-2012/3/5792 Institutional support: RVO:60077344 Keywords : artificial prey * attack marks * insect herbivores Subject RIV: EH - Ecology, Behaviour Impact factor: 1.616, year: 2014 http://onlinelibrary.wiley.com/doi/10.1111/eea.12207/pdf
Integrated modeling and analysis methodology for precision pointing applications
Gutierrez, Homero L.
2002-07-01
Space-based optical systems that perform tasks such as laser communications, Earth imaging, and astronomical observations require precise line-of-sight (LOS) pointing. A general approach is described for integrated modeling and analysis of these types of systems within the MATLAB/Simulink environment. The approach can be applied during all stages of program development, from early conceptual design studies to hardware implementation phases. The main objective is to predict the dynamic pointing performance subject to anticipated disturbances and noise sources. Secondary objectives include assessing the control stability, levying subsystem requirements, supporting pointing error budgets, and performing trade studies. The integrated model resides in Simulink, and several MATLAB graphical user interfaces (GUIs) allow the user to configure the model, select analysis options, run analyses, and process the results. A convenient parameter naming and storage scheme, as well as model conditioning and reduction tools and run-time enhancements, are incorporated into the framework. This enables the proposed architecture to accommodate models of realistic complexity.
Comprehensive overview of the Point-by-Point model of prompt emission in fission
Energy Technology Data Exchange (ETDEWEB)
Tudora, A. [University of Bucharest, Faculty of Physics, Bucharest Magurele (Romania); Hambsch, F.J. [European Commission, Joint Research Centre, Directorate G - Nuclear Safety and Security, Unit G2, Geel (Belgium)
2017-08-15
The investigation of prompt emission in fission is very important for understanding the fission process and for improving the quality of evaluated nuclear data required for new applications. In the last decade remarkable efforts were made both in the development of prompt emission models and in the experimental investigation of the properties of fission fragments and of prompt neutron and γ-ray emission. The accurate experimental data concerning the prompt neutron multiplicity as a function of fragment mass and total kinetic energy for 252Cf(SF) and 235U(n,f) recently measured at JRC-Geel (as well as other prompt emission data) allow a consistent and very detailed validation of the Point-by-Point (PbP) deterministic model of prompt emission. The PbP model results describe very well a large variety of experimental data, starting from the multi-parametric matrices of prompt neutron multiplicity ν(A,TKE) and γ-ray energy Eγ(A,TKE), which validate the model itself, passing through different average prompt emission quantities as a function of A (e.g., ν(A), Eγ(A), ⟨ε⟩(A)), as a function of TKE (e.g., ν(TKE), Eγ(TKE)), up to the prompt neutron distribution P(ν) and the total average prompt neutron spectrum. The PbP model does not use free or adjustable parameters. To calculate the multi-parametric matrices it needs only data included in the Reference Input Parameter Library (RIPL) of the IAEA. To provide average prompt emission quantities as a function of A, of TKE, and total average quantities, the multi-parametric matrices are averaged over reliable experimental fragment distributions. The PbP results are also in agreement with the results of the Monte Carlo prompt emission codes FIFRELIN, CGMF and FREYA. The good description of a large variety of experimental data proves the capability of the PbP model to be used in nuclear data evaluations and its reliability to predict prompt emission data for fissioning
Recent tests of the equilibrium-point hypothesis (lambda model).
Feldman, A G; Ostry, D J; Levin, M F; Gribble, P L; Mitnitski, A B
1998-07-01
The lambda model of the equilibrium-point hypothesis (Feldman & Levin, 1995) is an approach to motor control which, like physics, is based on a logical system coordinating empirical data. The model has gone through an interesting period. On one hand, several nontrivial predictions of the model have been successfully verified in recent studies. In addition, the explanatory and predictive capacity of the model has been enhanced by its extension to multimuscle and multijoint systems. On the other hand, claims have recently appeared suggesting that the model should be abandoned. The present paper focuses on these claims and concludes that they are unfounded. Much of the experimental data that have been used to reject the model are actually consistent with it.
The Comparison of Point Data Models for the Output of WRF Hydro Model in the IDV
Ho, Y.; Weber, J.
2017-12-01
WRF Hydro netCDF output files contain streamflow, flow depth, longitude, latitude, altitude and stream order values for each forecast point. However, the data are not CF compliant. The total number of forecast points for the US CONUS is approximately 2.7 million, which is a big challenge for any visualization and analysis tool. The IDV point cloud display shows point data as a set of points colored by parameter. This display is very efficient compared to a standard point type display for rendering a large number of points. One problem is that data I/O can become a bottleneck when dealing with a large collection of point input files. In this presentation, we will experiment with different point data models and their APIs to access the same WRF Hydro model output. The results will help us construct a CF-compliant netCDF point data format for the community.
Modeling molecular boiling points using computed interaction energies.
Peterangelo, Stephen C; Seybold, Paul G
2017-12-20
The noncovalent van der Waals interactions between molecules in liquids are typically described in textbooks as occurring between the total molecular dipoles (permanent, induced, or transient) of the molecules. This notion was tested by examining the boiling points of 67 halogenated hydrocarbon liquids using quantum chemically calculated molecular dipole moments, ionization potentials, and polarizabilities obtained from semi-empirical (AM1 and PM3) and ab initio Hartree-Fock [HF 6-31G(d), HF 6-311G(d,p)], and density functional theory [B3LYP/6-311G(d,p)] methods. The calculated interaction energies and an empirical measure of hydrogen bonding were employed to model the boiling points of the halocarbons. It was found that only terms related to London dispersion energies and hydrogen bonding proved significant in the regression analyses, and the performances of the models generally improved at higher levels of quantum chemical computation. An empirical estimate for the molecular polarizabilities was also tested, and the best models for the boiling points were obtained using either this empirical polarizability itself or the polarizabilities calculated at the B3LYP/6-311G(d,p) level, along with the hydrogen-bonding parameter. The results suggest that the cohesive forces are more appropriately described as resulting from highly localized interactions rather than interactions between the global molecular dipoles.
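The regression step can be sketched as an ordinary least-squares fit of boiling points on a polarizability (dispersion-related) descriptor plus a hydrogen-bonding parameter, mirroring the study's model form; the descriptor values, coefficients and "boiling points" below are synthetic, for illustration only:

```python
import numpy as np

# Synthetic illustration of T_b = c0 + c1*alpha + c2*HB: generate
# descriptors, build noisy responses, recover coefficients by OLS.
rng = np.random.default_rng(3)
alpha = rng.uniform(3.0, 15.0, 40)          # polarizability-like descriptor
hb = rng.integers(0, 3, 40).astype(float)   # crude H-bond parameter (0..2)
tb = 150.0 + 12.0 * alpha + 25.0 * hb + rng.normal(0.0, 5.0, 40)

X = np.column_stack([np.ones_like(alpha), alpha, hb])
coef, *_ = np.linalg.lstsq(X, tb, rcond=None)
pred = X @ coef
r2 = 1.0 - ((tb - pred) ** 2).sum() / ((tb - tb.mean()) ** 2).sum()
```

In the actual study, significant terms were only the dispersion-related and hydrogen-bonding ones; dipole-based terms dropped out of the regressions.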
Energy Technology Data Exchange (ETDEWEB)
Lai, W.; McCauley, E.W.
1978-01-04
Results of table-top model experiments performed to investigate pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peach Bottom Mark I boiling water reactor containment system guided subsequent conduct of the 1/5-scale torus experiment and provided new insight into the vertical load function (VLF). Pool dynamics results were qualitatively correct. Experiments with a 1/64-scale fully modeled drywell and torus showed that a 90° torus sector was adequate to reveal three-dimensional effects; the 1/5-scale torus experiment confirmed this.
International Nuclear Information System (INIS)
Lai, W.; McCauley, E.W.
1978-01-01
Results of table-top model experiments performed to investigate pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peach Bottom Mark I boiling water reactor containment system guided subsequent conduct of the 1/5-scale torus experiment and provided new insight into the vertical load function (VLF). Pool dynamics results were qualitatively correct. Experiments with a 1/64-scale fully modeled drywell and torus showed that a 90° torus sector was adequate to reveal three-dimensional effects; the 1/5-scale torus experiment confirmed this.
Third generation masses from a two Higgs model fixed point
International Nuclear Information System (INIS)
Froggatt, C.D.; Knowles, I.G.; Moorhouse, R.G.
1990-01-01
The large mass ratio between the top and bottom quarks may be attributed to a hierarchy in the vacuum expectation values of scalar doublets. We consider an effective renormalisation group fixed point determination of the quartic scalar and third generation Yukawa couplings in such a two doublet model. This predicts a mass m_t = 220 GeV and a mass ratio m_b/m_τ = 2.6. In its simplest form the model also predicts the scalar masses, including a light scalar with a mass of order the b quark mass. Experimental implications are discussed. (orig.)
Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino
2018-02-22
CO₂ remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO₂ emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework by considering a CO₂ industrial point source, located in Biganos, France. CO₂ density measurements were obtained by applying the mass balance method, while CO₂ emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventory data were used as reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, particularly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
International Nuclear Information System (INIS)
Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata
2010-01-01
Constructing a statistical point cloud model generally requires computing corresponding points, and the resulting statistical model depends on the method used to compute them. This article examines how different methods of computing corresponding points affect statistical models of human organs. We validated the performance of each statistical model by registering it to an organ surface in a 3D medical image. We compare two methods of computing corresponding points. The first, Generalized Multi-Dimensional Scaling (GMDS), determines the corresponding points from the shapes of two curved surfaces. The second, the entropy-based particle system, chooses corresponding points by statistical analysis over a number of curved surfaces. With these methods we construct the statistical models, and using these models we perform registration against the medical image. For the estimation we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two methods of computing corresponding points affect the statistical model through the change in the probability density at each point. (author)
Zirconium - ab initio modelling of point defects diffusion
International Nuclear Information System (INIS)
Gasca, Petrica
2010-01-01
Zirconium, in alloy form, is the main constituent of the fuel cladding found in pressurized water reactors. Under irradiation the cladding elongates significantly, a phenomenon attributed to the growth of vacancy dislocation loops in the basal planes of the hexagonal close-packed structure. The desire to understand the atomic-scale mechanisms behind this process motivated this work. Using ab initio atomistic modelling we studied the structure and mobility of point defects in zirconium. This led us to identify four interstitial point defects with formation energies within an interval of 0.11 eV. The study of migration paths yielded activation energies, which served as input parameters for a kinetic Monte Carlo code developed to calculate the diffusion coefficient of the interstitial point defect. Our results suggest migration parallel to the basal plane twice as fast as that parallel to the c direction, with an activation energy of 0.08 eV, independent of the direction. The vacancy diffusion coefficient, estimated with a two-jump model, is also anisotropic, with faster diffusion in the basal planes than perpendicular to them. The influence of hydrogen on the nucleation of vacancy dislocation loops was also studied, motivated by recent experimental observations of cladding growth acceleration in the presence of this element.
Dissipative N-point-vortex Models in the Plane
Shashikanth, Banavara N.
2010-02-01
A method is presented for constructing point vortex models in the plane that dissipate the Hamiltonian function at any prescribed rate and yet conserve the level sets of the invariants of the Hamiltonian model arising from the SE(2) symmetries. The method is purely geometric in that it uses the level sets of the Hamiltonian and the invariants to construct the dissipative field and is based on elementary classical geometry in ℝ³. Extension to higher-dimensional spaces, such as the point vortex phase space, is done using exterior algebra. The method is in fact general enough to apply to any smooth finite-dimensional system with conserved quantities, and, for certain special cases, the dissipative vector field constructed can be associated with an appropriately defined double Nambu-Poisson bracket. The most interesting feature of this method is that it allows for an infinite sequence of such dissipative vector fields to be constructed by repeated application of a symmetric linear operator (matrix) at each point of the intersection of the level sets.
The Benefits and Limitations of Hydraulic Modeling for Ordinary High Water Mark Delineation
2016-02-01
…between two cross sections, the HEC-RAS model will not show it. If there is a sudden drop in the channel, such as a waterfall or steep rapids, the… Wetland Regulatory Assistance Program (WRAP), ERDC/CRREL TR-16-1, February 2016.
a Modeling Method of Fluttering Leaves Based on Point Cloud
Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.
2017-09-01
Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. In this paper we propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling, and screw-roll falling. In addition, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD
Directory of Open Access Journals (Sweden)
J. Tang
2017-09-01
Full Text Available Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. In this paper we propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling, and screw-roll falling. In addition, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
A relativistic point coupling model for nuclear structure calculations
International Nuclear Information System (INIS)
Buervenich, T.; Maruhn, J.A.; Madland, D.G.; Reinhard, P.G.
2002-01-01
A relativistic point coupling model is discussed focusing on a variety of aspects. In addition to the coupling using various bilinear Dirac invariants, derivative terms are also included to simulate finite-range effects. The formalism is presented for nuclear structure calculations of ground state properties of nuclei in the Hartree and Hartree-Fock approximations. Different fitting strategies for the determination of the parameters have been applied and the quality of the fit obtainable in this model is discussed. The model is then compared more generally to other mean-field approaches both formally and in the context of applications to ground-state properties of known and superheavy nuclei. Perspectives for further extensions such as an exact treatment of the exchange terms using a higher-order Fierz transformation are discussed briefly. (author)
Self-Exciting Point Process Modeling of Conversation Event Sequences
Masuda, Naoki; Takaguchi, Taro; Sato, Nobuo; Yano, Kazuo
Self-exciting processes of Hawkes type have been used to model various phenomena including earthquakes, neural activities, and views of online videos. Studies of temporal networks have revealed that sequences of social interevent times for individuals are highly bursty. We examine some basic properties of event sequences generated by the Hawkes self-exciting process to show that it generates bursty interevent times for a wide parameter range. Then, we fit the model to data on conversation sequences recorded in company offices in Japan. In this way, we can estimate the relative magnitudes of the self-excitement, its temporal decay, and the base event rate independent of the self-excitation. These variables depend strongly on the individual. We also point out that the Hawkes model has an important limitation: the correlation in the interevent times and the burstiness cannot be independently modulated.
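For readers unfamiliar with the process class discussed above: an exponential-kernel Hawkes process has conditional intensity λ(t) = μ + Σᵢ α·exp(−β(t − tᵢ)) over past events tᵢ, and can be simulated with Ogata's thinning algorithm. The following sketch is illustrative only (parameter names and values are my own, not from the paper); for α near β the generated interevent times become bursty:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Simulate a Hawkes process with intensity
    lambda(t) = mu + sum_i alpha*exp(-beta*(t - t_i))
    via Ogata's thinning algorithm. Returns sorted event times in [0, t_max]."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < t_max:
        # Between events the intensity only decays, so the intensity at the
        # current time is a valid upper bound for the thinning step.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)
        if t >= t_max:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:
            events.append(t)  # accept the candidate event
    return events
```

With α/β = 0.5 the stationary event rate is μ/(1 − α/β), i.e. twice the base rate, and the excess events cluster into bursts.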
Abanmi, Abdullah A; Al Zouman, Abdulrahman Y; Al Hussaini, Husa; Al-Asmari, Abdulrahman
2002-07-01
Prayer marks (PMs) are asymptomatic, chronic skin changes that consist mainly of thickening, lichenification, and hyperpigmentation, and develop over a long period of time as a consequence of repeated, extended pressure on bony prominences during prayer. Three hundred and forty-nine Muslims and 24 non-Muslims were examined for the appearance of PMs at different body sites. The prospective study of 349 Muslims (both males and females) with regular praying habits showed the occurrence of PMs on specific locations, such as the forehead, knees, ankles, and dorsa of the feet, leading to dermatologic changes consisting of lichenification and hyperpigmentation. The incidence of PMs was significantly higher in males than in females. Older subjects (over 50 years of age) demonstrated a significantly higher frequency of lichenification and hyperpigmentation, suggesting that repeated pressure and friction for prolonged periods are the causative factors for the development of PMs. Histologic examination of skin biopsies from the affected sites showed compact orthokeratosis, hypergranulosis, dermal papillary fibrosis, and dermal vascularization. PMs were not associated with any risk of secondary complications, such as erythema, bullous formation, and infections. PMs are commonly occurring dermatologic changes in Muslims who pray for prolonged periods.
FIRST PRISMATIC BUILDING MODEL RECONSTRUCTION FROM TOMOSAR POINT CLOUDS
Directory of Open Access Journals (Sweden)
Y. Sun
2016-06-01
Full Text Available This paper demonstrates for the first time the potential of explicitly modelling the individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting off the ground terrain. The DSM is smoothed using the BM3D denoising method proposed in (Dabov et al., 2007), and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height- and polygon-complexity-constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. The coarse outline of each roof segment is then reconstructed and later refined using quadtree-based regularization plus a zig-zag line simplification scheme. Finally, a height is associated with each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (a convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images with the Tomo-GENESIS software developed at DLR.
Mark II pressure suppression containment systems: an analytical model of the pool swell phenomenon
International Nuclear Information System (INIS)
Ernst, R.J.; Ward, M.G.
1976-12-01
A one-dimensional pool swell model of the dynamic and thermodynamic conditions in the suppression chamber following a postulated loss-of-coolant accident (LOCA) is described. The pool swell phenomenon is approximated by a constant-thickness water slug, which is accelerated upward by the difference between the air bubble pressure acting below the pool and the wetwell air space pressure acting above the pool surface. The transient bubble pressure is computed using the known drywell pressure history and a quasi-steady compressible vent flow model. Comparisons of model predictions with pool swell experimental data are favorable and show the model is based on a conservative interpretation of the physical phenomena involved.
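The slug dynamics described above reduce to Newton's second law for a water column of fixed thickness driven by the bubble/air-space pressure difference. A minimal sketch, assuming a rigid slug of density ρ and thickness h with user-supplied pressure histories (hypothetical inputs, not the report's actual vent-flow model):

```python
def pool_swell(p_bubble, p_wetwell, slug_thickness, dt=1e-3, t_end=0.5,
               rho=1000.0, g=9.81):
    """Integrate the water-slug equation of motion
        a = (p_bubble - p_wetwell) / (rho * h) - g
    with forward Euler. p_bubble and p_wetwell are callables giving
    pressure (Pa) at time t (illustrative inputs). Returns a list of
    (time, displacement, velocity) samples."""
    v, x, t = 0.0, 0.0, 0.0
    traj = []
    while t < t_end:
        a = (p_bubble(t) - p_wetwell(t)) / (rho * slug_thickness) - g
        v += a * dt  # accelerate the slug
        x += v * dt  # advance the pool surface
        t += dt
        traj.append((t, x, v))
    return traj
```

For a constant 1-bar overpressure on a 1-m slug, the net upward acceleration is about 90 m/s², so the surface rises rapidly, consistent with the pool swell behavior the model is meant to capture.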
International Nuclear Information System (INIS)
Ozdemir, Ozkan Emre; George, Thomas L.
2015-01-01
As a part of the GOTHIC (GOTHIC incorporates technology developed for the electric power industry under the sponsorship of EPRI.) Fukushima Technical Evaluation project (EPRI, 2014a, b, 2015), GOTHIC (EPRI, 2014c) has been benchmarked against test data for pool stratification (EPRI, 2014a, b, Ozdemir and George, 2013). These tests confirmed GOTHIC’s ability to simulate pool mixing and stratification under a variety of anticipated suppression pool operating conditions. The multidimensional modeling requires long simulation times for events that may occur over a period of hours or days. For these scenarios a lumped model of the pressure suppression chamber is desirable to maintain reasonable simulation times. However, a lumped model for the pool is not able to predict the effects of pool stratification that can influence the overall containment response. The main objective of this work is the development of a correlation that can be used to estimate pool mixing and stratification effects in a lumped modeling approach. A simplified lumped GOTHIC model that includes a two-zone model for the suppression pool with controlled circulation between the upper and lower zones was constructed. A pump and associated flow connections are included to provide mixing between the upper and lower pool volumes. Using numerically generated data from a multidimensional GOTHIC model for the suppression pool, a correlation was developed for the mixing rate between the upper and lower pool volumes in a two-zone, lumped model. The mixing rate depends on the pool subcooling, the steam injection rate and the injection depth.
The Critical Point Entanglement and Chaos in the Dicke Model
Directory of Open Access Journals (Sweden)
Lina Bao
2015-07-01
Full Text Available Ground state properties and level statistics of the Dicke model for a finite number of atoms are investigated based on a progressive diagonalization scheme (PDS). Particle number statistics, the entanglement measure and the Shannon information entropy at the resonance point in cases with a finite number of atoms as functions of the coupling parameter are calculated. It is shown that the entanglement measure defined in terms of the normalized von Neumann entropy of the reduced density matrix of the atoms reaches its maximum value at the critical point of the quantum phase transition where the system is most chaotic. Noticeable change in the Shannon information entropy near or at the critical point of the quantum phase transition is also observed. In addition, the quantum phase transition may be observed not only in the ground state mean photon number and the ground state atomic inversion as shown previously, but also in fluctuations of these two quantities in the ground state, especially in the atomic inversion fluctuation.
Multiplicative point process as a model of trading activity
Gontis, V.; Kaulakys, B.
2004-11-01
Signals consisting of a sequence of pulses show that an inherent origin of the 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting a power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits a power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1, and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events are analyzed analytically and numerically as well. The specific interest of our analysis relates to financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism behind the power-law distribution of trading activity. The study provides evidence that the statistical properties of financial markets are enclosed in the statistics of the time intervals between trades. A multiplicative point process serves as a consistent model generating these statistics.
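A commonly cited form of such a multiplicative iteration for the interevent time τ_k is τ_{k+1} = τ_k + γ·τ_k^{2μ−1} + σ·τ_k^μ·ε_k, with ε_k standard Gaussian noise and reflecting boundaries keeping τ in a finite interval. The sketch below generates such a sequence; the functional form is a plausible reading of this model class and every parameter value is illustrative, not fitted to market data:

```python
import random

def multiplicative_intervals(n, gamma=0.01, sigma=0.1, mu=0.5,
                             tau_min=1e-3, tau_max=1.0, seed=1):
    """Generate n interevent times from the multiplicative iteration
    tau_{k+1} = tau_k + gamma*tau_k**(2*mu-1) + sigma*tau_k**mu * eps_k,
    reflected (and clamped for safety) into [tau_min, tau_max]."""
    rng = random.Random(seed)
    tau = 0.1
    out = []
    for _ in range(n):
        eps = rng.gauss(0.0, 1.0)
        tau = tau + gamma * tau**(2 * mu - 1) + sigma * tau**mu * eps
        # reflecting boundaries, then clamp against overshoot
        if tau < tau_min:
            tau = 2 * tau_min - tau
        if tau > tau_max:
            tau = 2 * tau_max - tau
        tau = min(max(tau, tau_min), tau_max)
        out.append(tau)
    return out
```

Summing the generated intervals yields the event times; counting events in fixed windows then gives the trading-activity statistics the abstract analyzes.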
Between tide and wave marks: a unifying model of physical zonation on littoral shores
Directory of Open Access Journals (Sweden)
Christopher E. Bird
2013-09-01
Full Text Available The effects of tides on littoral marine habitats are so ubiquitous that shorelines are commonly described as ‘intertidal’, whereas waves are considered a secondary factor that simply modifies the intertidal habitat. However, mean significant wave height exceeds tidal range at many locations worldwide. Here we construct a simple sinusoidal model of coastal water level based on both tidal range and wave height. From the patterns of emergence and submergence predicted by the model, we derive four vertical shoreline benchmarks which bracket up to three novel, spatially distinct, and physically defined zones. The (1) emergent tidal zone is characterized by tidally driven emergence in air; the (2) wave zone is characterized by constant (not periodic) wave wash; and the (3) submergent tidal zone is characterized by tidally driven submergence. The decoupling of tidally driven emergence and submergence made possible by wave action is a critical prediction of the model. On wave-dominated shores (wave height ≫ tidal range), all three zones are predicted to exist separately, but on tide-dominated shores (tidal range ≫ wave height) the wave zone is absent and the emergent and submergent tidal zones overlap substantially, forming the traditional “intertidal zone”. We conclude by incorporating time and space in the model to illustrate variability in the physical conditions and zonation on littoral shores. The wave:tide physical zonation model is a unifying framework that can facilitate our understanding of physical conditions on littoral shores, whether tropical or temperate, marine or lentic.
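The emergence/submergence patterns described above can be reproduced with a toy water-level model: a sinusoidal tide of range R with an instantaneous wave band of height H superimposed on it. The sketch below is my illustration of that idea, not the authors' code; it classifies a shore elevation z by the fraction of one tidal cycle it spends fully submerged, wave-washed, or emergent:

```python
import math

def classify_elevation(z, tidal_range, wave_height, n=10000):
    """Simulate one tidal cycle: tide(t) = (R/2)*sin(2*pi*t), with the
    instantaneous wave wash band spanning [tide - H/2, tide + H/2].
    Returns (submerged, washed, emergent) fractions of the cycle for a
    point at elevation z relative to mean water level."""
    R, H = tidal_range, wave_height
    submerged = washed = emergent = 0
    for i in range(n):
        tide = (R / 2) * math.sin(2 * math.pi * i / n)
        lo, hi = tide - H / 2, tide + H / 2
        if z < lo:
            submerged += 1   # below wave troughs: continuously submerged
        elif z > hi:
            emergent += 1    # above wave crests: emerged in air
        else:
            washed += 1      # within the wave band: washed
    return submerged / n, washed / n, emergent / n
```

On a wave-dominated shore (H ≫ R) a mid-shore point is washed for the entire cycle, while on a tide-dominated shore (R ≫ H) it alternates between tidal submergence and emergence, matching the zonation the model predicts.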
Directory of Open Access Journals (Sweden)
Rotella, J. J.
2004-06-01
Full Text Available Estimating nest success and evaluating factors potentially related to the survival rates of nests are key aspects of many studies of avian populations. A strong interest in nest success has led to a rich literature detailing a variety of estimation methods for this vital rate. In recent years, modeling approaches have undergone especially rapid development. Despite these advances, most researchers still employ Mayfield’s ad-hoc method (Mayfield, 1961) or, in some cases, the maximum-likelihood estimator of Johnson (1979) and Bart & Robson (1982). Such methods permit analyses of stratified data but do not allow for more complex and realistic models of nest survival rate that include covariates that vary by individual, nest age, time, etc., and that may be continuous or categorical. Methods that allow researchers to rigorously assess the importance of a variety of biological factors that might affect nest survival rates can now be readily implemented in Program MARK and in SAS’s Proc GENMOD and Proc NLMIXED. Accordingly, use of Mayfield’s estimator without first evaluating the need for more complex models of nest survival rate cannot be justified. With the goal of increasing the use of more flexible methods, we first describe the likelihood used for these models and then consider the question of what the effective sample size is for computation of AICc. Next, we consider the advantages and disadvantages of these different programs in terms of ease of data input and model construction; utility/flexibility of generated estimates and predictions; ease of model selection; and ability to estimate variance components. An example data set is then analyzed using both MARK and SAS to demonstrate implementation of the methods with various models that contain nest-, group- (or block-), and time-specific covariates. Finally, we discuss improvements that would, if they became available, promote a better general understanding of nest survival rates.
Matusevych, Yevgen; Alishahi, Afra; Backus, Albert
2017-01-01
Cross-linguistic influence (CLI) is one of the key phenomena in bilingual and second language learning. We propose a method for quantifying CLI in the use of linguistic constructions with the help of a computational model, which acquires constructions in two languages from bilingual input. We focus
Two-point model for electron transport in EBT
International Nuclear Information System (INIS)
Chiu, S.C.; Guest, G.E.
1980-01-01
The electron transport in EBT is simulated by a two-point model corresponding to the central plasma and the edge. The central plasma is assumed to obey neoclassical collisionless transport. The edge plasma is assumed turbulent and modeled by Bohm diffusion. The steady-state temperatures and densities in both regions are obtained as functions of neutral influx and microwave power. It is found that as the neutral influx decreases and power increases, the edge density decreases while the core density increases. We conclude that if ring instability is responsible for the T-M mode transition, and if stability is correlated with cold electron density at the edge, it will depend sensitively on ambient gas pressure and microwave power
A Thermodynamic Point of View on Dark Energy Models
Directory of Open Access Journals (Sweden)
Vincenzo F. Cardone
2017-07-01
Full Text Available We present a conjugate analysis of two different dark energy models, namely the Barboza–Alcaniz parameterization and the phenomenologically-motivated Hobbit model, investigating both their agreement with observational data and their thermodynamical properties. We successfully fit a wide dataset including the Hubble diagram of Type Ia Supernovae, the Hubble rate expansion parameter as measured from cosmic chronometers, the baryon acoustic oscillation (BAO) standard ruler data and the Planck distance priors. This analysis allows us to constrain the model parameters, thus pointing at the region of the wide parameter space which is worth focusing on. As a novel step, we exploit the strong connection between gravity and thermodynamics to further check models’ viability by investigating their thermodynamical quantities. In particular, we study whether the cosmological scenario fulfills the generalized second law of thermodynamics, and moreover, we contrast the two models, asking whether the evolution of the total entropy is in agreement with the expectation for a closed system. As a general result, we discuss whether thermodynamic constraints can be a valid complementary way to both constrain dark energy models and differentiate among rival scenarios.
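For context, the Barboza–Alcaniz parameterization mentioned above takes the form w(z) = w₀ + w₁·z(1+z)/(1+z²), and any equation of state w(z) determines the dark-energy density evolution through ρ(z)/ρ₀ = exp(3∫₀ᶻ (1+w(z'))/(1+z') dz'). A small numerical sketch (the parameter values are illustrative, not the paper's fitted values):

```python
import math

def w_ba(z, w0=-1.0, w1=0.1):
    """Barboza-Alcaniz dark-energy equation of state (illustrative w0, w1)."""
    return w0 + w1 * z * (1 + z) / (1 + z**2)

def de_density_ratio(z, w, n=2000):
    """rho_DE(z)/rho_DE(0) = exp(3 * integral_0^z (1+w(z'))/(1+z') dz'),
    evaluated with a simple trapezoidal rule for any callable w(z)."""
    if z == 0:
        return 1.0
    h = z / n
    s = 0.0
    for i in range(n + 1):
        zi = i * h
        f = (1 + w(zi)) / (1 + zi)
        s += f if 0 < i < n else f / 2  # trapezoid endpoints weighted 1/2
    return math.exp(3 * s * h)
```

As sanity checks, w ≡ −1 (a cosmological constant) gives a constant density, and w ≡ 0 (matter-like) recovers the familiar (1+z)³ scaling.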
Two-point functions in a holographic Kondo model
Erdmenger, Johanna; Hoyos, Carlos; O'Bannon, Andy; Papadimitriou, Ioannis; Probst, Jonas; Wu, Jackson M. S.
2017-03-01
We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0+1)-dimensional impurity spin of a gauged SU(N) interacting with a (1+1)-dimensional, large-N, strongly-coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O†O, which is marginally relevant, and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1+1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0+1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green's function of the form −i⟨O⟩², which is characteristic of a Kondo resonance.
SALLY, Dynamic Behaviour of Reactor Cooling Channel by Point Model
International Nuclear Information System (INIS)
Reiche, Chr.; Ziegenbein, D.
1981-01-01
1 - Nature of the physical problem solved: The dynamical behaviour of a cooling channel is calculated. Starting from an equilibrium state a perturbation is introduced into the system. That may be an outer reactivity perturbation or a change in the coolant velocity or in the coolant temperature. The neutron kinetics is treated in the framework of the one-point model. The cooling channel consists of a cladded and cooled fuel rod. The temperature distribution is taken into account as an array above a mesh of radial zones and axial layers. Heat transfer is considered in radial direction only, the thermodynamical coupling of the different layers is obtained by the coolant flow. The thermal material parameters are considered to be temperature independent. Reactivity feedback is introduced by means of reactivity coefficients for fuel, canning, and coolant. Doppler broadening is included. The first cooling cycle can be taken into account by a simple model. 2 - Method of solution: The integration of the point kinetics equations is done numerically by the P11 scheme. The system of temperature equations with constant heat resistance coefficients is solved by the method of factorization. 3 - Restrictions on the complexity of the problem: Given limits are: 10 radial fuel zones, 25 axial layers, 6 groups of delayed neutrons
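The one-point kinetics that the code treats can be written as the standard equations dn/dt = ((ρ − β)/Λ)·n + Σᵢ λᵢ·Cᵢ and dCᵢ/dt = (βᵢ/Λ)·n − λᵢ·Cᵢ. The sketch below integrates them with a simple explicit scheme rather than the P11 scheme used by SALLY, and the delayed-neutron group constants in the usage note are illustrative, not the code's data:

```python
def point_kinetics(rho_t, beta_i, lam_i, Lambda, t_end, dt=1e-5):
    """Integrate one-point reactor kinetics with delayed-neutron groups:
        dn/dt   = ((rho - beta)/Lambda) * n + sum_i lam_i * C_i
        dC_i/dt = (beta_i/Lambda) * n - lam_i * C_i
    starting from the critical steady state n = 1, C_i = beta_i/(Lambda*lam_i).
    rho_t is a callable giving reactivity at time t. Returns the final
    relative neutron density n(t_end)."""
    beta = sum(beta_i)
    n = 1.0
    C = [b / (Lambda * l) for b, l in zip(beta_i, lam_i)]
    t = 0.0
    while t < t_end:
        rho = rho_t(t)
        dn = ((rho - beta) / Lambda) * n + sum(l * c for l, c in zip(lam_i, C))
        for i, (b, l) in enumerate(zip(beta_i, lam_i)):
            C[i] += dt * (b / Lambda * n - l * C[i])  # precursor balance
        n += dt * dn                                  # neutron balance
        t += dt
    return n
```

With zero reactivity the steady state is preserved; a small positive step reactivity below β produces the expected prompt jump followed by a slow delayed-neutron rise.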
Two-point functions in a holographic Kondo model
Energy Technology Data Exchange (ETDEWEB)
Erdmenger, Johanna [Institut für Theoretische Physik und Astrophysik, Julius-Maximilians-Universität Würzburg,Am Hubland, D-97074 Würzburg (Germany); Max-Planck-Institut für Physik (Werner-Heisenberg-Institut),Föhringer Ring 6, D-80805 Munich (Germany); Hoyos, Carlos [Department of Physics, Universidad de Oviedo, Avda. Calvo Sotelo 18, 33007, Oviedo (Spain); O’Bannon, Andy [STAG Research Centre, Physics and Astronomy, University of Southampton,Highfield, Southampton SO17 1BJ (United Kingdom); Papadimitriou, Ioannis [SISSA and INFN - Sezione di Trieste, Via Bonomea 265, I 34136 Trieste (Italy); Probst, Jonas [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom); Wu, Jackson M.S. [Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487 (United States)
2017-03-07
We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0+1)-dimensional impurity spin of a gauged SU(N) interacting with a (1+1)-dimensional, large-N, strongly-coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O†O, which is marginally relevant, and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1+1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0+1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green’s function of the form −i〈O〉², which is characteristic of a Kondo resonance.
Modeling a TRIGA Mark II reactor using the Attila three-dimensional deterministic transport code
International Nuclear Information System (INIS)
Keller, S.T.; Palmer, T.S.; Wareing, T.A.
2005-01-01
A benchmark model of a TRIGA reactor constructed using materials and dimensions similar to existing TRIGA reactors was analyzed using MCNP and the recently developed deterministic transport code Attila™. The benchmark reactor requires no MCNP modeling approximations, yet is sufficiently complex to validate the new modeling techniques. Geometric properties of the benchmark reactor are specified for use by Attila™ with CAD software. Materials are treated individually in MCNP; in Attila™, clad materials are homogenized. Attila™ uses multigroup energy discretization. Two cross-section libraries were constructed for comparison: a 16-group library collapsed from the SCALE 4.4.a 238-group library provided better results than a seven-group library calculated with WIMS-ANL. Values of the k-effective eigenvalue and the scalar flux as a function of location and energy were calculated by the two codes. The calculated values for k-effective and the spatially averaged neutron flux were found to be in good agreement, and the flux distributions in space and energy also agreed well. Attila™ results could be improved with increased spatial and angular resolution and a revised energy group structure. (authors)
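The multigroup discretization used by both codes reduces the continuous-energy transport problem to coupled group balances. As a much-simplified illustration (not the Attila or MCNP solution itself), a two-group infinite-medium multiplication factor can be computed from homogenized group constants; the cross-section values below are hypothetical, loosely thermal-reactor-like:

```python
def k_inf_two_group(nu_sf1, nu_sf2, sa1, sa2, s12):
    """Infinite-medium multiplication factor for a two-group model
    (all fission neutrons born fast, no upscatter).

    Group-1 removal = absorption + downscatter; the thermal-to-fast
    flux ratio phi2/phi1 = s12/sa2 follows from the group-2 balance."""
    removal1 = sa1 + s12
    flux_ratio = s12 / sa2          # phi2 / phi1
    return (nu_sf1 + nu_sf2 * flux_ratio) / removal1

# Hypothetical homogenized cross sections (1/cm)
k = k_inf_two_group(nu_sf1=0.008, nu_sf2=0.135, sa1=0.012, sa2=0.121, s12=0.024)
```

A full code like Attila solves the same kind of balance with many groups plus spatial and angular discretization, which is why group structure and resolution drive the accuracy of k-effective.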
Cheung, King Sing
2014-01-01
Petri nets are a formal and theoretically rich model for the modelling and analysis of systems. A subclass of Petri nets, augmented marked graphs possess a structure that is especially desirable for the modelling and analysis of systems with concurrent processes and shared resources. This monograph consists of three parts: Part I provides the conceptual background for readers who have no prior knowledge of Petri nets; Part II elaborates the theory of augmented marked graphs; finally, Part III discusses the application to system integration. The book is suitable as a first self-contained volume.
A CASE STUDY ON POINT PROCESS MODELLING IN DISEASE MAPPING
Directory of Open Access Journals (Sweden)
Viktor Beneš
2011-05-01
Full Text Available We consider a data set of locations where people in Central Bohemia have been infected by tick-borne encephalitis (TBE), and where population census data and covariates concerning vegetation and altitude are available. The aims are to estimate the risk map of the disease and to study the dependence of the risk on the covariates. Instead of using the common area-level approaches, we base the analysis on a Bayesian approach for a log Gaussian Cox point process with covariates. Posterior characteristics for a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo methods. A particular problem which is thoroughly discussed is the determination of a model for the background population density. The risk map depends clearly on the population intensity model, and the basic model adopted for the population intensity determines which covariates influence the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics.
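A discretized log Gaussian Cox process of the kind analyzed above can be simulated in a few lines: sample a latent Gaussian field on a grid, exponentiate it to get a cell-wise intensity, then draw Poisson counts. The grid size, covariance parameters, and baseline log-intensity below are hypothetical, and real analyses would add covariate terms to the log-intensity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize the study window into an n x n grid of cells
n = 20
xs = np.linspace(0, 1, n)
X, Y = np.meshgrid(xs, xs)
pts = np.column_stack([X.ravel(), Y.ravel()])

# Exponential covariance for the latent Gaussian field (hypothetical params)
sigma2, scale = 1.0, 0.2
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
cov = sigma2 * np.exp(-d / scale)

# Sample the field via Cholesky (jitter added for numerical stability)
L = np.linalg.cholesky(cov + 1e-8 * np.eye(n * n))
z = L @ rng.standard_normal(n * n)

# Cox step: cell-wise Poisson counts with log-linear intensity
mu = 3.0                                # hypothetical baseline log-intensity
cell_area = (1.0 / n) ** 2
lam = np.exp(mu + z) * cell_area
counts = rng.poisson(lam)
```

Bayesian inference for such a model inverts this generative process, typically with MCMC over the latent field, as in the paper.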
Flat Knitting Loop Deformation Simulation Based on Interlacing Point Model
Directory of Open Access Journals (Sweden)
Jiang Gaoming
2017-12-01
Full Text Available In order to create realistic loop primitives suitable for faster CAD of flat-knitted fabric, we have studied a model of the loop as well as the variation of the loop surface. This paper proposes an interlacing-point-based model for the loop center curve, and uses cubic Bezier curves to fit the central curve of the regular loop, elongated loop, transfer loop, and irregular deformed loop. In this way, a general model for the central curve of the deformed loop is obtained. The model is then used for texture mapping, texture interpolation, and brightness processing, simulating a clearly structured and lifelike deformed loop. A computer program, LOOP, was developed using the algorithm. The deformed loop is simulated with different yarns and applied to the design of a cable stitch, demonstrating the feasibility of the proposed algorithm. This paper provides a loop primitive simulation method characterized by lifelikeness, yarn material variability, and deformation flexibility, and facilitates loop-based fast computer-aided design (CAD) of knitted fabric.
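The cubic Bezier fit at the heart of the loop model evaluates a curve from four control points. A minimal sketch follows; the control points are hypothetical and stand in for the interlacing-point-derived values used by the authors:

```python
def bezier3(p0, p1, p2, p3, t):
    """Point on a cubic Bezier curve at parameter t in [0, 1]:
    B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3."""
    s = 1.0 - t
    x = s**3 * p0[0] + 3 * s**2 * t * p1[0] + 3 * s * t**2 * p2[0] + t**3 * p3[0]
    y = s**3 * p0[1] + 3 * s**2 * t * p1[1] + 3 * s * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Hypothetical control points for one arc of a loop's central curve
p0, p1, p2, p3 = (0.0, 0.0), (0.2, 1.0), (0.8, 1.0), (1.0, 0.0)
curve = [bezier3(p0, p1, p2, p3, i / 50.0) for i in range(51)]
```

Deforming the loop then amounts to moving the control points (derived from the interlacing points) and re-evaluating, which is what makes the representation convenient for fast CAD.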
Defining the end-point of mastication: A conceptual model.
Gray-Stuart, Eli M; Jones, Jim R; Bronlund, John E
2017-10-01
The great risks of swallowing are choking and aspiration of food into the lungs. Both are rare in normal functioning humans, which is remarkable given the diversity of foods and the estimated 10 million swallows performed in a lifetime. Nevertheless, it remains a major challenge to define the food properties that are necessary to ensure a safe swallow. Here, the mouth is viewed as a well-controlled processor where mechanical sensory assessment occurs throughout the occlusion-circulation cycle of mastication. Swallowing is a subsequent action. It is proposed here that, during mastication, temporal maps of interfacial property data are generated, which the central nervous system compares against a series of criteria in order to be sure that the bolus is safe to swallow. To determine these criteria, an engineering hazard analysis tool, alongside an understanding of fluid and particle mechanics, is used to deduce the mechanisms by which food may deposit or become stranded during swallowing. These mechanisms define the food properties that must be avoided. By inverting the thinking, from hazards to ensuring safety, six criteria arise which are necessary for a safe-to-swallow bolus. A new conceptual model is proposed to define when food is safe to swallow during mastication. This significantly advances earlier mouth models. The conceptual model proposed in this work provides a framework of decision-making to define when food is safe to swallow. This will be of interest to designers of dietary foods, foods for dysphagia sufferers and will aid the further development of mastication robots for preparation of artificial boluses for digestion research. It enables food designers to influence the swallow-point properties of their products. For example, a product may be designed to satisfy five of the criteria for a safe-to-swallow bolus, which means the sixth criterion and its attendant food properties define the swallow-point. Alongside other organoleptic factors, these
MIDAS/PK code development using point kinetics model
International Nuclear Information System (INIS)
Song, Y. M.; Park, S. H.
1999-01-01
In this study, a MIDAS/PK code has been developed for analyzing ATWS (Anticipated Transients Without Scram) events, which can be severe accident initiators. MIDAS is an integrated computer code based on the MELCOR code, developed by the Korea Atomic Energy Research Institute to support severe accident risk reduction strategies. However, the Chexal-Layman correlation in the current MELCOR, which was developed under BWR conditions, appears to be inappropriate for a PWR. To provide ATWS analysis capability to the MIDAS code, a point kinetics module, PKINETIC, was first developed as a stand-alone code, with its reference model selected from current accident analysis codes. Next, the MIDAS/PK code was developed by coupling PKINETIC with the MIDAS code through the interconnection of several thermal-hydraulic parameters between the two codes. Since the major concern in an ATWS analysis is the primary peak pressure during the first few minutes of the accident, the peak pressures from the PKINETIC module and from MIDAS/PK were compared with RETRAN calculations, showing good agreement between them. The MIDAS/PK code is considered valuable for deterministic analysis of the plant response during an ATWS, especially for the early domestic Westinghouse plants which rely on operator procedures instead of an AMSAC (ATWS Mitigating System Actuation Circuitry) against ATWS. This ATWS analysis capability is also important from the viewpoint of accident management and mitigation.
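The point kinetics equations behind a module like PKINETIC are standard: with one effective delayed-neutron group, dn/dt = ((ρ−β)/Λ)n + λC and dC/dt = (β/Λ)n − λC. A minimal explicit-Euler sketch follows; the reactivity step and kinetics parameters are illustrative, not the PKINETIC reference values:

```python
# One-delayed-group point kinetics, integrated with explicit Euler.
beta, lam, Lam = 0.0065, 0.08, 1.0e-4   # delayed fraction, decay const (1/s), generation time (s)
rho = 0.001                             # hypothetical step reactivity (~0.15 dollars)
dt, t_end = 1.0e-4, 10.0

n = 1.0                                 # relative power
C = beta * n / (Lam * lam)              # precursor concentration at equilibrium

t = 0.0
while t < t_end:
    dn = ((rho - beta) / Lam * n + lam * C) * dt
    dC = (beta / Lam * n - lam * C) * dt
    n += dn
    C += dC
    t += dt
```

For this sub-prompt-critical step the power exhibits the expected prompt jump to roughly β/(β−ρ) followed by a slow rise on the stable period; a production module would use a stiff solver and six delayed groups.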
Modeling elephant-mediated cascading effects of water point closure.
Hilbers, Jelle P; Van Langevelde, Frank; Prins, Herbert H T; Grant, C C; Peel, Mike J S; Coughenour, Michael B; De Knegt, Henrik J; Slotow, Rob; Smit, Izak P J; Kiker, Greg A; De Boer, Willem F
2015-03-01
Wildlife management to reduce the impact of wildlife on their habitat can be done in several ways, among which removing animals (by either culling or translocation) is most often used. There are, however, alternative ways to control wildlife densities, such as opening or closing water points. The effects of these alternatives are poorly studied. In this paper, we focus on manipulating large herbivores through the closure of water points (WPs). Removal of artificial WPs has been suggested in order to change the distribution of African elephants, which occur in high densities in national parks in Southern Africa and are thought to have a destructive effect on the vegetation. Here, we modeled the long-term effects of different scenarios of WP closure on the spatial distribution of elephants, and the consequential effects on the vegetation and other herbivores in Kruger National Park, South Africa. Using a dynamic ecosystem model, SAVANNA, scenarios were evaluated that varied in availability of artificial WPs, levels of natural water, and elephant densities. Our modeling results showed that elephants can indirectly negatively affect the distributions of meso-mixed feeders, meso-browsers, and some meso-grazers under wet conditions. The closure of artificial WPs hardly had any effect during these natural wet conditions. Under dry conditions, the spatial distribution of both elephant bulls and cows changed when the availability of artificial water was severely reduced in the model. These changes in spatial distribution triggered changes in the spatial availability of woody biomass over the simulation period of 80 years, and this led to changes in the rest of the herbivore community, resulting in increased densities of all herbivores, except for giraffe and steenbok, in areas close to rivers. The spatial distributions of elephant bulls and cows were shown to be less affected by the closure of WPs than those of most other herbivore species. Our study contributes to ecologically
Forecasting Macedonian Business Cycle Turning Points Using Qual Var Model
Directory of Open Access Journals (Sweden)
Petrovska Magdalena
2016-09-01
Full Text Available This paper aims at assessing the usefulness of leading indicators in business cycle research and forecasting. Initially, we test the predictive power of the economic sentiment indicator (ESI) within a static probit model, as a leading indicator commonly perceived to provide a reliable summary of current economic conditions. We then analyze how well an extended set of indicators performs in forecasting turning points of the Macedonian business cycle by employing the Qual VAR approach of Dueker (2005). Finally, we evaluate the quality of the selected indicators in a pseudo-out-of-sample context. The results show that the use of survey-based indicators as a complement to macroeconomic data works satisfactorily well in capturing business cycle developments in Macedonia.
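In a static probit model of the kind used as the baseline above, the turning-point probability is Φ(β₀ + β₁·ESI), with Φ the standard normal CDF. A minimal sketch follows; the coefficients and ESI values are hypothetical, chosen only so that weaker sentiment implies a higher downturn probability:

```python
from math import erf, sqrt

def probit_prob(x, b0, b1):
    """P(turning point) = Phi(b0 + b1 * x) under a static probit model,
    with Phi computed from the error function."""
    z = b0 + b1 * x
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical coefficients: lower sentiment -> higher downturn probability
b0, b1 = 4.0, -0.05
p_low = probit_prob(80.0, b0, b1)    # weak economic sentiment
p_high = probit_prob(110.0, b0, b1)  # strong economic sentiment
```

The Qual VAR extends this idea by embedding the latent business-cycle variable in a VAR, so that the whole indicator set, not just one regressor, drives the turning-point probability.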
Impact of selected troposphere models on Precise Point Positioning convergence
Kalita, Jakub; Rzepecka, Zofia
2016-04-01
The Precise Point Positioning (PPP) absolute method is currently intensively investigated in order to reach fast convergence times. Among the various sources that influence the convergence of PPP, the tropospheric delay is one of the most important. Numerous models of tropospheric delay have been developed and applied to PPP processing. However, with rare exceptions, the quality of those models does not allow fixing the zenith path delay tropospheric parameter, leaving the difference between the nominal and final value to the estimation process. Here we present a comparison of several PPP result sets, each based on a different troposphere model. The respective nominal values are adopted from the models VMF1, GPT2w, MOPS and ZERO-WET. The PPP solution admitted as reference is based on the final troposphere product from the International GNSS Service (IGS). The VMF1 mapping function was used for all processing variants so that the impact of the applied nominal values could be compared. The worst case initializes the zenith wet delay with a zero value (ZERO-WET). The impact of any candidate model for the tropospheric nominal values should fall between the IGS and ZERO-WET border variants. The analysis is based on data from seven IGS stations located in the mid-latitude European region from the year 2014. For the purpose of this study, several days with the most active troposphere were selected for each station. All the PPP solutions were determined using the gLAB open-source software, with the Kalman filter implemented independently by the authors of this work. The processing was performed on 1-hour slices of observation data. In addition to the analysis of the output processing files, the presented study contains a detailed analysis of the tropospheric conditions for the selected data. The overall results show that for the height component the VMF1 model outperforms GPT2w and MOPS by 35-40% and ZERO-WET variant by 150%. In most of the cases all solutions converge to the same values during first
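For context on what a tropospheric nominal value looks like, one widely used closed form for the zenith hydrostatic delay is the Saastamoinen model (not necessarily the exact formulation used in the paper's processing):

```python
from math import cos, radians

def saastamoinen_zhd(pressure_hpa, lat_deg, height_m):
    """Zenith hydrostatic delay (m), Saastamoinen model:
    ZHD = 0.0022768 * P / (1 - 0.00266 cos(2*lat) - 0.28e-6 * h)."""
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * cos(2.0 * radians(lat_deg)) - 0.28e-6 * height_m
    )

# Standard sea-level pressure at a mid-latitude station
zhd = saastamoinen_zhd(1013.25, 45.0, 0.0)   # roughly 2.3 m
```

The hydrostatic part is well modeled from surface pressure; it is the wet part of the delay that the compared models (VMF1, GPT2w, MOPS, ZERO-WET) predict with varying quality, leaving the residual to the Kalman filter.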
Atmospheric mercury dispersion modelling from two nearest hypothetical point sources
Energy Technology Data Exchange (ETDEWEB)
Al Razi, Khandakar Md Habib; Hiroshi, Moritomi; Shinji, Kambara [Environmental and Renewable Energy System (ERES), Graduate School of Engineering, Gifu University, Yanagido, Gifu City, 501-1193 (Japan)
2012-07-01
The Japan coastal areas are still environmentally friendly, though there are multiple air emission sources originating from several developmental activities such as automobile industries, operation of thermal power plants, and mobile-source pollution. Mercury is known to be a potential air pollutant in the region, apart from SOX, NOX, CO and ozone. Mercury contamination in water bodies and other ecosystems due to deposition of atmospheric mercury is considered a serious environmental concern. Identification of sources contributing to the high atmospheric mercury levels will be useful for formulating pollution control and mitigation strategies in the region. In Japan, mercury and its compounds were categorized as hazardous air pollutants in 1996 and are on the list of 'Substances Requiring Priority Action' published by the Central Environmental Council of Japan. The Air Quality Management Division of the Environmental Bureau, Ministry of the Environment, Japan, selected the current annual mean environmental air quality standard for mercury and its compounds of 0.04 µg/m³. Long-term exposure to mercury and its compounds can have a carcinogenic effect, inducing, e.g., Minamata disease. This study evaluates the impact of mercury emissions on air quality in the coastal area of Japan. The average yearly emission of mercury from an elevated point source in this area, with background concentration and one-year meteorological data, was used to predict the ground-level concentration of mercury. To estimate the concentration of mercury and its compounds in the air of the local area, two different simulation models have been used. The first is the National Institute of Advanced Industrial Science and Technology Atmospheric Dispersion Model for Exposure and Risk Assessment (AIST-ADMER) that estimates regional atmospheric concentration and distribution. The second is the Hybrid Single Particle Lagrangian Integrated Trajectory Model (HYSPLIT) that estimates the atmospheric
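The textbook building block for ground-level concentration from an elevated point source is the Gaussian plume formula; a minimal sketch follows. This is not the internal model of AIST-ADMER or HYSPLIT, and the stack and dispersion parameters are hypothetical:

```python
from math import exp, pi

def plume_ground_centerline(Q, u, sigma_y, sigma_z, H):
    """Ground-level centreline concentration (mass/m^3) of a Gaussian plume
    from an elevated point source, assuming full ground reflection:
    C = Q / (pi * u * sy * sz) * exp(-H^2 / (2 sz^2))."""
    return Q / (pi * u * sigma_y * sigma_z) * exp(-H**2 / (2.0 * sigma_z**2))

# Hypothetical stack: 1 g/s emission, 5 m/s wind, 50 m effective height,
# with sigma_y, sigma_z evaluated at some downwind distance
c = plume_ground_centerline(Q=1.0, u=5.0, sigma_y=100.0, sigma_z=50.0, H=50.0)
```

Raising the effective stack height H lowers the ground-level centreline concentration, which is the basic lever behind elevated-release design.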
Two-point boundary correlation functions of dense loop models
Directory of Open Access Journals (Sweden)
Alexi Morin-Duchesne, Jesper Lykke Jacobsen
2018-06-01
Full Text Available We investigate six types of two-point boundary correlation functions in the dense loop model. These are defined as ratios $Z/Z^0$ of partition functions on the $m\times n$ square lattice, with the boundary condition for $Z$ depending on two points $x$ and $y$. We consider: the insertion of an isolated defect (a) and a pair of defects (b) in a Dirichlet boundary condition, the transition (c) between Dirichlet and Neumann boundary conditions, and the connectivity of clusters (d), loops (e) and boundary segments (f) in a Neumann boundary condition. For the model of critical dense polymers, corresponding to a vanishing loop weight ($\beta = 0$), we find determinant and pfaffian expressions for these correlators. We extract the conformal weights of the underlying conformal fields and find $\Delta = -\frac18$, $0$, $-\frac3{32}$, $\frac38$, $1$, $\tfrac \theta \pi (1+\tfrac{2\theta}\pi)$, where $\theta$ encodes the weight of one class of loops for the correlator of type f. These results are obtained by analysing the asymptotics of the exact expressions, and by using the Cardy-Peschel formula in the case where $x$ and $y$ are set to the corners. For type b, we find a $\log|x-y|$ dependence from the asymptotics, and a $\ln(\ln n)$ term in the corner free energy. This is consistent with the interpretation of the boundary condition of type b as the insertion of a logarithmic field belonging to a rank-two Jordan cell. For the other values of $\beta = 2 \cos \lambda$, we use the hypothesis of conformal invariance to predict the conformal weights and find $\Delta = \Delta_{1,2}$, $\Delta_{1,3}$, $\Delta_{0,\frac12}$, $\Delta_{1,0}$, $\Delta_{1,-1}$ and $\Delta_{\frac{2\theta}\lambda+1,\frac{2\theta}\lambda+1}$, extending the results of critical dense polymers. With the results for type f, we reproduce a Coulomb gas prediction for the valence bond entanglement entropy of Jacobsen and Saleur.
International Nuclear Information System (INIS)
Hufnagel, Heike; Pennec, Xavier; Ayache, Nicholas; Ehrhardt, Jan; Handels, Heinz
2008-01-01
Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist, but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest points (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) are then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and 'specificity', the estimates were very satisfactory.
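The correspondence probabilities described above can be sketched as an EM-ICP-style E-step: a Gaussian affinity between every point pair, normalized so that each point of one set distributes unit probability over the other set. The point sets and the kernel width below are hypothetical:

```python
import numpy as np

def correspondence_probs(A, B, sigma):
    """Soft correspondence probabilities between two unstructured point sets:
    P[i, j] proportional to exp(-|a_i - b_j|^2 / (2 sigma^2)),
    normalized over j (EM-ICP E-step style)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    return W / W.sum(axis=1, keepdims=True)

# Two small hypothetical point sets with different cardinalities
A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
B = np.array([[0.05, 0.0], [1.1, -0.1], [0.0, 0.9], [5.0, 5.0]])
P = correspondence_probs(A, B, sigma=0.3)
```

Because the rows are probability distributions rather than hard assignments, the two sets may have different point counts, which is exactly the property the paper exploits for unstructured point sets.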
Multiscale Modeling of Point and Line Defects in Cubic Lattices
National Research Council Canada - National Science Library
Chung, P. W; Clayton, J. D
2007-01-01
.... This multiscale theory explicitly captures heterogeneity in microscopic atomic motion in crystalline materials, attributed, for example, to the presence of various point and line lattice defects...
Polevoi, Steven K; Jewel Shim, J; McCulloch, Charles E; Grimes, Barbara; Govindarajan, Prasanthi
2013-04-01
Patients with psychiatric emergencies often spend excessive time in an emergency department (ED) due to limited inpatient psychiatric bed capacity. The objective was to compare traditional resident consultation with a new model (comanagement) to reduce length of stay (LOS) for patients with psychiatric emergencies. The costs of this model were compared to those of standard care. This was a before-and-after study conducted in the ED of an urban academic medical center without an inpatient psychiatry unit from January 1, 2007, through December 31, 2009. Subjects were all adult patients seen by ED clinicians and determined to be a danger to self or others or gravely disabled. At baseline, psychiatry residents evaluated patients and made therapeutic recommendations after consultation with faculty. The comanagement model was fully implemented in September 2008. In this model, psychiatrists directly ordered pharmacotherapy, regularly monitored effects, and intensified efforts toward appropriate disposition. Additionally, increased attending-level involvement expedited focused evaluation and disposition of patients. An interrupted time series analysis was used to study the effects of this intervention on LOS for all psychiatric patients transferred for inpatient psychiatric care. Secondary outcomes included the mean number of hours on ambulance diversion per month and the mean number of patients who left without being seen (LWBS) from the ED. A total of 1,884 patient visits were considered. Compared to the preintervention phase, median LOS for patients transferred for inpatient psychiatric care decreased by about 22%. The comanagement model was associated with a marked reduction in the LOS for this patient population. © 2013 by the Society for Academic Emergency Medicine.
Statistical methods for the forensic analysis of striated tool marks
Energy Technology Data Exchange (ETDEWEB)
Hoeksema, Amy Beth [Iowa State Univ., Ames, IA (United States)
2013-01-01
In forensics, fingerprints can be used to uniquely identify suspects in a crime. Similarly, a tool mark left at a crime scene can be used to identify the tool that was used. However, the current practice of identifying matching tool marks involves visual inspection of marks by forensic experts, which can be a very subjective process. As a result, declared matches are often successfully challenged in court, so law enforcement agencies are particularly interested in encouraging research in more objective approaches. Our analysis is based on comparisons of profilometry data, essentially depth contours of a tool mark surface taken along a linear path. In current practice, for stronger support of a match or non-match, multiple marks are made in the lab under the same conditions by the suspect tool. We propose the use of a likelihood ratio test to analyze the difference between a sample of comparisons of lab tool marks to a field tool mark, against a sample of comparisons of two lab tool marks. Chumbley et al. (2010) point out that the angle of incidence between the tool and the marked surface can have a substantial impact on the tool mark and on the effectiveness of both manual and algorithmic matching procedures. To better address this problem, we describe how the analysis can be enhanced to model the effect of tool angle and allow for angle estimation for a tool mark left at a crime scene. With sufficient development, such methods may lead to more defensible forensic analyses.
Directory of Open Access Journals (Sweden)
Katy Shaw
2016-04-01
Full Text Available Mark Watson is a British comedian and novelist. His five novels to date – 'Bullet Points' (2004), 'A Light-Hearted Look At Murder' (2007), 'Eleven' (2010), 'The Knot' (2012) and 'Hotel Alpha' (2014) – explore human relationships and communities in contemporary society. His latest novel, 'Hotel Alpha', tells the story of an extraordinary hotel in London and two mysterious disappearances that raise questions no one seems willing to answer. External to the novel, readers can also discover more about the hotel and its inhabitants in one hundred extra stories that expand the world of the novel and can be found at http://www.hotelalphastories.com. In conversation here with Dr Katy Shaw, Mark offers some reflections on his writing process, the field of contemporary literature, and the vitality of the novel form in the twenty-first century.
HYDROLOGY AND SEDIMENT MODELING USING THE BASINS NON-POINT SOURCE MODEL
The Non-Point Source Model (Hydrologic Simulation Program-Fortran, or HSPF) within the EPA Office of Water's BASINS watershed modeling system was used to simulate streamflow and total suspended solids within Contentnea Creek, North Carolina, which is a tributary of the Neuse Rive...
Temperature distribution model for the semiconductor dew point detector
Weremczuk, Jerzy; Gniazdowski, Z.; Jachowicz, Ryszard; Lysko, Jan M.
2001-08-01
The simulation results of temperature distribution in a new type of silicon dew point detector are presented in this paper. Calculations were done using the SMACEF simulation program. The fabricated structures contained, apart from the impedance detector used for dew point detection, a resistive four-terminal thermometer and two heaters. Two detector structures, the first located on a silicon membrane and the second placed on the bulk material, are compared in this paper.
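For context on the quantity such a detector measures: given air temperature and relative humidity, the dew point temperature is commonly approximated by the Magnus formula. This is background to the detector's purpose, not part of the SMACEF thermal simulation:

```python
from math import log

def dew_point_c(temp_c, rel_humidity):
    """Dew point (deg C) from the Magnus approximation;
    coefficients are valid roughly over -45..60 deg C.
    rel_humidity is a fraction in (0, 1]."""
    a, b = 17.62, 243.12
    gamma = log(rel_humidity) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

td = dew_point_c(25.0, 0.60)   # about 16.7 deg C
```

At 100% relative humidity the dew point equals the air temperature, which is a convenient sanity check on the formula.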
A New Blind Pointing Model Improves Large Reflector Antennas Precision Pointing at Ka-Band (32 GHz)
Rochblatt, David J.
2009-01-01
The National Aeronautics and Space Administration (NASA), Jet Propulsion Laboratory (JPL)-Deep Space Network (DSN) subnet of 34-m Beam Waveguide (BWG) Antennas was recently upgraded with Ka-Band (32-GHz) frequency feeds for space research and communication. For normal telemetry tracking a Ka-Band monopulse system is used, which typically yields 1.6-mdeg mean radial error (MRE) pointing accuracy on the 34-m diameter antennas. However, for the monopulse to be able to acquire and lock, for special radio science applications where monopulse cannot be used, or as a back-up for the monopulse, high-precision open-loop blind pointing is required. This paper describes a new 4th order pointing model and calibration technique, which was developed and applied to the DSN 34-m BWG antennas yielding 1.8 to 3.0-mdeg MRE pointing accuracy and amplitude stability of 0.2 dB, at Ka-Band, and successfully used for the CASSINI spacecraft occultation experiment at Saturn and Titan. In addition, the new 4th order pointing model was used during a telemetry experiment at Ka-Band (32 GHz) utilizing the Mars Reconnaissance Orbiter (MRO) spacecraft while at a distance of 0.225 astronomical units (AU) from Earth and communicating with a DSN 34-m BWG antenna at a record high rate of 6-megabits per second (Mb/s).
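Pointing models of this kind are typically calibrated by least-squares fitting of systematic-error terms (harmonics of azimuth and elevation) to measured offsets. The sketch below fits Fourier terms in azimuth up to 4th order; the actual term set of the JPL model is not specified here, and the synthetic "measurements" are hypothetical:

```python
import numpy as np

def design_matrix(az_rad, order=4):
    """Fourier-type design matrix for an azimuth-dependent pointing model:
    columns [1, cos(k az), sin(k az)] for k = 1..order."""
    cols = [np.ones_like(az_rad)]
    for k in range(1, order + 1):
        cols.append(np.cos(k * az_rad))
        cols.append(np.sin(k * az_rad))
    return np.column_stack(cols)

rng = np.random.default_rng(2)
az = np.linspace(0, 2 * np.pi, 200)
# Hypothetical measured offsets (mdeg): 1st + 4th harmonics plus noise
true = 5.0 * np.cos(az) + 0.8 * np.sin(4 * az)
meas = true + rng.normal(0.0, 0.1, az.size)

A = design_matrix(az, order=4)
coef, *_ = np.linalg.lstsq(A, meas, rcond=None)
resid_rms = np.sqrt(np.mean((A @ coef - meas) ** 2))
```

The point of going to 4th order is visible here: a lower-order model would leave the sin(4·az) structure in the residuals, degrading blind-pointing accuracy at Ka-band where the beam is narrow.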
Schwämmle, Veit; Jensen, Ole Nørregaard
2013-01-01
Chromatin is a highly compact and dynamic nuclear structure that consists of DNA and associated proteins. The main organizational unit is the nucleosome, which consists of a histone octamer with DNA wrapped around it. Histone proteins are implicated in the regulation of eukaryote genes and they carry numerous reversible post-translational modifications that control DNA-protein interactions and the recruitment of chromatin binding proteins. Heterochromatin, the transcriptionally inactive part of the genome, is densely packed and contains histone H3 that is methylated at Lys 9 (H3K9me). The propagation of H3K9me in nucleosomes along the DNA in chromatin is antagonized by methylation of H3 Lys 4 (H3K4me) and by acetylation of several lysines, which are related to euchromatin and active genes. We show that the related histone modifications form antagonistic domains on a coarse scale. These histone marks are assumed to be initiated within distinct nucleation sites in the DNA and to propagate bi-directionally. We propose a simple computer model that simulates the distribution of heterochromatin in human chromosomes. The simulations are in agreement with previously reported experimental observations from two different human cell lines. We reproduced different types of barriers between heterochromatin and euchromatin, providing a unified model for their function. The effect of changes in the nucleation site distribution and of propagation rates were studied. The former occurs mainly with the aim of (de-)activation of single genes or gene groups, and the latter has the power of controlling the transcriptional programs of entire chromosomes. Generally, the regulatory program of gene transcription is controlled by the distribution of nucleation sites along the DNA string.
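The nucleation-and-bidirectional-propagation mechanism described above can be caricatured in a one-dimensional lattice simulation; this is a toy illustration of the mechanism, not the authors' model, and the chromosome length, site positions, and spreading rate are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

N, steps = 200, 500
state = np.zeros(N, dtype=int)        # 0 unmarked, 1 H3K9me-like, 2 H3K4me-like
state[50] = 1                         # hypothetical heterochromatin nucleation site
state[150] = 2                        # hypothetical euchromatin nucleation site
p_spread = 0.5                        # propagation probability per side per step

for _ in range(steps):
    new = state.copy()
    for i in np.flatnonzero(state):
        for j in (i - 1, i + 1):
            # marks spread bidirectionally but cannot overwrite the antagonist
            if 0 <= j < N and new[j] == 0 and rng.random() < p_spread:
                new[j] = state[i]
    state = new

frac_hetero = np.mean(state == 1)
```

With symmetric rates the two marks partition the lattice into stable antagonistic domains meeting roughly midway between the nucleation sites; changing site positions moves the boundary, and changing the rates shifts the balance chromosome-wide, mirroring the two control knobs discussed in the abstract.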
Khieu, Trang Q T; Pierse, Nevil; Telfar-Barnard, Lucy Frances; Zhang, Jane; Huang, Q Sue; Baker, Michael G
2017-09-01
Influenza is responsible for a large number of deaths, which can only be estimated using modelling methods. Such methods have rarely been applied to describe the major socio-demographic characteristics of this disease burden. We used quasi-Poisson regression models with weekly counts of deaths and isolates of influenza A, B and respiratory syncytial virus for the period 1994 to 2008. The estimated average mortality rate was 13.5 per 100,000 people, which was 1.8% of all deaths in New Zealand. Influenza mortality differed markedly by age, sex, ethnicity and socioeconomic position. Relatively vulnerable groups were males aged 65-79 years (Rate ratio (RR) = 1.9, 95% CI: 1.9, 1.9 compared with females), Māori (RR = 3.6, 95% CI: 3.6, 3.7 compared with European/Others aged 65-79 years), Pacific (RR = 2.4, 95% CI: 2.4, 2.4 compared with European/Others aged 65-79 years) and those living in the most deprived areas (RR = 1.8, 95% CI: 1.3, 2.4) for New Zealand Deprivation (NZDep) 9&10 (the most deprived) compared with NZDep 1&2 (the least deprived). These results support targeting influenza vaccination and other interventions to the most vulnerable groups, in particular Māori and Pacific people, men aged 65-79 years, and those living in the most deprived areas. Copyright © 2017 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
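The rate ratios reported above compare Poisson-type event rates between groups; a standard way to attach a 95% CI is a Wald interval on the log scale. A minimal sketch follows, with purely hypothetical counts (not the study's data):

```python
from math import exp, log, sqrt

def rate_ratio(events_a, pyears_a, events_b, pyears_b, z=1.96):
    """Rate ratio of group a vs group b with a Wald 95% CI on the log scale,
    treating the counts as Poisson: se(log RR) = sqrt(1/a + 1/b)."""
    ra, rb = events_a / pyears_a, events_b / pyears_b
    rr = ra / rb
    se = sqrt(1.0 / events_a + 1.0 / events_b)
    return rr, rr * exp(-z * se), rr * exp(z * se)

# Hypothetical counts: 180 vs 100 deaths per 1e6 person-years
rr, lo, hi = rate_ratio(180, 1e6, 100, 1e6)
```

A quasi-Poisson model, as used in the paper, additionally scales the standard errors by an estimated dispersion factor to account for extra-Poisson variation in the weekly counts.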
Higgs, Megan D.; Link, William; White, Gary C.; Haroldson, Mark A.; Bjornlie, Daniel D.
2013-01-01
Mark-resight designs for estimation of population abundance are common and attractive to researchers. However, inference from such designs is very limited when faced with sparse data, either from a low number of marked animals, a low probability of detection, or both. In the Greater Yellowstone Ecosystem, yearly mark-resight data are collected for female grizzly bears with cubs-of-the-year (FCOY), and inference suffers from both limitations. To overcome difficulties due to sparseness, we assume homogeneity in sighting probabilities over 16 years of bi-annual aerial surveys. We model counts of marked and unmarked animals as multinomial random variables, using the capture frequencies of marked animals for inference about the latent multinomial frequencies for unmarked animals. We discuss undesirable behavior of the commonly used discrete uniform prior distribution on the population size parameter and provide OpenBUGS code for fitting such models. The application provides valuable insights into subtleties of implementing Bayesian inference for latent multinomial models. We tie the discussion to our application, though the insights are broadly useful for applications of the latent multinomial model.
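Underneath the latent multinomial machinery sits the basic mark-resight logic: the fraction of sightings that are of marked animals estimates the marked fraction of the population. A minimal ratio-estimator sketch follows; it is not the paper's Bayesian latent multinomial model, and the counts are hypothetical:

```python
def mark_resight_estimate(n_marked, sightings_marked, sightings_total):
    """Canonical mark-resight ratio estimator: if m marked animals are known
    to be present and a fraction p of sightings are of marked animals,
    then N is estimated by m / p = m * total / marked_seen."""
    return n_marked * sightings_total / sightings_marked

# Hypothetical survey: 12 marked bears, 90 sightings, 20 of marked bears
N_hat = mark_resight_estimate(12, 20, 90)   # -> 54.0
```

The latent multinomial formulation replaces this point estimate with a full model of the capture-frequency counts, which is what allows borrowing strength across years when each year's data are too sparse to support the simple estimator.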
A case study on point process modelling in disease mapping
Czech Academy of Sciences Publication Activity Database
Beneš, Viktor; Bodlák, M.; Moller, J.; Waagepetersen, R.
2005-01-01
Roč. 24, č. 3 (2005), s. 159-168 ISSN 1580-3139 R&D Projects: GA MŠk 0021620839; GA ČR GA201/03/0946 Institutional research plan: CEZ:AV0Z10750506 Keywords : log Gaussian Cox point process * Bayesian estimation Subject RIV: BB - Applied Statistics, Operational Research
Business models & business cases for point-of-care testing
Staring, A.J.; Meertens, L. O.; Sikkel, N.
2016-01-01
Point-Of-Care Testing (POCT) enables clinical tests at or near the patient, with test results that are available instantly or in a very short time frame, to assist caregivers with immediate diagnosis and/or clinical intervention. The goal of POCT is to provide accurate, reliable, fast, and
Modeling elephant-mediated cascading effects of water point closure
Hilbers, J.P.; Langevelde, van F.; Prins, H.H.T.; Grant, C.C.; Peel, M.; Coughenour, M.B.; Knegt, de H.J.; Slotow, R.; Smit, I.; Kiker, G.A.; Boer, de W.F.
2015-01-01
Wildlife management to reduce the impact of wildlife on their habitat can be done in several ways, among which removing animals (by either culling or translocation) is most often used. There are however alternative ways to control wildlife densities, such as opening or closing water points. The
Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models
Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.
2011-09-01
We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We focus our work more particularly on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model which maximizes a probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting results on industrial data sets.
Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data
Deng, Xinyi
2016-08-01
A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such physical system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical systems are driven by the dynamics of some stochastic state variables and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize the rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decision in real-time (for example, to stimulate the neurons or not) based on various sources of information present in
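The conditional-intensity formulation underlying such state-space point process models can be illustrated with Ogata-style thinning: candidate events from a homogeneous Poisson process at the upper-bound rate are accepted with probability lambda(t | history) / lambda_max. The rate and refractory time constant below are hypothetical; this is a generic sketch, not the decoding algorithm developed in the thesis.

```python
import math, random
random.seed(0)

# History-dependent conditional intensity: a base firing rate suppressed just
# after each spike (relative refractoriness). Bounded above by `rate`, so
# thinning against lam_max = rate is valid.
def lam(t, spikes, rate=20.0, tau=0.02):
    if not spikes:
        return rate
    return rate * (1.0 - math.exp(-(t - spikes[-1]) / tau))

def simulate(T=5.0, lam_max=20.0):
    t, spikes = 0.0, []
    while True:
        t += random.expovariate(lam_max)                  # candidate event
        if t > T:
            break
        if random.random() < lam(t, spikes) / lam_max:    # thinning step
            spikes.append(t)
    return spikes

spikes = simulate()
print(len(spikes) > 0, all(b > a for a, b in zip(spikes, spikes[1:])))
```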
Model uncertainty from a regulatory point of view
International Nuclear Information System (INIS)
Abramson, L.R.
1994-01-01
This paper discusses model uncertainty in the larger context of knowledge and random uncertainty. It explores some regulatory implications of model uncertainty and argues that, from a regulator's perspective, a conservative approach must be taken. As a consequence of this perspective, averaging over model results is ruled out
Alt, J.C.; Shanks, Wayne C.
2003-01-01
The opaque mineralogy and the contents and isotope compositions of sulfur in serpentinized peridotites from the MARK (Mid-Atlantic Ridge, Kane Fracture Zone) area were examined to understand the conditions of serpentinization and evaluate this process as a sink for seawater sulfur. The serpentinites contain a sulfur-rich secondary mineral assemblage and have high sulfur contents (up to 1 wt.%) and elevated δ34Ssulfide (3.7 to 12.7‰). Geochemical reaction modeling indicates that seawater-peridotite interaction at 300 to 400 °C alone cannot account for both the high sulfur contents and high δ34Ssulfide. These require a multistage reaction with leaching of sulfide from subjacent gabbro during higher temperature (∼400 °C) reactions with seawater and subsequent deposition of sulfide during serpentinization of peridotite at ∼300 °C. Serpentinization produces highly reducing conditions and significant amounts of H2 and results in the partial reduction of seawater carbonate to methane. The latter is documented by formation of carbonate veins enriched in 13C (up to 4.5‰) at temperatures above 250 °C. Although different processes produce variable sulfur isotope effects in other oceanic serpentinites, sulfur is consistently added to abyssal peridotites during serpentinization. Data for serpentinites drilled and dredged from oceanic crust and from ophiolites indicate that oceanic peridotites are a sink for up to 0.4 to 6.0 × 10^12 g seawater S yr^-1. This is comparable to sulfur exchange that occurs in hydrothermal systems in mafic oceanic crust at midocean ridges and on ridge flanks and amounts to 2 to 30% of the riverine sulfate source and sedimentary sulfide sink in the oceans. The high concentrations and modified isotope compositions of sulfur in serpentinites could be important for mantle metasomatism during subduction of crust generated at slow spreading rates. © 2003 Elsevier Science Ltd.
Neutral-point voltage dynamic model of three-level NPC inverter for reactive load
DEFF Research Database (Denmark)
Maheshwari, Ram Krishan; Munk-Nielsen, Stig; Busquets-Monge, Sergio
2012-01-01
A three-level neutral-point-clamped inverter needs a controller for the neutral-point voltage. Typically, the controller design is based on a dynamic model. The dynamic model of the neutral-point voltage depends on the pulse width modulation technique used for the inverter. A pulse width modulati...
A Bayesian MCMC method for point process models with intractable normalising constants
DEFF Research Database (Denmark)
Berthelsen, Kasper Klitgaard; Møller, Jesper
2004-01-01
to simulate from the "unknown distribution", perfect simulation algorithms become useful. We illustrate the method in cases where the likelihood is given by a Markov point process model. Particularly, we consider semi-parametric Bayesian inference in connection to both inhomogeneous Markov point process models...... and pairwise interaction point processes....
Liaw, Horng-Jang; Wang, Tzu-Ai
2007-03-06
Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Herein, a liquid with dissolved salt is considered, as encountered in salt-distillation processes for separating close-boiling or azeotropic systems. The addition of salts to a liquid may reduce fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. This modified model was verified by comparison with the experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in terms of prediction of the flash points of these mixtures. The experimental results confirm marked increases in flash point upon addition of inorganic salts relative to supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application for the model in assessment of the fire and explosion hazard for solvent/salt mixtures and, further, that addition of inorganic salts may prove useful for hazard reduction in flammable liquids.
Energy Technology Data Exchange (ETDEWEB)
Sivaraman, A.; Kobuyashi, R.; Mayee, J.W.
1984-02-01
Based on Pitzer's three-parameter corresponding states principle, the authors have developed a correlation of the latent heat of vaporization of aromatic coal liquid model compounds for a temperature range from the freezing point to the critical point. An expansion of the form L = L0 + ωL1 is used for the dimensionless latent heat of vaporization. This model utilizes a nonanalytic functional form based on results derived from renormalization group theory of fluids in the vicinity of the critical point. A simple expression for the latent heat of vaporization, L = D1·ε^0.3333 + D2·ε^0.8333 + D4·ε^1.2083 + E1·ε + E2·ε^2 + E3·ε^3, is cast in a corresponding states principle correlation for coal liquid compounds. Benzene, the basic constituent of the functional groups of the multi-ring coal liquid compounds, is used as the reference compound in the present correlation. This model works very well at both low and high reduced temperatures approaching the critical point (0.02 < ε = (Tc - T)/Tc < 0.69). About 16 compounds, including single, two, and three-ring compounds, have been tested, and the percent root-mean-square deviations in latent heat of vaporization reported and estimated through the model are 0.42 to 5.27%. Tables of the coefficients of L0 and L1 are presented. The contributing terms of the latent heat of vaporization function are also presented in a table for small increments of ε.
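An expansion of the form L(ε) = D1·ε^0.3333 + D2·ε^0.8333 + D4·ε^1.2083 + E1·ε + E2·ε^2 + E3·ε^3 can be evaluated directly once the coefficients are known. The coefficients below are purely illustrative placeholders, not the paper's fitted values; the sketch only shows the nonanalytic behaviour near the critical point, where L vanishes as ε approaches 0.

```python
# Hypothetical coefficients for illustration only (not the paper's values).
D1, D2, D4 = 5.0, 2.0, -1.0
E1, E2, E3 = 0.5, -0.2, 0.1

def latent_heat(eps):
    """Dimensionless latent heat at reduced distance eps = (Tc - T)/Tc."""
    return (D1 * eps**0.3333 + D2 * eps**0.8333 + D4 * eps**1.2083
            + E1 * eps + E2 * eps**2 + E3 * eps**3)

# L vanishes at the critical point (eps -> 0) and grows away from it.
print(latent_heat(0.0), latent_heat(0.3) > latent_heat(0.05))
```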
Using Pareto points for model identification in predictive toxicology
2013-01-01
Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
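A minimal sketch of the Pareto-dominance test that such model identification relies on, with hypothetical model names and two illustrative objectives to be maximised (e.g. predictive accuracy and applicability-domain coverage):

```python
# Hypothetical model scores: (objective 1, objective 2), both to be maximised.
models = {
    "M1": (0.80, 0.90),
    "M2": (0.85, 0.70),
    "M3": (0.75, 0.95),
    "M4": (0.70, 0.60),   # dominated by M1 (worse on both objectives)
}

def dominated(a, b):
    """True if b is at least as good as a everywhere and strictly better somewhere."""
    return all(y >= x for x, y in zip(a, b)) and any(y > x for x, y in zip(a, b))

# Pareto-optimal models: those not dominated by any other model.
pareto = [m for m, s in models.items()
          if not any(dominated(s, t) for n, t in models.items() if n != m)]
print(sorted(pareto))
```

Selecting among the Pareto points (rather than a single scalarised score) is what lets the method trade off the objectives per query compound.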
AUTOMATED CALIBRATION OF FEM MODELS USING LIDAR POINT CLOUDS
Directory of Open Access Journals (Sweden)
B. Riveiro
2018-05-01
Full Text Available In the present work, the aim is to estimate elastic parameters of beams through the combined use of precision geomatic techniques (laser scanning) and structural behaviour simulation tools. The study has two aims: on the one hand, to develop an algorithm able to automatically interpret point clouds, acquired by laser scanning systems, of beams subjected to different load situations in experimental tests; and on the other hand, to minimize the differences between deformation values given by simulation tools and those measured by laser scanning. In this way we proceed to identify the elastic parameters and boundary conditions of the structural element, so that surface stresses can be estimated more easily.
Langtimm, Catherine A.; Kendall, William L.; Beck, Cathy A.; Kochman, Howard I.; Teague, Amy L.; Meigs-Friend, Gaia; Peñaloza, Claudia L.
2016-11-30
This report provides supporting details and evidence for the rationale, validity and efficacy of a new mark-recapture model, the Barker Robust Design, to estimate regional manatee survival rates used to parameterize several components of the 2012 version of the Manatee Core Biological Model (CBM) and Threats Analysis (TA). The CBM and TA provide scientific analyses on population viability of the Florida manatee subspecies (Trichechus manatus latirostris) for U.S. Fish and Wildlife Service’s 5-year reviews of the status of the species as listed under the Endangered Species Act. The model evaluation is presented in a standardized reporting framework, modified from the TRACE (TRAnsparent and Comprehensive model Evaluation) protocol first introduced for environmental threat analyses. We identify this new protocol as TRACE-MANATEE SURVIVAL and this model evaluation specifically as TRACE-MANATEE SURVIVAL, Barker RD version 1. The longer-term objectives of the manatee standard reporting format are to (1) communicate to resource managers consistent evaluation information over sequential modeling efforts; (2) build understanding and expertise on the structure and function of the models; (3) document changes in model structures and applications in response to evolving management objectives, new biological and ecological knowledge, and new statistical advances; and (4) provide greater transparency for management and research review.
Time Series ARIMA Models of Undergraduate Grade Point Average.
Rogers, Bruce G.
The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
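The estimation stage of the three-stage Box-Jenkins procedure can be sketched for the simplest case, an AR(1) model x_t = φ·x_{t-1} + e_t, fitted by conditional least squares. The data below are simulated, not the grade point average series analysed in the paper.

```python
import random
random.seed(1)

# Simulate an AR(1) series with known phi, then recover it by least squares
# (conditioning on the first observation).
phi_true = 0.6
x = [0.0]
for _ in range(5000):
    x.append(phi_true * x[-1] + random.gauss(0, 1))

num = sum(a * b for a, b in zip(x, x[1:]))   # sum of x_{t-1} * x_t
den = sum(a * a for a in x[:-1])             # sum of x_{t-1}^2
phi_hat = num / den
print(abs(phi_hat - phi_true) < 0.1)
```

Identification (choosing AR/MA orders from autocorrelation patterns) and diagnosis (checking residual whiteness) would bracket this step in a full Box-Jenkins analysis.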
Sand Point, Alaska MHW Coastal Digital Elevation Model
National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Geophysical Data Center (NGDC) is building high-resolution digital elevation models (DEMs) for select U.S. coastal regions. These integrated...
International Nuclear Information System (INIS)
McCauley, E.W.; Lai, W.
1977-02-01
A study was conducted to characterize the mechanisms which give rise to observed oscillations in the vertical load function (VLF) of bench top pool dynamics tests. This is part of a continuing investigation at the Lawrence Livermore Laboratory of the General Electric Mark I Nuclear Reactor pressure suppression system
Lyapunov functions for the fixed points of the Lorenz model
International Nuclear Information System (INIS)
Bakasov, A.A.; Govorkov, B.B. Jr.
1992-11-01
We have shown how explicit Lyapunov functions can be constructed in the framework of a regular procedure suggested and completed by Lyapunov a century ago (the ''method of critical cases''). The method covers all the practically encountered subtle cases of stability analysis for ordinary differential equations in which linear stability analysis fails. These subtle cases, ''the critical cases'' in Lyapunov's terminology, include both bifurcations of solutions and solutions of systems with symmetry. Properly specialized, and genuinely powerful in the case of ODEs, this method of Lyapunov is formulated in simple language and should attract wide interest from the physics audience. The method leads inevitably to the construction of an explicit Lyapunov function, automatically takes the Fredholm alternative into account, and avoids infinite-step calculations. An easy and apparent physical interpretation of the Lyapunov function, as a potential or as a time-dependent entropy, provides more details about the local dynamics of the system at non-equilibrium phase transition points. Another advantage is that the method consists of a set of very detailed explicit prescriptions which make it easy to implement on a symbolic processor. In this work, the Lyapunov theory for critical cases has been applied to the real Lorenz equations, and it is shown, in particular, that increasing σ at the Hopf bifurcation point suppresses the contribution of one of the variables to the destabilization of the system. The relation of the method to contemporary methods, and its place among them, is discussed clearly and extensively. Thanks to the Appendices, the paper is self-contained and does not require the reader to consult results published only in Russian. (author). 38 refs
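The non-trivial fixed points of the Lorenz model referred to above are C± = (±√(b(r−1)), ±√(b(r−1)), r−1). The following generic sketch (not the paper's Lyapunov construction) verifies that the vector field vanishes there at the classic parameter values:

```python
import math

# Lorenz system: x' = sigma*(y - x), y' = x*(r - z) - y, z' = x*y - b*z.
sigma, r, b = 10.0, 28.0, 8.0 / 3.0

def f(x, y, z):
    return (sigma * (y - x), x * (r - z) - y, x * y - b * z)

c = math.sqrt(b * (r - 1.0))
for s in (+1.0, -1.0):
    fx, fy, fz = f(s * c, s * c, r - 1.0)   # the fixed points C+ and C-
    print(all(abs(v) < 1e-9 for v in (fx, fy, fz)))
```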
Zimnicki, Katherine M
2015-01-01
Preoperative teaching and stoma marking are supported by research and included in clinical practice guidelines from the WOCN Society and others. Using a FOCUS-Plan-Do-Check-Act model of Total Quality Management, a multidisciplinary team was formed that developed a flow chart outlining the process of care for patients undergoing planned ostomy surgery, including an educational intervention that enabled staff nurses to perform preoperative stoma site marking and education. After 3 months, we found a statistically significant increase in the number of surgical patients who received these evidence-based interventions (14% vs 64%; χ² = 9.32; P = .002).
Energy Technology Data Exchange (ETDEWEB)
Galicia A, J.; Francois L, J. L. [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Ciudad Universitaria, 04510 Ciudad de Mexico (Mexico); Aguilar H, F., E-mail: blink19871@hotmail.com [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)
2015-09-15
The main purpose of this paper is to obtain a model of the TRIGA Mark III reactor core that accurately represents the real operating conditions at 1 MWth, using the Monte Carlo code MCNP5. To provide a more detailed analysis, different models of the reactor core were realized by simulating the control rods extracted and inserted in cold conditions (293 K), also including an analysis of the shutdown margin, so as to satisfy the Operation Technical Specifications. The positions the control rods must have to reach a power equal to 1 MWth were obtained from the practice entitled Operation in Manual Mode performed at the Instituto Nacional de Investigaciones Nucleares (ININ). Later, the behavior of K-eff was analyzed considering different temperatures in the fuel elements, subsequently calculating the values that best represent actual reactor operation. Finally, calculations with the developed model to obtain the distribution of the average flux of thermal, epithermal and fast neutrons in the six new experimental facilities are presented. (Author)
Development and evaluation of spatial point process models for epidermal nerve fibers.
Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila
2013-06-01
We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.
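The second (cluster) model can be sketched as a Neyman-Scott-style construction: base points Φb form a Poisson parent process on a region, and each base spawns a Poisson number of end points displaced by a short random offset. All intensities and offsets below are hypothetical and the region is the unit square; the paper's actual fitting to blister biopsy data is not reproduced.

```python
import math, random
random.seed(2)

def rpois(lam):
    # Knuth's method for a Poisson draw (adequate for small lam).
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

# Base points Phi_b: homogeneous Poisson process on the unit square.
lam_base, mean_ends, sigma = 50.0, 3.0, 0.01
bases = [(random.random(), random.random()) for _ in range(rpois(lam_base))]

# End points Phi_e: each base spawns a Poisson number of end points,
# displaced by a small Gaussian offset (a cluster of fibre end points).
ends = []
for bx, by in bases:
    for _ in range(rpois(mean_ends)):
        ends.append((bx + random.gauss(0, sigma), by + random.gauss(0, sigma)))

print(len(bases) > 0, len(ends) > 0)
```

Quantities of clinical interest, such as the number of fibres per base or the offset range, fall out of the same simulation as summary statistics.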
RANS-VOF modelling of the Wavestar point absorber
DEFF Research Database (Denmark)
Ransley, E. J.; Greaves, D. M.; Raby, A.
2017-01-01
Highlights •A fully nonlinear, coupled model of the Wavestar WEC has been created using open-source CFD software, OpenFOAM®. •The response of the Wavestar WEC is simulated in regular waves with different steepness. •Predictions of body motion, surface elevation, fluid velocity, pressure and load ...
Demystifying the cytokine network: Mathematical models point the way.
Morel, Penelope A; Lee, Robin E C; Faeder, James R
2017-10-01
Cytokines provide the means by which immune cells communicate with each other and with parenchymal cells. There are over one hundred cytokines and many exist in families that share receptor components and signal transduction pathways, creating complex networks. Reductionist approaches to understanding the role of specific cytokines, through the use of gene-targeted mice, have revealed further complexity in the form of redundancy and pleiotropy in cytokine function. Creating an understanding of the complex interactions between cytokines and their target cells is challenging experimentally. Mathematical and computational modeling provides a robust set of tools by which complex interactions between cytokines can be studied and analyzed, in the process creating novel insights that can be further tested experimentally. This review will discuss and provide examples of the different modeling approaches that have been used to increase our understanding of cytokine networks. This includes discussion of knowledge-based and data-driven modeling approaches and the recent advance in single-cell analysis. The use of modeling to optimize cytokine-based therapies will also be discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Numerical Modeling of a Wave Energy Point Absorber
DEFF Research Database (Denmark)
Hernandez, Lorenzo Banos; Frigaard, Peter; Kirkegaard, Poul Henning
2009-01-01
The present study deals with numerical modelling of the Wave Star Energy WSE device. Hereby, linear potential theory is applied via a BEM code on the wave hydrodynamics exciting the floaters. Time and frequency domain solutions of the floater response are determined for regular and irregular seas....... Furthermore, these results are used to estimate the power and the energy absorbed by a single oscillating floater. Finally, a latching control strategy is analysed in open-loop configuration for energy maximization....
Numerical schemes for one-point closure turbulence models
International Nuclear Information System (INIS)
Larcher, Aurelien
2010-01-01
First-order Reynolds Averaged Navier-Stokes (RANS) turbulence models are studied in this thesis. These latter consist of the Navier-Stokes equations, supplemented with a system of balance equations describing the evolution of characteristic scalar quantities called 'turbulent scales'. In so doing, the contribution of the turbulent agitation to the momentum can be determined by adding a diffusive coefficient (called 'turbulent viscosity') in the Navier-Stokes equations, such that it is defined as a function of the turbulent scales. The numerical analysis problems, which are studied in this dissertation, are treated in the frame of a fractional step algorithm, consisting of an approximation on regular meshes of the Navier-Stokes equations by the nonconforming Crouzeix-Raviart finite elements, and a set of scalar convection-diffusion balance equations discretized by the standard finite volume method. A monotone numerical scheme based on the standard finite volume method is proposed so as to ensure that the turbulent scales, like the turbulent kinetic energy (k) and its dissipation rate (ε), remain positive in the case of the standard k - ε model, as well as the k - ε RNG and the extended k - ε - ν 2 models. The convergence of the proposed numerical scheme is then studied on a system composed of the incompressible Stokes equations and a steady convection-diffusion equation, which are both coupled by the viscosities and the turbulent production term. This reduced model allows to deal with the main difficulty encountered in the analysis of such problems: the definition of the turbulent production term leads to consider a class of convection-diffusion problems with an irregular right-hand side belonging to L 1 . Finally, to step towards the unsteady problem, the convergence of the finite volume scheme for a model convection-diffusion equation with L 1 data is proved. The a priori estimates on the solution and on its time derivative are obtained in discrete norms, for
Limbong, Maraden
2010-01-01
One of the processes in glove-mould production at PT. Mark Dynamics Indonesia is the washing process, carried out in the Washing department. This washing process is divided into 3 production lines, and the sizes to be processed are of 3 types, namely small, medium and large, each with a different processing time. In this process, each line consists of 20 workers with a normal working time of 400 minutes (6.67 hours) per day, with the percentage of incoming size input...
Geographical point cloud modelling with the 3D medial axis transform
Peters, R.Y.
2018-01-01
A geographical point cloud is a detailed three-dimensional representation of the geometry of our geographic environment.
Using geographical point cloud modelling, we are able to extract valuable information from geographical point clouds that can be used for applications in asset management,
A Massless-Point-Charge Model for the Electron
Directory of Open Access Journals (Sweden)
Daywitt W. C.
2010-04-01
Full Text Available “It is rather remarkable that the modern concept of electrodynamics is not quite 100 years old and yet still does not rest firmly upon uniformly accepted theoretical foundations. Maxwell’s theory of the electromagnetic field is firmly ensconced in modern physics, to be sure, but the details of how charged particles are to be coupled to this field remain somewhat uncertain, despite the enormous advances in quantum electrodynamics over the past 45 years. Our theories remain mathematically ill-posed and mired in conceptual ambiguities which quantum mechanics has only moved to another arena rather than resolve. Fundamentally, we still do not understand just what is a charged particle” [1, p.367]. As a partial answer to the preceding quote, this paper presents a new model for the electron that combines the seminal work of Puthoff [2] with the theory of the Planck vacuum (PV) [3], the basic idea for the model following from [2] with the PV theory adding some important details.
Unemployment estimation: Spatial point referenced methods and models
Pereira, Soraia
2017-06-26
The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The labor force survey chooses, according to a preestablished sampling criterion, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations; therefore model based methods, which tend to
DEFF Research Database (Denmark)
Christensen, Steen; Doherty, John
2008-01-01
A significant practical problem with the pilot point method is to choose the location of the pilot points. We present a method that is intended to relieve the modeler from much of this responsibility. The basic idea is that a very large number of pilot points are distributed more or less uniformly...... over the model area. Singular value decomposition (SVD) of the (possibly weighted) sensitivity matrix of the pilot point based model produces eigenvectors of which we pick a small number corresponding to significant eigenvalues. Super parameters are defined as factors through which parameter...... combinations corresponding to the chosen eigenvectors are multiplied to obtain the pilot point values. The model can thus be transformed from having many-pilot-point parameters to having a few super parameters that can be estimated by nonlinear regression on the basis of the available observations. (This...
Improved point-kinetics model for the BWR control rod drop accident
International Nuclear Information System (INIS)
Neogy, P.; Wakabayashi, T.; Carew, J.F.
1985-01-01
A simple prescription to account for spatial feedback weighting effects in RDA (rod drop accident) point-kinetics analyses has been derived and tested. The point-kinetics feedback model is linear in the core peaking factor, F_Q, and in the core average void fraction and fuel temperature. Comparison with detailed spatial kinetics analyses indicates that the improved point-kinetics model provides an accurate description of the BWR RDA
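Plain point kinetics, before any spatial feedback weighting, can be sketched with the one-delayed-group equations dn/dt = ((ρ − β)/Λ)·n + λ·C and dC/dt = (β/Λ)·n − λ·C, integrated explicitly for a small positive reactivity step. All parameter values below are illustrative, and this is not the improved feedback model of the paper.

```python
# One-delayed-group point kinetics, explicit Euler integration.
beta, Lam, lam_d = 0.0065, 1e-4, 0.08   # illustrative kinetics parameters
rho = 0.1 * beta                        # small (10-cent) step insertion
n = 1.0                                 # relative power
C = beta / (Lam * lam_d)                # precursor level at equilibrium
dt = 1e-5
for _ in range(int(1.0 / dt)):          # integrate one second
    dn = ((rho - beta) / Lam) * n + lam_d * C
    dC = (beta / Lam) * n - lam_d * C
    n, C = n + dt * dn, C + dt * dC
print(1.0 < n < 2.0)                    # prompt jump plus slow rise
```

For a subprompt-critical step the power jumps promptly to about β/(β − ρ) and then rises on the delayed-neutron time scale, which the loop reproduces.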
A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD
Directory of Open Access Journals (Sweden)
Z. Zhang
2016-06-01
Full Text Available This paper proposes a fast method for measuring the partial Similarity between 3D Model and 3D point Cloud (SimMC). It is crucial to measure SimMC for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to those traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and laser scanning data.
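The two directed distances can be sketched on toy 2-D point sets. The paper samples points from the model surface and weights DistMC; here both sides are already point sets and unweighted averages are used, which keeps the sketch self-contained but is a simplification.

```python
import math

# Toy point sets standing in for sampled model points and a scanned cloud.
model_pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
cloud_pts = [(0.1, 0.0), (1.0, 0.1), (0.0, 0.9), (2.0, 2.0)]  # one outlier

def nearest(p, pts):
    return min(math.dist(p, q) for q in pts)

# Cloud -> model: sensitive to cloud outliers with no model counterpart.
dist_cm = sum(nearest(p, model_pts) for p in cloud_pts) / len(cloud_pts)
# Model -> cloud: small here because every model point has a nearby cloud point.
dist_mc = sum(nearest(p, cloud_pts) for p in model_pts) / len(model_pts)
print(round(dist_cm, 3), round(dist_mc, 3))
```

The asymmetry visible here (the outlier inflates DistCM but not DistMC) is why the choice of direction, and the weighting, matters for partial matching.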
Modeling a point-source release of 1,1,1-trichloroethane using EPA's SCREEN model
International Nuclear Information System (INIS)
Henriques, W.D.; Dixon, K.R.
1994-01-01
Using data from the Environmental Protection Agency's Toxic Release Inventory 1988 (EPA TRI88), pollutant concentration estimates were modeled for a point-source air release of 1,1,1-trichloroethane at the Savannah River Plant located in Aiken, South Carolina. Estimates were calculated using the EPA's SCREEN model with typical meteorological conditions to determine the maximum impact of the plume under different mixing conditions for locations within 100 meters of the stack. Input data for the SCREEN model were then manipulated to simulate the impact of the release under urban conditions (for the purpose of assessing future land-use considerations) and under flare release options, to determine whether these parameters lessen or increase the probability of human or wildlife exposure to significant concentrations. The results were then compared to EPA reference concentrations (RfCs) in order to assess the size of the buffer around the stack which may potentially have levels that exceed this threshold of safety.
Modeling of Maximum Power Point Tracking Controller for Solar Power System
Directory of Open Access Journals (Sweden)
Aryuanto Soetedjo
2012-09-01
Full Text Available In this paper, a Maximum Power Point Tracking (MPPT) controller for a solar power system is modeled using MATLAB Simulink. The model consists of a PV module, a buck converter, and an MPPT controller. The contribution of the work is in the modeling of the buck converter, which allows the input voltage of the converter, i.e. the output voltage of the PV, to be changed by varying the duty cycle, so that the maximum power point can be tracked as the environment changes. The simulation results show that the developed model performs well in tracking the maximum power point (MPP) of the PV module using the Perturb and Observe (P&O) algorithm.
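The P&O tracking loop can be sketched as follows; the diode-style PV curve, starting voltage, and step size are illustrative assumptions rather than the paper's Simulink model:

```python
import numpy as np

def pv_power(v):
    """Toy PV power curve with a single maximum (assumed, not from the paper)."""
    i = 5.0 * (1.0 - np.exp((v - 21.0) / 1.5))  # crude diode-style I-V curve
    return v * max(i, 0.0)

def perturb_and_observe(v0=10.0, dv=0.2, steps=200):
    """Classic P&O: keep stepping the operating voltage in the direction
    that increased power; reverse when power drops."""
    v, p_prev, direction = v0, pv_power(v0), 1.0
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(v_mpp, p_mpp)
```

In the paper's setup the perturbed variable is the buck converter's duty cycle rather than the voltage directly, but the decision logic is the same.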
DEFF Research Database (Denmark)
Møller, Jesper; Diaz-Avalos, Carlos
Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...... dataset consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent....
DEFF Research Database (Denmark)
Elsmore, Matthew James
2013-01-01
First, this article argues that trade mark law should be approached in a supplementary way, called reconfiguration. Second, the article investigates such a reconfiguration of trade mark law by exploring the interplay of trade marks and service transactions in the Single Market, in the cross-border setting, with a particular focus on small business and consumers. The article's overall message is to call for a rethink of received wisdom suggesting that trade marks are effective trade-enabling devices. The case is made for reassessing how we think about European trade mark law.
Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F
2015-01-01
Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets and the principal modes of variation in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles of the heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.
Finding Non-Zero Stable Fixed Points of the Weighted Kuramoto model is NP-hard
Taylor, Richard
2015-01-01
The Kuramoto model when considered over the full space of phase angles [$0,2\\pi$) can have multiple stable fixed points which form basins of attraction in the solution space. In this paper we illustrate the fundamentally complex relationship between the network topology and the solution space by showing that determining the possibility of multiple stable fixed points from the network topology is NP-hard for the weighted Kuramoto Model. In the case of the unweighted model this problem is shown...
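While deciding from topology alone whether non-zero stable fixed points exist is NP-hard, checking a *given* phase configuration is straightforward: verify that the right-hand side vanishes and that the Jacobian has no positive eigenvalue. The complete-graph example below is an assumed toy case, not one from the paper:

```python
import numpy as np

def kuramoto_rhs(theta, omega, K):
    """dθ_i/dt = ω_i + Σ_j K_ij sin(θ_j − θ_i)."""
    return omega + (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)

def is_stable_fixed_point(theta, omega, K, tol=1e-8):
    """Check dθ/dt ≈ 0 and that the Jacobian is negative semi-definite
    (the zero eigenvalue from rotational symmetry is allowed)."""
    if np.linalg.norm(kuramoto_rhs(theta, omega, K)) > tol:
        return False
    c = K * np.cos(theta[None, :] - theta[:, None])
    J = c - np.diag(c.sum(axis=1))     # Jacobian of the right-hand side
    eig = np.linalg.eigvalsh((J + J.T) / 2)
    return bool(np.all(eig < 1e-9))    # ≤ 0 up to numerical noise

# Identical oscillators on a complete graph: the in-phase state is stable,
# while the state with one oscillator at phase π is a saddle.
n = 5
K = np.ones((n, n)) - np.eye(n)
omega = np.zeros(n)
print(is_stable_fixed_point(np.zeros(n), omega, K))
```

The hardness result says there is no efficient way to enumerate such configurations in general; checking each candidate, as here, remains cheap.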
A Biomechanical Model of Single-joint Arm Movement Control Based on the Equilibrium Point Hypothesis
Masataka, SUZUKI; Yoshihiko, YAMAZAKI; Yumiko, TANIGUCHI; Department of Psychology, Kinjo Gakuin University; Department of Health and Physical Education, Nagoya Institute of Technology; College of Human Life and Environment, Kinjo Gakuin University
2003-01-01
SUZUKI,M., YAMAZAKI,Y. and TANIGUCHI,Y., A Biomechanical Model of Single-joint Arm Movement Control Based on the Equilibrium Point Hypothesis. Adv. Exerc. Sports Physiol., Vol.9, No.1 pp.7-25, 2003. According to the equilibrium point hypothesis of motor control, control action of muscles is not explicitly computed, but rather arises as a consequence of interaction among moving equilibrium point, reflex feedback and muscle mechanical properties. This approach is attractive as it obviates the n...
On the asymptotic ergodic capacity of FSO links with generalized pointing error model
Al-Quwaiee, Hessa
2015-09-11
Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need an effective mathematical model for them. Scintillations are typically modeled by the log-normal and Gamma-Gamma distributions for weak and strong turbulence conditions, respectively. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive the asymptotic ergodic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. © 2015 IEEE.
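The Beckmann model takes the radial displacement as the norm of two independent, non-identically distributed Gaussian jitter components; it can be explored by simple Monte Carlo. The jitter and beam-width parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Beckmann-distributed radial displacement: r = sqrt(x^2 + y^2) with
# independent, non-identical Gaussian jitter on each axis.
mu_x, mu_y, sigma_x, sigma_y = 0.1, 0.2, 0.3, 0.25
n = 200_000
x = rng.normal(mu_x, sigma_x, n)
y = rng.normal(mu_y, sigma_y, n)
r = np.hypot(x, y)

# Gaussian-beam pointing loss factor h_p = A0 * exp(-2 r^2 / w_eq^2)
A0, w_eq = 1.0, 1.0
h_p = A0 * np.exp(-2.0 * r**2 / w_eq**2)
print(r.mean(), h_p.mean())
```

Setting mu_x = mu_y = 0 and sigma_x = sigma_y recovers the common Rayleigh (zero-boresight) special case.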
Point kinetics model with one-dimensional (radial) heat conduction formalism
International Nuclear Information System (INIS)
Jain, V.K.
1989-01-01
A point-kinetics model with one-dimensional (radial) heat conduction formalism has been developed. The heat conduction formalism is based on a corner-mesh finite difference method. To get average temperatures in the various conducting regions, a novel weighting scheme has been devised. The heat conduction model has been incorporated in the point-kinetics code MRTF-FUEL. The point-kinetics equations are solved using the method of real integrating factors. It has been shown, by analysing the simulation of a hypothetical loss-of-regulation accident in the NAPP reactor, that the model is superior to the conventional one in accuracy and speed of computation. (author). 3 refs., 3 tabs
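The point-kinetics equations themselves can be sketched as below, here with a single delayed-neutron group, assumed parameters, and plain explicit Euler rather than the paper's real-integrating-factor method:

```python
# One-delayed-group point kinetics (illustrative parameters, not MRTF-FUEL's):
#   dn/dt = ((rho - beta)/Lambda) * n + lam * c
#   dc/dt = (beta/Lambda) * n - lam * c
beta, lam, Lambda = 0.0065, 0.08, 1e-4  # delayed fraction, decay const, gen. time

def step(n, c, rho, dt):
    """One explicit Euler step of the point-kinetics equations."""
    dn = ((rho - beta) / Lambda) * n + lam * c
    dc = (beta / Lambda) * n - lam * c
    return n + dt * dn, c + dt * dc

# Start critical (rho = 0) at equilibrium, then insert 0.1 beta of reactivity.
n, c = 1.0, beta / (lam * Lambda)   # equilibrium precursor concentration
t, dt = 0.0, 1e-5
while t < 0.5:
    n, c = step(n, c, 0.1 * beta, dt)
    t += dt
print(n)   # prompt jump to roughly 1/(1 - 0.1), then a slow rise
```

The stiffness visible here (time constants of milliseconds against seconds) is precisely why the paper uses integrating factors instead of a naive explicit scheme.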
Prediction model for initial point of net vapor generation for low-flow boiling
International Nuclear Information System (INIS)
Sun Qi; Zhao Hua; Yang Ruichang
2003-01-01
The prediction of the initial point of net vapor generation is significant for the calculation of phase distribution in sub-cooled boiling. However, most investigations have addressed high-flow boiling, and there is no common model that can be successfully applied to low-flow boiling. A predictive model for the initial point of net vapor generation for low-flow forced convection and natural circulation is established here through an analysis of evaporation and condensation heat transfer. The comparison between experimental data and calculated results shows that this model can predict the net vapor generation point successfully in low-flow sub-cooled boiling.
Point-Structured Human Body Modeling Based on 3D Scan Data
Directory of Open Access Journals (Sweden)
Ming-June Tsai
2018-01-01
Full Text Available A novel point-structured geometrical model for the realistic human body is introduced in this paper. The technique is based on feature extraction from 3D body scan data. Anatomic features such as the neck, the armpits, the crotch points, and other major feature points are recognized. The body data is then segmented into 6 major parts. A body model is then constructed by re-sampling the scanned data to create a point-structured mesh. The body model contains body geodetic landmarks in latitudinal and longitudinal curves passing through those feature points. The body model preserves the perfect body shape and all the body dimensions but requires little space. Therefore, the body model can be used as a mannequin in the garment industry, or as a manikin in various human-factor designs, but the most important application is as a virtual character to animate body motion in mocap (motion capture) systems. By adding suitable joint freedoms between the segmented body links, kinematic and dynamic properties from motion theories can be applied to the body model. As a result, a 3D virtual character that fully resembles the original scanned individual can vividly animate body motions. The gaps between body segments due to motion can be filled by a skin-blending technique using the characteristics of the point-structured model. The model has the potential to serve as a standardized data type to archive body information for all custom-made products.
The importance of topographically corrected null models for analyzing ecological point processes.
McDowall, Philip; Lynch, Heather J
2017-07-01
Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
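The topographic correction described above amounts to simulating null points whose x-y density is proportional to the local area element sqrt(1 + |∇z|²), so that the pattern is homogeneous on the surface rather than in the plane. A rejection-sampling sketch on an assumed toy surface z = sin(x) (not a landscape from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def slope_factor(x, y):
    """Local area element sqrt(1 + |grad z|^2) for the toy topography
    z = sin(x): dz/dx = cos(x), dz/dy = 0."""
    return np.sqrt(1.0 + np.cos(x) ** 2)

def sample_on_surface(n, xmax=2.0 * np.pi, ymax=1.0):
    """Rejection-sample a point pattern that is homogeneous *on the surface*:
    accept a uniform x-y proposal with probability prop. to the area element."""
    pts = []
    fmax = np.sqrt(2.0)  # maximum of slope_factor over the domain
    while len(pts) < n:
        x, y = rng.uniform(0.0, xmax), rng.uniform(0.0, ymax)
        if rng.uniform(0.0, fmax) < slope_factor(x, y):
            pts.append((x, y))
    return np.array(pts)

pts = sample_on_surface(20000)
# Steeper regions (|cos x| near 1) receive proportionally more points, so
# repeated draws like this serve as a topographically corrected null model.
```

Comparing an observed pattern's statistics against many such simulated patterns, rather than against complete spatial randomness in the plane, is the correction the paper advocates.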
Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups
Casas, Lluís; Estop, Eugènia
2015-01-01
Both, virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to study point-symmetry. The use of 3D printing to…
Mark Tompkins Canaccord
2018-01-01
Mark Tompkins Canaccord is a senior technologist for ecosystem and water resources management in the SEC SAID Oakland, California office. In a career spanning over fifteen years, Mark has worked on projects involving lake restorations, clean water engineering, ecological engineering and management, hydrology, hydraulics, sediment transport and other environmental planning projects all over the country. Mark Tompkins Canaccord tries to blend his skills of planning and engineering with s...
Statistical imitation system using relational interest points and Gaussian mixture models
CSIR Research Space (South Africa)
Claassens, J
2009-11-01
Full Text Available The author proposes an imitation system that uses relational interest points (RIPs) and Gaussian mixture models (GMMs) to characterize a behaviour. The system's structure is inspired by the Robot Programming by Demonstration (RPD) paradigm...
On the Asymptotic Capacity of Dual-Aperture FSO Systems with a Generalized Pointing Error Model
Al-Quwaiee, Hessa
2016-06-28
Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need an effective mathematical model for them. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive a generic expression for the asymptotic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. Finally, the asymptotic channel capacity formulas are extended to quantify the performance of FSO systems with selection and switched-and-stay diversity.
A Matérn model of the spatial covariance structure of point rain rates
Sun, Ying
2014-07-15
It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.
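The Matérn covariance referred to above can be written down directly; the parameterization below (variance sigma2, smoothness nu, range rho) is one common convention, and nu = 1/2 recovers the exponential model that the paper compares against:

```python
import numpy as np
from scipy.special import gamma, kv

def matern_cov(h, sigma2=1.0, nu=1.5, rho=1.0):
    """Matérn covariance:
    C(h) = sigma2 * 2^(1-nu)/Gamma(nu) * (sqrt(2 nu) h / rho)^nu
           * K_nu(sqrt(2 nu) h / rho),  with C(0) = sigma2."""
    h = np.asarray(h, dtype=float)
    c = np.full_like(h, sigma2)          # limit at h = 0
    m = h > 0
    u = np.sqrt(2.0 * nu) * h[m] / rho
    c[m] = sigma2 * 2.0 ** (1.0 - nu) / gamma(nu) * u**nu * kv(nu, u)
    return c

h = np.linspace(0.0, 3.0, 7)
print(matern_cov(h, nu=0.5))   # nu = 0.5 reduces to sigma2 * exp(-h/rho)
```

Fitting sigma2, nu, and rho to empirical gauge covariances at each time scale, as the paper does, then lets the estimated nu discriminate between Matérn and exponential behaviour.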
A Matérn model of the spatial covariance structure of point rain rates
Sun, Ying; Bowman, Kenneth P.; Genton, Marc G.; Tokay, Ali
2014-01-01
It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.
Spatial Mixture Modelling for Unobserved Point Processes: Examples in Immunofluorescence Histology.
Ji, Chunlin; Merl, Daniel; Kepler, Thomas B; West, Mike
2009-12-04
We discuss Bayesian modelling and computational methods in the analysis of indirectly observed spatial point processes. The context involves noisy measurements on an underlying point process that provide indirect and noisy data on the locations of point outcomes. We are interested in problems in which the spatial intensity function may be highly heterogeneous, and so it is modelled via flexible nonparametric Bayesian mixture models. Analysis aims to estimate the underlying intensity function and the abundance of realized but unobserved points. Our motivating applications involve immunological studies of multiple fluorescent intensity images in sections of lymphatic tissue, where the point processes represent geographical configurations of cells. We are interested in estimating intensity functions and cell abundance for each of a series of such data sets to facilitate comparisons of outcomes at different times and with respect to differing experimental conditions. The analysis is heavily computational, utilizing recently introduced MCMC approaches for spatial point process mixtures and extending them to the broader new context of unobserved outcomes. Further, our example applications are problems in which the individual objects of interest are not simply points, but rather small groups of pixels; this implies a need to work at an aggregate pixel-region level, and we develop the resulting novel methodology for this. Two examples with immunofluorescence histology data demonstrate the models and computational methodology.
One loop beta functions and fixed points in higher derivative sigma models
International Nuclear Information System (INIS)
Percacci, Roberto; Zanusso, Omar
2010-01-01
We calculate the one loop beta functions of nonlinear sigma models in four dimensions containing general two- and four-derivative terms. In the O(N) model there are four such terms and nontrivial fixed points exist for all N≥4. In the chiral SU(N) models there are in general six couplings, but only five for N=3 and four for N=2; we find fixed points only for N=2, 3. In the approximation considered, the four-derivative couplings are asymptotically free but the coupling in the two-derivative term has a nonzero limit. These results support the hypothesis that certain sigma models may be asymptotically safe.
Some application of the model of partition points on a one-dimensional lattice
International Nuclear Information System (INIS)
Mejdani, R.
1991-07-01
We have shown that, by using a model of a gas of partition points on a one-dimensional lattice, we can obtain results about enzyme kinetics or the average domain size which we had previously obtained using a correlated-walks theory or a probabilistic (combinatoric) approach. We have also discussed the problem of the spread of an infectious disease in relation to the stochastic model of partition points. We think that this model, being very simple and mathematically transparent, can be advantageous for other theoretical investigations in chemistry or modern biology. (author). 14 refs, 6 figs, 1 tab
Benchmark models, planes lines and points for future SUSY searches at the LHC
International Nuclear Information System (INIS)
AbdusSalam, S.S.; Allanach, B.C.; Dreiner, H.K.
2012-03-01
We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.
Benchmark models, planes lines and points for future SUSY searches at the LHC
Energy Technology Data Exchange (ETDEWEB)
AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)
2012-03-15
We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.
Benchmark Models, Planes, Lines and Points for Future SUSY Searches at the LHC
AbdusSalam, S S; Dreiner, H K; Ellis, J; Ellwanger, U; Gunion, J; Heinemeyer, S; Krämer, M; Mangano, M L; Olive, K A; Rogerson, S; Roszkowski, L; Schlaffer, M; Weiglein, G
2011-01-01
We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.
Energy Technology Data Exchange (ETDEWEB)
Mocko, Michael Jeffrey [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Zavorka, Lukas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Koehler, Paul E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-11-13
This is a review of Mark-IV target neutronics design. It involved the major redesign of the upper tier, offering harder neutron spectra for upper-tier FPs; a redesign of the high-resolution (HR) moderator; and a preservation of the rest of Mark-III features.
Mark Stock, Scientific Visualization Specialist, Mark.Stock@nrel.gov | 303-275-4174. Dr. Stock's work spans visualization, virtual reality, parallel computing, and the manipulation of large spatial data sets. As an artist, Stock built the SUNLIGHT artwork that is installed on the Webb Building in downtown Denver. In addition
Robust non-rigid point set registration using student's-t mixture model.
Directory of Open Access Journals (Sweden)
Zhiyong Zhou
Full Text Available The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.
Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.
Pang, Xufang; Song, Zhan; Xie, Wuyuan
2013-01-01
3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More importantly, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which can introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
Wright, David; Thyer, Mark; Westra, Seth
2015-04-01
Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostics tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically orientated diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression, with inherent assumptions about the data, and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this
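Cook's distance for an ordinary least-squares fit can be sketched as below; the rating-curve-like data and the injected outlier are illustrative assumptions, not the study's case-study data:

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's distance for each observation of an OLS fit:
    D_i = r_i^2 * h_ii / (p * s^2 * (1 - h_ii)^2)."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat (leverage) matrix
    h = np.diag(H)
    r = y - H @ y                           # residuals
    s2 = r @ r / (n - p)                    # residual variance estimate
    return r**2 * h / (p * s2 * (1 - h) ** 2)

# Toy rating-curve-like data with one gross outlier at a high-leverage point.
rng = np.random.default_rng(3)
x = np.linspace(1.0, 10.0, 30)
y = 2.0 + 0.5 * x + rng.normal(scale=0.1, size=30)
y[-1] += 3.0                                # contaminate the last observation
X = np.column_stack([np.ones_like(x), x])
D = cooks_distance(X, y)
print(D.argmax())                           # the contaminated index stands out
```

This analytical form needs only one fit, which is the computational advantage over case deletion the abstract highlights; the trade-off is that it inherits linear-regression assumptions.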
DEFF Research Database (Denmark)
Møller, Jesper; Diaz-Avalos, Carlos
2010-01-01
Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...... data set consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent....
Simulation of ultrasonic surface waves with multi-Gaussian and point source beam models
International Nuclear Information System (INIS)
Zhao, Xinyu; Schmerr, Lester W. Jr.; Li, Xiongbing; Sedov, Alexander
2014-01-01
In the past decade, multi-Gaussian beam models have been developed to solve many complicated bulk wave propagation problems. However, to date those models have not been extended to simulate the generation of Rayleigh waves. Here we will combine Gaussian beams with an explicit high frequency expression for the Rayleigh wave Green function to produce a three-dimensional multi-Gaussian beam model for the fields radiated from an angle beam transducer mounted on a solid wedge. Simulation results obtained with this model are compared to those of a point source model. It is shown that the multi-Gaussian surface wave beam model agrees well with the point source model while being computationally much more efficient
Sabanskis, A.; Virbulis, J.
2018-05-01
Mathematical modelling is employed to numerically analyse the dynamics of Czochralski (CZ) silicon single crystal growth. The model is axisymmetric; its thermal part describes heat transfer by conduction and thermal radiation, and allows prediction of the time-dependent shape of the crystal-melt interface. Besides the thermal field, the point defect dynamics is modelled using the finite element method. The considered process consists of cone-growth and cylindrical phases, including a short period of reduced crystal pull rate and a power jump to avoid large diameter changes. The influence of the thermal stresses on the point defects is also investigated.
A Riccati-Based Interior Point Method for Efficient Model Predictive Control of SISO Systems
DEFF Research Database (Denmark)
Hagdrup, Morten; Johansson, Rolf; Bagterp Jørgensen, John
2017-01-01
model parts separate. The controller is designed based on the deterministic model, while the Kalman filter results from the stochastic part. The controller is implemented as a primal-dual interior point (IP) method using Riccati recursion and the computational savings possible for SISO systems...
Markov Random Field Restoration of Point Correspondences for Active Shape Modelling
DEFF Research Database (Denmark)
Hilger, Klaus Baggesen; Paulsen, Rasmus Reinhold; Larsen, Rasmus
2004-01-01
In this paper it is described how to build a statistical shape model using a training set with a sparse set of landmarks. A well-defined model mesh is selected and fitted to all shapes in the training set using thin plate spline warping. This is followed by a projection of the points of the warped...
Point vortex modelling of the wake dynamics behind asymmetric vortex generator arrays
Baldacchino, D.; Simao Ferreira, C.; Ragni, D.; van Bussel, G.J.W.
2016-01-01
In this work, we present a simple inviscid point vortex model to study the dynamics of asymmetric vortex rows, as might appear behind misaligned vortex generator vanes. Starting from the existing solution of the infinite vortex cascade, a numerical model of four base vortices is chosen to represent
An application of a discrete fixed point theorem to the Cournot model
Sato, Junichi
2008-01-01
In this paper, we apply a discrete fixed point theorem of [7] to the Cournot model [1]. This allows us to treat the Cournot model in which the production of the enterprises is discrete. To handle it, we define a discrete Cournot-Nash equilibrium and prove its existence.
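A discrete Cournot-Nash equilibrium of the kind described (no firm can profit by deviating to another grid quantity) can be found by brute force on small instances; the linear demand and cost parameters below are assumed for illustration, not taken from the paper:

```python
import itertools

def profit(q_i, q_j, a=24, c=3):
    """Linear inverse demand p = a - (q_i + q_j), constant unit cost c."""
    return (a - q_i - q_j - c) * q_i

def discrete_cournot_equilibria(quantities=range(0, 13)):
    """Brute-force search for pure Nash equilibria on a discrete quantity
    grid: neither firm can gain by deviating to another grid point."""
    eq = []
    for q1, q2 in itertools.product(quantities, repeat=2):
        best1 = all(profit(q1, q2) >= profit(d, q2) for d in quantities)
        best2 = all(profit(q2, q1) >= profit(d, q1) for d in quantities)
        if best1 and best2:
            eq.append((q1, q2))
    return eq

print(discrete_cournot_equilibria())
```

With these parameters the continuous equilibrium (a - c)/3 = 7 survives on the integer grid, alongside tie-induced asymmetric equilibria; the paper's fixed point theorem guarantees existence without such enumeration.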
Three-dimensional point-cloud room model in room acoustics simulations
DEFF Research Database (Denmark)
Markovic, Milos; Olesen, Søren Krarup; Hammershøi, Dorte
2013-01-01
acquisition and its representation with a 3D point-cloud model, as well as utilization of such a model for the room acoustics simulations. A room is scanned with a commercially available input device (Kinect for Xbox360) in two different ways; the first one involves the device placed in the middle of the room...... and rotated around the vertical axis while for the second one the device is moved within the room. Benefits of both approaches were analyzed. The device's depth sensor provides a set of points in a three-dimensional coordinate system which represents scanned surfaces of the room interior. These data are used...... to build a 3D point-cloud model of the room. Several models are created to meet requirements of different room acoustics simulation algorithms: plane fitting and uniform voxel grid for geometric methods and triangulation mesh for the numerical methods. Advantages of the proposed method over the traditional...
A travel time forecasting model based on change-point detection method
LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei
2017-06-01
Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model for urban road traffic sensor data is proposed based on a change-point detection method. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the large sequence of travel time data items into several patterns; then a travel time forecasting model is established based on the autoregressive integrated moving average (ARIMA) model. By computer simulation, different control parameters are chosen for adaptive change-point search over the travel time series, which is divided into several sections of similar state. A linear weight function is then used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
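The differencing-plus-thresholding idea can be sketched in a few lines; the MAD-based scale and the factor k are assumptions of this sketch, not the paper's calibrated algorithm:

```python
import numpy as np

def change_points(series, k=3.0):
    """Flag indices where the first-order difference of a travel-time series
    deviates from its median by more than k robust standard deviations
    (median absolute deviation scaled to sigma)."""
    d = np.diff(series)
    med = np.median(d)
    sigma = 1.4826 * np.median(np.abs(d - med))   # robust scale estimate
    return [i + 1 for i, v in enumerate(d) if abs(v - med) > k * max(sigma, 1e-9)]
```

Each flagged index starts a new section of similar state, to which an ARIMA model can then be fitted.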
Bayesian Modeling for Identification and Estimation of the Learning Effects of Pointing Tasks
Kyo, Koki
Recently, in the field of human-computer interaction, a model containing the systematic factor and human factor has been proposed to evaluate the performance of the input devices of a computer. This is called the SH-model. In this paper, in order to extend the range of application of the SH-model, we propose some new models based on the Box-Cox transformation and apply a Bayesian modeling method for identification and estimation of the learning effects of pointing tasks. We consider the parameters describing the learning effect as random variables and introduce smoothness priors for them. Illustrative results show that the newly-proposed models work well.
Model for a Ferromagnetic Quantum Critical Point in a 1D Kondo Lattice
Komijani, Yashar; Coleman, Piers
2018-04-01
Motivated by recent experiments, we study a quasi-one-dimensional model of a Kondo lattice with ferromagnetic coupling between the spins. Using bosonization and dynamical large-N techniques, we establish the presence of a Fermi liquid and a magnetic phase separated by a local quantum critical point, governed by the Kondo breakdown picture. Thermodynamic properties are studied and a gapless charged mode at the quantum critical point is highlighted.
Sigma models in the presence of dynamical point-like defects
International Nuclear Information System (INIS)
Doikou, Anastasia; Karaiskos, Nikos
2013-01-01
Point-like Liouville integrable dynamical defects are introduced in the context of the Landau–Lifshitz and Principal Chiral (Faddeev–Reshetikhin) models. Based primarily on the underlying quadratic algebra we identify the first local integrals of motion, the associated Lax pairs as well as the relevant sewing conditions around the defect point. The involution of the integrals of motion is shown taking into account the sewing conditions.
An inversion-relaxation approach for sampling stationary points of spin model Hamiltonians
International Nuclear Information System (INIS)
Hughes, Ciaran; Mehta, Dhagash; Wales, David J.
2014-01-01
Sampling the stationary points of a complicated potential energy landscape is a challenging problem. Here, we introduce a sampling method based on relaxation from stationary points of the highest index of the Hessian matrix. We illustrate how this approach can find all the stationary points for potentials or Hamiltonians bounded from above, which includes a large class of important spin models, and we show that it is far more efficient than previous methods. For potentials unbounded from above, the relaxation part of the method is still efficient in finding minima and transition states, which are usually the primary focus of attention for atomistic systems.
Elastic-plastic adhesive contact of rough surfaces using n-point asperity model
International Nuclear Information System (INIS)
Sahoo, Prasanta; Mitra, Anirban; Saha, Kashinath
2009-01-01
This study considers an analysis of the elastic-plastic contact of rough surfaces in the presence of adhesion using an n-point asperity model. The multiple-point asperity model, developed by Hariri et al (2006 Trans ASME: J. Tribol. 128 505-14), is integrated into the elastic-plastic adhesive contact model developed by Roy Chowdhury and Ghosh (1994 Wear 174 9-19). This n-point asperity model differs from the conventional Greenwood and Williamson model (1966 Proc. R. Soc. Lond. A 295 300-19) in considering the asperities not as fixed entities but as entities that change through the contact process, and hence it represents the asperities in a more realistic manner. The adhesion index and plasticity index newly defined for the n-point asperity model are used to consider the different conditions that arise from varying load, surface, and material parameters. A comparison between the load-separation behaviour of the new model and the conventional one shows a significant difference between the two, depending on the combination of mean separation, adhesion index, and plasticity index.
A Labeling Model Based on the Region of Movability for Point-Feature Label Placement
Directory of Open Access Journals (Sweden)
Lin Li
2016-09-01
Full Text Available Automatic point-feature label placement (PFLP) is a fundamental task in map visualization. As the dominant solutions to the PFLP problem, fixed-position and slider models have been widely studied in previous research. However, the candidate labels generated with these models are restricted to certain fixed positions or to a specified track line for sliding, so the whole space surrounding a point feature is not fully exploited for labeling. Hence, this paper proposes a novel labeling model based on the region of movability, a concept from plane collision detection theory. The model defines a complete conflict-free search space for label placement. On the premise of no conflict with point, line, and area features, the proposed model uses the zone surrounding the point feature to generate candidate label positions. Combined with a heuristic search method, the model achieves high-quality label placement. In addition, the flexibility of the proposed model enables the placement of arbitrarily shaped labels.
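For contrast with the region-of-movability model, the fixed-position baseline it generalizes can be sketched as a greedy four-position placement; the label size and the candidate order are illustrative assumptions of this sketch:

```python
def rects_overlap(a, b):
    """Axis-aligned rectangle overlap test; rectangles are (x0, y0, x1, y1)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def place_labels(points, w, h):
    """Greedy labelling with the classic four-position model: each point's
    label candidates sit at its four corners, tried in a fixed order, and the
    first conflict-free candidate is kept."""
    placed, result = [], {}
    for (px, py) in points:
        for dx, dy in ((0, 0), (-w, 0), (0, -h), (-w, -h)):
            cand = (px + dx, py + dy, px + dx + w, py + dy + h)
            if not any(rects_overlap(cand, r) for r in placed):
                placed.append(cand)
                result[(px, py)] = cand
                break
    return result
```

The region-of-movability model replaces these four discrete candidates with a complete conflict-free region around each point.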
Federal Laboratory Consortium — The Mark I Test Facility is a state-of-the-art space environment simulation test chamber for full-scale space systems testing. A $1.5M upgrade in fiscal year...
Mark Raidpere's portrait photographs in Kiel
1999-01-01
At the 2nd Ars Baltica photo triennial 'Can You Hear Me?', on view at the Kiel City Gallery, Estonia is represented by Mark Raidpere with the series 'Portraits 1998'. The Estonian curator of the exhibition is Anu Liivak; the catalogue text was written by Anders Härm. Among the better-known participants in the triennial is Wolfgang Tillmans.
Yoshikawa, M; Yoshinaga, K; Imamura, Y; Hayashi, T; Osako, T; Takahashi, K; Kaneko, M; Fujisawa, M; Kamidono, S
2016-09-01
The organ donation rate in Japan is much lower than that in other developed countries for several reasons. An advanced educational program for in-hospital procurement coordinators is a possible solution. We introduced a Transplant Procurement Management (TPM) educational program in Hyogo Prefecture, Japan. Ten healthcare professionals from Hyogo Prefecture participated in the Advanced International TPM course to educate themselves on TPM and held 2 TPM Model Organ Procurement Training Workshops in Hyogo Prefecture for in-hospital procurement coordinators. Furthermore, we held 2 workshops outside Hyogo Prefecture and at the same time undertook a pre-workshop questionnaire survey to evaluate ability and motivation with respect to organ donation. To evaluate the effectiveness of the workshops, we conducted questionnaire surveys immediately after the workshops and 3 months later. The results of the pre-workshop survey revealed that in-hospital procurement coordinators lacked knowledge of the entire organ donation process, the current status of organ donation in Japan, and the definition of brain death. Moreover, they did not completely understand the meaning of "organ donation." The results of the post-workshop questionnaire survey showed that the educational program was effective in improving knowledge and skills related to organ donation and motivated behavioral changes among the participants. The survey results showed that our TPM model educational program offered sufficient knowledge and skills to increase organ donation in Hyogo Prefecture. We will continue this program and make an effort to further contribute to Japanese organ donation activities. Copyright © 2016 Elsevier Inc. All rights reserved.
DEFF Research Database (Denmark)
2015-01-01
Poster-based outdoor exhibition at the music festival Copenhell, 18-20 June 2015. A smaller version of the exhibition Marks of Metal - Logodesign og visualitet i heavy metal (Marks of Metal - Logo Design and Visuality in Heavy Metal). Produced in collaboration with Mediemuseet.
International Nuclear Information System (INIS)
Fenneker; Steinmetz; Toebbe
1986-07-01
The report contains the material correlations and models used in the fuel pin design code IAMBUS for the irradiation behavior of PuO2-UO2 fissile materials and UO2 fertile materials of the SNR-300 Mark-II reload and the KNK II third core. They are applicable for pellet densities of more than 90% of the theoretical density. The presented models of the fuel behavior and the applied material correlations have been derived either from single experiments or from the comparison of theoretically predicted integral fuel behavior with the results of fuel pin irradiation experiments. The material correlations have been examined and extended within the framework of the INTERATOM/KWU and INTERATOM/KfK collaborations. French and British results were included when available from the European fast reactor knowledge exchange.
Groupe de protection des biens
2000-01-01
As part of the campaign to protect CERN property and for insurance reasons, all computer hardware belonging to the Organization must be marked with the words 'PROPRIETE CERN'.IT Division has recently introduced a new marking system that is both economical and easy to use. From now on all desktop hardware (PCs, Macintoshes, printers) issued by IT Division with a value equal to or exceeding 500 CHF will be marked using this new system.For equipment that is already installed but not yet marked, including UNIX workstations and X terminals, IT Division's Desktop Support Service offers the following services free of charge:Equipment-marking wherever the Service is called out to perform other work (please submit all work requests to the IT Helpdesk on 78888 or helpdesk@cern.ch; for unavoidable operational reasons, the Desktop Support Service will only respond to marking requests when these coincide with requests for other work such as repairs, system upgrades, etc.);Training of personnel designated by Division Leade...
International Nuclear Information System (INIS)
Dan, Ho Jin; Lee, Joon Sik
2016-01-01
Understanding water vaporization is the first step in anticipating the conversion of urea into ammonia in the exhaust stream. As aqueous urea is a mixture in which urea acts as a non-volatile solute, its colligative properties should be considered during water vaporization. The elevation of the boiling point of urea-water solution is measured with respect to urea mole fraction. With the boiling-point elevation relation, a model for water vaporization is proposed, underlining the correction of the heat of vaporization of water in the urea-water mixture due to the enthalpy of urea dissolution in water. The model is verified by water vaporization experiments as well. Finally, the model is applied to the vaporization of water from aqueous urea droplets. It is shown that urea decomposition can begin before water evaporation finishes due to the boiling-point elevation.
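In the ideal-dilute limit the measured elevation reduces to the textbook colligative relation ΔTb = Kb·m. The sketch below uses the known ebullioscopic constant of water (Kb ≈ 0.512 K·kg/mol) and is only a baseline, since the paper measures the elevation against mole fraction rather than assuming ideality:

```python
MOLAR_MASS_UREA = 60.06   # g/mol
KB_WATER = 0.512          # K*kg/mol, ebullioscopic constant of water

def boiling_point_elevation(urea_mass_frac):
    """Ideal-dilute estimate of the boiling-point elevation (K) of a
    urea-water solution: dTb = Kb * molality, with molality computed
    per kg of solvent (water)."""
    moles_urea = 1000.0 * urea_mass_frac / MOLAR_MASS_UREA  # per kg of solution
    kg_water = 1.0 - urea_mass_frac
    molality = moles_urea / kg_water                        # mol urea / kg water
    return KB_WATER * molality
```

For a 32.5 wt% urea solution (typical of automotive urea dosing) this gives an elevation of roughly 4 K.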
Synthesis of Numerical Methods for Modeling Wave Energy Converter-Point Absorbers: Preprint
Energy Technology Data Exchange (ETDEWEB)
Li, Y.; Yu, Y. H.
2012-05-01
During the past few decades, wave energy has received significant attention among all forms of ocean energy. Industry has proposed hundreds of prototypes, such as oscillating water columns, point absorbers, overtopping systems, and bottom-hinged systems. In particular, many researchers have focused on modeling the floating-point absorber as the technology to extract wave energy. Several modeling methods have been used, such as the analytical method, the boundary-integral-equation method, the Navier-Stokes equations method, and the empirical method. However, no standardized method has been established. To assist the development of wave energy conversion technologies, this report reviews the methods for modeling the floating-point absorber.
Empiric model for mean generation time adjustment factor for classic point kinetics equations
Energy Technology Data Exchange (ETDEWEB)
Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C., E-mail: david.goes@poli.ufrj.br, E-mail: aquilino@lmp.ufrj.br, E-mail: alessandro@con.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear
2017-11-01
Point reactor kinetics equations are the easiest way to observe the time behavior of neutron production in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to revisit the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that modifies the point reactor kinetics equations to match the real scenario. (author)
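The classic equations that the adjustment factor modifies can be integrated with a simple explicit finite-difference scheme. The sketch below uses one delayed-neutron group with illustrative constants (β, Λ, λ are not the paper's values):

```python
def point_kinetics(rho, beta=0.0065, Lam=1.0e-4, lam=0.08,
                   n0=1.0, dt=1.0e-5, t_end=0.1):
    """Explicit Euler integration of the one-delayed-group point kinetics:
        dn/dt = (rho - beta)/Lam * n + lam * C
        dC/dt = beta/Lam * n - lam * C
    starting from the delayed-precursor equilibrium for n0."""
    n = n0
    C = beta * n0 / (Lam * lam)          # equilibrium precursor concentration
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lam * n + lam * C) * dt
        dC = (beta / Lam * n - lam * C) * dt
        n += dn
        C += dC
    return n
```

At zero reactivity the neutron density stays at its initial value; a small positive reactivity step produces the expected prompt jump.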
Improving the Pattern Reproducibility of Multiple-Point-Based Prior Models Using Frequency Matching
DEFF Research Database (Denmark)
Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus
2014-01-01
Some multiple-point-based sampling algorithms, such as the snesim algorithm, rely on sequential simulation. The conditional probability distributions that are used for the simulation are based on statistics of multiple-point data events obtained from a training image. During the simulation, data...... events with zero probability in the training image statistics may occur. This is handled by pruning the set of conditioning data until an event with non-zero probability is found. The resulting probability distribution sampled by such algorithms is a pruned mixture model. The pruning strategy leads...... to a probability distribution that lacks some of the information provided by the multiple-point statistics from the training image, which reduces the reproducibility of the training image patterns in the outcome realizations. When pruned mixture models are used as prior models for inverse problems, local re...
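The pruning step can be sketched as a lookup that keeps dropping conditioning data until the (pruned) event has been seen in the training image; the dictionary layout of the training-image statistics is an assumption of this sketch:

```python
def pruned_distribution(stats, event):
    """Look up the facies distribution for a conditioning data event,
    pruning the event (dropping the last datum) until it has a nonzero
    count in the training-image statistics. `stats` maps conditioning
    tuples to {facies: count}; stats[()] is the marginal distribution."""
    ev = tuple(event)
    while ev and ev not in stats:
        ev = ev[:-1]                     # prune conditioning data
    counts = stats[ev] if ev in stats else stats[()]
    total = sum(counts.values())
    return {facies: c / total for facies, c in counts.items()}
```

The returned distribution illustrates the "pruned mixture model": the dropped conditioning data carry information that the sampled distribution no longer uses.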
Tricritical point in quantum phase transitions of the Coleman–Weinberg model at Higgs mass
International Nuclear Information System (INIS)
Fiolhais, Miguel C.N.; Kleinert, Hagen
2013-01-01
The tricritical point, which separates first and second order phase transitions in three-dimensional superconductors, is studied in the four-dimensional Coleman–Weinberg model, and the similarities as well as the differences with respect to the three-dimensional result are exhibited. The position of the tricritical point in the Coleman–Weinberg model is derived and found to be in agreement with the Thomas–Fermi approximation in the three-dimensional Ginzburg–Landau theory. From this we deduce a special role of the tricritical point for the Standard Model Higgs sector in the scope of the latest experimental results, which suggests the unexpected relevance of tricritical behavior in the electroweak interactions.
Supervised Outlier Detection in Large-Scale Mvs Point Clouds for 3d City Modeling Applications
Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.
2018-05-01
We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are the varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results: although most outliers can be identified correctly (high recall), many inliers are erroneously removed too (low precision). This aggravates 3D object reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roof, ground, vegetation, fields, and water). The performance of learned filtering is evaluated on several large SfM point clouds of cities. We find that the results confirm our underlying assumption that discriminatively learning inlier-outlier distributions does improve precision over global heuristics, by up to ≈ 12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈ 10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
A 3D Printing Model Watermarking Algorithm Based on 3D Slicing and Feature Points
Directory of Open Access Journals (Sweden)
Giao N. Pham
2018-02-01
Full Text Available With the increase of three-dimensional (3D) printing applications in many areas of life, a large amount of 3D printing data is copied, shared, and used several times without any permission from the original providers. Therefore, copyright protection and ownership identification for 3D printing data in communications or commercial transactions are practical issues. This paper presents a novel watermarking algorithm for 3D printing models based on embedding watermark data into the feature points of a 3D printing model. Feature points are determined and computed by the 3D slicing process along the Z axis of the model. A watermark bit is embedded into a feature point by changing the vector length of the feature point in OXY space with respect to a reference length; the x and y coordinates of the feature point are then changed according to the watermarked vector length. Experimental results verified that the embedded watermark is invisible and robust to geometric attacks such as rotation, scaling, and translation, and that the proposed algorithm achieves much higher accuracy than previous methods.
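The vector-length embedding can be sketched as parity quantization of a feature point's length in the OXY plane; the reference length `ref` and the even/odd parity rule are assumptions of this sketch, not the paper's exact encoding:

```python
import math

def embed_bit(x, y, bit, ref=0.01):
    """Embed one watermark bit into a feature point by snapping its XY
    vector length to a multiple of `ref` whose parity encodes the bit,
    then rescaling x and y accordingly."""
    r = math.hypot(x, y)
    q = int(round(r / ref))
    if q % 2 != bit:
        q += 1                    # move to the neighbouring step with the right parity
    s = (q * ref) / r
    return x * s, y * s

def extract_bit(x, y, ref=0.01):
    """Recover the bit from the parity of the quantized XY vector length."""
    return int(round(math.hypot(x, y) / ref)) % 2
```

Because rotation about the Z axis preserves the XY vector length, the embedded bit survives that class of geometric attack by construction.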
Pseudo-critical point in anomalous phase diagrams of simple plasma models
International Nuclear Information System (INIS)
Chigvintsev, A Yu; Iosilevskiy, I L; Noginova, L Yu
2016-01-01
Anomalous phase diagrams in a subclass of simplified ("non-associative") Coulomb models are under discussion. The common feature of this subclass is the absence, by definition, of individual correlations for charges of opposite sign. Examples are the modified OCP of ions on a uniformly compressible background of an ideal Fermi gas of electrons, OCP(∼), or a superposition of two non-ideal OCP(∼) models of ions and electrons, etc. In contrast to the ordinary OCP model on a non-compressible ("rigid") background, OCP(#), two new phase transitions with an upper critical point, boiling and sublimation, appear in the OCP(∼) phase diagram in addition to the well-known Wigner crystallization. The point is that the topology of the phase diagram in OCP(∼) becomes anomalous at high enough values of the ionic charge number Z. Namely, only one unified crystal-fluid phase transition without a critical point exists, as a continuous superposition of melting and sublimation in OCP(∼), in the interval Z1 < Z < Z2. Most remarkable is the appearance of pseudo-critical points at both boundary values Z = Z1 ≈ 35.5 and Z = Z2 ≈ 40.0. It should be stressed that the critical isotherm is exactly cubic at both of these pseudo-critical points. In this study we have improved our previous calculations and utilized a more complicated equation of state for the model components, provided by Chabrier and Potekhin (1998 Phys. Rev. E 58 4941). (paper)
Directory of Open Access Journals (Sweden)
Lei Jia
Full Text Available The thermostability of protein point mutations is a common concern in protein engineering. An application that predicts the thermostability of mutants can help guide the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that contains the experimentally measured thermostability of thousands of protein mutants. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy changes calculated with Rosetta, structural information on the point mutations, and amino acid physical properties were used to build thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression were used for building the prediction models. Binary and ternary classification as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods are discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to the accuracy of the prediction models.
Energy Technology Data Exchange (ETDEWEB)
Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); CAS Center for Excellence in Tibetan Plateau Earth Sciences, Beijing, 100101 (China); Badal, José, E-mail: badal@unizar.es [Physics of the Earth, Sciences B, University of Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
Maillard, Philippe; Gomes, Marília F.
2016-06-01
This article presents an original algorithm created to detect and count trees in orchards using very high resolution images. The algorithm is based on an adaptation of the "template matching" image processing approach, in which the template is based on a "geometrical-optical" model created from a series of parameters, such as illumination angles, maximum and ambient radiance, and tree size specifications. The algorithm is tested on four images from different regions of the world and different crop types, all obtained from the GoogleEarth application. Results show that the algorithm is very efficient at detecting and counting trees as long as their spectral and spatial characteristics are relatively constant. For walnut, mango and orange trees, the overall accuracy was clearly above 90%. However, the overall success rate for apple trees fell under 75%. It appears that the openness of the apple tree crown is most probably responsible for this poorer result. The algorithm is fully explained with a step-by-step description. At this stage, the algorithm still requires quite a bit of user interaction. The automatic determination of most of the required parameters is under development.
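The template-matching core described above can be sketched in a few lines. The brute-force normalized cross-correlation below is an illustrative stand-in (synthetic image, Hanning-window "tree" template), not the authors' geometrical-optical implementation:

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Slide a zero-mean template over the image and return the NCC score
    map. Brute force for clarity; real detectors would use FFT correlation."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            w = image[i:i + th, j:j + tw] - image[i:i + th, j:j + tw].mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            scores[i, j] = (w * t).sum() / denom if denom > 0 else 0.0
    return scores

# A synthetic "tree" template pasted into a flat image is recovered at its
# true location by the score-map peak (score 1.0 for a perfect match).
template = np.outer(np.hanning(5), np.hanning(5))  # stand-in for the model template
image = np.zeros((20, 20))
image[7:12, 9:14] += template
scores = normalized_cross_correlation(image, template)
peak = np.unravel_index(np.argmax(scores), scores.shape)
print(int(peak[0]), int(peak[1]))  # 7 9
```

Thresholding the score map and suppressing nearby maxima would then give the per-tree detections and counts.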
Reduction of bias in neutron multiplicity assay using a weighted point model
Energy Technology Data Exchange (ETDEWEB)
Geist, W. H. (William H.); Krick, M. S. (Merlyn S.); Mayo, D. R. (Douglas R.)
2004-01-01
Accurate assay of most common plutonium samples was the development goal for the nondestructive assay technique of neutron multiplicity counting. Over the past 20 years the technique has been proven for relatively pure oxides and small metal items. Unfortunately, the technique results in large biases when assaying large metal items. Limiting assumptions in the point model used to derive the multiplicity equations, such as uniform multiplication, cause these biases for large dense items. A weighted point model has been developed to overcome some of the limitations in the standard point model. Weighting factors are determined from Monte Carlo calculations using the MCNPX code. Monte Carlo calculations give the dependence of the weighting factors on sample mass and geometry, and simulated assays using Monte Carlo give the theoretical accuracy of the weighted-point-model assay. Measured multiplicity data evaluated with both the standard and weighted point models are compared to reference values to give the experimental accuracy of the assay. Initial results show significant promise for the weighted point model in reducing or eliminating biases in the neutron multiplicity assay of metal items. The negative biases observed in the assay of plutonium metal samples are caused by variations in the neutron multiplication for neutrons originating in various locations in the sample. The bias depends on the mass and shape of the sample and on the amount and energy distribution of the (α,n) neutrons in the sample. When the standard point model is used, this variable-multiplication bias overestimates the multiplication and alpha values of the sample, and underestimates the plutonium mass. The weighted point model potentially can provide assay accuracy of ≈2% (1 σ) for cylindrical plutonium metal samples < 4 kg with α < 1 without knowing the exact shape of the samples, provided that the (α,n) source is uniformly distributed throughout the
DEFF Research Database (Denmark)
Rakhshan, Mohsen; Vafamand, Navid; Khooban, Mohammad Hassan
2018-01-01
This paper introduces a polynomial fuzzy model (PFM)-based maximum power point tracking (MPPT) control approach to increase the performance and efficiency of solar photovoltaic (PV) electricity generation. The proposed method relies on polynomial fuzzy modeling, a polynomial parallel......, a direct maximum power (DMP)-based control structure is considered for MPPT. Using the PFM representation, the DMP-based control structure is formulated in terms of SOS conditions. Unlike the conventional approaches, the proposed approach does not require exploring the maximum power operational point...
International Nuclear Information System (INIS)
Saito, Toki; Nakajima, Yoshikazu; Sugita, Naohiko; Mitsuishi, Mamoru; Hashizume, Hiroyuki; Kuramoto, Kouichi; Nakashima, Yosio
2011-01-01
Statistical deformable model based two-dimensional/three-dimensional (2-D/3-D) registration is a promising method for estimating the position and shape of patient bone in the surgical space. Since its accuracy depends on the statistical model capacity, we propose a method for accurately generating a statistical bone model from a CT volume. Our method employs the Sphere-Attribute-Image (SAI) and improves the accuracy of the corresponding point search in statistical model generation. At first, target bone surfaces are extracted as SAIs from the CT volume. Then the textures of the SAIs are classified into regions using the maximally stable extremal regions (MSER) method. Next, corresponding regions are determined using normalized cross-correlation (NCC). Finally, corresponding points in each corresponding region are determined using NCC. We applied our method to femur bone models, and it performed well in the experiments. (author)
Hierarchical model generation for architecture reconstruction using laser-scanned point clouds
Ning, Xiaojuan; Wang, Yinghui; Zhang, Xiaopeng
2014-06-01
Architecture reconstruction using terrestrial laser scanner is a prevalent and challenging research topic. We introduce an automatic, hierarchical architecture generation framework to produce full geometry of architecture based on a novel combination of facade structures detection, detailed windows propagation, and hierarchical model consolidation. Our method highlights the generation of geometric models automatically fitting the design information of the architecture from sparse, incomplete, and noisy point clouds. First, the planar regions detected in raw point clouds are interpreted as three-dimensional clusters. Then, the boundary of each region extracted by projecting the points into its corresponding two-dimensional plane is classified to obtain detailed shape structure elements (e.g., windows and doors). Finally, a polyhedron model is generated by calculating the proposed local structure model, consolidated structure model, and detailed window model. Experiments on modeling the scanned real-life buildings demonstrate the advantages of our method, in which the reconstructed models not only correspond to the information of architectural design accurately, but also satisfy the requirements for visualization and analysis.
Directory of Open Access Journals (Sweden)
Nengcheng Chen
2017-02-01
Full Text Available Due to the incomprehensive and inconsistent description of spatial and temporal information for city data observed by sensors in various fields, it is a great challenge to share the massive, multi-source and heterogeneous interdisciplinary instant point observation data resources. In this paper, a spatio-temporal enhanced metadata model for point observation data sharing was proposed. The proposed Data Meta-Model (DMM) focused on the spatio-temporal characteristics and formulated a ten-tuple information description structure to provide a unified and spatio-temporal enhanced description of the point observation data. To verify the feasibility of point observation data sharing based on DMM, a prototype system was established, and the performance improvement of the Sensor Observation Service (SOS) for the instant access and insertion of point observation data was realized through the proposed MongoSOS, a Not Only SQL (NoSQL) SOS based on the MongoDB database with the capability of distributed storage. For example, the response time of the access and insertion of navigation and positioning data can be kept at the millisecond level. Case studies were conducted, including gas concentration monitoring for gas leak emergency response and smart city public vehicle monitoring based on the BeiDou Navigation Satellite System (BDS) used for recording dynamic observation information. The results demonstrated the versatility and extensibility of the DMM, and the spatio-temporal enhanced sharing of interdisciplinary instant point observations in smart cities.
Beauchamp, Guy
2008-10-23
This study explores via structural clues the influence of weak intermolecular hydrogen-halogen bonds on the boiling point of halogenated ethanes. The plot of boiling points of 86 halogenated ethanes versus the molar refraction (linked to polarizability) reveals a series of straight lines, each corresponding to one of nine possible arrangements of hydrogen and halogen atoms on the two-carbon skeleton. A multiple linear regression model of the boiling points could be designed based on molar refraction and subgroup structure as independent variables (R² = 0.995, standard error of the boiling point 4.2 °C). The model is discussed in view of the fact that molar refraction can account for approximately 83.0% of the observed variation in boiling point, while 16.5% could be ascribed to weak C-X...H-C intermolecular interactions. The difference in the observed boiling point of molecules having similar molar refraction values but differing in hydrogen-halogen intermolecular bonds can reach as much as 90 °C.
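The regression structure described above — one common molar-refraction slope plus a per-subgroup intercept — can be sketched with dummy variables. The data values below are invented for illustration, not the paper's 86 measured compounds:

```python
import numpy as np

# Hypothetical data: (molar refraction, subgroup index, boiling point in °C).
# The numbers are invented for illustration; the paper fits 86 real compounds.
data = [
    (15.0, 0, 3.0), (20.0, 0, 28.0), (25.0, 0, 53.0),
    (15.0, 1, 23.0), (20.0, 1, 48.0), (25.0, 1, 73.0),
]
n_groups = 2
X = np.zeros((len(data), 1 + n_groups))
y = np.zeros(len(data))
for row, (mr, group, bp) in enumerate(data):
    X[row, 0] = mr           # common slope in molar refraction
    X[row, 1 + group] = 1.0  # per-subgroup intercept: one line per H/X arrangement
    y[row] = bp
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(float(coef[0]), 6))                # 5.0 (shared slope)
print([round(float(c), 6) for c in coef[1:]])  # [-72.0, -52.0] (intercepts)
```

The fitted intercept differences play the role of the subgroup-dependent boiling point offsets ascribed in the paper to the weak C-X...H-C interactions.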
International Nuclear Information System (INIS)
Heinrich, S.
2006-01-01
The nuclear fission process is a very complex phenomenon and, even nowadays, no realistic model describing the overall process is available. The work presented here deals with a theoretical description of fission fragment distributions in mass, charge, energy and deformation. We have reconsidered and updated the B. D. Wilkins scission point model. Our purpose was to test whether this statistical model, applied at the scission point and supplied with new results of modern microscopic calculations, allows a quantitative description of the fission fragment distributions. We calculate the surface energy available at the scission point as a function of the fragment deformations. This surface is obtained from a Hartree-Fock-Bogoliubov microscopic calculation, which guarantees a realistic description of the dependence of the potential on the deformation of each fragment. The statistical balance is described by the level densities of the fragments. We have tried to avoid as much as possible the input of empirical parameters in the model. Our only parameter, the distance between the fragments at the scission point, is discussed by comparison with scission configurations obtained from fully dynamical microscopic calculations. The comparison between our results and experimental data is very satisfying and allows us to discuss the successes and limitations of our approach. We finally propose ideas to improve the model, in particular by applying dynamical corrections. (author)
De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan
2016-11-01
A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We use five different techniques for change point detection on prototypical temporal networks, both empirical and synthetic. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to the precision and recall of the detected change points, we find that the method based on a degree-corrected SBM has better recall properties than the other dedicated methods, especially for sparse networks and smaller sliding time window widths.
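As a minimal illustration of change point detection on a temporal-network summary statistic (a toy, not the GHRG/SBM machinery of the paper), a two-segment least-squares split recovers a mean shift in edge density:

```python
import numpy as np

def best_change_point(series):
    """Return the split index that maximizes the drop in squared error when
    modeling the series as two constant segments instead of one. A toy
    stand-in for likelihood-ratio change point tests."""
    series = np.asarray(series, dtype=float)
    total_sse = ((series - series.mean()) ** 2).sum()
    best_t, best_gain = None, -np.inf
    for t in range(1, len(series)):
        left, right = series[:t], series[t:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        gain = total_sse - sse
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t

# Edge density of a synthetic temporal network that densifies at t = 30.
density = np.r_[np.full(30, 0.10), np.full(30, 0.25)]
print(best_change_point(density))  # 30
```

The GHRG/SBM approaches replace the constant-mean segment model with a fitted network model and compare segmentations by likelihood, but the split-and-score structure is the same.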
Two- and three-point functions in the D=1 matrix model
International Nuclear Information System (INIS)
Ben-Menahem, S.
1991-01-01
The critical behavior of the genus-zero two-point function in the D=1 matrix model is carefully analyzed for arbitrary embedding-space momentum. Kostov's result is recovered for momenta below a certain value P₀ (which is 1/√α′ in the continuum language), with a non-universal form factor which is expressed simply in terms of the critical fermion trajectory. For momenta above P₀, the Kostov scaling term is found to be subdominant. We then extend the large-N WKB treatment to calculate the genus-zero three-point function, and elucidate its critical behavior when all momenta are below P₀. The resulting universal scaling behavior, as well as the non-universal form factor for the three-point function, are related to the two-point functions of the individual external momenta, through the factorization familiar from continuum conformal field theories. (orig.)
Ferrimagnetism and compensation points in a decorated 3D Ising model
International Nuclear Information System (INIS)
Oitmaa, J.; Zheng, W.
2003-01-01
Full text: Ferrimagnets are materials where ions on different sublattices have opposing magnetic moments which do not exactly cancel even at zero temperature. An intriguing possibility then is the existence of a compensation point, below the Curie temperature, where the net moment changes sign. This has obvious technological significance. Most theoretical studies of such systems have used mean-field approaches, making it difficult to distinguish real properties of the model from artefacts of the approximation. For this reason a number of simpler models have been proposed, where treatments beyond mean-field theory are possible. Of particular interest are decorated systems, which can be mapped exactly onto simpler models and, in this way, either solved exactly or to a high degree of numerical precision. We use this approach to study a ferrimagnetic Ising system with spins 1/2 at the sites of a simple cubic lattice and spins S=1 or 3/2 located on the bonds. Our results, which are exact to high numerical precision, show a number of surprising and interesting features: for S=1, the possibility of zero, one or two compensation points, re-entrant behaviour, and up to three critical points; for S=3/2, always a simple critical point and zero or one compensation point.
Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.
2017-07-01
Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods, where some trajectory is transformed to an optimal one, are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
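A minimal NEB iteration of the kind described above can be sketched as follows: spring forces act along the local tangent of the band, the potential gradient acts only transverse to it, and the endpoints stay fixed. The quadratic "optical path" surface and all parameter values are illustrative assumptions, not the ionospheric model:

```python
import numpy as np

def neb_relax(v_grad, start, end, n_images=9, k=1.0, step=0.05, iters=2000):
    """Minimal Nudged Elastic Band: relax a chain of images between fixed
    endpoints. Springs act along the local tangent; the potential gradient
    acts only transverse to it. Illustrative, not the authors' code."""
    path = np.linspace(start, end, n_images)
    path[1:-1, 1] += 0.5  # bow the initial guess away from the optimum
    for _ in range(iters):
        for i in range(1, n_images - 1):
            tau = path[i + 1] - path[i - 1]
            tau = tau / np.linalg.norm(tau)
            # spring force keeps images evenly spread along the band
            f_spring = k * (np.linalg.norm(path[i + 1] - path[i])
                            - np.linalg.norm(path[i] - path[i - 1])) * tau
            g = v_grad(path[i])
            f_perp = -(g - np.dot(g, tau) * tau)  # transverse descent only
            path[i] = path[i] + step * (f_spring + f_perp)
    return path

# Quadratic valley V(x, y) = y**2: the optimal "ray" is the straight line y = 0.
grad = lambda p: np.array([0.0, 2.0 * p[1]])
path = neb_relax(grad, np.array([-1.0, 0.0]), np.array([1.0, 0.0]))
print(bool(np.abs(path[1:-1, 1]).max() < 1e-3))  # True
```

Because only the transverse force component comes from the potential, the same scheme converges to saddle-point paths as well as minima, which is exactly the high-ray/low-ray distinction drawn in the abstract.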
Evaluation of the Agricultural Non-point Source Pollution in Chongqing Based on PSR Model
Institute of Scientific and Technical Information of China (English)
Hanwen ZHANG; Xinli MOU; Hui XIE; Hong LU; Xingyun YAN
2014-01-01
Through a series of explorations based on the PSR framework model, and with the purpose of building a model framework for an evaluation index system suited to agricultural non-point source pollution in Chongqing, we combine the specific agro-environmental issues present in Chongqing to build an agricultural non-point source pollution assessment index system. We then study agricultural system pressure, agro-environmental status and human response as 3 major categories, and develop an agricultural non-point source pollution evaluation index consisting of 3 criteria indicators and 19 indicators. As can be seen from the analysis, the pressure and response indicators tend to increase and decrease linearly, while the state and composite indicators show large and similar fluctuations, mainly due to the elimination of pressures and impacts, which increases the impact of agricultural non-point source pollution.
Room acoustics modeling using a point-cloud representation of the room geometry
DEFF Research Database (Denmark)
Markovic, Milos; Olesen, Søren Krarup; Hammershøi, Dorte
2013-01-01
Room acoustics modeling is usually based on a room geometry that is parametrically described prior to the sound transmission calculation. This is a highly room-specific task and rather time consuming if a complex geometry is to be described. Here, a run-time generic method for an arbitrary room...... geometry acquisition is presented. The method exploits the depth sensor of the Kinect device, which provides point-based information about a scanned room interior. After post-processing of the Kinect output data, a 3D point-cloud model of the room is obtained. Sound transmission between two selected points...... level of user immersion by a real-time acoustical simulation of dynamic scenes.
Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks
DEFF Research Database (Denmark)
Hagen, Espen; Dahmen, David; Stavrinou, Maria L
2016-01-01
on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network......With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical...... and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely...
SIMPLE MODELS OF THREE COUPLED PT-SYMMETRIC WAVE GUIDES ALLOWING FOR THIRD-ORDER EXCEPTIONAL POINTS
Directory of Open Access Journals (Sweden)
Jan Schnabel
2017-12-01
Full Text Available We study theoretical models of three coupled wave guides with a PT-symmetric distribution of gain and loss. A realistic matrix model is developed in terms of a three-mode expansion. By comparing with a previously postulated matrix model it is shown how parameter ranges with good prospects of finding a third-order exceptional point (EP3 in an experimentally feasible arrangement of semiconductors can be determined. In addition it is demonstrated that continuous distributions of exceptional points, which render the discovery of the EP3 difficult, are not only a feature of extended wave guides but appear also in an idealised model of infinitely thin guides shaped by delta functions.
MODELLING AND SIMULATION OF A NEUROPHYSIOLOGICAL EXPERIMENT BY SPATIO-TEMPORAL POINT PROCESSES
Directory of Open Access Journals (Sweden)
Viktor Beneš
2011-05-01
Full Text Available We present a stochastic model of an experiment monitoring the spiking activity of a place cell of the hippocampus of an experimental animal moving in an arena. A doubly stochastic spatio-temporal point process is used to model and quantify overdispersion. The stochastic intensity is modelled by a Lévy based random field, while the animal path is simplified to a discrete random walk. In a simulation study, first a method suggested previously is used. Then it is shown that a solution of the filtering problem yields the desired inference for the random intensity. Two approaches are suggested and the new one, based on finite point process density, is applied. Using Markov chain Monte Carlo we obtain numerical results from the simulated model. The methodology is discussed.
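A doubly stochastic (Cox) process of the kind used above can be simulated by first drawing a random intensity and then thinning a dominating homogeneous Poisson process (Lewis-Shedler thinning). The sinusoidal intensity below is a toy stand-in for the Lévy based random field of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cox_process(T=10.0, lam_max=5.0):
    """Sample a doubly stochastic (Cox) point process on [0, T]: draw a
    random intensity, then thin a homogeneous Poisson process of rate
    lam_max. Requires lam_max to dominate the intensity everywhere."""
    # random intensity: lambda(t) = a * (1 + sin(t)) with random amplitude a,
    # so lambda(t) <= 2 * a <= 4 < lam_max on the whole interval
    a = rng.uniform(0.5, 2.0)
    lam = lambda t: a * (1.0 + np.sin(t))
    n = rng.poisson(lam_max * T)
    candidates = np.sort(rng.uniform(0.0, T, size=n))
    keep = rng.uniform(0.0, lam_max, size=n) < lam(candidates)
    return candidates[keep]

events = sample_cox_process()
print(events.ndim, bool(np.all(np.diff(events) >= 0)))  # 1 True
```

Averaging over many such realizations mixes Poisson variability with the randomness of the intensity itself, which is what produces the overdispersion the paper quantifies.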
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2011-07-01
We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_n^{XY}(x,y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_n^{XY}(x,y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.
Numerical Solution of Fractional Neutron Point Kinetics Model in Nuclear Reactor
Directory of Open Access Journals (Sweden)
Nowak Tomasz Karol
2014-06-01
Full Text Available This paper presents results concerning solutions of the fractional neutron point kinetics model for a nuclear reactor. The proposed model consists of a bilinear system of fractional and ordinary differential equations. Three methods to solve the model are presented and compared. The first entails application of the discrete Grünwald-Letnikov definition of the fractional derivative in the model. The second involves building an analog scheme in the FOMCON Toolbox in the MATLAB environment. The third is the method proposed by Edwards. The impact of selected parameters on the model's response was examined. The results for typical inputs were discussed and compared.
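The Grünwald-Letnikov definition used in the first method can be sketched directly. The recursive binomial-weight evaluation below is a generic illustration of the discrete GL derivative, not the paper's reactor model:

```python
import math
import numpy as np

def gl_fractional_derivative(f_vals, alpha, h):
    """Grunwald-Letnikov fractional derivative of order alpha on a uniform
    grid with spacing h, evaluated at the last grid point:
    D^alpha f(t) ~ h**(-alpha) * sum_j (-1)**j * C(alpha, j) * f(t - j*h)."""
    n = len(f_vals)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):  # recursive binomial weights (-1)^j C(alpha, j)
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    return (w * f_vals[::-1]).sum() / h ** alpha

# Check on f(t) = t with alpha = 0.5 at t = 1: the exact half-derivative
# is t^(1/2) / Gamma(3/2) = 2 * sqrt(t / pi).
t = np.linspace(0.0, 1.0, 10001)
h = t[1] - t[0]
approx = gl_fractional_derivative(t, 0.5, h)
print(abs(approx - 2.0 / math.sqrt(math.pi)) < 1e-3)  # True
```

In the kinetics model this operator replaces the first-order time derivative of the neutron density, with the weight recursion making each time step a convolution over the full history.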
Soft modes at the critical end point in the chiral effective models
International Nuclear Information System (INIS)
Fujii, Hirotsugu; Ohtani, Munehisa
2004-01-01
At the critical end point in the QCD phase diagram, the scalar, vector and entropy susceptibilities are known to diverge. The dynamic origin of this divergence is identified within the chiral effective models as the softening of a hydrodynamic mode of particle-hole-type motion, which is a consequence of the conservation laws of the baryon number and the energy. (author)
Kernel integration scatter model for parallel beam gamma camera and SPECT point source response
International Nuclear Information System (INIS)
Marinkovic, P.M.
2001-01-01
Scatter correction is a prerequisite for quantitative single photon emission computed tomography (SPECT). In this paper a kernel integration scatter model for the parallel beam gamma camera and SPECT point source response, based on the Klein-Nishina formula, is proposed. This method models the primary photon distribution as well as first Compton scattering. It also includes a correction for multiple scattering by applying a point isotropic single medium buildup factor for the path segment between the point of scatter and the point of detection. Gamma ray attenuation in the object of imaging, based on a known μ-map distribution, is considered too. The intrinsic spatial resolution of the camera is approximated by a simple Gaussian function. The collimator is modeled simply using acceptance angles derived from its physical dimensions: any gamma ray satisfying these angles is passed through the collimator to the crystal. Septal penetration and scatter in the collimator were not included in the model. The method was validated by comparison with a Monte Carlo MCNP-4a numerical phantom simulation and excellent results were obtained. Physical phantom experiments to confirm the method are planned. (author)
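The Klein-Nishina formula at the heart of the model can be evaluated directly. The sketch below computes the Compton differential cross section and is a generic illustration, not the paper's full kernel integration:

```python
import math

R_E = 2.8179403262e-15  # classical electron radius in meters
MEC2 = 0.51099895       # electron rest energy in MeV

def klein_nishina(theta, e_mev):
    """Klein-Nishina differential cross section dSigma/dOmega (m^2/sr)
    for Compton scattering of a photon of energy e_mev at angle theta."""
    eps = e_mev / MEC2
    ratio = 1.0 / (1.0 + eps * (1.0 - math.cos(theta)))  # E'/E after scatter
    return 0.5 * R_E ** 2 * ratio ** 2 * (ratio + 1.0 / ratio - math.sin(theta) ** 2)

# At theta = 0 the formula reduces to the Thomson value r_e^2, and larger
# angles are suppressed at the ~140 keV energy typical of SPECT (99mTc).
print(klein_nishina(0.0, 0.140) > klein_nishina(math.pi / 2, 0.140))  # True
```

Integrating this kernel over scatter sites along each detector line of sight, weighted by attenuation and the buildup factor, is what builds the first-scatter component of the point source response.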
DEFF Research Database (Denmark)
Christensen, Steen; Doherty, John
2008-01-01
super parameters), and that the structural errors caused by using pilot points and super parameters to parameterize the highly heterogeneous log-transmissivity field can be significant. For the test case much effort is put into studying how the calibrated model's ability to make accurate predictions...
Using many pilot points and singular value decomposition in groundwater model calibration
DEFF Research Database (Denmark)
Christensen, Steen; Doherty, John
2008-01-01
over the model area. Singular value decomposition (SVD) of the normal matrix is used to reduce the large number of pilot point parameters to a smaller number of so-called super parameters that can be estimated by nonlinear regression from the available observations. A number of eigenvectors...
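The reduction from many pilot point parameters to a few super parameters via SVD can be sketched as follows. The synthetic Jacobian and all dimensions are assumptions for illustration, not the groundwater model of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Jacobian of 40 observations w.r.t. 200 pilot point parameters; built here
# from a few dominant directions plus tiny noise as a stand-in for a model.
n_obs, n_par, n_super = 40, 200, 5
J = rng.normal(size=(n_obs, n_super)) @ rng.normal(size=(n_super, n_par)) \
    + 1e-6 * rng.normal(size=(n_obs, n_par))

# SVD of the normal matrix J^T J; keep the leading eigenvectors.
U, s, Vt = np.linalg.svd(J.T @ J)
V_k = U[:, :n_super]           # columns span the estimable parameter subspace

# Super parameters: coordinates of the pilot point vector in that subspace.
p_true = rng.normal(size=n_par)
super_params = V_k.T @ p_true  # 5 numbers estimated instead of 200
p_back = V_k @ super_params    # projection back to pilot point space

# Predictions of the (linearized) model are nearly unchanged by the reduction.
print(bool(np.allclose(J @ p_back, J @ p_true, atol=1e-3)))  # True
```

Nonlinear regression then adjusts only the few super parameters, which is what makes calibration with hundreds of pilot points tractable.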
Assessing accuracy of point fire intervals across landscapes with simulation modelling
Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall
2007-01-01
We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...
Evaluating Change in Behavioral Preferences: Multidimensional Scaling Single-Ideal Point Model
Ding, Cody
2016-01-01
The purpose of the article is to propose a multidimensional scaling single-ideal point model as a method to evaluate changes in individuals' preferences under the explicit methodological framework of behavioral preference assessment. One example is used to illustrate the approach for a clear idea of what this approach can accomplish.
Implementation of the critical points model in a SFM-FDTD code working in oblique incidence
Energy Technology Data Exchange (ETDEWEB)
Hamidi, M; Belkhir, A; Lamrous, O [Laboratoire de Physique et Chimie Quantique, Universite Mouloud Mammeri, Tizi-Ouzou (Algeria); Baida, F I, E-mail: omarlamrous@mail.ummto.dz [Departement d' Optique P.M. Duffieux, Institut FEMTO-ST UMR 6174 CNRS Universite de Franche-Comte, 25030 Besancon Cedex (France)
2011-06-22
We describe the implementation of the critical points model in a finite-difference-time-domain code working in oblique incidence and dealing with dispersive media through the split field method. Some tests are presented to validate our code in addition to an application devoted to plasmon resonance of a gold nanoparticles grating.
TARDEC FIXED HEEL POINT (FHP): DRIVER CAD ACCOMMODATION MODEL VERIFICATION REPORT
2017-11-09
On Lie point symmetry of classical Wess-Zumino-Witten model
International Nuclear Information System (INIS)
Maharana, Karmadeva
2001-06-01
We perform the group analysis of Witten's equations of motion for a particle moving in the presence of a magnetic monopole, and also when constrained to move on the surface of a sphere, which is the classical example of Wess-Zumino-Witten model. We also consider variations of this model. Our analysis gives the generators of the corresponding Lie point symmetries. The Lie symmetry corresponding to Kepler's third law is obtained in two related examples. (author)
A random point process model for the score in sport matches
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2009-01-01
Roč. 20, č. 2 (2009), s. 121-131 ISSN 1471-678X R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z10750506 Keywords : sport statistics * scoring intensity * Cox’s regression model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf
A model for the two-point velocity correlation function in turbulent channel flow
International Nuclear Information System (INIS)
Sahay, A.; Sreenivasan, K.R.
1996-01-01
A relatively simple analytical expression is presented to approximate the equal-time, two-point, double-velocity correlation function in turbulent channel flow. To assess the accuracy of the model, we perform the spectral decomposition of the integral operator having the model correlation function as its kernel. Comparisons of the empirical eigenvalues and eigenfunctions with those constructed from direct numerical simulations data show good agreement. copyright 1996 American Institute of Physics
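The spectral decomposition of an integral operator with a correlation-function kernel, as used above, can be sketched numerically. The exponential kernel below is a generic stand-in, not the channel-flow model of the paper:

```python
import numpy as np

# Discretize a two-point correlation kernel R(y, y') on [0, 1] and solve the
# eigenproblem  integral R(y, y') phi(y') dy' = lambda * phi(y) by quadrature.
n = 200
y = np.linspace(0.0, 1.0, n)
dy = y[1] - y[0]
R = np.exp(-np.abs(y[:, None] - y[None, :]) / 0.2)  # model correlation kernel
eigvals, eigvecs = np.linalg.eigh(R * dy)           # quadrature-weighted operator
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # sort descending

print(bool(eigvals[0] > eigvals[1] > 0))           # True: positive, ordered modes
print(round(float(eigvals[:5].sum() / eigvals.sum()), 2))  # leading-mode energy fraction
```

Comparing eigenvalues and eigenfunctions obtained this way from a model kernel against those from simulation data is exactly the kind of assessment the abstract describes.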
Directory of Open Access Journals (Sweden)
John R. Speakman
2011-11-01
Full Text Available The close correspondence between energy intake and expenditure over prolonged time periods, coupled with an apparent protection of the level of body adiposity in the face of perturbations of energy balance, has led to the idea that body fatness is regulated via mechanisms that control intake and energy expenditure. Two models have dominated the discussion of how this regulation might take place. The set point model is rooted in physiology, genetics and molecular biology, and suggests that there is an active feedback mechanism linking adipose tissue (stored energy to intake and expenditure via a set point, presumably encoded in the brain. This model is consistent with many of the biological aspects of energy balance, but struggles to explain the many significant environmental and social influences on obesity, food intake and physical activity. More importantly, the set point model does not effectively explain the ‘obesity epidemic’ – the large increase in body weight and adiposity of a large proportion of individuals in many countries since the 1980s. An alternative model, called the settling point model, is based on the idea that there is passive feedback between the size of the body stores and aspects of expenditure. This model accommodates many of the social and environmental characteristics of energy balance, but struggles to explain some of the biological and genetic aspects. The shortcomings of these two models reflect their failure to address the gene-by-environment interactions that dominate the regulation of body weight. We discuss two additional models – the general intake model and the dual intervention point model – that address this issue and might offer better ways to understand how body fatness is controlled.
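The settling point idea — passive feedback in which expenditure rises with the size of the body stores, with no defended target — can be captured by a one-parameter toy dynamic. All values below are illustrative assumptions, not fitted to data:

```python
def settle(intake, k=0.02, w0=70.0, days=5000, dt=1.0):
    """Toy settling point dynamics: stores W obey dW/dt = intake - k * W,
    i.e. expenditure rises passively with W. Units are illustrative only."""
    w = w0
    for _ in range(int(days)):
        w += dt * (intake - k * w)
    return w

# No actively defended set point: a sustained rise in intake simply settles
# the system at a new, higher equilibrium W* = intake / k.
print(round(settle(1.4), 1))  # 70.0
print(round(settle(1.6), 1))  # 80.0
```

This is why the settling point model accommodates environmental shifts in intake so naturally, and also why it struggles with evidence for active physiological defence of adiposity, which the set point and dual intervention point models are built around.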
Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.
Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T
2016-12-01
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.
The three-point function as a probe of models for large-scale structure
International Nuclear Information System (INIS)
Frieman, J.A.; Gaztanaga, E.
1993-01-01
The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ∼ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models
Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin
2017-01-01
In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid in developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of ANN model in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384
Knoope, M. M. J.|info:eu-repo/dai/nl/364248149; Guijt, W.; Ramirez, A.|info:eu-repo/dai/nl/284852414; Faaij, A. P. C.
In this study, a new cost model is developed for CO2 pipeline transport, which starts with the physical properties of CO2 transport and includes different kinds of steel grades and up-to-date material and construction costs. This pipeline cost model is used for a newly developed tool to determine the
Energy Technology Data Exchange (ETDEWEB)
Omar, M.S., E-mail: dr_m_s_omar@yahoo.com [Department of Physics, College of Science, University of Salahaddin-Erbil, Arbil, Kurdistan (Iraq)
2012-11-15
Graphical abstract: Three models are derived to explain the nanoparticle size dependence of mean bonding length, melting temperature and lattice thermal expansion, applied to Sn, Si and Au. The figures shown as an example for Sn nanoparticles indicate that the models are applicable for nanoparticle radii larger than 3 nm. Highlights: ► A model for a size-dependent mean bonding length is derived. ► The size-dependent melting point of nanoparticles is modified. ► The bulk model for lattice thermal expansion is successfully used on nanoparticles. -- Abstract: A model, based on the ratio of the number of surface atoms to that of the interior, is derived to calculate the size dependence of the lattice volume of nanoscaled materials. The model is applied to Si, Sn and Au nanoparticles. For Si, the lattice volume increases from 20 Å³ for the bulk to 57 Å³ for 2 nm nanocrystals. A model for calculating the melting point of nanoscaled materials is modified by considering the effect of lattice volume. A good approach to calculating the size-dependent melting point extends from the bulk state down to nanoparticles about 2 nm in diameter. Both values of lattice volume and melting point obtained for nanosized materials are used to calculate the lattice thermal expansion by using a formula applicable to tetrahedral semiconductors. Results for Si change from 3.7 × 10⁻⁶ K⁻¹ for a bulk crystal down to a minimum value of 0.1 × 10⁻⁶ K⁻¹ for a 6 nm diameter nanoparticle.
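The qualitative behaviour described above can be sketched with a generic 1/d melting-point depression law. This is an illustrative scaling only, with a hypothetical coefficient c; it is not the abstract's derived model, which additionally couples in the size-dependent lattice volume.

```python
# Illustrative size-dependent melting-point depression: T_m(d) = T_bulk*(1 - c/d).
# The 1/d form is a generic surface-to-volume scaling; the coefficient c (in nm)
# is a hypothetical round number, NOT a value fitted in the abstract's model.
def melting_point(d_nm, t_bulk_k, c_nm=0.6):
    """Melting point (K) of a particle of diameter d_nm under a 1/d depression law."""
    if d_nm <= c_nm:
        raise ValueError("1/d scaling breaks down for d comparable to c")
    return t_bulk_k * (1.0 - c_nm / d_nm)

# Example: bulk Sn melts near 505 K; smaller particles melt progressively lower.
t_bulk = 505.0
t_6nm = melting_point(6.0, t_bulk)     # noticeably depressed
t_60nm = melting_point(60.0, t_bulk)   # nearly bulk
```

The key qualitative point matches the abstract: the depression is negligible for large particles and grows rapidly below a few nanometres.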
A Corner-Point-Grid-Based Voxelization Method for Complex Geological Structure Model with Folds
Chen, Qiyu; Mariethoz, Gregoire; Liu, Gang
2017-04-01
3D voxelization is the foundation of geological property modeling, and is also an effective approach to realize the 3D visualization of the heterogeneous attributes in geological structures. The corner-point grid is a representative data model among all voxel models, and is a structured grid type that is widely applied at present. When carrying out subdivision for complex geological structure model with folds, we should fully consider its structural morphology and bedding features to make the generated voxels keep its original morphology. And on the basis of which, they can depict the detailed bedding features and the spatial heterogeneity of the internal attributes. In order to solve the shortage of the existing technologies, this work puts forward a corner-point-grid-based voxelization method for complex geological structure model with folds. We have realized the fast conversion from the 3D geological structure model to the fine voxel model according to the rule of isocline in Ramsay's fold classification. In addition, the voxel model conforms to the spatial features of folds, pinch-out and other complex geological structures, and the voxels of the laminas inside a fold accords with the result of geological sedimentation and tectonic movement. This will provide a carrier and model foundation for the subsequent attribute assignment as well as the quantitative analysis and evaluation based on the spatial voxels. Ultimately, we use examples and the contrastive analysis between the examples and the Ramsay's description of isoclines to discuss the effectiveness and advantages of the method proposed in this work when dealing with the voxelization of 3D geologic structural model with folds based on corner-point grids.
Interior Point Methods on GPU with application to Model Predictive Control
DEFF Research Database (Denmark)
Gade-Nielsen, Nicolai Fog
The goal of this thesis is to investigate the application of interior point methods to solve dynamical optimization problems, using a graphical processing unit (GPU) with a focus on problems arising in Model Predictive Control (MPC). Multi-core processors have been available for over ten years now...... software package called GPUOPT, available under the non-restrictive MIT license. GPUOPT includes a primal-dual interior-point method, which supports both the CPU and the GPU. It is implemented as multiple components, where the matrix operations and the solver for the Newton directions are separated...
Identification markings for gemstones
International Nuclear Information System (INIS)
Dreschhoff, G.A.M.; Zeller, E.J.
1980-01-01
A method is described of providing permanent identification markings to gemstones such as diamond crystals by irradiating the cooled gemstone with protons in the desired pattern. The proton bombardment results in a reaction limited to a defined plane and converting the bombarded area of the plane into a different crystal lattice from that of the preirradiated stone. (author)
Gumus, Kutalmis; Erkaya, Halil
2013-04-01
In Terrestrial laser scanning (TLS) applications, it is necessary to take into consideration the conditions that affect the scanning process, especially the general characteristics of the laser scanner, the geometric properties of the scanned object (shape, size, etc.), and its spatial location in the environment. Three-dimensional models obtained with TLS allow determining the geometric features and relevant magnitudes of the scanned object in an indirect way. In order to check the spatial location and geometric accuracy of the 3-dimensional model created by terrestrial laser scanning, it is necessary to use measurement tools that give more precise results than TLS. Geometric comparisons are performed by analyzing the differences between the distances, the angles between surfaces and the measured values taken from cross-sections, comparing the data from the 3-dimensional model created with TLS against the values measured by other measurement devices. The performance of the scanners and the size and shape of the scanned objects are tested using reference objects whose sizes are determined with high precision. In this study, the important points to consider when choosing reference objects were highlighted. The steps of processing the point clouds collected by scanning, regularizing these points and modeling in three dimensions were presented visually. In order to test the geometric correctness of the models obtained by terrestrial laser scanners, sample objects with simple geometric shapes such as cubes, rectangular prisms and cylinders, made of concrete, were used as reference models. Three-dimensional models were generated by scanning these reference models with a Trimble Mensi GS 100. The dimensions of the 3D models created from point clouds were compared with the precisely measured dimensions of the reference objects. For this purpose, horizontal and vertical cross-sections were taken from the reference objects and generated 3D models and the proximity of
A semi-analytical stationary model of a point-to-plane corona discharge
International Nuclear Information System (INIS)
Yanallah, K; Pontiga, F
2012-01-01
A semi-analytical model of a dc corona discharge is formulated to determine the spatial distribution of charged particles (electrons, negative ions and positive ions) and the electric field in pure oxygen using a point-to-plane electrode system. A key point in the modeling is the integration of Gauss' law and the continuity equation of charged species along the electric field lines, and the use of Warburg's law and the corona current–voltage characteristics as input data in the boundary conditions. The electric field distribution predicted by the model is compared with the numerical solution obtained using a finite-element technique. The semi-analytical solutions are obtained at a negligible computational cost, and provide useful information to characterize and control the corona discharge in different technological applications. (paper)
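Warburg's law, used above as input data in the boundary conditions, gives the current-density distribution on the grounded plane as a power of the cosine of the angle from the point electrode's axis. A minimal sketch follows, assuming the commonly quoted cos⁵θ form with a cutoff near 60°; both the exponent and the cutoff should be treated as modelling assumptions rather than values taken from this paper.

```python
import math

# Warburg current-density distribution on the plane of a point-to-plane corona:
# J(theta) = J0 * cos(theta)**m inside a cutoff angle, zero outside. The exponent
# m = 5 and the 60-degree cutoff are the textbook values, assumed here.
def warburg_j(theta_deg, j0=1.0, m=5, cutoff_deg=60.0):
    """Current density at angle theta_deg from the discharge axis."""
    if abs(theta_deg) >= cutoff_deg:
        return 0.0
    return j0 * math.cos(math.radians(theta_deg)) ** m

# Density is maximal on the axis and falls off steeply toward the cutoff.
profile = [warburg_j(t) for t in (0.0, 30.0, 55.0, 65.0)]
```

Integrating such a profile over the plane links the axial current density to the total corona current, which is how the current–voltage characteristic enters the boundary conditions.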
An analytical model for predicting dryout point in bilaterally heated vertical narrow annuli
International Nuclear Information System (INIS)
Aye Myint; Tian Wenxi; Jia Dounan; Li Zhihui; Li Hao
2005-02-01
Based on the droplet-diffusion model of Kirillov and Smogalev (1969, 1972), a new analytical model for predicting the dryout point in steam-water flow in a bilaterally and uniformly heated narrow annular gap was developed. Comparison of the present model predictions with experimental results indicated good agreement over the experimental parametric range (pressure from 0.8 to 3.5 MPa, mass flux from 60.39 to 135.6 kg·m⁻²·s⁻¹ and heat flux of 50 kW·m⁻²). The dryout point was experimentally investigated with deionized water flowing upward through a narrow annular channel with 1.0 mm and 1.5 mm gaps, heated by an AC power supply. (author)
Krijnen, T.F.; Beetz, J.
2017-01-01
In this paper we suggest an extension to the Industry Foundation Classes (IFC) model to integrate point cloud datasets. The proposal includes a schema extension to the core model allowing the storage of points, either as Cartesian coordinates, points in parametric space of associated building
Analysing the distribution of synaptic vesicles using a spatial point process model
DEFF Research Database (Denmark)
Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta
2014-01-01
functionality by statistically modelling the distribution of the synaptic vesicles in two groups of rats: a control group subjected to sham stress and a stressed group subjected to a single acute foot-shock (FS)-stress episode. We hypothesize that the synaptic vesicles have different spatial distributions...... in the two groups. The spatial distributions are modelled using spatial point process models with an inhomogeneous conditional intensity and repulsive pairwise interactions. Our results verify the hypothesis that the two groups have different spatial distributions....
DNA denaturation through a model of the partition points on a one-dimensional lattice
International Nuclear Information System (INIS)
Mejdani, R.; Huseini, H.
1994-08-01
We have shown that by using a model of a partition-point gas on a one-dimensional lattice, we can study, besides the saturation curves obtained before for enzyme kinetics, also the denaturation process, i.e. the breaking of the hydrogen bonds connecting the two strands during the heat treatment of DNA. We think that this model, being very simple and mathematically transparent, can be advantageous for pedagogic goals or other theoretical investigations in chemistry or modern biology. (author). 29 refs, 4 figs
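The essential observable of such a denaturation model, the fraction of broken hydrogen bonds as a function of temperature, can be sketched with an even simpler independent two-state bond model. The enthalpy and entropy values below are hypothetical round numbers, and treating bonds as independent is a simplification of the lattice partition-point treatment.

```python
import math

R = 8.314  # gas constant, J/(mol K)

# Independent two-state model of a base-pair bond: open with probability
# p = 1/(1 + exp(dG/RT)), dG = dH - T*dS. The dH, dS values are hypothetical
# illustrative numbers, not fitted DNA parameters. The melting temperature
# is where dG = 0, i.e. Tm = dH/dS.
def frac_open(t_k, dh=300e3, ds=900.0):
    """Fraction of broken bonds at temperature t_k (K)."""
    dg = dh - t_k * ds
    return 1.0 / (1.0 + math.exp(dg / (R * t_k)))

tm = 300e3 / 900.0   # ~333 K for these illustrative numbers
```

The resulting sigmoid, half-open at Tm and sharpening with larger dH, is the qualitative melting curve that the lattice model reproduces with cooperative corrections.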
EBT time-dependent point model code: description and user's guide
International Nuclear Information System (INIS)
Roberts, J.F.; Uckan, N.A.
1977-07-01
A D-T time-dependent point model has been developed to assess the energy balance in an EBT reactor plasma. Flexibility is retained in the model to permit more recent data to be incorporated as they become available from the theoretical and experimental studies. This report includes the physics models involved, the program logic, and a description of the variables and routines used. All the files necessary for execution are listed, and the code, including a post-execution plotting routine, is discussed
Bayesian Estimation Of Shift Point In Poisson Model Under Asymmetric Loss Functions
Directory of Open Access Journals (Sweden)
uma srivastava
2012-01-01
Full Text Available The paper deals with estimating a shift point which occurs in a sequence of independent observations from a Poisson model in statistical process control. This shift point occurs in the sequence after m observations (life data) have been made. The Bayes estimators of the shift point m and of the process means before and after the shift are derived for symmetric and asymmetric loss functions under informative and non-informative priors. The sensitivity analysis of the Bayes estimators is carried out by simulation and numerical comparison with R programming. The results show the effectiveness of the estimators for a shift in a sequence of Poisson observations.
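The core inference task can be sketched as follows. Under a flat prior on the shift point and with the segment rates replaced by their maximum-likelihood values, the posterior mode of m reduces to maximizing a profile log-likelihood; this is a simplification of the paper's full Bayes treatment, which integrates over the rates and uses asymmetric loss.

```python
import math

# MAP shift point for a Poisson sequence whose rate changes from lam1 to lam2
# after observation m. Flat prior on m; the segment rates are plugged in as
# MLEs (a profile-likelihood simplification of the paper's Bayes estimators).
def map_shift_point(x):
    """Return the shift point m (1 <= m < len(x)) maximizing the profile likelihood."""
    n = len(x)
    best_m, best_ll = None, -float("inf")
    for m in range(1, n):
        lam1 = sum(x[:m]) / m
        lam2 = sum(x[m:]) / (n - m)
        ll = sum(xi * math.log(lam1 + 1e-12) - lam1 for xi in x[:m])
        ll += sum(xi * math.log(lam2 + 1e-12) - lam2 for xi in x[m:])
        if ll > best_ll:
            best_m, best_ll = m, ll
    return best_m

counts = [2] * 20 + [8] * 20   # synthetic counts: rate jumps after the 20th observation
m_hat = map_shift_point(counts)
```

On this clean synthetic sequence the profile likelihood peaks exactly at the true changepoint; with noisy data the full posterior over m quantifies the remaining uncertainty.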
CADASTER QSPR Models for Predictions of Melting and Boiling Points of Perfluorinated Chemicals.
Bhhatarai, Barun; Teetz, Wolfram; Liu, Tao; Öberg, Tomas; Jeliazkova, Nina; Kochev, Nikolay; Pukalov, Ognyan; Tetko, Igor V; Kovarich, Simona; Papa, Ester; Gramatica, Paola
2011-03-14
Quantitative structure property relationship (QSPR) studies on per- and polyfluorinated chemicals (PFCs) on melting point (MP) and boiling point (BP) are presented. The training and prediction chemicals used for developing and validating the models were selected from Syracuse PhysProp database and literatures. The available experimental data sets were split in two different ways: a) random selection on response value, and b) structural similarity verified by self-organizing-map (SOM), in order to propose reliable predictive models, developed only on the training sets and externally verified on the prediction sets. Individual linear and non-linear approaches based models developed by different CADASTER partners on 0D-2D Dragon descriptors, E-state descriptors and fragment based descriptors as well as consensus model and their predictions are presented. In addition, the predictive performance of the developed models was verified on a blind external validation set (EV-set) prepared using PERFORCE database on 15 MP and 25 BP data respectively. This database contains only long chain perfluoro-alkylated chemicals, particularly monitored by regulatory agencies like US-EPA and EU-REACH. QSPR models with internal and external validation on two different external prediction/validation sets and study of applicability-domain highlighting the robustness and high accuracy of the models are discussed. Finally, MPs for additional 303 PFCs and BPs for 271 PFCs were predicted for which experimental measurements are unknown. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A mixture model for robust point matching under multi-layer motion.
Directory of Open Access Journals (Sweden)
Jiayi Ma
Full Text Available This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences, or outliers, in addition to the true correspondences (inliers). Next we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We further provide a fast implementation based on sparse approximation which can achieve a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as a publicly available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer/large discontinuous motion.
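The inlier/outlier machinery at the heart of such methods can be illustrated in one dimension: model matching residuals as a mixture of a Gaussian (inliers) and a uniform density (outliers), and let EM estimate the responsibilities. This is only the mixture/EM skeleton; the paper's actual model additionally estimates an RKHS-regularized spatial transformation at each M-step.

```python
import numpy as np

# EM for a two-component mixture on matching residuals: Gaussian inliers plus
# uniform outliers. Returns per-point inlier responsibilities. The uniform
# outlier range is an assumed constant, as in many robust-matching sketches.
def inlier_posterior(residuals, n_iter=50, outlier_range=10.0):
    r = np.asarray(residuals, float)
    gamma = 0.9                       # initial inlier proportion
    sigma2 = np.var(r) + 1e-9         # initial inlier variance
    for _ in range(n_iter):
        p_in = gamma * np.exp(-r**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
        p_out = (1 - gamma) / outlier_range
        w = p_in / (p_in + p_out)                      # E-step: responsibilities
        gamma = w.mean()                               # M-step: mixing proportion
        sigma2 = (w * r**2).sum() / w.sum() + 1e-9     # M-step: inlier variance
    return w

# Small residuals should be flagged as inliers, large ones as outliers.
resid = [0.1] * 20 + [5.0, -4.0, 6.0]
w = inlier_posterior(resid)
```

Annealing the variance from a large initial value, as the abstract describes, corresponds to letting sigma2 shrink over the EM iterations, which is what happens here.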
Guo, Liyan; Xia, Changliang; Wang, Huimin; Wang, Zhiqiang; Shi, Tingna
2018-05-01
As is well known, the armature current will be ahead of the back electromotive force (back-EMF) under load conditions of the interior permanent magnet (PM) machine. This kind of advanced armature current produces a demagnetizing field, which may easily cause irreversible demagnetization in the PMs. To estimate the working points of the PMs more accurately and take demagnetization into consideration in the early design stage of a machine, an improved equivalent magnetic network model is established in this paper. Each PM under each magnetic pole is segmented, and the networks in the rotor pole shoe are refined, which makes a more precise model of the flux path in the rotor pole shoe possible. The working point of each PM under each magnetic pole can be calculated accurately by the established improved equivalent magnetic network model. Meanwhile, the calculated results are compared with those calculated by FEM. And the effects of the d-axis and q-axis components of the armature current, the air-gap length and the flux barrier size on the working points of the PMs are analyzed by the improved equivalent magnetic network model.
Determination of the Number of Fixture Locating Points for Sheet Metal By Grey Model
Directory of Open Access Journals (Sweden)
Yang Bo
2017-01-01
Full Text Available In the process of traditional fixture design for sheet metal parts based on the "N-2-1" locating principle, the number of fixture locating points is determined by trial and error or the experience of the designer. To that end, a new design method based on grey theory is proposed in this paper to determine the number of sheet metal fixture locating points. Firstly, the training sample set is generated by Latin hypercube sampling (LHS) and finite element analysis (FEA). Secondly, the GM(1,1) grey model is constructed based on the established training sample set to approximate the mapping relationship between the number of fixture locating points and the concerned sheet metal maximum deformation. Thirdly, the final number of fixture locating points for sheet metal can be inversely calculated under the allowable maximum deformation. Finally, a sheet metal case is conducted and the results indicate that the proposed approach is effective and efficient in determining the number of fixture locating points for sheet metal.
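The GM(1,1) step can be sketched in its textbook form: accumulate the series, fit the whitened equation by least squares on the background values, and forecast via the exponential solution. The geometric series below is invented test data; mapping locating-point counts to maximum deformation is the paper's application, not reproduced here.

```python
import numpy as np

# Textbook GM(1,1): accumulated generating operation (AGO), least-squares fit of
# x0[k] = -a*z[k] + b on the background values z, then forecast from the whitened
# exponential solution and difference back (inverse AGO).
def gm11_forecast(x0, steps=1):
    """Fit GM(1,1) to series x0 and return fitted values plus `steps` forecasts."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                              # AGO
    z = 0.5 * (x1[1:] + x1[:-1])                    # background values
    B = np.column_stack([-z, np.ones_like(z)])
    (a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=0.0)             # inverse AGO

series = [1.0, 1.2, 1.44, 1.728, 2.0736]            # geometric test data, ratio 1.2
pred = gm11_forecast(series, steps=1)
```

For near-exponential data like this, the one-step-ahead forecast lands within a fraction of a percent of the true next value, which is why GM(1,1) works well as a cheap surrogate for smooth monotone responses.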
Impact of confinement housing on study end-points in the calf model of cryptosporidiosis.
Graef, Geneva; Hurst, Natalie J; Kidder, Lance; Sy, Tracy L; Goodman, Laura B; Preston, Whitney D; Arnold, Samuel L M; Zambriski, Jennifer A
2018-04-01
Diarrhea is the second leading cause of death in children. Two fecal collection methods used in the calf model of cryptosporidiosis are Complete Fecal Collection (CFC), which requires confinement housing, and Interval Collection (IC), which permits use of box stalls. CFC mimics human challenge model methodology, but it is unknown whether confinement housing impacts study end-points and whether data gathered via this method are suitable for generalization to human populations. Using a modified crossover study design, we compared CFC and IC and evaluated the impact of housing on study end-points. At birth, calves were randomly assigned to confinement (n = 14) or box stall housing (n = 9), were challenged with 5 × 10⁷ C. parvum oocysts, and followed for 10 days. Study end-points included fecal oocyst shedding, severity of diarrhea, degree of dehydration, and plasma cortisol. Calves in confinement had no significant differences in mean log oocysts enumerated per gram of fecal dry matter between CFC and IC samples (P = 0.6), nor were there diurnal variations in oocyst shedding (P = 0.1). Confinement-housed calves shed significantly more oocysts (P = 0.05), had higher plasma cortisol (P = 0.001), and required more supportive care (P = 0.0009) than calves in box stalls. Housing method confounds study end-points in the calf model of cryptosporidiosis. Due to increased stress, data collected from calves in confinement housing may not accurately estimate the efficacy of chemotherapeutics targeting C. parvum.
Directory of Open Access Journals (Sweden)
Hosein Ghaffarzadeh
Full Text Available Abstract This paper investigates the numerical modeling of flexural wave propagation in Euler-Bernoulli beams using the Hermite-type radial point interpolation method (HRPIM) under the damage quantification approach. HRPIM employs radial basis functions (RBFs) and their derivatives for shape function construction as a meshfree technique. The performance of the multiquadric (MQ) RBF for assessing the reflection ratio was evaluated. HRPIM signals were compared with the theoretical and finite element responses. The results show that MQ is a suitable RBF for HRPIM and wave propagation, although the range of proper shape parameters is notable. The number of field nodes is the main parameter for accurate wave propagation modeling using HRPIM. The size of the support domain should be less than an upper bound in order to prevent high error. With regard to the number of quadrature points, the minimum number of points is adequate for a stable solution, but adding more points in the damage region does not necessarily lead to more accurate responses. It is concluded that pure HRPIM, without any polynomial terms, is acceptable, and that considering a few terms will improve the accuracy, even though too many terms make the problem unstable and inaccurate.
Performance Analysis of Several GPS/Galileo Precise Point Positioning Models.
Afifi, Akram; El-Rabbany, Ahmed
2015-06-19
This paper examines the performance of several precise point positioning (PPP) models, which combine dual-frequency GPS/Galileo observations in the un-differenced and between-satellite single-difference (BSSD) modes. These include the traditional un-differenced model, the decoupled clock model, the semi-decoupled clock model, and the between-satellite single-difference model. We take advantage of the IGS-MGEX network products to correct for the satellite differential code biases and the orbital and satellite clock errors. Natural Resources Canada's GPSPace PPP software is modified to handle the various GPS/Galileo PPP models. A total of six data sets of GPS and Galileo observations at six IGS stations are processed to examine the performance of the various PPP models. It is shown that the traditional un-differenced GPS/Galileo PPP model, the GPS decoupled clock model, and the semi-decoupled clock GPS/Galileo PPP model improve the convergence time by about 25% in comparison with the un-differenced GPS-only model. In addition, the semi-decoupled GPS/Galileo PPP model improves the solution precision by about 25% compared to the traditional un-differenced GPS/Galileo PPP model. Moreover, the BSSD GPS/Galileo PPP model improves the solution convergence time by about 50%, in comparison with the un-differenced GPS PPP model, regardless of the type of BSSD combination used. As well, the BSSD model improves the precision of the estimated parameters by about 50% and 25% when the loose and the tight combinations are used, respectively, in comparison with the un-differenced GPS-only model. Comparable results are obtained through the tight combination when either a GPS or a Galileo satellite is selected as a reference.
International Nuclear Information System (INIS)
Fischer, S.R.; Lam, K.; Lin, J.C.
1991-01-01
This paper summarizes the results of an assessment of our TRAC-PF1/MOD3 Mark-22 prototype fuel assembly model against single-assembly data obtained from the ''A'' Tank single-assembly tests that were performed at the Savannah River Laboratory. We felt the data characterize prototypic assembly behavior over a range of air-water flow conditions of interest for loss-of-coolant accident (LOCA) calculations. This study was part of a benchmarking effort performed to evaluate and validate a multiple-assembly, full-plant model that is being developed by Los Alamos National Laboratory to study various aspects of the Savannah River plant operating conditions, including LOCA transients, using TRAC-PF1/MOD3 Version 1.10. The results of this benchmarking effort demonstrate that TRAC-PF1/MOD3 is capable of calculating plenum conditions and assembly flows during conditions thought to be typical of the Emergency Cooling System (ECS) phase of a LOCA. 10 refs., 12 figs.
Asymptotic behaviour of two-point functions in multi-species models
Directory of Open Access Journals (Sweden)
Karol K. Kozlowski
2016-05-01
Full Text Available We extract the long-distance asymptotic behaviour of two-point correlation functions in massless quantum integrable models containing multi-species excitations. For such a purpose, we extend to these models the method of a large-distance regime re-summation of the form factor expansion of correlation functions. The key feature of our analysis is a technical hypothesis on the large-volume behaviour of the form factors of local operators in such models. We check the validity of this hypothesis on the example of the SU(3)-invariant XXX magnet by means of the determinant representations for the form factors of local operators in this model. Our approach confirms the structure of the critical exponents obtained previously for numerous models solvable by the nested Bethe Ansatz.
Publicly available models to predict normal boiling point of organic compounds
International Nuclear Information System (INIS)
Oprisiu, Ioana; Marcou, Gilles; Horvath, Dragos; Brunel, Damien Bernard; Rivollet, Fabien; Varnek, Alexandre
2013-01-01
Quantitative structure–property models to predict the normal boiling point (T_b) of organic compounds were developed using non-linear ASNNs (associative neural networks) as well as multiple linear regression – ISIDA-MLR and SQS (stochastic QSAR sampler). Models were built on a diverse set of 2098 organic compounds with T_b varying in the range of 185–491 K. In ISIDA-MLR and ASNN calculations, fragment descriptors were used, whereas fragment, FPTs (fuzzy pharmacophore triplets), and ChemAxon descriptors were employed in SQS models. Prediction quality of the models has been assessed in 5-fold cross validation. Obtained models were implemented in the on-line ISIDA predictor at (http://infochim.u-strasbg.fr/webserv/VSEngine.html)
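The MLR branch of such a QSPR workflow is ordinary least squares on fragment-count descriptors. A toy sketch follows; the four-molecule descriptor matrix and boiling points are invented for illustration and are not ISIDA fragments or PhysProp data.

```python
import numpy as np

# Toy QSPR-style multiple linear regression: predict a normal boiling point
# from fragment-count descriptors. The tiny descriptor matrix below (e.g.
# counts of CH3, CH2, OH fragments) and the Tb values are hypothetical;
# real models use thousands of compounds and validated descriptor sets.
X = np.array([[2, 1, 0],
              [2, 2, 0],
              [2, 3, 0],
              [2, 2, 1]], dtype=float)
tb = np.array([231.0, 261.0, 291.0, 371.0])    # hypothetical Tb values (K)
A = np.column_stack([X, np.ones(len(X))])      # append intercept column
coef, *_ = np.linalg.lstsq(A, tb, rcond=None)  # least-squares fragment contributions
tb_hat = A @ coef                              # fitted boiling points
```

Each regression coefficient is the additive contribution of one fragment to T_b; here the data were constructed so a CH2 adds 30 K and an OH adds 110 K, which the fit recovers. Cross-validation, as in the abstract, would repeat this fit on held-out splits.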
Valenza, Gaetano; Citi, Luca; Barbieri, Riccardo
2013-01-01
We report an exemplary study of instantaneous assessment of cardiovascular dynamics performed using point-process nonlinear models based on Laguerre expansion of the linear and nonlinear Wiener-Volterra kernels. As quantifiers, instantaneous measures such as high order spectral features and Lyapunov exponents can be estimated from a quadratic and cubic autoregressive formulation of the model first order moment, respectively. Here, these measures are evaluated on heartbeat series coming from 16 healthy subjects and 14 patients with Congestive Heart Failure (CHF). Data were gathered from the on-line repository PhysioBank, which has been taken as a landmark for testing nonlinear indices. Results show that the proposed nonlinear Laguerre-Volterra point-process methods are able to track the nonlinear and complex cardiovascular dynamics, distinguishing significantly between CHF and healthy heartbeat series.
Modeling of Aerobrake Ballute Stagnation Point Temperature and Heat Transfer to Inflation Gas
Bahrami, Parviz A.
2012-01-01
A trailing Ballute drag device concept for spacecraft aerocapture is considered. A thermal model for calculation of the Ballute membrane temperature and the inflation gas temperature is developed. An algorithm capturing the most salient features of the concept is implemented. In conjunction with the thermal model, trajectory calculations for two candidate missions, Titan Explorer and Neptune Orbiter missions, are used to estimate the stagnation point temperature and the inflation gas temperature. Radiation from both sides of the membrane at the stagnation point and conduction to the inflating gas are included. The results showed that the radiation from the membrane and, to a much lesser extent, conduction to the inflating gas are likely to be the controlling heat transfer mechanisms, and that the increase in gas temperature due to aerodynamic heating is of secondary importance.
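A minimal version of the stagnation-point energy balance implied above, radiation from both membrane faces plus a small conductive coupling to the inflation gas, can be sketched as follows. The heat flux, emissivity, and gas-side coefficient are illustrative assumptions, not mission values:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def membrane_temperature(q_aero, eps=0.8, h_gas=5.0, t_gas=300.0):
    """Solve q_aero = 2*eps*SIGMA*T^4 + h_gas*(T - t_gas) for T by bisection.

    Radiation leaves both faces of the thin membrane; a small conductive
    term couples it to the inflation gas.  All parameter values are
    illustrative placeholders.
    """
    def residual(t):
        return 2.0 * eps * SIGMA * t**4 + h_gas * (t - t_gas) - q_aero

    lo, hi = t_gas, 4000.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

t_mem = membrane_temperature(q_aero=50e3)  # assumed 50 kW/m^2 heating
print(f"membrane temperature: {t_mem:.0f} K")
```

At these numbers the radiative term dominates the conductive one by more than an order of magnitude, mirroring the paper's conclusion about the controlling mechanism.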
Transition point prediction in a multicomponent lattice Boltzmann model: Forcing scheme dependencies
Küllmer, Knut; Krämer, Andreas; Joppich, Wolfgang; Reith, Dirk; Foysi, Holger
2018-02-01
Pseudopotential-based lattice Boltzmann models are widely used for numerical simulations of multiphase flows. In the special case of multicomponent systems, the overall dynamics are characterized by the conservation equations for mass and momentum as well as an additional advection diffusion equation for each component. In the present study, we investigate how the latter is affected by the forcing scheme, i.e., by the way the underlying interparticle forces are incorporated into the lattice Boltzmann equation. By comparing two model formulations for pure multicomponent systems, namely the standard model [X. Shan and G. D. Doolen, J. Stat. Phys. 81, 379 (1995), 10.1007/BF02179985] and the explicit forcing model [M. L. Porter et al., Phys. Rev. E 86, 036701 (2012), 10.1103/PhysRevE.86.036701], we reveal that the diffusion characteristics drastically change. We derive a generalized, potential function-dependent expression for the transition point from the miscible to the immiscible regime and demonstrate that it is shifted between the models. The theoretical predictions for both the transition point and the mutual diffusion coefficient are validated in simulations of static droplets and decaying sinusoidal concentration waves, respectively. To show the universality of our analysis, two common and one new potential function are investigated. As the shift in the diffusion characteristics directly affects the interfacial properties, we additionally show that phenomena related to the interfacial tension such as the modeling of contact angles are influenced as well.
Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul
2014-01-01
Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3-dimensional changes after treatment by superimposition became possible. 4-point plane orientation is one of the simplest ways to achieve superimposition of 3-dimensional images. To identify factors influencing the superimposition error of cephalometric landmarks in the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients who had normal skeletal and occlusal relationships and underwent CBCT for diagnosis of temporomandibular disorder were analyzed. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoidal bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4-point plane orientation system may produce a reorientation error that varies with the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
Li, Guohui; Zhang, Songling; Yang, Hong
2017-01-01
To address the irregularity of nonlinear signals and the difficulty of predicting them, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and clustering analysis is proposed. Firstly, the original data are decomposed by ESMD to obtain a finite number of intrinsic mode functions (IMFs) and a residual. Secondly, fuzzy c-means is used to cluster the decomposed components, and the deep belief network (DBN) is then used to predict them. Finally, the reconstructed ...
PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance
International Nuclear Information System (INIS)
Vondy, D.R.
1979-10-01
The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for with provision for from one to twenty batches resident. The effect of exposure of each of the batches to the same neutron flux is determined.
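A toy one-group point-depletion step in the spirit of this kind of survey model (PREMOR itself is two-group and tracks many more nuclides) might look like the following; the flux and cross sections are rough textbook-scale placeholders, not PREMOR data:

```python
# Illustrative one-group point-depletion sketch: one fissile nuclide and
# one lumped fission-product absorber, exposed to a constant flux.
PHI = 1e14          # neutron flux, n/cm^2/s (placeholder)
SIG_F = 500e-24     # fissile absorption/fission cross section, cm^2 (~500 b)
SIG_A_FP = 50e-24   # lumped fission-product absorption cross section, cm^2

def deplete(n_fuel, days, dt_days=1.0):
    """Forward-Euler integration of fuel burnup and lumped FP buildup."""
    n_fp = 0.0
    for _ in range(int(days / dt_days)):
        dt = dt_days * 86400.0                 # step in seconds
        fissions = SIG_F * PHI * n_fuel * dt   # atoms consumed this step
        n_fuel -= fissions
        n_fp += fissions - SIG_A_FP * PHI * n_fp * dt
    return n_fuel, n_fp

n0 = 1e21  # atoms/cm^3
n_end, fp_end = deplete(n0, days=365)
print(f"fuel remaining after 1 y: {n_end / n0:.1%}")
```

With these placeholder values the fuel density falls to roughly a fifth of its initial value over a year; a real survey code would couple this to a two-group flux solution and batch bookkeeping.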
Analysis of divertor asymmetry using a simple five-point model
International Nuclear Information System (INIS)
Hayashi, Nobuhiko; Takizuka, Tomonori; Hatayama, Akiyoshi; Ogasawara, Masatada.
1997-03-01
A simple five-point model of the scrape-off layer (SOL) plasma outside the separatrix of a diverted tokamak has been developed to study the inside/outside divertor asymmetry. The SOL current, gas pumping/puffing in the divertor region, and divertor plate biasing are included in this model. Gas pumping/puffing and biasing are shown to control divertor asymmetry. In addition, the SOL current is found to form asymmetric solutions without external controls of gas pumping/puffing and biasing. (author)
Model Predictive Control of Z-source Neutral Point Clamped Inverter
DEFF Research Database (Denmark)
Mo, Wei; Loh, Poh Chiang; Blaabjerg, Frede
2011-01-01
This paper presents Model Predictive Control (MPC) of the Z-source Neutral Point Clamped (NPC) inverter. For illustration, current control of a Z-source NPC grid-connected inverter is analyzed and simulated. With MPC's advantage of easily including system constraints, the load current and impedance network... responses are obtained at the same time with a formulated Z-source NPC inverter network model. Steady-state and transient simulation results of MPC are presented, which show the good reference-tracking ability of this method. It provides a new control method for the Z-source NPC inverter...
Kinetic model for electric-field induced point defect redistribution near semiconductor surfaces
Gorai, Prashun; Seebauer, Edmund G.
2014-07-01
The spatial distribution of point defects near semiconductor surfaces affects the efficiency of devices. Near-surface band bending generates electric fields that influence the spatial redistribution of charged mobile defects that exchange infrequently with the lattice, as recently demonstrated for pile-up of isotopic oxygen near rutile TiO2 (110). The present work derives a mathematical model to describe such redistribution and establishes its temporal dependence on defect injection rate and band bending. The model shows that band bending of only a few meV induces significant redistribution, and that the direction of the electric field governs formation of either a valley or a pile-up.
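The qualitative conclusion above, that band bending of only a few meV noticeably redistributes charged defects, can be illustrated with a simple Boltzmann-statistics sketch. The exponential potential profile and all parameter values are assumptions for illustration, not the paper's full kinetic model:

```python
import numpy as np

K_B_T = 0.0259  # eV, thermal energy at room temperature

def defect_profile(x, z=2, v_surface=0.005, width=5.0, c_bulk=1.0):
    """Quasi-equilibrium concentration of a charged defect under band bending.

    Assumes a near-surface potential V(x) = v_surface * exp(-x/width)
    (a stand-in for a Poisson-consistent profile) and Boltzmann statistics:
    c(x) = c_bulk * exp(-z * V(x) / kT).  With z*v_surface > 0 the defects
    are depleted near the surface (a valley); reversing the field's sign
    gives pile-up instead.
    """
    v = v_surface * np.exp(-x / width)
    return c_bulk * np.exp(-z * v / K_B_T)

x = np.linspace(0.0, 50.0, 200)                # depth, nm
valley = defect_profile(x, v_surface=+0.005)   # only 5 meV of bending
pileup = defect_profile(x, v_surface=-0.005)
print(f"surface/bulk ratio: valley {valley[0]:.2f}, pile-up {pileup[0]:.2f}")
```

Even 5 meV of bending moves the surface concentration by tens of percent in this sketch, consistent with the paper's point that a few meV suffice for significant redistribution.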
GPU-accelerated Modeling and Element-free Reverse-time Migration with Gauss Points Partition
Zhen, Z.; Jia, X.
2014-12-01
Element-free method (EFM) has been applied to seismic modeling and migration. Compared with the finite element method (FEM) and the finite difference method (FDM), it is much cheaper and more flexible because only the information of the nodes and the boundary of the study area is required in computation. In the EFM, the number of Gauss points should be consistent with the number of model nodes; otherwise the accuracy of the intermediate coefficient matrices would be harmed. Thus when we increase the nodes of the velocity model to obtain higher resolution, the size of the computer's memory becomes a bottleneck. The original EFM can deal with at most 81×81 nodes in the case of 2 GB of memory, as tested by Jia and Hu (2006). In order to solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition (GPP) and utilize GPUs to improve the computation efficiency. Considering the characteristics of the Gauss points, the GPP method does not influence the propagation of seismic waves in the velocity model. To overcome the time-consuming computation of the stiffness matrix (K) and the mass matrix (M), we also use GPUs in our computation program. We employ the compressed sparse row (CSR) format to compress the intermediate sparse matrices and simplify the operations by solving the linear equations with the CULA Sparse Conjugate Gradient (CG) solver instead of the linear sparse solver PARDISO. It is observed that our strategy can significantly reduce the computational time of K and M compared with the CPU-based algorithm. The model tested is the Marmousi model. The length of the model is 7425 m and the depth is 2990 m. We discretize the model with 595×298 nodes, 300×300 Gauss cells and 3×3 Gauss points in each cell. In contrast to the computational time of the conventional EFM, the GPU-GPP approach can substantially improve the efficiency. The speedup ratio of the time consumption of computing K and M is 120 and the
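The CSR storage plus conjugate-gradient solve mentioned above can be sketched in a few lines: a hand-rolled CG on a small symmetric positive-definite test matrix (a 1D Laplacian), not the CULA Sparse GPU solver used in the paper:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a matrix stored in compressed sparse row (CSR) format."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        start, end = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

def conjugate_gradient(data, indices, indptr, b, tol=1e-10, max_iter=1000):
    """CG for a symmetric positive-definite CSR matrix."""
    x = np.zeros_like(b)
    r = b - csr_matvec(data, indices, indptr, x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        ap = csr_matvec(data, indices, indptr, p)
        alpha = rs / (p @ ap)
        x += alpha * p
        r -= alpha * ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1D Laplacian stencil [-1, 2, -1] as a small SPD test matrix in CSR form.
n = 5
data, indices, indptr = [], [], [0]
for i in range(n):
    for j, v in ((i - 1, -1.0), (i, 2.0), (i + 1, -1.0)):
        if 0 <= j < n:
            data.append(v)
            indices.append(j)
    indptr.append(len(data))
data, indices, indptr = map(np.array, (data, indices, indptr))

b = np.ones(n)
x = conjugate_gradient(data, indices, indptr, b)
print(x)
```

CSR stores only the nonzeros plus row pointers, which is exactly why it relieves the memory bottleneck the abstract describes; the GPU version parallelizes the same matvec across rows.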
Schottky’s conjecture, field emitters, and the point charge model
Directory of Open Access Journals (Sweden)
Kevin L. Jensen
2016-06-01
Full Text Available A Point Charge Model of conical field emitters, in which the emitter is defined by an equipotential surface of judiciously placed charges over a planar conductor, is used to confirm Schottky’s conjecture that field enhancement factors are multiplicative for a small protrusion placed on top of a larger base structure. Importantly, it is shown that Schottky’s conjecture for conical/ellipsoidal field emitters remains unexpectedly valid even when the dimensions of the protrusion begin to approach the dimensions of the base structure. The model is analytic and therefore the methodology is extensible to other configurations.
The environmental zero-point problem in evolutionary reaction norm modeling.
Ergon, Rolf
2018-04-01
There is a potential problem in present quantitative genetics evolutionary modeling based on reaction norms. Such models are state-space models, where the multivariate breeder's equation in some form is used as the state equation that propagates the population state forward in time. These models use the implicit assumption of a constant reference environment, in many cases set to zero. This zero-point is often the environment a population is adapted to, that is, where the expected geometric mean fitness is maximized. Such environmental reference values follow from the state of the population system, and they are thus population properties. The environment the population is adapted to, is, in other words, an internal population property, independent of the external environment. It is only when the external environment coincides with the internal reference environment, or vice versa, that the population is adapted to the current environment. This is formally a result of state-space modeling theory, which is an important theoretical basis for evolutionary modeling. The potential zero-point problem is present in all types of reaction norm models, parametrized as well as function-valued, and the problem does not disappear when the reference environment is set to zero. As the environmental reference values are population characteristics, they ought to be modeled as such. Whether such characteristics are evolvable is an open question, but considering the complexity of evolutionary processes, such evolvability cannot be excluded without good arguments. As a straightforward solution, I propose to model the reference values as evolvable mean traits in their own right, in addition to other reaction norm traits. However, solutions based on an evolvable G matrix are also possible.
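The proposed fix, treating the environmental reference value as an evolvable mean trait alongside the other reaction-norm traits, amounts to enlarging the G matrix in the multivariate breeder's equation. A schematic with invented numbers:

```python
import numpy as np

# Multivariate breeder's equation: the per-generation change in mean
# traits is delta_zbar = G @ beta, with G the additive genetic
# (co)variance matrix and beta the selection gradient.  Here the
# reaction-norm intercept, the plasticity slope, AND the environmental
# reference value (zero-point) are all treated as evolvable mean traits,
# as the text proposes.  All values are illustrative.
G = np.array([
    [0.50, 0.10, 0.02],   # intercept
    [0.10, 0.20, 0.01],   # plasticity slope
    [0.02, 0.01, 0.05],   # environmental reference (zero-point)
])
beta = np.array([0.3, -0.1, 0.2])  # selection gradients

delta_zbar = G @ beta
print("per-generation change:", delta_zbar)
```

A nonzero genetic variance in the third row is exactly the "evolvable reference environment" assumption; setting that row and column to zero recovers the conventional fixed-zero-point model.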
Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam
2011-01-01
One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a Gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
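The core of approach (1), MAP decoding under a Poisson encoding model with a concave log-likelihood, can be sketched as follows. The filter matrix, prior, and step size are toy assumptions, and plain gradient ascent stands in for the more efficient solvers one would use in practice:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population: 40 neurons encode a 10-d stimulus x through a Poisson
# GLM, y_i ~ Poisson(exp(k_i . x)).  Filters and stimulus are invented.
K = rng.normal(0.0, 0.3, size=(40, 10))
x_true = rng.normal(0.0, 1.0, size=10)
y = rng.poisson(np.exp(K @ x_true)).astype(float)

def map_decode(K, y, prior_var=1.0, lr=0.002, n_iter=10000):
    """MAP stimulus estimate under the Poisson encoding model.

    The log posterior sum_i [y_i*(Kx)_i - exp((Kx)_i)] - x.x/(2*prior_var)
    is concave in x, so plain gradient ascent reaches the global optimum.
    """
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        rate = np.exp(K @ x)
        x += lr * (K.T @ (y - rate) - x / prior_var)
    return x

x_map = map_decode(K, y)
print("decoding error:", np.linalg.norm(x_map - x_true))
```

Concavity is what makes this reliable: there is a single optimum, so any ascent method (here the simplest possible one) finds the same MAP estimate.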
Segmenting Bone Parts for Bone Age Assessment using Point Distribution Model and Contour Modelling
Kaur, Amandeep; Singh Mann, Kulwinder, Dr.
2018-01-01
Bone age assessment (BAA) is a task performed on radiographs by pediatricians in hospitals to predict the final adult height and to diagnose growth disorders by monitoring skeletal development. For building an automatic bone age assessment system, a routine step is to pre-process the bone X-ray images so that a feature row can be constructed. In this research paper, an enhanced point distribution algorithm using contours has been implemented for segmenting bone parts following the well-established procedure of bone age assessment; this is helpful in building the feature row and, later, in the construction of an automatic bone age assessment system. Implementation of the segmentation algorithm shows a high degree of accuracy, in terms of recall and precision, in segmenting bone parts from left-hand X-rays.
Spacing distribution functions for 1D point island model with irreversible attachment
Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto
2011-03-01
We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_xy^n(x,y), which represents the probability density to have nucleation at position x within a gap of size y. Our proposed functional form for p_xy^n(x,y) describes excellently the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).
Ideal point error for model assessment in data-driven river flow forecasting
Directory of Open Access Journals (Sweden)
C. W. Dawson
2012-08-01
Full Text Available When analysing the performance of hydrological models in river forecasting, researchers use a number of diverse statistics. Although some statistics appear to be used more regularly in such analyses than others, there is a distinct lack of consistency in evaluation, making studies undertaken by different authors or performed at different locations difficult to compare in a meaningful manner. Moreover, even within individual reported case studies, substantial contradictions are found to occur between one measure of performance and another. In this paper we examine the ideal point error (IPE) metric – a recently introduced measure of model performance that integrates a number of recognised metrics in a logical way. Having a single, integrated measure of performance is appealing as it should permit more straightforward model inter-comparisons. However, this is reliant on a transferable standardisation of the individual metrics that are combined to form the IPE. This paper examines one potential option for standardisation: the use of naive model benchmarking.
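One schematic reading of the IPE idea, standardize each error metric against a naive benchmark forecast and take the distance to the ideal point, is sketched below. This is an illustration of the concept only, not the exact formula from the paper:

```python
import numpy as np

def ipe(obs, model_pred, naive_pred):
    """Schematic ideal point error: several error metrics, each
    standardized by a naive benchmark forecast, combined as the
    Euclidean (RMS) distance to the ideal point (all zeros).
    0 = perfect forecast; 1 = roughly no better than the naive model.
    """
    def metrics(pred):
        err = pred - obs
        rmse = np.sqrt(np.mean(err**2))
        mae = np.mean(np.abs(err))
        peak = np.abs(err[np.argmax(obs)])  # error at the observed peak
        return np.array([rmse, mae, peak])

    standardized = metrics(model_pred) / metrics(naive_pred)
    return float(np.sqrt(np.mean(standardized**2)))

# Toy river flow series: naive benchmark = persistence (previous value).
flow = np.array([3.0, 3.2, 4.0, 6.5, 9.0, 7.5, 5.0, 4.0, 3.5, 3.3])
obs = flow[1:]
naive = flow[:-1]
model = obs + np.array([0.1, -0.2, 0.3, -0.4, 0.5, -0.1, 0.2, -0.1, 0.1])
print(f"IPE = {ipe(obs, model, naive):.3f}")
```

Using a naive benchmark for the standardisation is precisely the transferability question the paper raises: the same model gets a different IPE if the benchmark changes.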
A Dynamic Model of the Environmental Kuznets Curve. Turning Point and Public Policy
Energy Technology Data Exchange (ETDEWEB)
Egli, H.; Steger, T.M. [CER-ETH Center of Economic Research at ETH Zurich, ZUE F 10, CH-8092 Zurich (Switzerland)
2007-01-15
We set up a simple dynamic macroeconomic model with (1) polluting consumption and a preference for a clean environment, (2) increasing returns in abatement giving rise to an EKC and (3) sustained growth resulting from a linear final-output technology. There are two sorts of market failures caused by external effects associated with consumption and environmental effort. The model is employed to investigate the determinants of the turning point and the cost effectiveness of different public policies aimed at a reduction of the environmental burden. Moreover, the model offers a potential explanation of an N-shaped pollution-income relation. It is shown that the model is compatible with most empirical regularities on economic growth and the environment.
Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang
2016-09-01
Reduced order models (ROMs) based on snapshots from high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of ROMs, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, while a large number of initial sampling points reduces the parsimony of the ROMs. A fuzzy-clustering-based adding-point strategy is proposed, in which fuzzy clustering acts as an indicator of the regions where the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated mean squared error prediction algorithm while showing the same level of prediction accuracy.
Mark Twain: inocente ou pecador? = Mark Twain: innocent or sinner?
Directory of Open Access Journals (Sweden)
Heloisa Helou Doca
2009-01-01
Full Text Available After carefully reading the Treaty of Paris in 1900, Mark Twain concluded that the goal of U.S. policy was clearly one of subjugation. He openly declared himself an anti-imperialist at that time, in spite of the numerous criticisms he received from political opponents who supported the United States status quo. After traveling to Europe and the East in 1867 as a correspondent for The Daily Alta California newspaper, Mark Twain published his travel report, The Innocents Abroad or The New Pilgrim's Progress, in 1869. Our study demonstrates that the author, in spite of using different guises in his reports, narrated histories, cultures and traditions, from both Europe and the East, with a viewpoint already imbued by his anti-imperialistic ideals. Twain made use of parody, satire, irony and humor within his texts in order to desecrate empires, monarchs and
Directory of Open Access Journals (Sweden)
Otani K
2018-03-01
Full Text Available Koichi Otani, Akihito Suzuki, Yoshihiko Matsumoto, Toshinori Shirata Department of Psychiatry, Yamagata University School of Medicine, Yamagata, Japan Objective: The cognitive model of depression posits two distinctive personality vulnerabilities termed sociotropy and autonomy, each of which is composed of a cluster of maladaptive self-schemas. It is postulated that negative core beliefs about self underlie maladaptive self-schemas as a whole, whereas those about others may be implicated in the autonomous self-schemas. Therefore, the present study examined the relations of sociotropy and autonomy with core beliefs about self and others. Methods: The sample of this study consisted of 321 healthy Japanese volunteers. Sociotropy and autonomy were evaluated by the corresponding subscales of the Sociotropy–Autonomy Scale. Core beliefs about self and others were assessed by the negative-self, positive-self, negative-other and positive-other subscales of the Brief Core Schema Scales. Results: In the forced multiple regression analysis, sociotropy scores were correlated with negative-self scores (β = 0.389, P < 0.001). Meanwhile, autonomy scores were correlated with positive-self scores (β = 0.199, P < 0.01) and negative-other scores (β = 0.191, P < 0.01). Conclusion: The present study suggests marked differences in core beliefs about self and others between sociotropy and autonomy, further contrasting the two personality vulnerabilities to depression. Keywords: sociotropy, autonomy, core belief, self, other, personality, cognitive vulnerability
Modeling a Single SEP Event from Multiple Vantage Points Using the iPATH Model
Hu, Junxiang; Li, Gang; Fu, Shuai; Zank, Gary; Ao, Xianzhi
2018-02-01
Using the recently extended 2D improved Particle Acceleration and Transport in the Heliosphere (iPATH) model, we model an example gradual solar energetic particle event as observed at multiple locations. Protons and ions that are energized via the diffusive shock acceleration mechanism are followed at a 2D coronal mass ejection-driven shock where the shock geometry varies across the shock front. The subsequent transport of energetic particles, including cross-field diffusion, is modeled by a Monte Carlo code that is based on a stochastic differential equation method. Time intensity profiles and particle spectra at multiple locations and different radial distances, separated in longitudes, are presented. The results shown here are relevant to the upcoming Parker Solar Probe mission.
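The Monte Carlo transport step based on a stochastic differential equation can be sketched with an Euler-Maruyama scheme. The advection speed and diffusion coefficients below are arbitrary illustrative values, not iPATH parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler-Maruyama transport of pseudo-particles: advection with the solar
# wind plus field-parallel and cross-field diffusion.  All coefficients
# are arbitrary illustrative values (arbitrary units), not iPATH inputs.
N = 5000
V_SW = 0.1                         # solar-wind advection speed
KAPPA_PAR, KAPPA_PERP = 1.0, 0.05  # parallel / perpendicular diffusion
DT, STEPS = 0.01, 1000

r = np.zeros(N)    # field-aligned (radial) coordinate
phi = np.zeros(N)  # cross-field (longitudinal) coordinate
for _ in range(STEPS):
    r += V_SW * DT + np.sqrt(2 * KAPPA_PAR * DT) * rng.normal(size=N)
    phi += np.sqrt(2 * KAPPA_PERP * DT) * rng.normal(size=N)

# Cross-field diffusion is what lets a single source reach observers at
# several longitudes; the spread grows as var = 2*kappa*t.
t = DT * STEPS
print(f"var(r)   = {r.var():.2f}  (theory {2 * KAPPA_PAR * t:.2f})")
print(f"var(phi) = {phi.var():.3f}  (theory {2 * KAPPA_PERP * t:.3f})")
```

The growth of the longitudinal variance is the toy analogue of the multi-vantage-point intensity profiles in the paper: observers separated in longitude see the same event with different onsets and amplitudes.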
Customer Order Decoupling Point Selection Model in Mass Customization Based on MAS
Institute of Scientific and Technical Information of China (English)
XU Xuanguo; LI Xiangyang
2006-01-01
Mass customization relates to the ability to provide individually designed products or services to customers with high process flexibility or integration. The literature on mass customization has focused on the mechanism of MC, but little on customer order decoupling point selection. The aim of this paper is to present a model for customer order decoupling point selection based on domain knowledge interactions between enterprises and customers in mass customization. Based on the analysis of other researchers' achievements, combined with the demand problems of customer and enterprise, a group decision model for customer order decoupling point selection is constructed based on quality function deployment and a multi-agent system. Treating the decision makers of independent functional departments as independent decision agents, a decision agent set is added as a third dimension to the house of quality, forming a cubic quality function deployment. The decision-making consists of two procedures: the first is to build a plane house of quality in each functional department to express its opinions; the other is to evaluate and gather the foregoing sub-decisions through a new plane quality function deployment. Thus, each department's decision-making can make full use of its domain knowledge through ontology, and the overall decision-making is kept simple by avoiding an excess of customer requirements.
The monodromy property for K3 surfaces allowing a triple-point-free model
DEFF Research Database (Denmark)
Jaspers, Annelies Kristien J
2017-01-01
The aim of this thesis is to study under which conditions K3 surfaces allowing a triple-point-free model satisfy the monodromy property. This property is a quantitative relation between the geometry of the degeneration of a Calabi-Yau variety X and the monodromy action on the cohomology of... X: a Calabi-Yau variety X satisfies the monodromy property if poles of the motivic zeta function Z_X,ω(T) induce monodromy eigenvalues on the cohomology of X. Let k be an algebraically closed field of characteristic 0, and set K = k((t)). In this thesis, we focus on K3 surfaces over K allowing a triple-point... is very precise, which allows us to use a combination of geometrical and combinatorial techniques to check the monodromy property in practice. The first main result is an explicit computation of the poles of Z_X,ω(T) for a K3 surface X allowing a triple-point-free model and a volume form ω on X. We show that...
International Nuclear Information System (INIS)
Morel, F.
1997-01-01
The use of radioactive isotopes as tracers in biology has developed thanks to the economical production of the required isotopes in accelerators and nuclear reactors, and to the multiple applications of tracers in the life sciences; the isotopes most commonly employed in biology are those of carbon, hydrogen, phosphorus and sulfur, because these elements are present in most organic molecules. Much of life-science knowledge appears to depend on the extensive use of nuclear tools and radioactive tracers; as an example, the use of ATP labelled with radioactive phosphorus to study its many reactions with proteins, nucleic acids, etc., is given.
Ceremony marking Einstein Year
2005-01-01
Sunday 13 November at 10:00 a.m. at Geneva's St. Peter's Cathedral. To mark Einstein Year and the importance of the intercultural dialogue of which it forms a part, a religious service will take place on Sunday 13 November at 10 a.m. in St. Peter's Cathedral, to which CERN members and colleagues are warmly welcomed. Pastor Henry Babel, senior minister at the Cathedral, will speak on the theme: 'God in Einstein's Universe'. Diether Blechschmidt will convey a message on behalf of the scientific community.
DEFF Research Database (Denmark)
Lavancier, Frédéric; Møller, Jesper
We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties...
Hydraulic modeling of clay ceramic water filters for point-of-use water treatment.
Schweitzer, Ryan W; Cunningham, Jeffrey A; Mihelcic, James R
2013-01-02
The acceptability of ceramic filters for point-of-use water treatment depends not only on the quality of the filtered water, but also on the quantity of water the filters can produce. This paper presents two mathematical models for the hydraulic performance of ceramic water filters under typical usage. A model is developed for two common filter geometries: paraboloid- and frustum-shaped. Both models are calibrated and evaluated by comparison to experimental data. The hydraulic models are able to predict the following parameters as functions of time: water level in the filter (h), instantaneous volumetric flow rate of filtrate (Q), and cumulative volume of water produced (V). The models' utility is demonstrated by applying them to estimate how the volume of water produced depends on factors such as the filter shape and the frequency of filling. Both models predict that the volume of water produced can be increased by about 45% if users refill the filter three times per day versus only once per day. Also, the models predict that filter geometry affects the volume of water produced: for two filters with equal volume, equal wall thickness, and equal hydraulic conductivity, a filter that is tall and thin will produce as much as 25% more water than one which is shallow and wide. We suggest that the models can be used as tools to help optimize filter performance.
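The falling-head behaviour described above can be sketched numerically. The following is a minimal illustration, not the authors' calibrated model: it assumes a Darcy flux q = K·h/b through a wall of thickness b, a frustum geometry, and purely illustrative parameter values.

```python
import math

def frustum_filter_simulation(
        K=1e-6,        # hydraulic conductivity of the ceramic (m/s), illustrative
        b=0.015,       # wall thickness (m)
        r_bottom=0.10, r_top=0.15, H=0.25,   # frustum geometry (m)
        h0=0.25, dt=10.0, t_end=4 * 3600):
    """Falling-head sketch: Darcy flux q = K*h/b through the wetted wall."""
    h, t, volume_out = h0, 0.0, 0.0
    levels = [h]
    while t < t_end and h > 1e-4:
        r_h = r_bottom + (r_top - r_bottom) * h / H       # radius at the water level
        slant = math.sqrt(h**2 + (r_h - r_bottom)**2)     # slant height of wetted wall
        A_wet = math.pi * (r_bottom + r_h) * slant + math.pi * r_bottom**2
        Q = K * h / b * A_wet                             # instantaneous filtrate flow (m^3/s)
        A_surf = math.pi * r_h**2                         # free-surface area
        h = max(h - Q / A_surf * dt, 0.0)                 # explicit Euler step on the level
        volume_out += Q * dt
        t += dt
        levels.append(h)
    return levels, volume_out
```

Refilling more often keeps h (and hence Q) high, which is the mechanism behind the reported ~45% gain for three fills per day.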
Using a dynamic point-source percolation model to simulate bubble growth
International Nuclear Information System (INIS)
Zimmerman, Jonathan A.; Zeigler, David A.; Cowgill, Donald F.
2004-01-01
Accurate modeling of the nucleation, growth and clustering of helium bubbles within metal tritide alloys is of high scientific and technological importance. Of interest is the ability to predict both the distribution of these bubbles and the manner in which they interact at a critical helium-to-metal atom concentration to produce an accelerated release of helium gas. One technique that has been used in the past to model these materials, and is revisited in this research, is percolation theory. Previous efforts have used classical percolation theory to qualitatively and quantitatively model the behavior of interstitial helium atoms in a metal tritide lattice; however, higher-fidelity models are needed to predict the distribution of helium bubbles and to include features that capture the underlying physical mechanisms present in these materials. In this work, we enhance classical percolation theory by developing the dynamic point-source percolation model. This model alters the traditionally binary character of site occupation probabilities by enabling them to vary depending on proximity to existing occupied sites, i.e. nucleated bubbles. The revised model produces characteristics for one- and two-dimensional systems that compare extremely well with measurements from three-dimensional physical samples. Future directions for continued development of the dynamic model are also outlined.
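The proximity-dependent occupation probability at the heart of the dynamic model can be illustrated with a toy lattice simulation. This is a hedged sketch, not the authors' code: the enhancement rule p = min(1, p0·(1 + γ·n_occ)) and all parameter values are illustrative assumptions.

```python
import random

def dynamic_percolation(n=30, p0=0.01, gamma=5.0, steps=20, seed=1):
    """2-D lattice in which a site's occupation probability grows with the
    number of already-occupied nearest neighbours (periodic boundaries)."""
    random.seed(seed)
    occ = [[False] * n for _ in range(n)]
    history = []                                   # occupied-site count per step
    for _ in range(steps):
        new = [row[:] for row in occ]
        for i in range(n):
            for j in range(n):
                if occ[i][j]:
                    continue
                nb = sum(occ[(i + di) % n][(j + dj) % n]
                         for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
                p = min(1.0, p0 * (1.0 + gamma * nb))   # proximity-enhanced probability
                if random.random() < p:
                    new[i][j] = True
        occ = new
        history.append(sum(row.count(True) for row in occ))
    return occ, history
```

With γ > 0, new sites preferentially attach to existing clusters, mimicking bubble growth around nucleation points rather than uniform random occupation.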
Minimal Marking: A Success Story
McNeilly, Anne
2014-01-01
The minimal-marking project conducted in Ryerson's School of Journalism throughout 2012 and early 2013 resulted in significantly higher grammar scores in two first-year classes of minimally marked university students when compared to two traditionally marked classes. The "minimal-marking" concept (Haswell, 1983), which requires…
Xie, Dexuan; Volkmer, Hans W.; Ying, Jinyong
2016-04-01
The nonlocal dielectric approach has led to new models and solvers for predicting the electrostatics of proteins (and other biomolecules), but how to validate and compare them remains a challenge. To promote such a study, in this paper two typical nonlocal dielectric models are revisited. Their analytical solutions are then found as simple series expressions for a dielectric sphere containing any number of point charges. As a special case, the analytical solution of the corresponding Poisson dielectric model is also derived in simple series form, which significantly improves the well-known Kirkwood double-series expansion. Furthermore, a convolution of one nonlocal dielectric solution with a commonly used nonlocal kernel function is obtained, along with the reaction parts of these local and nonlocal solutions. To turn these new series solutions into a valuable research tool, they are programmed as a free Fortran software package, which can read point-charge data directly from a Protein Data Bank file. Consequently, different validation tests can be done quickly on different proteins. Finally, a test example for a protein with 488 atomic charges is reported to demonstrate the differences between the local and nonlocal models as well as the importance of using the reaction parts to develop local and nonlocal dielectric solvers.
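For orientation, the classical Kirkwood series (which the paper's local Poisson solution improves upon) can be sketched for the special case of a single charge inside a dielectric sphere; its n = 0 term reduces to the Born reaction energy. The sketch below uses Gaussian units and is an illustration of the standard textbook series, not the paper's Fortran package.

```python
def born_energy(q, a, eps_in, eps_out):
    """Reaction (solvation) energy of a charge at the centre of a sphere of
    radius a, interior permittivity eps_in, exterior eps_out (Gaussian units)."""
    return q * q / (2.0 * a) * (1.0 / eps_out - 1.0 / eps_in)

def kirkwood_energy(q, a, s, eps_in, eps_out, n_terms=200):
    """Kirkwood-type series for one charge at distance s < a from the centre;
    the n = 0 term recovers the Born result."""
    total = 0.0
    for n in range(n_terms):
        coef = (n + 1) * (eps_in - eps_out) / (eps_in * ((n + 1) * eps_out + n * eps_in))
        total += coef * (s / a) ** (2 * n)
    return q * q / (2.0 * a) * total
```

Moving the charge off-centre (s > 0) makes the reaction energy more negative for eps_out > eps_in, as expected for a charge approaching a high-dielectric solvent.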
AUTOMATED VOXEL MODEL FROM POINT CLOUDS FOR STRUCTURAL ANALYSIS OF CULTURAL HERITAGE
Directory of Open Access Journals (Sweden)
G. Bitelli
2016-06-01
Full Text Available In the context of cultural heritage, an accurate and comprehensive digital survey of a historical building is today essential in order to measure its geometry in detail for documentation or restoration purposes, for supporting special studies regarding materials and constructive characteristics, and finally for structural analysis. Some proven geomatic techniques, such as photogrammetry and terrestrial laser scanning, are increasingly used to survey buildings of different complexity and dimensions; one typical product is a point cloud. We developed a semi-automatic procedure to convert point clouds, acquired by laser scanning or digital photogrammetry, into a filled volume model of the whole structure. The filled volume model, in a voxel format, can be useful for further analysis and also for the generation of a Finite Element Model (FEM) of the surveyed building. In this paper a new approach is presented with the aim of decreasing operator intervention in the workflow and obtaining a better description of the structure. In order to achieve this result a voxel model with variable resolution is produced. Different parameters are compared and the different steps of the procedure are tested and validated in the case study of the North tower of the San Felice sul Panaro Fortress, a monumental historical building located in San Felice sul Panaro (Modena, Italy) that was hit by an earthquake in 2012.
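A minimal sketch of the point-cloud-to-voxel step might look as follows. It is not the authors' procedure (which uses variable resolution), and the `fill_columns` helper is a hypothetical stand-in for their filled-volume generation.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map an (N, 3) point cloud to a boolean occupancy grid at fixed resolution."""
    pts = np.asarray(points, dtype=float)
    origin = pts.min(axis=0)                            # grid anchored at the cloud minimum
    idx = np.floor((pts - origin) / voxel_size).astype(int)
    dims = idx.max(axis=0) + 1
    grid = np.zeros(dims, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid, origin

def fill_columns(grid):
    """Hypothetical fill step: mark every voxel between the lowest and highest
    occupied voxel of each vertical column, a crude stand-in for a filled model."""
    filled = grid.copy()
    nx, ny, _ = grid.shape
    for i in range(nx):
        for j in range(ny):
            ks = np.nonzero(grid[i, j])[0]
            if ks.size:
                filled[i, j, ks.min():ks.max() + 1] = True
    return filled
```

A real pipeline would also need outlier removal and a more robust interior test, but the index arithmetic above is the core of any fixed-resolution voxelization.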
Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.
2018-04-01
We present the GeneRalized ANd Differentiable Halo Occupation Distribution (GRAND-HOD) routine that generalizes the standard 5-parameter halo occupation distribution (HOD) model with various halo-scale physics and assembly bias. We describe the methodology of 4 different generalizations: satellite distribution generalization, velocity bias, closest approach distance generalization, and assembly bias. We showcase the signatures of these generalizations in the 2-point correlation function (2PCF) and the squeezed 3-point correlation function (squeezed 3PCF). We identify generalized HOD prescriptions that are nearly degenerate in the projected 2PCF and demonstrate that these degeneracies are broken in the redshift-space anisotropic 2PCF and the squeezed 3PCF. We also discuss the possibility of identifying degeneracies in the anisotropic 2PCF and further demonstrate the extra constraining power of the squeezed 3PCF on galaxy-halo connection models. We find that within our current HOD framework, the anisotropic 2PCF can predict the squeezed 3PCF better than its statistical error. This implies that a discordant squeezed 3PCF measurement could falsify the particular HOD model space. Alternatively, it is possible that further generalizations of the HOD model would open opportunities for the squeezed 3PCF to provide novel parameter measurements. The GRAND-HOD Python package is publicly available at https://github.com/SandyYuan/GRAND-HOD.
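For context, the standard 5-parameter HOD that GRAND-HOD generalizes is commonly written in the Zheng et al. (2005) form; a sketch is below (the parameter values are illustrative, not taken from the paper).

```python
import math

def n_central(M, logMmin=13.3, sigma_logM=0.85):
    """Mean central occupation of a halo of mass M (standard 5-parameter HOD):
    a smoothed step in log-mass, <N_cen> = 0.5 * (1 + erf((logM - logMmin)/sigma))."""
    return 0.5 * (1.0 + math.erf((math.log10(M) - logMmin) / sigma_logM))

def n_satellite(M, logM0=13.2, logM1=14.3, alpha=1.0, **cen_kw):
    """Mean satellite occupation: a power law ((M - M0)/M1)^alpha above the
    cutoff M0, modulated by the central occupation."""
    M0, M1 = 10 ** logM0, 10 ** logM1
    if M <= M0:
        return 0.0
    return n_central(M, **cen_kw) * ((M - M0) / M1) ** alpha
```

The generalizations in the paper (velocity bias, assembly bias, etc.) add further parameters on top of these two mean-occupation functions.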
Once more on the equilibrium-point hypothesis (lambda model) for motor control.
Feldman, A G
1986-03-01
The equilibrium control hypothesis (lambda model) is considered with special reference to the following concepts: (a) the length-force invariant characteristic (IC) of the muscle together with central and reflex systems subserving its activity; (b) the tonic stretch reflex threshold (lambda) as an independent measure of central commands descending to alpha and gamma motoneurons; (c) the equilibrium point, defined in terms of lambda, IC and static load characteristics, which is associated with the notion that posture and movement are controlled by a single mechanism; and (d) the muscle activation area (a reformulation of the "size principle")--the area of kinematic and command variables in which a rank-ordered recruitment of motor units takes place. The model is used for the interpretation of various motor phenomena, particularly electromyographic patterns. The stretch reflex in the lambda model has no mechanism to follow-up a certain muscle length prescribed by central commands. Rather, its task is to bring the system to an equilibrium, load-dependent position. Another currently popular version defines the equilibrium point concept in terms of alpha motoneuron activity alone (the alpha model). Although the model imitates (as does the lambda model) spring-like properties of motor performance, it nevertheless is inconsistent with a substantial data base on intact motor control. An analysis of alpha models, including their treatment of motor performance in deafferented animals, reveals that they suffer from grave shortcomings. It is concluded that parameterization of the stretch reflex is a basis for intact motor control. Muscle deafferentation impairs this graceful mechanism though it does not remove the possibility of movement.
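The equilibrium-point idea can be illustrated with a toy one-dimensional sketch: a rectified, spring-like invariant characteristic whose threshold λ is the central command, balanced against a constant load. This is a caricature of the λ model, not Feldman's full formulation; the gain k and the overdamped dynamics are assumptions made purely for illustration.

```python
def muscle_force(x, lam, k=2.0):
    """Toy invariant characteristic: zero below the threshold lam (the central
    command), spring-like above it."""
    return k * max(0.0, x - lam)

def settle(lam, load, x0=0.0, k=2.0, dt=0.01, steps=5000):
    """Overdamped relaxation toward the equilibrium point where the muscle
    force balances the constant load: x* = lam + load / k."""
    x = x0
    for _ in range(steps):
        x += dt * (load - muscle_force(x, lam, k))
    return x
```

Shifting λ moves the equilibrium position while the final force still equals the load, which is the sense in which posture and movement are controlled by a single mechanism.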
The three-point correlation function of the cosmic microwave background in inflationary models
Gangui, Alejandro; Matarrese, Sabino; Mollerach, Silvia
1994-01-01
We analyze the temperature three-point correlation function and the skewness of the Cosmic Microwave Background (CMB), providing general relations in terms of multipole coefficients. We then focus on applications to large angular scale anisotropies, such as those measured by the COBE DMR, calculating the contribution to these quantities from primordial, inflation generated, scalar perturbations, via the Sachs-Wolfe effect. Using the techniques of stochastic inflation we are able to provide a universal expression for the ensemble averaged three-point function and for the corresponding skewness, which accounts for all primordial second-order effects. These general expressions would moreover apply to any situation where the bispectrum of the primordial gravitational potential has a hierarchical form. Our results are then specialized to a number of relevant models: power-law inflation driven by an exponential potential, chaotic inflation with a quartic and quadratic potential and a particular c...
Classical dynamics of the Abelian Higgs model from the critical point and beyond
Directory of Open Access Journals (Sweden)
G.C. Katsimiga
2015-09-01
Full Text Available We present two different families of solutions of the U(1)-Higgs model in a (1+1)-dimensional setting leading to a localization of the gauge field. First we consider a uniform background (the usual vacuum), which corresponds to the fully higgsed, superconducting phase. Then we study the case of a non-uniform background in the form of a domain wall, which could be relevant close to the critical point of the associated spontaneous symmetry breaking. For both cases we obtain approximate analytical nodeless and nodal solutions for the gauge field, resulting as bound states of an effective Pöschl-Teller potential created by the scalar field. The two scenarios differ only in the scale of the characteristic localization length. Numerical simulations confirm the validity of the obtained analytical solutions. Additionally we demonstrate how a kink may be used as a mediator driving the dynamics from the critical point and beyond.
Singular Spectrum Near a Singular Point of Friedrichs Model Operators of Absolute Type
International Nuclear Information System (INIS)
Iakovlev, Serguei I.
2006-01-01
In L²(R) we consider a family of self-adjoint operators of the Friedrichs model: A_m = |t|^m + V. Here |t|^m is the operator of multiplication by the corresponding function of the independent variable t ∈ R, and V (the perturbation) is a trace-class integral operator with a continuous Hermitian kernel ν(t,x) satisfying some smoothness condition. These absolute-type operators have one singular point of order m > 0. Conditions on the kernel ν(t,x) are found guaranteeing the absence of the point spectrum and of the singular continuous spectrum of such operators near the origin. These conditions are actually necessary and sufficient. They depend on the finiteness of the rank of the perturbation operator and on the order of the singularity. The sharpness of these conditions is confirmed by counterexamples.
Kinetic modeling of particle acceleration in a solar null point reconnection region
DEFF Research Database (Denmark)
Baumann, Gisela; Haugbølle, Troels; Nordlund, Åke
2013-01-01
The primary focus of this paper is on the particle acceleration mechanism in solar coronal 3D reconnection null-point regions. Starting from a potential field extrapolation of a SOHO magnetogram taken on 2002 November 16, we first performed MHD simulations with horizontal motions observed by SOHO...... particles and 3.5 billion grid cells of size 17.5 km --- these simulations offer a new opportunity to study particle acceleration in solar-like settings....... applied to the photospheric boundary of the computational box. After a build-up of electric current in the fan-plane of the null-point, a sub-section of the evolved MHD data was used as initial and boundary conditions for a kinetic particle-in-cell model of the plasma. We find that sub
Linear and quadratic models of point process systems: contributions of patterned input to output.
Lindsay, K A; Rosenberg, J R
2012-08-01
In the 1880s Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940s, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970s, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.
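The idea of recovering a linear kernel connecting an input spike train to an output rate can be sketched in discrete time bins. This is a toy illustration of a first-order (linear) point-process model, not the authors' cumulant-based estimator; the rates, kernel values and bin counts are illustrative assumptions.

```python
import random

def simulate_linear_point_process(rate_in=0.05, mu=0.02, kernel=(0.0, 0.08, 0.04),
                                  n_bins=200000, seed=7):
    """Discrete-bin sketch: output spike probability in each bin is a baseline mu
    plus kernel-weighted contributions from recent input spikes."""
    random.seed(seed)
    x = [1 if random.random() < rate_in else 0 for _ in range(n_bins)]
    y = []
    for t in range(n_bins):
        lam = mu
        for lag, h in enumerate(kernel):
            if t - lag >= 0:
                lam += h * x[t - lag]
        y.append(1 if random.random() < min(lam, 1.0) else 0)
    return x, y

def cross_correlation(x, y, max_lag):
    """Raw cross-correlation estimate of the linear kernel: the excess output
    probability following an input spike at each lag."""
    n = len(x)
    mean_y = sum(y) / n
    est = []
    for lag in range(max_lag):
        num = sum(y[t] for t in range(n) if t - lag >= 0 and x[t - lag])
        den = sum(x[:n - lag]) or 1
        est.append(num / den - mean_y)
    return est
```

For a sparse Poisson input this simple estimator approximately recovers the kernel; the paper's cumulant framework goes further and isolates the contribution of *patterns* of input spikes.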
Arrighi, Chiara; Campo, Lorenzo
2017-04-01
In recent years, concern about the economic losses and casualties caused by urban floods has grown hand in hand with the numerical capability to simulate such events. The large amount of computational power needed to address the problem (simulating a flood over complex terrain such as a medium-to-large city) is only one of the issues. Others include the general lack of exhaustive observations during the event (exact extent, dynamics, water levels reached in different parts of the involved area), needed for calibration and validation of the model; the need to consider sewer effects; and the availability of a correct and precise description of the geometry of the problem. In large cities topographic surveys are generally available as a set of measured points, but a complete hydraulic simulation needs a detailed description of the terrain over the whole computational domain. LIDAR surveys can achieve this goal, providing a comprehensive description of the terrain, although they often lack precision. In this work an optimal merging of these two sources of geometric information, measured elevation points and a LIDAR survey, is proposed, taking into account the error variance of both. The procedure is applied to a flood-prone city over an area of approximately 35 square km, starting from a LIDAR DTM with a spatial resolution of 1 m and 13000 measured points. The spatial pattern of the error (LIDAR vs. points) is analysed, and the merging method is tested with a series of jackknife procedures that consider different densities of the available points. A discussion of the results is provided.
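The variance-weighted merging of the two elevation sources can be sketched per grid cell as a minimum-variance combination of two unbiased estimates. This is an illustrative sketch of the principle only, not the authors' full spatial procedure.

```python
def merge_elevations(z_lidar, var_lidar, z_point, var_point):
    """Minimum-variance linear merge of two unbiased elevation estimates:
    each source is weighted by the inverse of its error variance."""
    w = var_point / (var_lidar + var_point)          # weight on the LiDAR value
    z = w * z_lidar + (1.0 - w) * z_point
    var = (var_lidar * var_point) / (var_lidar + var_point)
    return z, var
```

The merged variance is always smaller than either input variance, which is why blending the dense-but-noisy LIDAR DTM with sparse-but-accurate survey points pays off.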
Scission-point model of nuclear fission based on deformed-shell effects
International Nuclear Information System (INIS)
Wilkins, B.D.; Steinberg, E.P.; Chasman, R.R.
1976-01-01
A static model of nuclear fission is proposed based on the assumption of statistical equilibrium among collective degrees of freedom at the scission point. The relative probabilities of formation of complementary fission fragment pairs are determined from the relative potential energies of a system of two nearly touching, coaxial spheroids with quadrupole deformations. The total potential energy of the system at the scission point is calculated as the sum of liquid-drop and shell- and pairing-correction terms for each spheroid, and Coulomb and nuclear potential terms describing the interaction between them. The fissioning system at the scission point is characterized by three parameters: the distance between the tips of the spheroids (d), the intrinsic excitation energy of the fragments (τ_int), and a collective temperature (T_coll). No attempt is made to adjust these parameters to give optimum fits to experimental data; rather, a single choice of values for d, τ_int, and T_coll is used in the calculations for all fissioning systems. The general trends of the distributions of mass, nuclear charge, and kinetic energy in the fission of a wide range of nuclides from Po to Fm are well reproduced in the calculations. The major influence of the deformed-shell corrections for neutrons is indicated and provides a convenient framework for the interpretation of observed trends in the data and for the prediction of new results. The scission-point configurations derived from the model provide an interpretation of the "saw-tooth" neutron emission curve as well as previously unexplained observations on the variation of TKE for isotopes of U, Pu, Cm, and Cf; structure in the width of total kinetic energy release as a function of fragment mass ratio; and a difference in threshold energies for symmetric and asymmetric mass splits in the fission of Ra and Ac isotopes.
Calker, van K.J.; Berentsen, P.B.M.; Boer, de I.J.M.; Giesen, G.W.J.; Huirne, R.B.M.
2004-01-01
Farm level modelling can be used to determine how farm management adjustments and environmental policy affect different sustainability indicators. In this paper indicators were included in a dairy farm LP (linear programming)-model to analyse the effects of environmental policy and management
Directory of Open Access Journals (Sweden)
Ali Hassan Abuzaid
2013-12-01
Full Text Available If the interest is to calibrate two instruments, then a functional relationship model is more appropriate than regression models. Fitting a straight line when both variables are circular and subject to errors has not received much attention. In this paper, we consider the problem of detecting influential points in two functional relationship models for circular variables. The first is based on the simple circular regression (SC) model, while the second is derived from the complex linear regression (CL) model. The covariance matrices are derived, and the COVRATIO statistics are then formulated for both models. The cut-off points are obtained and the power of performance is assessed via simulation studies. The performance of the COVRATIO statistics depends on the concentration of the error, the sample size and the level of contamination. In the case of a linear relationship between two circular variables, the COVRATIO statistic of the SC model performs better than that of the CL model. In addition, a novel diagram, the so-called spoke plot, is utilized to detect possible influential points. For illustration purposes, the proposed procedures are applied to real data on wind directions measured by two different instruments. The COVRATIO statistics and the spoke plot were able to identify two observations as influential points.
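The COVRATIO diagnostic itself is general: it compares the determinant of the fitted-parameter covariance matrix with and without each observation. A sketch for ordinary (non-circular) simple linear regression, purely to illustrate the statistic rather than the paper's circular models, is:

```python
import numpy as np

def covratio(x, y):
    """COVRATIO_i = det(Cov(params | obs i removed)) / det(Cov(params, full fit))
    for the simple linear regression y = a + b*x."""
    def param_cov(xv, yv):
        X = np.column_stack([np.ones_like(xv), xv])
        beta = np.linalg.lstsq(X, yv, rcond=None)[0]
        resid = yv - X @ beta
        s2 = resid @ resid / (len(xv) - 2)           # residual variance estimate
        return s2 * np.linalg.inv(X.T @ X)           # covariance of (a, b)
    full = np.linalg.det(param_cov(x, y))
    out = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        out.append(np.linalg.det(param_cov(x[mask], y[mask])) / full)
    return np.array(out)
```

An influential observation shifts the COVRATIO far from 1: deleting a gross outlier collapses the residual variance, so its ratio drops well below 1.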
Simulation of agricultural non-point source pollution in Xichuan by using SWAT model
Xing, Linan; Zuo, Jiane; Liu, Fenglin; Zhang, Xiaohui; Cao, Qiguang
2018-02-01
This paper evaluated the applicability of using SWAT to assess agricultural non-point source pollution in the Xichuan area. To build the model, a DEM, soil type and land use maps, and climate monitoring data were collected as the basic database. Calibration and validation of the SWAT model were carried out using streamflow, suspended solids, total phosphorus and total nitrogen records from 2009 to 2011. Errors, the coefficient of determination and the Nash-Sutcliffe coefficient were considered to evaluate the applicability. The coefficients of determination were 0.96, 0.66, 0.55 and 0.66 for streamflow, SS, TN and TP, respectively, and the Nash-Sutcliffe coefficients were 0.93, 0.5, 0.52 and 0.63, respectively. The results all meet the requirements, suggesting that the SWAT model can simulate the study area.
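The two goodness-of-fit measures used above are standard and straightforward to compute; a minimal sketch (with illustrative data only) is:

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1 is a perfect fit; 0 means no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

def coefficient_of_determination(observed, simulated):
    """Squared Pearson correlation between observed and simulated series."""
    n = len(observed)
    mo = sum(observed) / n
    ms = sum(simulated) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(observed, simulated))
    vo = sum((o - mo) ** 2 for o in observed)
    vs = sum((s - ms) ** 2 for s in simulated)
    return cov * cov / (vo * vs)
```

Note that R² only measures linear association (a consistently biased simulation can still score 1), which is why NSE is reported alongside it.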
Caulkins, Jonathan P.; Feichtinger, Gustav; Grass, Dieter; Hartl, Richard F.; Kort, Peter M.; Novak, Andreas J.; Seidl, Andrea
2013-01-01
We present a novel model of corruption dynamics in the form of a nonlinear optimal dynamic control problem. It has a tipping point, but one whose origins and character are distinct from that in the classic Schelling (1978) model. The decision maker choosing a level of corruption is the chief or some other kind of authority figure who presides over a bureaucracy whose state of corruption is influenced by the authority figure’s actions, and whose state in turn influences the pay-off for the authority figure. The policy interpretation is somewhat more optimistic than in other tipping models, and there are some surprising implications, notably that reforming the bureaucracy may be of limited value if the bureaucracy takes its cues from a corrupt leader. PMID:23565027
Geometrical origin of tricritical points of various U(1) lattice models
International Nuclear Information System (INIS)
Janke, W.; Kleinert, H.
1989-01-01
The authors review the dual relationship between various compact U(1) lattice models and Abelian Higgs models, the latter being the disorder field theories of line-like topological excitations in the system. The authors point out that the predicted first-order transitions in the Abelian Higgs models (Coleman-Weinberg mechanism) are, in three dimensions, in contradiction with direct numerical investigations in the compact U(1) formulation since these yield continuous transitions in the major part of the phase diagram. In four dimensions, there are indications from Monte Carlo data for a similar situation. Concentrating on the strong-coupling expansion in terms of geometrical objects, surfaces or lines, with certain statistical weights, the authors present semi-quantitative arguments explaining the observed cross-over from first-order to continuous transitions by the balance between the lowest two weights (2:1 ratio) of these geometrical objects
Surrogate runner model for draft tube losses computation within a wide range of operating points
International Nuclear Information System (INIS)
Susan-Resiga, R; Ciocan, T; Muntean, S; De Colombel, T; Leroy, P
2014-01-01
We introduce a quasi two-dimensional (Q2D) methodology for assessing the swirling flow exiting the runner of hydraulic turbines at arbitrary operating points within a wide operating range. The Q2D model does not need actual runner computations, and as a result it represents a surrogate runner model for a priori assessment of the swirling flow ingested by the draft tube. The axial, radial and circumferential velocity components are computed on a conical section located immediately downstream of the runner blade trailing edge, and then used as inlet conditions for regular draft tube computations. The main advantage of our model is that it allows the determination of the draft tube losses within the intended turbine operating range in the early design stages of a new or refurbished runner, thus providing a robust and systematic methodology to meet the optimal requirements for the flow at the runner outlet.
Finite size scaling of the Higgs-Yukawa model near the Gaussian fixed point
Energy Technology Data Exchange (ETDEWEB)
Chu, David Y.J.; Lin, C.J. David [National Chiao-Tung Univ., Hsinchu, Taiwan (China); Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Knippschild, Bastian [HISKP, Bonn (Germany); Nagy, Attila [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Univ. Berlin (Germany)
2016-12-15
We study the scaling properties of Higgs-Yukawa models. Using the technique of Finite-Size Scaling, we are able to derive scaling functions that describe the observables of the model in the vicinity of a Gaussian fixed point. A feasibility study of our strategy is performed for the pure scalar theory in the weak-coupling regime. Choosing the on-shell renormalisation scheme gives us an advantage to fit the scaling functions against lattice data with only a small number of fit parameters. These formulae can be used to determine the universality of the observed phase transitions, and thus play an essential role in future investigations of Higgs-Yukawa models, in particular in the strong Yukawa coupling region.
SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties
Energy Technology Data Exchange (ETDEWEB)
Panebianco, Stefano; Lemaître, Jean-François; Sida, Jean-Luc [CEA Centre de Saclay, Gif-sur-Yvette (France); Dubray, Noël [CEA, DAM, DIF, Arpajon (France); Goriely, Stephane [Institut d'Astronomie et d'Astrophysique, Université Libre de Bruxelles, Brussels (Belgium)
2014-07-01
Despite the difficulty in describing the whole fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculations of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists in performing a static energy balance at scission, where the two fragments are supposed to be completely separated so that their macroscopic properties (mass and charge) can be considered as fixed. Given the knowledge of the system state density, averaged quantities such as mass and charge yields, mean kinetic and excitation energy can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state density. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring an overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between the SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implication in stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed. (author)
Lambert, Amaury; Stadler, Tanja
2013-12-01
Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
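The CPP construction described above ("horizontal" growth by i.i.d. node depths) can be sketched directly; the exponential depth law used by default is an illustrative choice, not a claim about any particular diversification model.

```python
import random

def simulate_cpp_depths(n_tips, depth_sampler=None, seed=0):
    """A CPP reconstructed tree on n_tips extant species is fully described by
    n_tips - 1 i.i.d. node depths: each new edge starts at its sampled depth
    and runs to the present."""
    random.seed(seed)
    if depth_sampler is None:
        depth_sampler = lambda: random.expovariate(1.0)   # illustrative depth law
    return [depth_sampler() for _ in range(n_tips - 1)]

def lineages_through_time(depths, t):
    """Number of lineages alive at time t before the present:
    1 (the spine) plus every edge whose depth exceeds t."""
    return 1 + sum(1 for h in depths if h > t)
```

The speed advantage the abstract mentions comes from exactly this structure: likelihoods and simulations factorize over the independent depths.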
Laiolo, Paola; Gabellani, Simone; Rudari, Roberto; Boni, Giorgio; Puca, Silvia
2013-04-01
Soil moisture plays a fundamental role in the partitioning of mass and energy fluxes between the land surface and the atmosphere, thereby influencing climate and weather, and it is important in determining the rainfall-runoff response of catchments; moreover, in hydrological modelling and flood forecasting, a correct definition of moisture conditions is a key factor for accurate predictions. Different sources of information for the estimation of the soil moisture state are currently available: satellite data, point measurements and model predictions. All are affected by intrinsic uncertainty. Among the satellite sensors that can be used for soil moisture estimation, three major groups can be distinguished: passive microwave sensors (e.g., SSM/I), active sensors (e.g., SAR, scatterometers) and optical sensors (e.g., spectroradiometers). The last two families, mainly because of their temporal and spatial resolution, seem the most suitable for hydrological applications. In this work, soil moisture point measurements from 10 sensors on the Italian territory are compared with the satellite product of the H-SAF project, SM-OBS-2, derived from the ASCAT scatterometer, and with ACHAB, an operational energy balance model that assimilates LST data derived from MSG and provides, daily, an evaporative fraction index related to soil moisture content for the whole Italian territory. Distributed comparisons of ACHAB and SM-OBS-2 over the whole Italian territory are also performed.
METHOD OF GREEN FUNCTIONS IN MATHEMATICAL MODELLING FOR TWO-POINT BOUNDARY-VALUE PROBLEMS
Directory of Open Access Journals (Sweden)
E. V. Dikareva
2015-01-01
Full Text Available Summary. In many applied problems of control, optimization, system theory, theoretical and structural mechanics, in problems involving structures of strings and rods, oscillation theory, the theory of elasticity and plasticity, and mechanical problems connected with fracture dynamics and shock waves, the main instrument of study is the theory of high-order ordinary differential equations. This methodology is also applied to mathematical models in graph theory with different partitionings based on differential equations. Such equations are used not only for the theoretical foundation of mathematical models but also for constructing numerical methods and computer algorithms. These models are studied by the Green function method. The paper first presents the necessary theoretical background on the Green function method for multi-point boundary-value problems. The main equation is discussed, and the notions of multi-point boundary conditions, boundary functionals, degenerate and non-degenerate problems, and the fundamental matrix of solutions are introduced. In the main part, the problem under study is formulated in terms of shocks and deformations in the boundary conditions, after which the main results are stated. Theorem 1 establishes conditions for the existence and uniqueness of solutions. Theorem 2 gives conditions for the strict positivity and comparability of a pair of solutions. Theorem 3 establishes the existence of, and estimates for, the least eigenvalue, together with spectral properties and the positivity of eigenfunctions. Theorem 4 proves the weighted positivity of the Green function. Possible applications to signal theory and transmutation operators are considered.
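The Green function method can be illustrated on the classical two-point case, where it reduces to a quadrature: for -u'' = f with u(0) = u(1) = 0 the Green function is G(x, s) = x(1 - s) for x ≤ s and s(1 - x) otherwise, and u(x) = ∫₀¹ G(x, s) f(s) ds. The sketch below uses this textbook case, not the multi-point problems of the paper.

```python
def green(x, s):
    # Green function of -u'' = f, u(0) = u(1) = 0
    return x * (1.0 - s) if x <= s else s * (1.0 - x)

def solve_via_green(f, x, n=10000):
    # Midpoint-rule quadrature of u(x) = integral_0^1 G(x, s) f(s) ds
    h = 1.0 / n
    return sum(green(x, (k + 0.5) * h) * f((k + 0.5) * h) for k in range(n)) * h

# For f = 1 the exact solution is u(x) = x(1 - x)/2, so u(0.5) = 0.125.
u_mid = solve_via_green(lambda s: 1.0, 0.5)
u_quarter = solve_via_green(lambda s: 1.0, 0.25)
```

The quadrature reproduces the exact solution to high accuracy because the integrand is piecewise linear in s for constant f.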
A Low-Cost Maximum Power Point Tracking System Based on Neural Network Inverse Model Controller
Directory of Open Access Journals (Sweden)
Carlos Robles Algarín
2018-01-01
Full Text Available This work presents the design, modeling, and implementation of a neural network inverse model controller for tracking the maximum power point of a photovoltaic (PV) module. A nonlinear autoregressive network with exogenous inputs (NARX) was implemented in a series-parallel architecture. The mathematical model of the PV module was developed, a buck converter was designed to operate in continuous conduction mode with a switching frequency of 20 kHz, and the dynamic neural controller was designed using the Neural Network Toolbox from Matlab/Simulink (MathWorks, Natick, MA, USA) and implemented on an open-hardware Arduino Mega board. To obtain the reference signals for the NARX and to characterize the behavior of the 65 W PV module, a system consisting of a 0.8 W PV cell, a temperature sensor, a voltage sensor and a static neural network was used. To evaluate performance, a comparison with the traditional perturb and observe (P&O) algorithm was carried out in terms of response time and oscillations around the operating point. Simulation results demonstrated the superiority of the neural controller over P&O. Implementation results showed that approximately the same power is obtained with both controllers, but the P&O controller presents oscillations between 7 W and 10 W, in contrast to the inverse controller, whose oscillations ranged between 1 W and 2 W.
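The P&O baseline used for comparison above is a simple hill climb on the P-V curve: keep perturbing in the same direction while power increases, reverse when it decreases. A minimal sketch, with a hypothetical concave P-V curve standing in for the 65 W module:

```python
def perturb_and_observe(power_at, v0, dv=0.1, steps=200):
    """Minimal perturb and observe (P&O) hill climb on a P-V curve.

    power_at(v) returns module power at operating voltage v. The
    perturbation direction is kept while power increases and reversed
    when it decreases, so the operating point oscillates around the MPP.
    """
    v, p, direction = v0, power_at(v0), 1.0
    for _ in range(steps):
        v_new = v + direction * dv
        p_new = power_at(v_new)
        if p_new < p:
            direction = -direction  # overshot the peak: reverse
        v, p = v_new, p_new
    return v, p

# Illustrative concave P-V curve with its maximum (65 W) at 17.5 V.
pv_curve = lambda v: 65.0 - 0.5 * (v - 17.5) ** 2
v_mpp, p_mpp = perturb_and_observe(pv_curve, v0=12.0)
```

The persistent oscillation around the peak visible in this toy run is exactly the behavior the inverse neural controller is reported to reduce.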
Varotsos, G. K.; Nistazakis, H. E.; Petkovic, M. I.; Djordjevic, G. T.; Tombras, G. S.
2017-11-01
Over the last years, terrestrial free-space optical (FSO) communication systems have attracted increasing scientific and commercial interest in response to the growing demand for ultra-high-bandwidth, cost-effective and secure wireless data transmission. However, due to signal propagation through the atmosphere, the performance of such links depends strongly on atmospheric conditions such as weather phenomena and the turbulence effect. Additionally, their operation is significantly affected by the pointing-errors effect, which is caused by misalignment of the optical beam between the transmitter and the receiver. In order to address this significant performance degradation, several statistical models have been proposed, while particular attention has also been given to diversity methods. Here, the turbulence-induced fading of the received optical signal irradiance is studied through the Málaga (M) distribution, an accurate model suitable for weak to strong turbulence conditions that unifies most of the well-known, previously proposed models. Thus, taking into account the atmospheric turbulence conditions along with the pointing-errors effect with nonzero boresight and the modulation technique used, we derive mathematical expressions for the estimation of the average bit error rate of SIMO FSO links. Finally, numerical results are given to verify the derived expressions, and Monte Carlo simulations are also provided to further validate the accuracy of the proposed analysis and the obtained mathematical expressions.
The scalar-scalar-tensor inflationary three-point function in the axion monodromy model
Chowdhury, Debika; Sreenath, V.; Sriramkumar, L.
2016-11-01
The axion monodromy model involves a canonical scalar field that is governed by a linear potential with superimposed modulations. The modulations in the potential are responsible for a resonant behavior which gives rise to persisting oscillations in the scalar and, to a smaller extent, in the tensor power spectra. Interestingly, such spectra have been shown to provide a better fit to the cosmological data than the more conventional, nearly scale-invariant, primordial power spectra. The scalar bi-spectrum in the model too exhibits continued modulations, and the resonance is known to boost the amplitude of the scalar non-Gaussianity parameter to rather large values. An analytical expression for the scalar bi-spectrum had been arrived at earlier; it has, in fact, been used to compare the model with the cosmic microwave background anisotropies at the level of three-point functions involving scalars. In this work, with future applications in mind, we arrive at a similar analytical template for the scalar-scalar-tensor cross-correlation. We also analytically establish the consistency relation (in the squeezed limit) for this three-point function. We conclude with a summary of the main results obtained.
The scalar-scalar-tensor inflationary three-point function in the axion monodromy model
International Nuclear Information System (INIS)
Chowdhury, Debika; Sriramkumar, L.; Sreenath, V.
2016-01-01
The axion monodromy model involves a canonical scalar field that is governed by a linear potential with superimposed modulations. The modulations in the potential are responsible for a resonant behavior which gives rise to persisting oscillations in the scalar and, to a smaller extent, in the tensor power spectra. Interestingly, such spectra have been shown to provide a better fit to the cosmological data than the more conventional, nearly scale-invariant, primordial power spectra. The scalar bi-spectrum in the model too exhibits continued modulations, and the resonance is known to boost the amplitude of the scalar non-Gaussianity parameter to rather large values. An analytical expression for the scalar bi-spectrum had been arrived at earlier; it has, in fact, been used to compare the model with the cosmic microwave background anisotropies at the level of three-point functions involving scalars. In this work, with future applications in mind, we arrive at a similar analytical template for the scalar-scalar-tensor cross-correlation. We also analytically establish the consistency relation (in the squeezed limit) for this three-point function. We conclude with a summary of the main results obtained.
Detection of bursts in extracellular spike trains using hidden semi-Markov point process models.
Tokdar, Surya; Xi, Peiyi; Kelly, Ryan C; Kass, Robert E
2010-08-01
Neurons in vitro and in vivo have epochs of bursting or "up state" activity during which firing rates are dramatically elevated. Various methods of detecting bursts in extracellular spike trains have appeared in the literature, the most widely used apparently being Poisson Surprise (PS). A natural description of the phenomenon assumes (1) there are two hidden states, which we label "burst" and "non-burst," (2) the neuron evolves stochastically, switching at random between these two states, and (3) within each state the spike train follows a time-homogeneous point process. If in (2) the transitions from non-burst to burst and burst to non-burst states are memoryless, this becomes a hidden Markov model (HMM). For HMMs, the state transitions follow exponential distributions, and are highly irregular. Because observed bursting may in some cases be fairly regular-exhibiting inter-burst intervals with small variation-we relaxed this assumption. When more general probability distributions are used to describe the state transitions the two-state point process model becomes a hidden semi-Markov model (HSMM). We developed an efficient Bayesian computational scheme to fit HSMMs to spike train data. Numerical simulations indicate the method can perform well, sometimes yielding very different results than those based on PS.
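The Poisson Surprise baseline mentioned above scores a candidate burst by how unlikely its spike count is under a homogeneous Poisson process: S = -ln P(N ≥ n) with N Poisson-distributed with mean (baseline rate × burst duration). A minimal sketch; the truncation of the tail sum at 100 terms is a numerical convenience, not part of the original method:

```python
import math

def poisson_tail(n, mu):
    # P(N >= n) for N ~ Poisson(mu); tail sum truncated after 100 terms,
    # which is ample for the small means typical of candidate bursts.
    term = math.exp(-mu) * mu ** n / math.factorial(n)
    total = 0.0
    for k in range(n, n + 100):
        total += term
        term *= mu / (k + 1)
    return total

def poisson_surprise(n_spikes, duration, baseline_rate):
    """Poisson Surprise S = -ln P(N >= n): a large S flags a probable burst."""
    tail = poisson_tail(n_spikes, baseline_rate * duration)
    return -math.log(max(tail, 1e-300))

# 20 spikes in 0.1 s against a 10 Hz baseline (mean count 1) is a strong burst;
# 1 spike in the same window is entirely unremarkable.
s_burst = poisson_surprise(20, 0.1, 10.0)
s_quiet = poisson_surprise(1, 0.1, 10.0)
```

The HSMM approach replaces this single-interval score with explicit dwell-time distributions for the hidden burst and non-burst states, which is why the two methods can disagree.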
Indoor Navigation from Point Clouds: 3d Modelling and Obstacle Detection
Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H.; Mahdjoubi, L.
2016-06-01
In recent years, indoor modelling and navigation has become a research topic of interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair-bound people, building crisis management (such as fire protection), augmented reality for gaming, tourism or the training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts, while the real state of indoor environments, including the position and geometry of openings (windows and doors) and the presence of obstacles, is commonly ignored. In this work, a real indoor path-planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles during route planning and using these to readapt the routes according to the real state of the indoor environment depicted by the laser scanner.
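Once detected obstacles are rasterized into an occupancy grid, route replanning reduces to graph search. A minimal stand-in sketch (breadth-first search on a toy grid; the paper's actual navigable data structure is richer):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (1 = obstacle, 0 = free).

    A stand-in for route planning on a navigable map derived from an indoor
    point cloud: occupied cells correspond to detected obstacles. Returns
    the list of cells from start to goal, or None if no route exists.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# A doorway at row 1 is the only way through the "wall" in column 2.
grid = [[0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0]]
path = shortest_path(grid, (0, 0), (0, 4))
```

Marking an extra cell as occupied (a newly detected obstacle) and re-running the search is precisely the "readapting the routes" step described above.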
INDOOR NAVIGATION FROM POINT CLOUDS: 3D MODELLING AND OBSTACLE DETECTION
Directory of Open Access Journals (Sweden)
L. Díaz-Vilariño
2016-06-01
Full Text Available In recent years, indoor modelling and navigation has become a research topic of interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair-bound people, building crisis management (such as fire protection), augmented reality for gaming, tourism or the training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts, while the real state of indoor environments, including the position and geometry of openings (windows and doors) and the presence of obstacles, is commonly ignored. In this work, a real indoor path-planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles during route planning and using these to readapt the routes according to the real state of the indoor environment depicted by the laser scanner.
Fixed point and anomaly mediation in partial N = 2 supersymmetric standard models
Yin, Wen
2018-01-01
Motivated by the simple toroidal compactification of extra-dimensional SUSY theories, we investigate a partial N = 2 supersymmetric (SUSY) extension of the standard model which has an N = 2 SUSY sector and an N = 1 SUSY sector. We point out that below the scale of the partial breaking of N = 2 to N = 1, the ratio of Yukawa to gauge couplings embedded in the original N = 2 gauge interaction in the N = 2 sector becomes greater due to a fixed point. Since at the partial breaking scale the sfermion masses in the N = 2 sector are suppressed due to the N = 2 non-renormalization theorem, the anomaly mediation effect becomes important. If dominant, the anomaly-induced masses for the sfermions in the N = 2 sector are almost UV-insensitive due to the fixed point. Interestingly, these masses are always positive, i.e. there is no tachyonic slepton problem. From an example model, we show interesting phenomena differing from the ordinary MSSM. In particular, the dark matter particle can be a sbino, i.e. the scalar component of the N = 2 vector multiplet of U(1)Y. To obtain the correct dark matter abundance, the mass of the sbino, as well as those of the MSSM sparticles in the N = 2 sector, which have a typical mass pattern of anomaly mediation, is required to be small. Therefore, this scenario can be tested at the LHC and may be further confirmed by the measurement of the N = 2 Yukawa couplings in future colliders. This model can explain dark matter, the muon g-2 anomaly, and gauge coupling unification, and it relaxes some ordinary problems within the MSSM. It is also compatible with thermal leptogenesis.
New models to compute solar global hourly irradiation from point cloudiness
International Nuclear Information System (INIS)
Badescu, Viorel; Dumitrescu, Alexandru
2013-01-01
Highlights: ► Kasten–Czeplak cloudy sky model is tested under the climate of South-Eastern Europe. ► Very simple cloudy sky models based on atmospheric transmission factors. ► Transmission factors are nonlinear functions of the cosine of the zenith angle. ► New models’ performance is good for low and intermediate cloudy skies. ► Models show good performance when applied in stations other than the origin station. - Abstract: The Kasten–Czeplak (KC) model [16] is tested against data measured in five meteorological stations covering the latitudes and longitudes of Romania (South-Eastern Europe). Generally, the KC cloudy sky model underestimates the measured values. Its performance is (marginally) good enough for point cloudiness C = 0–1. The performance is good for skies with few clouds (C < 0.3), good enough for skies with a medium amount of clouds (C = 0.3–0.7) and poor for very cloudy and overcast skies. New, very simple empirical cloudy sky models are proposed. They bring two novelties with respect to the KC model. First, new basic clear sky models are used, which evaluate the direct and diffuse radiation separately. Second, some of the new models assume that the atmospheric transmission factor is a nonlinear function of the cosine of the zenith angle Z. The performance of the new models is generally better than that of the KC model for all cloudiness classes. One class of models (called S4) has been further tested. The sub-model S4TOT has been obtained by fitting the generic model S4 to all available data, for all stations. Generally, S4TOT has good accuracy in all stations for low and intermediate cloudy skies (C < 0.7). The accuracy of S4TOT is good or good enough at intermediate zenith angles (Z = 30–70°) but worse for smaller and larger zenith angles (Z = 0–30° and Z = 70–85°, respectively). Several S4 sub-models were tested in stations different from the origin station. Almost all sub-models have good or good enough performance for skies
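The KC model tested above is commonly quoted in the form G = G_clear (1 − 0.75 C^3.4), with C the point cloudiness in [0, 1]; the sketch below assumes that form and an arbitrary clear-sky irradiance, so it is a generic illustration rather than the paper's fitted S4 models:

```python
def kasten_czeplak(g_clear, c):
    """Global horizontal irradiance under point cloudiness c in [0, 1].

    Kasten-Czeplak form G = G_clear * (1 - 0.75 * c**3.4); the clear-sky
    irradiance g_clear (W/m^2) must come from a separate clear-sky model,
    which is exactly the first "novelty" the new models revisit.
    """
    return g_clear * (1.0 - 0.75 * c ** 3.4)

g_clear_sky = kasten_czeplak(800.0, 0.0)   # no clouds: clear-sky value
g_overcast = kasten_czeplak(800.0, 1.0)    # full cover: 25% of clear sky
```

The steep exponent means irradiance is barely reduced until the sky is more than about half covered, consistent with the model performing best at low C.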
Primordial blackholes and gravitational waves for an inflection-point model of inflation
Energy Technology Data Exchange (ETDEWEB)
Choudhury, Sayantan [Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 B.T. Road, Kolkata 700 108 (India); Mazumdar, Anupam [Consortium for Fundamental Physics, Physics Department, Lancaster University, LA1 4YB (United Kingdom)
2014-06-02
In this article we provide a new closed relationship between the cosmic abundance of primordial gravitational waves and that of primordial blackholes originating from initial inflationary perturbations, for inflection-point models of inflation in which inflation occurs below the Planck scale. From the current Planck constraints on the tensor-to-scalar ratio and the running of the spectral tilt, and from the abundance of dark matter in the universe, we deduce a strict bound on the current abundance of primordial blackholes: 9.99712×10⁻³ < Ω_PBH h² < 9.99736×10⁻³.
A business process model as a starting point for tight cooperation among organizations
Directory of Open Access Journals (Sweden)
O. Mysliveček
2006-01-01
Full Text Available Outsourcing and other kinds of tight cooperation among organizations are more and more necessary for success in all markets (markets for high-technology products are particularly affected). It is therefore important for companies to be able to set up all kinds of cooperation effectively. A business process model (BPM) is a suitable starting point for such future cooperation. In this paper the process of setting up such cooperation is outlined, as well as why it is important for business success.
Zero-point energies in the two-center shell model. II
International Nuclear Information System (INIS)
Reinhard, P.-G.
1978-01-01
The zero-point energy (ZPE) contained in the potential-energy surface of a two-center shell model (TCSM) is evaluated. Extending previous work, the author uses here the full TCSM with l·s force, smoothing and asymmetry. The results show a critical dependence on the height of the potential barrier between the centers. The ZPE turns out to be non-negligible along the fission path for ²³⁶U, and even more so for lighter systems. It is negligible for surface quadrupole motion and just on the fringe of being negligible for motion along the asymmetry coordinate. (Auth.)
Zero-point energies in the two-center shell model
International Nuclear Information System (INIS)
Reinhard, P.G.
1975-01-01
The zero-point energies (ZPE) contained in the potential-energy surfaces (PES) of a two-center shell model are evaluated. For the c.m. motion of the system as a whole, the kinetic ZPE was found to be negligible, whereas it varies appreciably for the rotational and oscillation modes (about 5–9 MeV). For the latter two modes the ZPE also depends sensitively on the changing pairing structure, which can induce strong local fluctuations, particularly in light nuclei. The potential ZPE is very small for heavy nuclei, but might just become important in light nuclei. (Auth.)
Dynamical simulation of a linear sigma model near the critical point
Energy Technology Data Exchange (ETDEWEB)
Wesp, Christian; Meistrenko, Alex; Greiner, Carsten [Institut fuer Theoretische Physik, Goethe-Universitaet Frankfurt, Max-von-Laue-Strasse 1, D-60438 Frankfurt (Germany); Hees, Hendrik van [Frankfurt Institute for Advanced Studies, Ruth-Moufang-Strasse 1, D-60438 Frankfurt (Germany)
2014-07-01
The aim of this study is the search for signatures of the chiral phase transition. To investigate the impact of fluctuations, e.g. of the baryon number, on the transition or on a critical point, the linear sigma model is treated in a dynamical 3+1D numerical simulation. Chiral fields are approximated as classical fields, while quarks are described as quasi-particles in a Vlasov equation. Additional dynamics is introduced by quark-quark and quark-sigma-field interactions. For a consistent description of field-particle interactions, a new Monte-Carlo-Langevin-like formalism has been developed and is discussed.
A note on a boundary sine-Gordon model at the free-Fermion point
Murgan, Rajan
2018-02-01
We investigate the free-Fermion point of a boundary sine-Gordon model with nondiagonal boundary interactions for the ground state, using auxiliary functions obtained from T-Q equations of a corresponding inhomogeneous open spin-1/2 XXZ chain with nondiagonal boundary terms. In particular, we obtain the Casimir energy. Our result for the Casimir energy is shown to agree with the result from the TBA approach. The analytical result for the effective central charge in the ultraviolet (UV) limit is also verified from plots of the effective central charge for intermediate values of the volume.
Colour computer-generated holography for point clouds utilizing the Phong illumination model.
Symeonidou, Athanasia; Blinder, David; Schelkens, Peter
2018-04-16
A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm provides realistic-looking objects without any noteworthy increase in the computational cost.
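The Phong model integrated above combines ambient, diffuse and specular terms per point. A minimal per-point sketch; the coefficients and the scalar (single-channel) intensity are illustrative simplifications of the paper's colour pipeline:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(normal, to_light, to_viewer,
                    ka=0.1, kd=0.7, ks=0.2, shininess=16):
    """Phong illumination at a point-cloud point.

    normal, to_light and to_viewer are direction vectors at the point;
    ka, kd, ks weight the ambient, diffuse and specular contributions.
    """
    n, l, v = normalize(normal), normalize(to_light), normalize(to_viewer)
    diffuse = max(dot(n, l), 0.0)
    # Reflection of the light direction about the surface normal.
    r = tuple(2.0 * dot(n, l) * ni - li for ni, li in zip(n, l))
    specular = max(dot(r, v), 0.0) ** shininess if diffuse > 0.0 else 0.0
    return ka + kd * diffuse + ks * specular

# Surface facing both light and viewer: full diffuse + specular highlight.
i_facing = phong_intensity((0, 0, 1), (0, 0, 1), (0, 0, 1))
```

Evaluating this per point before hologram synthesis is what lets surface properties shape the reconstructed light field without changing the underlying CGH cost.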
Theory of fluctuations and parametric noise in a point nuclear reactor model
International Nuclear Information System (INIS)
Rodriguez, M.A.; San Miguel, M.; Sancho, J.M.
1984-01-01
We present a joint description of internal fluctuations and parametric noise in a point nuclear reactor model in which delayed neutrons and a detector are considered. We obtain kinetic equations for the first moments and define effective kinetic parameters which take into account the effect of parametric Gaussian white noise. We comment on the validity of Langevin approximations for this problem. We propose a general method to deal with weak but otherwise arbitrary non-white parametric noise. Exact kinetic equations are derived for Gaussian non-white noise. (author)
A sliding point contact model for the finite element structures code EURDYN
International Nuclear Information System (INIS)
Smith, B.L.
1986-01-01
A method is developed by which sliding point contact between two moving deformable structures may be incorporated within a lumped-mass finite element formulation based on displacements. The method relies on a simple mechanical interpretation of the contact constraint in terms of equivalent nodal forces and avoids the use of nodal connectivity via a master-slave arrangement or a pseudo contact element. The methodology has been implemented in the (2D axisymmetric) version of the EURDYN finite element program coupled to the hydro code SEURBNUK. Sample calculations are presented illustrating the use of the model in various contact situations. Effects due to separation and impact of structures are also included. (author)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2017-07-01
We study the effect of hindered aggregation on the island formation process in one-dimensional (1D) and two-dimensional (2D) point-island models for epitaxial growth with arbitrary critical nucleus size i. In our model, the attachment of monomers to preexisting islands is hindered by an additional attachment barrier, characterized by a length la. For la = 0 the islands behave as perfect sinks, while for la → ∞ they behave as reflecting boundaries. For intermediate values of la, the system exhibits a crossover between two different kinds of processes, diffusion-limited aggregation and attachment-limited aggregation. We calculate the growth exponents of the density of islands and monomers for the low-coverage and aggregation regimes. The capture-zone (CZ) distributions are also calculated for different values of i and la. In order to obtain a good spatial description of the nucleation process, we propose a fragmentation model based on an approximate description of nucleation inside the gaps for 1D and the CZs for 2D. In both cases, nucleation is described by using two different physically rooted probabilities, which are related to the microscopic parameters of the model (i and la). We test our analytical model with extensive numerical simulations and previously established results. The proposed model describes the statistical behavior of the system excellently for arbitrary values of la and i = 1, 2, and 3.
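The ingredients of a 1D point-island model with i = 1 and an attachment barrier can be sketched as a toy Monte Carlo: deposited monomers random-walk, two meeting monomers nucleate a point island, and a walker on an island site attaches only with probability 1/(1 + la). The lattice size, walk-termination rule and deposit count below are arbitrary toy choices, not the simulation protocol of the paper:

```python
import random

def point_island_1d(L, n_deposit, la, rng):
    """Toy 1D point-island growth with critical nucleus size i = 1.

    Monomers are deposited on a ring of L sites and random-walk until they
    either meet a resting monomer (nucleating a point island) or step onto
    an island, which they join only with probability 1/(1 + la); la = 0
    makes islands perfect sinks. Returns the number of islands formed.
    """
    monomers, islands = set(), set()
    p_attach = 1.0 / (1.0 + la)
    for _ in range(n_deposit):
        x = rng.randrange(L)
        while True:
            if x in islands and rng.random() < p_attach:
                break                    # attach to an existing island
            if x in monomers:
                monomers.discard(x)
                islands.add(x)           # two monomers nucleate an island
                break
            if rng.random() < 0.01:      # walk interrupted: monomer rests
                monomers.add(x)
                break
            x = (x + rng.choice((-1, 1))) % L
    return len(islands)

rng = random.Random(7)
n_islands = point_island_1d(L=200, n_deposit=200, la=0.0, rng=rng)
```

Sweeping la in such a toy reproduces qualitatively the crossover from diffusion-limited (la = 0) to attachment-limited (large la) aggregation studied above.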
Point, surface and volumetric heat sources in the thermal modelling of selective laser melting
Yang, Yabin; Ayas, Can
2017-10-01
Selective laser melting (SLM) is a powder-based additive manufacturing technique suitable for producing high-precision metal parts. However, distortions and residual stresses arise within products during SLM because of the high temperature gradients created by the laser heating. Residual stresses limit the load resistance of the product and may even lead to fracture during the build process. It is therefore of paramount importance to predict the level of part distortion and residual stress as a function of the SLM process parameters, which requires reliable thermal modelling of the SLM process. Consequently, a key question arises: how should the laser source be described appropriately? Reasonable simplification of the laser representation is crucial for the computational efficiency of the thermal model of the SLM process. In this paper, a semi-analytical thermal modelling approach is first described. Subsequently, the laser heating is modelled using point, surface and volumetric sources, in order to compare the influence of different laser source geometries on the thermal history predicted by the model. The present work provides guidelines on the appropriate representation of the laser source in thermal modelling of the SLM process.
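The simplest of the three source representations, the moving point source, admits the classical Rosenthal quasi-steady solution, sketched below. The material parameters are illustrative steel-like values and the formula is the textbook point-source result, not necessarily the paper's semi-analytical model:

```python
import math

def rosenthal_temperature(xi, y, z, power, k, alpha, speed, t0=293.0):
    """Quasi-steady temperature field of a moving point heat source (Rosenthal).

    xi is the coordinate along the scan direction in the frame moving with
    the laser, k the thermal conductivity (W/m K), alpha the thermal
    diffusivity (m^2/s) and speed the scan speed (m/s). This is the common
    point-source idealization of the laser in SLM thermal models.
    """
    r = math.sqrt(xi * xi + y * y + z * z)
    if r == 0.0:
        return float("inf")  # the point-source solution is singular at the source
    return t0 + power / (2.0 * math.pi * k * r) \
              * math.exp(-speed * (r + xi) / (2.0 * alpha))

# Illustrative parameters: 200 W absorbed power, 1 m/s scan, steel-like k, alpha.
t_near = rosenthal_temperature(-1e-4, 0.0, 0.0, 200.0, 30.0, 8e-6, 1.0)
t_far = rosenthal_temperature(-1e-3, 0.0, 0.0, 200.0, 30.0, 8e-6, 1.0)
```

The 1/r singularity at the source is exactly the drawback that motivates comparing the point source with surface and volumetric alternatives.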
Spatial dispersion modeling of 90Sr by point cumulative semivariogram at Keban Dam Lake, Turkey
International Nuclear Information System (INIS)
Kuelahci, Fatih; Sen, Zekai
2007-01-01
Spatial analysis of the ⁹⁰Sr artificial radionuclide resulting from global fallout and the Chernobyl nuclear accident has been carried out using the point cumulative semivariogram (PCSV) technique, based on measurements at 40 surface water stations in Keban Dam Lake during March, April and May 2006. This technique is a convenient tool for obtaining the regional variability features around each sampling point, which also yields the structural effects in the vicinity of that point. It presents the regional effect of all other sites within the study area on the site concerned. To examine the variation of ⁹⁰Sr, five models are constructed. Additionally, the technique provides a measure of cumulative similarity of the regional variable, ⁹⁰Sr, around any measurement site, and hence it is possible to draw regional similarity maps at any desired distance around each station. In this paper, such similarity maps are drawn for a set of distances. ⁹⁰Sr activities at lake locations approximately 4.5 km from the stations show the maximum similarity.
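The PCSV around a reference site is built by sorting the other sites by distance and accumulating the half squared differences of the measured values. A minimal sketch with made-up coordinates and values standing in for station measurements:

```python
import math

def point_cumulative_semivariogram(site, others):
    """Point cumulative semivariogram (PCSV) around a reference site.

    site = (x, y, value); others = list of (x, y, value). Returns a list of
    (distance, cumulative half squared difference) pairs sorted by distance,
    describing the regional variability of the field around the site.
    """
    x0, y0, z0 = site
    pairs = sorted(
        (math.hypot(x - x0, y - y0), 0.5 * (z - z0) ** 2)
        for x, y, z in others
    )
    pcsv, total = [], 0.0
    for d, gamma in pairs:
        total += gamma
        pcsv.append((d, total))
    return pcsv

# Hypothetical reference station and three neighbours (x, y, activity).
curve = point_cumulative_semivariogram(
    (0.0, 0.0, 1.0),
    [(1.0, 0.0, 1.2), (0.0, 2.0, 0.5), (3.0, 0.0, 2.0)],
)
```

Reading the curve at a fixed distance for every station is what allows the regional similarity maps described above to be drawn.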
Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds
Sun, Shaohui
Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. It combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though a huge volume of work has been done, many problems still remain unsolved. Automation is one of the key focus areas in this research. In this work, a fast, completely automated method to create 3D watertight building models from airborne LiDAR (Light Detection and Ranging) point clouds is presented. The developed method analyzes the scene content and produces multi-layer rooftops, with complex rigorous boundaries and vertical walls that connect rooftops to the ground. The graph-cuts algorithm is used to separate vegetative elements from the rest of the scene content, based on local analysis of the properties of the local implicit surface patch. The ground terrain and building rooftop footprints are then extracted using the developed strategy, a two-step hierarchical Euclidean clustering. The method presented here adopts a "divide-and-conquer" scheme. Once the building footprints are segmented from the terrain and vegetative areas, the whole scene is divided into individual independent processing units which represent potential points on the rooftop. For each individual building region, significant features on the rooftop are further detected using a specifically designed region-growing algorithm with surface smoothness constraints. The principal orientation of each building rooftop feature is calculated using a minimum-bounding-box fitting technique, and is used to guide the refinement of shapes and boundaries of the rooftop parts. Boundaries for all of these features are refined for the purpose of producing a strict description. Once the description of the rooftops is achieved, polygonal mesh models are generated by creating surface patches with outlines defined by detected
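The Euclidean clustering step used to isolate individual building footprints can be sketched as a greedy connected-components search with a distance tolerance. The 2D toy points below stand in for projected footprint points; the actual pipeline is a two-step hierarchical variant of this idea:

```python
import math
from collections import deque

def euclidean_clusters(points, tol):
    """Greedy Euclidean clustering: points closer than tol are connected.

    A simplified stand-in for the hierarchical clustering used to separate
    the terrain from individual building footprints in a point cloud.
    """
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, queue = [seed], deque([seed])
        while queue:
            i = queue.popleft()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= tol]
            for j in near:
                unvisited.discard(j)
                cluster.append(j)
                queue.append(j)
        clusters.append(cluster)
    return clusters

# Two well-separated roof patches.
pts = [(0.0, 0.0), (0.4, 0.1), (0.2, 0.3), (10.0, 10.0), (10.3, 9.8)]
clusters = euclidean_clusters(pts, tol=1.0)
```

Each resulting cluster then becomes one of the independent processing units described above, handled separately in the divide-and-conquer scheme.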
Point process models for localization and interdependence of punctate cellular structures.
Li, Ying; Majarian, Timothy D; Naik, Armaghan W; Johnson, Gregory R; Murphy, Robert F
2016-07-01
Accurate representations of cellular organization for multiple eukaryotic cell types are required for creating predictive models of dynamic cellular function. To this end, we have previously developed the CellOrganizer platform, an open source system for generative modeling of cellular components from microscopy images. CellOrganizer models capture the inherent heterogeneity in the spatial distribution, size, and quantity of different components among a cell population. Furthermore, CellOrganizer can generate quantitatively realistic synthetic images that reflect the underlying cell population. A current focus of the project is to model the complex, interdependent nature of organelle localization. We built upon previous work on developing multiple non-parametric models of organelles or structures that show punctate patterns. The previous models described the relationships between the subcellular localization of puncta and the positions of cell and nuclear membranes and microtubules. We extend these models to consider the relationship to the endoplasmic reticulum (ER), and to consider the relationship between the positions of different puncta of the same type. Our results do not suggest that the punctate patterns we examined are dependent on ER position or inter- and intra-class proximity. With these results, we built classifiers to update previous assignments of proteins to one of 11 patterns in three distinct cell lines. Our generative models demonstrate the ability to construct statistically accurate representations of puncta localization from simple cellular markers in distinct cell types, capturing the complex phenomena of cellular structure interaction with little human input. This protocol represents a novel approach to vesicular protein annotation, a field that is often neglected in high-throughput microscopy. These results suggest that spatial point process models provide useful insight with respect to the spatial dependence between cellular structures.
Modeling non-point source pollutants in the vadose zone: Back to the basics
Corwin, Dennis L.; Letey, John, Jr.; Carrillo, Marcia L. K.
More than ever before in the history of scientific investigation, modeling is viewed as a fundamental component of the scientific method because of the relatively recent development of the computer. No longer must the scientific investigator be confined to artificially isolated studies of individual processes that can lead to oversimplified and sometimes erroneous conceptions of larger phenomena. Computer models now enable scientists to attack problems related to open systems, such as climatic change and the assessment of environmental impacts, where the whole of the interactive processes is greater than the sum of the isolated components. Environmental assessment involves the determination of the change in some constituent over time. This change can be measured in real time or predicted with a model. The advantage of prediction, like preventative medicine, is that it can be used to alter the occurrence of potentially detrimental conditions before they are manifest. The much greater efficiency of preventative, rather than remedial, efforts strongly justifies the need for an ability to accurately model environmental contaminants such as non-point source (NPS) pollutants. However, the environmental modeling advances that have accompanied computer technological development are a mixed blessing. Where once we had a plethora of discordant data without a holistic theory, the pendulum has now swung so far that we suffer from a growing stockpile of models, a significant number of which have never been confirmed, or even subjected to attempts at confirmation. Modeling has become an end in itself rather than a means because of limited research funding, the high cost of field studies, limitations in time and patience, difficulty in cooperative research, and pressure to publish papers as quickly as possible. Modeling and experimentation should be ongoing processes that reciprocally enhance one another, with sound, comprehensive experiments serving as the building blocks of models and models
AUTOMATIC EXTRACTION OF ROAD MARKINGS FROM MOBILE LASER SCANNING DATA
Directory of Open Access Journals (Sweden)
H. Ma
2017-09-01
Full Text Available Road markings, critical features in high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, play an important role in providing guidance and information to moving vehicles. Mobile laser scanning (MLS) systems are an effective way to obtain 3D information on the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth: points with small elevation differences relative to their neighbors are considered ground points. The ground points are then partitioned into a set of profiles according to the trajectory data. An intensity histogram of the points in each profile is generated to find intensity jumps above a threshold that varies inversely with laser distance. The separated points are used as seeds for intensity-based region growing, which recovers complete road markings. Finally, a point cloud template-matching step refines the road marking candidates by removing noise clusters with low correlation coefficients. In experiments on an MLS point set covering about 2 kilometres of a city center, our method provides a promising solution for road marking extraction from MLS data.
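The intensity-based seeded region growing described above can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the thresholds, the naive O(n²) neighbour search, and the synthetic five-point "stripe" are all assumptions for demonstration.

```python
import numpy as np

def region_grow(points, intensity, seed_thresh, grow_thresh, radius):
    """Mark high-intensity seed points, then repeatedly absorb moderately
    bright points lying within `radius` of an already-marked point
    (naive O(n^2) neighbour search, fine for a toy example)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    marked = intensity >= seed_thresh
    while True:
        reachable = (d[:, marked] < radius).any(axis=1)
        new = reachable & (intensity >= grow_thresh) & ~marked
        if not new.any():
            return marked
        marked |= new

# toy ground points projected to 2-D: a marking stripe plus one distant point
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [0.3, 0.0], [5.0, 0.0]])
inten = np.array([200.0, 150.0, 150.0, 150.0, 150.0])
mask = region_grow(pts, inten, seed_thresh=180.0, grow_thresh=120.0, radius=0.15)
# the stripe is recovered from a single bright seed; the distant point is rejected
```

The growth threshold being lower than the seed threshold is what lets one confident seed pull in the rest of a worn, lower-intensity marking.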
Point-Mass Model for Nano-Patterning Using Dip-Pen Nanolithography (DPN)
Directory of Open Access Journals (Sweden)
Seok-Won Kang
2011-04-01
Full Text Available Micro-cantilevers are frequently used as scanning probes and sensors in micro-electromechanical systems (MEMS). Micro-cantilever based sensors usually operate by detecting changes in the cantilever vibration modes (e.g., bending or torsional vibration frequency) or in surface stresses when a target analyte is adsorbed on the surface. The catalyst for chemical reactions (i.e., for a specific analyte) can be deposited on micro-cantilevers using the Dip-Pen Nanolithography (DPN) technique. In this study, we simulate the vibration modes in nano-patterning processes using a Point-Mass Model (or Lumped Parameter Model). The simulation results are used to assess the stability of the writing and reading modes for a particular driving frequency during the DPN process. In addition, we analyze the sensitivity of the tip-sample interaction forces in fluid (ink solution) by utilizing the Derjaguin-Muller-Toporov (DMT) contact theory.
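The point-mass (lumped-parameter) idea reduces the cantilever to a driven, damped spring-mass system whose steady-state amplitude follows the standard resonance formula. The sketch below uses hypothetical parameter values (a 50 kHz resonance, Q = 100); it is not the authors' model, only an illustration of the lumped approach.

```python
import numpy as np

def amplitude(omega, omega0, q, f0_over_m=1.0):
    """Steady-state amplitude of a driven, damped point-mass oscillator:
    A(w) = (F0/m) / sqrt((w0^2 - w^2)^2 + (w*w0/Q)^2)."""
    return f0_over_m / np.sqrt((omega0**2 - omega**2)**2 + (omega * omega0 / q)**2)

omega0 = 2.0 * np.pi * 50e3        # hypothetical 50 kHz cantilever resonance
q = 100.0                          # hypothetical quality factor in ink solution
w = np.linspace(0.5, 1.5, 2001) * omega0
a = amplitude(w, omega0, q)
peak_ratio = w[np.argmax(a)] / omega0   # response peaks essentially at omega0
```

Tip-sample forces (e.g., from DMT contact theory) would enter as an additional, deflection-dependent term that shifts this resonance, which is what makes the driving-frequency choice matter for write/read stability.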
Photovoltaic System Modeling with Fuzzy Logic Based Maximum Power Point Tracking Algorithm
Directory of Open Access Journals (Sweden)
Hasan Mahamudul
2013-01-01
Full Text Available This paper presents a novel modeling technique for a PV module with a fuzzy logic based MPPT algorithm and a boost converter in the Simulink environment. The prime contributions of this work are the simplification of the PV modeling technique and the implementation of a fuzzy based MPPT system to track maximum power efficiently. The main points highlighted in this paper are the precise control of the duty cycle under various atmospheric conditions, the illustration of PV characteristic curves, and an operational analysis of the converter. The proposed system has been applied to three different PV modules: SOLKAR 36 W, BP MSX 60 W, and KC85T 87 W. Finally, the resulting data have been compared with theoretical predictions and manufacturer-specified values to validate the system.
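A fuzzy MPPT controller scales the duty-cycle perturbation by membership functions on the power-voltage slope, so steps shrink near the maximum power point. The sketch below is a deliberately minimal stand-in for the paper's rule base: the single triangular membership, the quadratic P-V curve, and the sign convention linking duty cycle to PV voltage are all assumptions for illustration.

```python
def fuzzy_mppt_step(d_power, d_voltage, duty, step_max=0.01):
    """One fuzzy-flavoured MPPT update: the duty-cycle step is scaled by a
    triangular membership on |dP/dV| (small slope near the MPP -> small step);
    the sign follows hill-climbing. Assumes raising the duty cycle lowers the
    PV operating voltage."""
    if d_voltage == 0.0:
        return duty
    slope = d_power / d_voltage
    mu = min(abs(slope) / 10.0, 1.0)   # membership saturates for steep slopes
    return duty - step_max * mu if slope > 0 else duty + step_max * mu

# toy PV stage: V = 36*(1 - duty), and P(V) quadratic with its MPP at 18 V
def volts(duty):
    return 36.0 * (1.0 - duty)

def power(v):
    return max(0.0, 60.0 - (v - 18.0) ** 2 / 4.0)

duty = 0.30
v_prev, p_prev = volts(duty), power(volts(duty))
duty = 0.31
for _ in range(400):
    v, p = volts(duty), power(volts(duty))
    duty = fuzzy_mppt_step(p - p_prev, v - v_prev, duty)
    v_prev, p_prev = v, p
# duty settles near 0.5, i.e. the 18 V maximum power point
```

Because the step size decays with the slope, the controller avoids the steady-state oscillation of a fixed-step perturb-and-observe scheme.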
Investigation and modeling of the anomalous yield point phenomenon in pure tantalum
Energy Technology Data Exchange (ETDEWEB)
Colas, D. [Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR 5209 CNRS, Université de Bourgogne, 9 avenue Alain Savary, BP 17870, 21078 Dijon Cedex (France); CEA Valduc, 21120 Is-sur-Tille (France); Mines ParisTech, Centre des Matériaux, CNRS, UMR 7633, BP 87, 91003 Evry Cedex (France); Finot, E. [Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR 5209 CNRS, Université de Bourgogne, 9 avenue Alain Savary, BP 17870, 21078 Dijon Cedex (France); Flouriot, S. [CEA Valduc, 21120 Is-sur-Tille (France); Forest, S. [Mines ParisTech, Centre des Matériaux, CNRS, UMR 7633, BP 87, 91003 Evry Cedex (France); Mazière, M., E-mail: matthieu.maziere@mines-paristech.fr [Mines ParisTech, Centre des Matériaux, CNRS, UMR 7633, BP 87, 91003 Evry Cedex (France); Paris, T. [CEA Valduc, 21120 Is-sur-Tille (France)
2014-10-06
The monotonic and cyclic behavior of commercially pure tantalum has been investigated at room temperature, in order to capture and understand the occurrence of the anomalous yield point phenomenon. Interrupted tests have been performed, with strain reversals (tensile or compressive loading) after an aging period. The stress drop is attributed to the interactions between dislocations and solute atoms (oxygen), and its macroscopic occurrence is not systematically observed. InfraRed Thermography (IRT) measurements, supported by Scanning Electron Microscopy (SEM) pictures of the polished gauge length of a specimen during an interrupted tensile test, reveal the nucleation and propagation of a strain localization band. The KEMC (Kubin–Estrin–McCormick) phenomenological model accounting for strain aging has been identified for several loadings and strain rates at room temperature. Simulations on the full specimen using the KEMC model do not show strain localization, because of the competition between viscosity and strain localization. However, a slight misalignment of the sample can promote strain localization.
DEFF Research Database (Denmark)
Lashab, Abderezak; Sera, Dezso; Guerrero, Josep M.
2018-01-01
The main objective of this work is to provide an overview and evaluation of discrete model predictive control-based maximum power point tracking (MPPT) for PV systems. A large number of MPC based MPPT methods have recently been introduced in the literature with very promising performance; however, an in-depth investigation and comparison of these methods have not been carried out yet. Therefore, this paper sets out to provide an in-depth analysis and evaluation of MPC based MPPT methods applied to various common power converter topologies. The performance of MPC based MPPT is directly linked with the converter topology, and it is also affected by the accurate determination of the converter parameters; sensitivity to converter parameter variations is therefore also investigated. The static and dynamic performance of the trackers are assessed according to the EN 50530 standard, using detailed simulation models...
Bohr model description of the critical point for the first order shape phase transition
Budaca, R.; Buganu, P.; Budaca, A. I.
2018-01-01
The critical point of the shape phase transition between spherical and axially deformed nuclei is described by a collective Bohr Hamiltonian with a sextic potential having simultaneous spherical and deformed minima of the same depth. The particular choice of the potential, as well as the scaled and decoupled nature of the total Hamiltonian, leads to a model with a single free parameter connected to the height of the barrier which separates the two minima. The solutions are found through diagonalization in a basis of Bessel functions. The basis is optimized for each value of the free parameter by means of a boundary deformation which assures the convergence of the solutions for a fixed basis dimension. Analysis of the spectral properties of the model as a function of the barrier height revealed instances with shape-coexistence features, which are considered for detailed numerical applications.
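A concrete sextic potential with spherical and deformed minima of equal depth (an assumed parametrization for illustration, not necessarily the paper's exact form) is v(β) = β²(β² − b²)², whose single parameter b controls the barrier at β = b/√3 with height 4b⁶/27:

```python
import numpy as np

def sextic(beta, b):
    """Sextic potential with equal-depth minima at beta = 0 (spherical) and
    beta = b (deformed); the single parameter b sets the barrier height."""
    return beta**2 * (beta**2 - b**2)**2

b = 1.5
beta = np.linspace(0.0, 2.5, 100001)
pot = sextic(beta, b)
barrier = pot[beta < b].max()          # barrier between the two minima
# analytic maximum sits at beta = b/sqrt(3) with height 4*b**6/27
```

Tuning the one parameter b thus scans the spectrum from near-degenerate coexisting minima (high barrier) toward a flat critical-point potential (low barrier), mirroring the single-parameter structure described in the abstract.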
Mutual information as a two-point correlation function in stochastic lattice models
International Nuclear Information System (INIS)
Müller, Ulrich; Hinrichsen, Haye
2013-01-01
In statistical physics entropy is usually introduced as a global quantity which expresses the amount of information that would be needed to specify the microscopic configuration of a system. However, for lattice models with infinitely many possible configurations per lattice site it is also meaningful to introduce entropy as a local observable that describes the information content of a single lattice site. Likewise, the mutual information between two sites can be interpreted as a two-point correlation function which quantifies how much information a lattice site has about the state of another one and vice versa. Studying a particular growth model we demonstrate that the mutual information exhibits scaling properties that are consistent with the established phenomenological scaling picture. (paper)
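The two-site mutual information described above can be computed directly from the joint probability distribution of a pair of lattice sites. A minimal sketch with two hypothetical two-state distributions:

```python
import numpy as np

def mutual_information(joint):
    """I(A;B) = sum_ab p(a,b) * log[ p(a,b) / (p(a) p(b)) ] in nats,
    from the joint distribution of two lattice sites."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal of site A
    pb = joint.sum(axis=0, keepdims=True)   # marginal of site B
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (pa @ pb)[mask])).sum())

indep = np.outer([0.3, 0.7], [0.6, 0.4])    # independent sites: I = 0
corr = np.array([[0.5, 0.0], [0.0, 0.5]])   # perfectly correlated: I = log 2
```

Independent sites give I = 0, while perfectly correlated two-state sites saturate at I = log 2, so I interpolates between the two limits exactly as an ordinary correlation function would.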
EVALUATION MODEL FOR PAVEMENT SURFACE DISTRESS ON 3D POINT CLOUDS FROM MOBILE MAPPING SYSTEM
Directory of Open Access Journals (Sweden)
K. Aoki
2012-07-01
Full Text Available This paper proposes a methodology for evaluating pavement surface distress for road pavement maintenance planning using 3D point clouds from a Mobile Mapping System (MMS). Maintenance planning of road pavement requires scheduled rehabilitation activities for damaged pavement sections to maintain a high level of service. The importance of such performance-based infrastructure asset management grounded in actual inspection data is globally recognized. For the inspection of road pavement surfaces, semi-automatic measurement systems utilizing inspection vehicles to measure surface deterioration indexes such as cracking, rutting and IRI have already been introduced and are capable of continuously archiving pavement performance data. However, scheduled inspection using automatic measurement vehicles is costly, depending on the instruments' specifications and the inspection interval. Implementation of road maintenance work, especially for local governments, is therefore difficult in terms of cost-effectiveness. Against this background, this research proposes methodologies for a simplified evaluation of the pavement surface and an assessment of damaged pavement sections using 3D point cloud data collected to build urban 3D models. The simplified evaluation results of the road surface provide useful information for road administrators to identify pavement sections requiring a detailed examination or an immediate repair. In particular, the regularity of the 3D point cloud sequence was evaluated using Chow-test and F-test models, extracting the sections where a structural change in the coordinate values was evident. Finally, the validity of the methodology was investigated through a case study using actual inspection data from local roads.
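The Chow test mentioned above detects a structural change by comparing the residual sum of squares of a pooled regression against regressions fitted separately on the two sides of a candidate break. A minimal sketch on synthetic "coordinate" data (the break location, noise level, and simple linear model are assumptions for illustration):

```python
import numpy as np

def chow_test(x, y, split):
    """Chow F-statistic for a structural break at index `split` in the simple
    linear regression y = a + b*x (k = 2 parameters per regression)."""
    def ssr(xs, ys):
        X = np.column_stack([np.ones_like(xs), xs])
        beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
        resid = ys - X @ beta
        return float(resid @ resid)
    n, k = len(x), 2
    s_pooled = ssr(x, y)
    s_split = ssr(x[:split], y[:split]) + ssr(x[split:], y[split:])
    return ((s_pooled - s_split) / k) / (s_split / (n - 2 * k))

rng = np.random.default_rng(1)
x = np.arange(40.0)
y = np.where(x < 20, 0.1 * x, 0.1 * x + 2.0) + 0.01 * rng.standard_normal(40)
f_stat = chow_test(x, y, 20)   # a large value flags a structural change
```

Comparing the statistic against an F(k, n − 2k) critical value decides whether the section should be flagged for detailed examination.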
International Nuclear Information System (INIS)
Kong De-Qing; Wang Song-Gen; Zhang Hong-Bo; Wang Jin-Qing; Wang Min
2014-01-01
A new calibration model for a radio telescope that includes pointing error is presented, which considers nonlinear errors in the azimuth axis. For a large radio telescope, in particular one with a turntable, it is difficult to correct pointing errors using a traditional linear calibration model, because errors produced by the wheel-on-rail or center bearing structures are generally nonlinear. A Fourier expansion is made for the oblique error and for the parameters describing the inclination direction along the azimuth axis, based on the linear calibration model, and a new pointing calibration model is derived. The new pointing model is applied to the 40 m radio telescope administered by Yunnan Observatories, which uses a turntable. The results show that this model can significantly reduce the residual systematic errors due to nonlinearity in the azimuth axis compared with the linear model.
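The idea of augmenting a linear pointing model with Fourier terms in azimuth can be sketched with an ordinary least-squares fit. The harmonic order, the synthetic error curve, and the parametrization below are illustrative assumptions, not the paper's exact calibration model:

```python
import numpy as np

def fit_pointing(az, err, n_harmonics=3):
    """Least-squares fit of azimuth pointing error to a constant plus Fourier
    terms in azimuth: err(az) ~ c0 + sum_k [a_k cos(k*az) + b_k sin(k*az)]."""
    cols = [np.ones_like(az)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * az), np.sin(k * az)]
    design = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(design, err, rcond=None)
    return coef, design @ coef

az = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
true_err = 5.0 + 3.0 * np.cos(az) + 1.0 * np.sin(2.0 * az)  # arcsec, synthetic
coef, model = fit_pointing(az, true_err)
rms = float(np.sqrt(np.mean((true_err - model) ** 2)))      # ~0: basis captures the error
```

A purely linear model cannot absorb the sin(2·az) term, which is exactly the kind of periodic, rail-induced residual the Fourier terms are meant to remove.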
Numerical simulation of a lattice polymer model at its integrable point
International Nuclear Information System (INIS)
Bedini, A; Owczarek, A L; Prellberg, T
2013-01-01
We revisit an integrable lattice model of polymer collapse using numerical simulations. This model was first studied by Blöte and Nienhuis (1989 J. Phys. A: Math. Gen. 22 1415) and it describes polymers with some attraction, providing thus a model for the polymer collapse transition. At a particular set of Boltzmann weights the model is integrable and the exponents ν = 12/23 ≈ 0.522 and γ = 53/46 ≈ 1.152 have been computed via identification of the scaling dimensions x_t = 1/12 and x_h = −5/48. We directly investigate the polymer scaling exponents via Monte Carlo simulations using the pruned-enriched Rosenbluth method algorithm. By simulating this polymer model for walks up to length 4096 we find ν = 0.576(6) and γ = 1.045(5), which are clearly different from the predicted values. Our estimate for the exponent ν is compatible with the known θ-point value of 4/7 and in agreement with very recent numerical evaluation by Foster and Pinettes (2012 J. Phys. A: Math. Theor. 45 505003). (paper)
Liquid-liquid critical point in a simple analytical model of water
Urbic, Tomaz
2016-10-01
A statistical model for a simple three-dimensional Mercedes-Benz model of water was used to study phase diagrams. This model describes, at a simple level, the thermal and volumetric properties of waterlike molecules. A molecule is represented as a soft sphere with four directions in which hydrogen bonds can be formed. Two neighboring waters can interact through a van der Waals interaction or an orientation-dependent hydrogen-bonding interaction. For pure water, we explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility and found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations. The model also exhibits two critical points, one for the liquid-gas transition and one for the transition between low-density and high-density fluid. Coexistence curves and a Widom line for the maximum and minimum in the thermal expansion coefficient divide the phase space of the model into three parts: a gas region, a high-density liquid, and a low-density liquid.
3DVEM Software Modules for Efficient Management of Point Clouds and Photorealistic 3d Models
Fabado, S.; Seguí, A. E.; Cabrelles, M.; Navarro, S.; García-De-San-Miguel, D.; Lerma, J. L.
2013-07-01
Cultural heritage managers in general, and information users in particular, are not usually accustomed to dealing with high-technology hardware and software. On the contrary, providers of metric surveys are most of the time applying the latest developments in real-life conservation and restoration projects. This paper addresses the software issue of handling and managing either 3D point clouds or (photorealistic) 3D models to bridge the gap between information users and information providers regarding the management of information that users and providers share as a tool for decision-making, analysis, visualization and management. There are not many viewers specifically designed to easily handle, manage and create animations of architectural and/or archaeological 3D objects, monuments and sites, among others. The 3DVEM - 3D Viewer, Editor & Meter software will be introduced to the scientific community, as well as 3DVEM - Live and 3DVEM - Register. The advantages of managing projects with both sets of data, 3D point clouds and photorealistic 3D models, will be introduced. Different visualizations of real documentation projects in the fields of architecture, archaeology and industry will be presented. Emphasis will be placed on highlighting the features of new user-friendly software to manage virtual projects. Furthermore, the ease with which the user can create controlled interactive animations (both walk-through and fly-through), either on-the-fly or as a traditional movie file, will be demonstrated through 3DVEM - Live.
Is zero-point energy physical? A toy model for Casimir-like effect
International Nuclear Information System (INIS)
Nikolić, Hrvoje
2017-01-01
Zero-point energy is generally known to be unphysical. Casimir effect, however, is often presented as a counterexample, giving rise to a conceptual confusion. To resolve the confusion we study foundational aspects of Casimir effect at a qualitative level, but also at a quantitative level within a simple toy model with only 3 degrees of freedom. In particular, we point out that Casimir vacuum is not a state without photons, and not a ground state for a Hamiltonian that can describe Casimir force. Instead, Casimir vacuum can be related to the photon vacuum by a non-trivial Bogoliubov transformation, and it is a ground state only for an effective Hamiltonian describing Casimir plates at a fixed distance. At the fundamental microscopic level, Casimir force is best viewed as a manifestation of van der Waals forces. - Highlights: • A toy model for Casimir-like effect with only 3 degrees of freedom is constructed. • Casimir vacuum can be related to the photon vacuum by a non-trivial Bogoliubov transformation. • Casimir vacuum is a ground state only for an effective Hamiltonian describing Casimir plates at a fixed distance. • At the fundamental microscopic level, Casimir force is best viewed as a manifestation of van der Waals forces.
International Nuclear Information System (INIS)
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
2017-01-01
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
Directory of Open Access Journals (Sweden)
J. Yan
2016-06-01
Full Text Available This paper presents a global solution to building roof topological reconstruction from LiDAR point clouds. Starting with segmented roof planes from building LiDAR points, a BSP (binary space partitioning) algorithm is used to partition the bounding box of the building into volumetric cells, whose geometric features and topology are simultaneously determined. To resolve the inside/outside labelling problem of the cells, a global energy function considering surface visibility and spatial regularization between adjacent cells is constructed and minimized via graph cuts. As a result, the cells are labelled as either inside or outside, and the planar surfaces between inside and outside cells form the reconstructed building model. Two LiDAR data sets, of Yangjiang (China) and Wuhan University (China), are used in the study. Experimental results show that the completeness of the reconstructed roof planes is 87.5%. Compared with existing data-driven approaches, the proposed approach is global: roof faces and edges, as well as their topology, are determined at one time via minimization of an energy function. Moreover, the approach is robust to the partial absence of roof planes and tends to reconstruct roof models with visibility-consistent surfaces.
FUNDAMENTAL ASPECTS OF EPISODIC ACCRETION CHEMISTRY EXPLORED WITH SINGLE-POINT MODELS
International Nuclear Information System (INIS)
Visser, Ruud; Bergin, Edwin A.
2012-01-01
We explore a set of single-point chemical models to study the fundamental chemical aspects of episodic accretion in low-mass embedded protostars. Our goal is twofold: (1) to understand how the repeated heating and cooling of the envelope affects the abundances of CO and related species; and (2) to identify chemical tracers that can be used as a novel probe of the timescales and other physical aspects of episodic accretion. We develop a set of single-point models that serve as a general prescription for how the chemical composition of a protostellar envelope is altered by episodic accretion. The main effect of each accretion burst is to drive CO ice off the grains in part of the envelope. The duration of the subsequent quiescent stage (before the next burst hits) is similar to or shorter than the freeze-out timescale of CO, allowing the chemical effects of a burst to linger long after the burst has ended. We predict that the resulting excess of gas-phase CO can be observed with single-dish or interferometer facilities as evidence of an accretion burst in the past 10^3-10^4 yr.
3D Modeling of Building Indoor Spaces and Closed Doors from Imagery and Point Clouds
Directory of Open Access Journals (Sweden)
Lucía Díaz-Vilariño
2015-02-01
Full Text Available 3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoor models from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common, and their detection can be very useful for understanding the environment structure, performing efficient navigation, or planning appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and examines door candidate detection in depth. The presented approach is tested on real data sets, showing its potential with a high door detection rate and its applicability to robust and efficient envelope reconstruction.
Near-real-time regional troposphere models for the GNSS precise point positioning technique
International Nuclear Information System (INIS)
Hadas, T; Kaplon, J; Bosy, J; Sierny, J; Wilgan, K
2013-01-01
The GNSS precise point positioning (PPP) technique requires the application of high-quality products (orbits and clocks), since their errors directly affect the quality of positioning. For real-time purposes it is possible to utilize ultra-rapid precise orbits and clocks, which are disseminated through the Internet. In order to eliminate as many unknown parameters as possible, one may introduce external information on zenith troposphere delay (ZTD). It is desirable that the a priori model is accurate and reliable, especially for real-time applications. One of the open problems in GNSS positioning is troposphere delay modelling on the basis of ground meteorological observations. The Institute of Geodesy and Geoinformatics of Wroclaw University of Environmental and Life Sciences (IGG WUELS) has developed two independent regional troposphere models for the territory of Poland. The first is estimated in a near-real-time regime using GNSS data from the Polish ground-based augmentation system ASG-EUPOS, established by the Polish Head Office of Geodesy and Cartography (GUGiK) in 2008. The second is based on meteorological parameters (temperature, pressure and humidity) gathered from various meteorological networks operating over the area of Poland and surrounding countries. This paper describes the methodology of calculating and verifying both models. It also presents the results of applying various ZTD models to kinematic PPP in post-processing mode using the Bernese GPS Software. The positioning results were used to assess the quality of the developed models under changing weather conditions. Finally, the impact of model application on the precision, accuracy and convergence time of simulated real-time PPP is discussed. (paper)
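A priori troposphere models typically begin from the zenith hydrostatic delay, for which the standard Saastamoinen formula from surface pressure is widely used (the regional models described above are considerably more elaborate; this is only the textbook building block):

```python
import numpy as np

def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
    """Zenith hydrostatic delay in metres from surface pressure (hPa),
    latitude (rad) and height (m), via the standard Saastamoinen model."""
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * np.cos(2.0 * lat_rad) - 0.00000028 * height_m
    )

zhd = saastamoinen_zhd(1013.25, np.radians(51.1), 150.0)  # mid-latitude site
# near sea level the hydrostatic delay is roughly 2.3 m
```

The much smaller and more variable wet delay is what the GNSS-estimated and meteorology-driven regional models in the paper aim to capture beyond this hydrostatic baseline.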
PREOPERATIVE ENDOSCOPIC MARKING OF UNPALPABLE COLONIC TUMORS
Directory of Open Access Journals (Sweden)
A. L. Goncharov
2013-01-01
Full Text Available The identification of small colon lesions is one of the major problems in laparoscopic colonic resection. Research objective: to develop a technique for visualizing small colon tumors by preoperative endoscopic marking. Materials and methods. One day prior to the operation, after bowel preparation, the patient undergoes colonoscopy. At the planned point near the tumor, on the antimesenteric edge, a submucosal infiltration of marking solution (Micky Sharpz blue tattoo pigment, UK) is performed; the injected volume is 1–3 ml. In just 5 months of using the technique, preoperative marking was performed in 14 patients with small (1–3 cm) malignant tumors of the left colon. Results. The tattoo mark was well visualized during the operation in 13 of 14 patients. No complications were recorded in any case. The mean operation time with preoperative marking was 108 min, significantly less than the mean operation time with intra-operative colonoscopy of 155 min (p < 0.001). Conclusions. The first experience with preoperative endoscopic marking of non-palpable small colon tumors is encouraging. The technique was not accompanied by complications and significantly reduced operation time while simplifying the conditions under which the operation is performed.
Privatize Candu?
International Nuclear Information System (INIS)
Kelly, Thomas.
1981-01-01
A report sponsored by a group of nuclear suppliers and the Royal Bank suggested that the Candu reactor system would sell better if it were owned by a private company. Licensing of a Candu reactor in the U.S.A. was also suggested. The author of this article agrees with these points, but disagrees with the suggestion that safeguards should be relaxed. He suggests that contracts should stipulate that instrumentation be supplied as much as possible from Canadian sources.
An Improved Statistical Point-source Foreground Model for the Epoch of Reionization
Energy Technology Data Exchange (ETDEWEB)
Murray, S. G.; Trott, C. M.; Jordan, C. H. [ARC Centre of Excellence for All-sky Astrophysics (CAASTRO) (Australia)
2017-08-10
We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power spectrum and underestimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
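The first extension above, source counts following an arbitrarily broken power law, reduces in practice to integrating a piecewise power-law differential count dN/dS. A small sketch under assumed (hypothetical) segment boundaries, normalizations and slopes, not the paper's fitted values:

```python
import math

def integrate_counts(segments):
    """Expected number of sources for a broken power-law differential count,
    dN/dS = k * S**(-gamma) on each flux segment (s_lo, s_hi, k, gamma)."""
    total = 0.0
    for s_lo, s_hi, k, gamma in segments:
        if abs(gamma - 1.0) < 1e-12:
            total += k * math.log(s_hi / s_lo)  # logarithmic special case
        else:
            total += k * (s_hi**(1.0 - gamma) - s_lo**(1.0 - gamma)) / (1.0 - gamma)
    return total

def matched_norm(k1, s_break, gamma1, gamma2):
    """Normalization of the second segment that keeps dN/dS continuous
    across the break at s_break."""
    return k1 * s_break**(gamma2 - gamma1)

# Two-segment example: faint-end slope 1.6, bright-end slope 2.5, break at 1 Jy.
k2 = matched_norm(1000.0, 1.0, 1.6, 2.5)
n_total = integrate_counts([(1e-3, 1.0, 1000.0, 1.6), (1.0, 100.0, k2, 2.5)])
```

The same piecewise integrals appear, weighted by the clustering power spectrum, inside the modified covariance expressions.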
APPROACH TO SYNTHESIS OF PASSIVE INFRARED DETECTORS BASED ON QUASI-POINT MODEL OF QUALIFIED INTRUDER
Directory of Open Access Journals (Sweden)
I. V. Bilizhenko
2017-01-01
Full Text Available Subject of Research. The paper deals with the synthesis of passive infrared (PIR) detectors with enhanced capability to detect a qualified intruder who uses different types of detection countermeasures: the choice of a specific movement direction and disguise in the infrared band. Methods. We propose an approach based on a quasi-point model of the qualified intruder. It includes: separation of the model's priority parameters, formation of partial detection patterns adapted to those parameters, and multi-channel signal processing. Main Results. A quasi-point model of the qualified intruder consisting of different fragments was suggested. Power density difference was used for model parameter estimation. Criteria were formulated for the choice of detection pattern parameters on the basis of the model parameters. A pyroelectric sensor with nine sensitive elements was applied to increase the information content of the signal. Multi-channel processing with multiple partial detection patterns was proposed, optimized for detection of the intruder's specific movement direction. Practical Relevance. The developed functional device diagram can be realized in both hardware and software, and is applicable as one of the detection channels for dual-technology passive infrared and microwave detectors.
Insights into mortality patterns and causes of death through a process point of view model.
Anderson, James J; Li, Ting; Sharrow, David J
2017-02-01
Process point of view (POV) models of mortality, such as the Strehler-Mildvan and stochastic vitality models, represent death in terms of the loss of survival capacity through challenges and dissipation. Drawing on hallmarks of aging, we link these concepts to candidate biological mechanisms through a framework that defines death as challenges to vitality, where distal factors define the age-evolution of vitality and proximal factors define the probability distribution of challenges. To illustrate the process POV, we hypothesize that the immune system is a mortality nexus, characterized by two vitality streams: increasing vitality representing immune system development and immunosenescence representing vitality dissipation. Proximal challenges define three mortality partitions: juvenile and adult extrinsic mortalities and intrinsic adult mortality. Model parameters, generated from Swedish mortality data (1751-2010), exhibit biologically meaningful correspondences to economic, health and cause-of-death patterns. The model characterizes the twentieth-century epidemiological transition mainly as a reduction in extrinsic mortality resulting from a shift from high-magnitude disease challenges on individuals at all vitality levels to low-magnitude stress challenges on low-vitality individuals. Of secondary importance, intrinsic mortality was described by a gradual reduction in the rate of loss of vitality, presumably resulting from a reduction in the rate of immunosenescence. Extensions and limitations of a distal/proximal framework for characterizing more explicit causes of death, e.g. the young-adult mortality hump or cancer in old age, are discussed.
Process-based coastal erosion modeling for Drew Point (North Slope, Alaska)
Ravens, Thomas M.; Jones, Benjamin M.; Zhang, Jinlin; Arp, Christopher D.; Schmutz, Joel A.
2012-01-01
A predictive coastal erosion/shoreline change model has been developed for a small coastal segment near Drew Point, Beaufort Sea, Alaska. This coastal setting has experienced a dramatic increase in erosion since the early 2000s. The bluffs at this site are 3-4 m tall and consist of ice-wedge-bounded blocks of fine-grained sediments cemented by ice-rich permafrost and capped with a thin organic layer. The bluffs are typically fronted by a narrow (∼5 m wide) beach or none at all. During a storm surge, the sea contacts the base of the bluff and a niche is formed through thermal and mechanical erosion. The niche grows both vertically and laterally and eventually undermines the bluff, leading to block failure or collapse. The fallen block is then eroded both thermally and mechanically by waves and currents, which must occur before a new niche-forming episode may begin. The erosion model explicitly accounts for and integrates a number of these processes, including: (1) storm surge generation resulting from wind and atmospheric forcing, (2) erosional niche growth resulting from wave-induced turbulent heat transfer and sediment transport (using the Kobayashi niche erosion model), and (3) thermal and mechanical erosion of the fallen block. The model was calibrated with historic shoreline change data for one time period (1979-2002) and validated with a later time period (2002-2007).
Directory of Open Access Journals (Sweden)
Misganaw Abebe
2017-11-01
Full Text Available Springback in multi-point dieless forming (MDF) is a common problem because of the small deformation and the blank-holder-free boundary condition. Numerical simulations are widely used in sheet metal forming to predict springback, but finding optimal process parameter values with such tools is computationally costly. This study proposes a radial basis function (RBF) model to replace the numerical simulation model, using statistical analyses based on a design of experiments (DOE). Punch holding time, blank thickness, and curvature radius are chosen as the process parameters governing springback. The Latin hypercube DOE method facilitates statistical analysis and the extraction of a prediction model over the experimental process-parameter domain. A finite element (FE) simulation model built in the ABAQUS commercial software generates the springback responses of the training and testing samples. A genetic algorithm is applied to the developed RBF prediction model to find the optimal values for reducing and compensating the induced springback at different blank thicknesses. Finally, the RBF result is verified against the FE simulation of the optimal process parameters; both results show that the springback is almost negligible relative to the target shape.
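The surrogate-modelling step described above can be illustrated with a plain-Python Gaussian RBF interpolator fitted to a handful of (process parameter, springback) samples. The sample values below are hypothetical; a real study would fit the FE responses, typically with a library such as scipy:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, eps=1.0):
    """Weights of a Gaussian RBF surrogate interpolating the samples (xs, ys)."""
    A = [[math.exp(-(eps * (xi - xj)) ** 2) for xj in xs] for xi in xs]
    return solve(A, ys)

def rbf_eval(x, xs, w, eps=1.0):
    """Evaluate the fitted surrogate at x."""
    return sum(wi * math.exp(-(eps * (x - xi)) ** 2) for wi, xi in zip(w, xs))

xs = [0.0, 1.0, 2.0, 3.0]  # normalized punch holding time (hypothetical samples)
ys = [0.0, 1.0, 4.0, 9.0]  # corresponding springback responses (hypothetical)
w = rbf_fit(xs, ys)
```

A genetic algorithm (or any optimizer) then minimizes `rbf_eval` over the parameter domain instead of re-running the FE model for every candidate.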
A new stochastic model considering satellite clock interpolation errors in precise point positioning
Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong
2018-03-01
Precise clock products are typically interpolated to the sampling interval of the observational data when they are used in precise point positioning. However, due to white noise in atomic clocks, a residual noise component will inevitably remain in the observations when clock errors are interpolated, and this noise will degrade the positioning results. In this paper, based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of the atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was constructed that considers the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 IGS stations worldwide showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were shortened by 4.8% and 4.0%, respectively, when the IGR and IGS 300-s-interval clock products were used, and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively reduced.
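The idea of down-weighting observations taken between clock-product epochs can be sketched as follows. The Brownian-bridge form of the interpolation-error variance and the coefficient `q_wn` are illustrative assumptions for white frequency noise, not the paper's fitted model:

```python
def interp_clock(t, t0, t1, c0, c1):
    """Linear interpolation of a satellite clock offset between product epochs."""
    a = (t - t0) / (t1 - t0)
    return (1.0 - a) * c0 + a * c1

def obs_variance(t, t0, t1, sigma_obs, q_wn):
    """Observation variance inflated by the clock-interpolation error.

    For white frequency noise the interpolation error behaves like a
    Brownian bridge: zero at the product epochs, maximal mid-interval.
    """
    a = (t - t0) / (t1 - t0)
    sigma_interp2 = q_wn * (t1 - t0) * a * (1.0 - a)
    return sigma_obs**2 + sigma_interp2
```

In the filter, the observation weight is simply the reciprocal of this inflated variance, so mid-interval epochs contribute less than epochs coinciding with the clock product samples.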
International Nuclear Information System (INIS)
Burnett, D.J.
1992-01-01
The SLAR (Spacer Location and Repositioning) program has developed the technology and tooling necessary to locate and reposition the fuel channel spacers that separate the pressure tube from the calandria tube in a CANDU reactor. The in-channel SLAR tool contains all the inspection probes and is capable of moving spacers under remote control. The SLAR inspection computer system translates all eddy current and ultrasonic signals from the in-channel tool into various graphic displays. The in-channel SLAR tool can be delivered and manipulated in a fuel channel by either a SLAR delivery machine or a SLARette delivery machine. The SLAR delivery machine consists of a modified fuelling machine and is capable of operating under totally remote control in automatic or semi-automatic mode. The SLARette delivery machine is a smaller, less automated version, designed to be quickly installed, operated, and removed from a limited number of fuel channels during regular annual maintenance outages. This paper describes the design and operation of the SLARette Mark 2 system. 5 figs
Non-parametric Bayesian inference for inhomogeneous Markov point processes
DEFF Research Database (Denmark)
Berthelsen, Kasper Klitgaard; Møller, Jesper; Johansen, Per Michael
is a shot noise process, and the interaction function for a pair of points depends only on the distance between the two points and is a piecewise linear function modelled by a marked Poisson process. Simulation of the resulting posterior using a Metropolis-Hastings algorithm in the "conventional" way...
A Novel Marking Reader for Progressive Addition Lenses Based on Gabor Holography.
Perucho, Beatriz; Picazo-Bueno, José Angel; Micó, Vicente
2016-05-01
Progressive addition lenses (PALs) are marked with permanent engraved marks (PEMs) at standardized locations. Permanent engraved marks are very useful throughout the manufacturing and mounting processes, act as locator marks for re-inking the removable marks, and contain useful information about the PAL. However, PEMs are often faint and weak, obscured by scratches, partially occluded, and difficult to recognize on tinted lenses or lenses with antireflection or scratch-resistant coatings. The aim of this article is to present a new generation of portable marking reader based on an extremely simplified concept for visualization and identification of PEMs in PALs. Permanent engraved marks on different PALs are visualized using classical Gabor holography as the underlying principle. Gabor holography allows phase sample visualization with adjustable magnification and can be implemented in either classical or digital versions. Here, visual Gabor holography is used to provide a magnified defocused image of the PEMs on a translucent visualization screen where the PEM is clearly identified. Different types of PALs (conventional, personalized, old and scratched, sunglasses, etc.) have been tested to visualize PEMs with the proposed marking reader. The PEMs are visible in every case, and a variable magnification factor can be achieved simply by moving the PAL up and down in the instrument. In addition, a second illumination wavelength is also tested, showing the applicability of this novel marking reader for different illuminations. A new concept of marking reader ophthalmic instrument has been presented and validated in the laboratory. The configuration involves only a commercial-grade laser diode and a visualization screen for PEM identification. The instrument is portable, economical, and easy to use, and it can be used for identifying a patient's current PAL model, for re-marking removable PALs, or for finding test points regardless of the age of the PAL, its scratches, tints, or coatings.
Witnesses to the truth: Mark's point of view
African Journals Online (AJOL)
2016-08-12
A global reference model of Curie-point depths based on EMAG2
Li, Chun-Feng; Lu, Yu; Wang, Jian
2017-03-01
In this paper, we use a robust inversion algorithm, which we have tested in many regional studies, to obtain the first global model of Curie-point depth (GCDM) from magnetic anomaly inversion based on fractal magnetization. Statistically, the oceanic Curie depth mean is smaller than the continental one, but continental Curie depths are almost bimodal, showing shallow Curie points in some old cratons. Oceanic Curie depths show modifications by hydrothermal circulation in young oceanic lithosphere and thermal perturbations in old oceanic lithosphere. Oceanic Curie depths also show strong dependence on the spreading rate along active spreading centers. Curie depths and heat flow are correlated, following optimal theoretical curves with average thermal conductivities K ≈ 2.0 W/(m·°C) for the ocean and K ≈ 2.5 W/(m·°C) for the continent. The calculated heat flow from Curie depths and large-interval gridding of measured heat flow both indicate that the global heat-flow average is about 70.0 mW/m2, leading to a global heat loss ranging from ~34.6 to 36.6 TW.
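The Curie-depth/heat-flow link quoted above follows from steady-state one-dimensional Fourier conduction, q = K (T_Curie − T_surface) / z_b. A minimal sketch, ignoring crustal heat production as a simplification:

```python
def heat_flow_mw_m2(curie_depth_km, k_w_mc=2.0, t_curie_c=580.0, t_surface_c=0.0):
    """Surface heat flow (mW/m2) implied by a Curie-point depth, assuming
    steady-state 1-D conduction with no internal heat production.
    580 C is the Curie temperature of magnetite."""
    grad = (t_curie_c - t_surface_c) / (curie_depth_km * 1000.0)  # K/m
    return k_w_mc * grad * 1000.0  # W/m2 -> mW/m2
```

With K = 2.0 W/(m·°C), a Curie depth near 16.6 km reproduces the ~70 mW/m2 global average quoted in the abstract; shallower Curie points imply higher heat flow.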
Double point source W-phase inversion: Real-time implementation and automated model selection
Nealy, Jennifer; Hayes, Gavin
2015-01-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever-evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion, with centroid locations fixed at the single source solution location, can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed, with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model, and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
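The single-versus-double source decision can be sketched with a least-squares form of the AIC; the parameter counts below are illustrative placeholders, not the operational values:

```python
import math

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit with n data
    points, k free parameters and residual sum of squares rss."""
    return n * math.log(rss / n) + 2.0 * k

def select_model(rss_single, rss_double, n, k_single=6, k_double=12):
    """Prefer the double point source only when the misfit reduction
    outweighs the penalty for the extra parameters."""
    if aic(rss_double, n, k_double) < aic(rss_single, n, k_single):
        return "double"
    return "single"
```

A marginal misfit improvement thus keeps the single-source solution, while a substantial one (as for a genuinely complex rupture) selects the double-source model.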
Glue detection based on teaching points constraint and tracking model of pixel convolution
Geng, Lei; Ma, Xiao; Xiao, Zhitao; Wang, Wen
2018-01-01
On-line glue detection based on machine vision is significant for rust protection and strengthening in car production. Shadow stripes caused by reflected light and the unevenness of the inside front cover of the car reduce the accuracy of glue detection. In this paper, we propose an effective algorithm to distinguish the edges of the glue from the shadow stripes. Teaching points are utilized to calculate the slope between each two adjacent points. Then a tracking model based on pixel convolution along the motion direction is designed to segment several local rectangular regions using a distance, which is the height of the rectangular region. Pixel convolution along the motion direction is proposed to extract the edges of the glue in each local rectangular region. A dataset with different illuminations and stripe shapes of varying complexity, comprising 500,000 images captured by the camera of the glue gun machine, is used to evaluate the proposed method. Experimental results demonstrate that the proposed method can detect the edges of the glue accurately. The shadow stripes are distinguished and removed effectively. Our method achieves 99.9% accuracy on the image dataset.
Fast Outage Probability Simulation for FSO Links with a Generalized Pointing Error Model
Ben Issaid, Chaouki
2017-02-07
Over the past few years, free-space optical (FSO) communication has gained significant attention. In fact, FSO can provide cost-effective and unlicensed links, with high-bandwidth capacity and low error rate, making it an exciting alternative to traditional wireless radio-frequency communication systems. However, the system performance is affected not only by the presence of atmospheric turbulence, which occurs due to random fluctuations in the air refractive index, but also by the existence of pointing errors. Metrics such as the outage probability, which quantifies the probability that the instantaneous signal-to-noise ratio is smaller than a given threshold, can be used to analyze the performance of this system. In this work, we consider weak and strong turbulence regimes, and we study the outage probability of an FSO communication system under a generalized pointing error model with both a nonzero boresight component and different horizontal and vertical jitter effects. More specifically, we use an importance sampling approach based on the exponential twisting technique to offer fast and accurate results.
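For a toy outage problem, P(X < threshold) with X standard normal, exponential twisting amounts to shifting the sampling mean to the threshold and reweighting each sample by the likelihood ratio. A sketch of that idea only; the paper's actual channel model, with turbulence and generalized pointing errors, is far richer:

```python
import math
import random

def outage_prob_is(threshold, n_samples=200_000, seed=1):
    """Importance-sampling estimate of P(X < threshold), X ~ N(0, 1).

    Exponentially twisting a Gaussian shifts its mean; sampling from
    N(threshold, 1) makes the rare outage event typical, and the factor
    exp(-a*x + a^2/2) is the likelihood ratio f(x)/g(x) for shift a."""
    rng = random.Random(seed)
    a = threshold
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(a, 1.0)
        if x < threshold:
            total += math.exp(-a * x + 0.5 * a * a)
    return total / n_samples
```

Naive Monte Carlo with the same sample budget would see only a handful of outage events at a deep threshold, giving a far noisier estimate; the twisted estimator keeps the relative error small.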
Reliable four-point flexion test and model for die-to-wafer direct bonding
Energy Technology Data Exchange (ETDEWEB)
Tabata, T., E-mail: toshiyuki.tabata@cea.fr; Sanchez, L.; Fournel, F.; Moriceau, H. [Univ. Grenoble Alpes, F-38000 Grenoble, France and CEA, LETI, MINATEC Campus, F-38054 Grenoble (France)
2015-07-07
For many years, wafer-to-wafer (W2W) direct bonding has been extensively developed, particularly in terms of bonding energy measurement and bonding mechanism comprehension. Nowadays, die-to-wafer (D2W) direct bonding has gained significant attention, for instance in photonics and micro-electro-mechanics, which presupposes controlled and reliable fabrication processes. Whatever the bonded materials may be, it is not obvious whether bonded D2W structures have the same bonding strength as bonded W2W ones, because of possible edge effects of the dies. For that reason, there has been a strong need for a bonding energy measurement technique suitable for D2W structures. In this paper, both D2W- and W2W-type standard SiO{sub 2}-to-SiO{sub 2} direct bonding samples are fabricated from the same full-wafer bonding. Modifications of the four-point flexion test (4PT) technique and its application to measuring D2W direct bonding energies are reported. A comparison between the modified 4PT and the double-cantilever beam techniques is drawn, also considering possible impacts of the measurement conditions, such as water stress corrosion at the debonding interface and friction error at the loading contact points. Finally, the reliability of the modified technique and of a new model established for measuring D2W direct bonding energies is demonstrated.
Establishing the long-term fuel management scheme using point reactivity model
International Nuclear Information System (INIS)
Park, Yong-Soo; Kim, Jae-Hak; Lee, Young-Ouk; Song, Jae-Woong; Zee, Sung-Kyun
1994-01-01
A new approach to establishing a long-term fuel management scheme is presented in this paper. The point reactivity model is used to predict the core-average reactivity. Batchwise power fractions are calculated through a two-dimensional nodal power algorithm based on the modified one-group diffusion equation and the number of fuel assemblies on the core periphery. An empirical formula is suggested for estimating the radial leakage reactivity, reflecting accumulated core design experience. This approach predicts the cycle lengths and the discharge burnups of individual fuel batches up to an equilibrium core when proper input data such as batch enrichment, batch size, type and content of burnable poison, and reloading strategy are given. Eight benchmark calculations demonstrate that the new approach is reasonably accurate and highly efficient for scoping calculations when compared with design code predictions. (author)
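The point-reactivity picture above is closely related to the classic linear reactivity model of fuel management: if batch reactivity falls linearly to zero at the single-batch burnup B1, an n-batch loading reaches a cycle burnup of 2·B1/(n+1). A sketch of that scoping estimate (a textbook simplification, not the paper's nodal algorithm):

```python
def cycle_and_discharge_burnup(b1, n_batches):
    """Linear-reactivity scoping estimate: rho(B) = rho0 * (1 - B/B1).

    With n equal batches sharing power, end-of-cycle occurs when the
    batch-averaged reactivity reaches zero, giving cycle burnup
    Bc = 2*B1/(n+1) and discharge burnup Bd = n*Bc."""
    bc = 2.0 * b1 / (n_batches + 1)
    return bc, n_batches * bc
```

For example, a fuel type with B1 = 30 GWd/t loaded in three batches gives a 15 GWd/t cycle and a 45 GWd/t discharge burnup, illustrating why multi-batch schemes raise fuel utilization.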
Signals for the QCD phase transition and critical point in a Langevin dynamical model
International Nuclear Information System (INIS)
Herold, Christoph; Bleicher, Marcus; Yan, Yu-Peng
2013-01-01
The search for the critical point is one of the central issues to be investigated in the upcoming FAIR project. For a profound theoretical understanding of the expected signals we go beyond thermodynamic studies and present a fully dynamical model for the chiral and deconfinement phase transition in heavy-ion collisions. The corresponding order parameters are propagated by Langevin equations of motion on a thermal background provided by a fluid-dynamically expanding plasma of quarks. We are thereby able to describe nonequilibrium effects occurring during the rapid expansion of a hot fireball. For an evolution through the phase transition, the formation of a supercooled phase and its subsequent decay crucially influence the trajectories in the phase diagram and lead to a significant reheating of the quark medium at the highest baryon densities. Furthermore, we find inhomogeneous structures with high-density domains along the first-order transition line within single events.
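The Langevin propagation of an order parameter can be illustrated with an Euler-Maruyama step in a toy double-well potential standing in for the chiral effective potential; the potential, friction and temperature values here are illustrative, not the model's:

```python
import math
import random

def evolve_order_parameter(sigma0, temp, gamma=1.0, dt=1e-3, steps=5000, seed=2):
    """Euler-Maruyama integration of a Langevin equation for a scalar
    order parameter s in a double-well potential V(s) = (s^2 - 1)^2 / 4.

    The noise amplitude sqrt(2*gamma*T*dt) enforces the
    fluctuation-dissipation relation for this toy model."""
    rng = random.Random(seed)
    s = sigma0
    noise_amp = math.sqrt(2.0 * gamma * temp * dt)
    for _ in range(steps):
        dV = s * (s * s - 1.0)  # dV/ds
        s += -gamma * dV * dt + noise_amp * rng.gauss(0.0, 1.0)
    return s
```

At low temperature the field relaxes into one of the wells and fluctuates around it; a supercooled trajectory corresponds to the field lingering near a metastable minimum before decaying.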
Roldán, J. B.; Miranda, E.; González-Cordero, G.; García-Fernández, P.; Romero-Zaliz, R.; González-Rodelas, P.; Aguilera, A. M.; González, M. B.; Jiménez-Molinos, F.
2018-01-01
A multivariate analysis of the parameters that characterize the reset process in Resistive Random Access Memory (RRAM) has been performed. The different correlations obtained can help to shed light on the current components that contribute in the Low Resistance State (LRS) of the technology considered. In addition, a screening method for the Quantum Point Contact (QPC) current component is presented. For this purpose, the second derivative of the current has been obtained using a novel numerical method which allows determining the QPC model parameters. Once the procedure is completed, a whole Resistive Switching (RS) series of thousands of curves is studied by means of a genetic algorithm. The extracted QPC parameter distributions are characterized in depth to get information about the filamentary pathways associated with LRS in the low voltage conduction regime.
Logarithmic two-point correlation functions from a z=2 Lifshitz model
International Nuclear Information System (INIS)
Zingg, T.
2014-01-01
The Einstein-Proca action is known to have asymptotically locally Lifshitz spacetimes as classical solutions. For dynamical exponent z=2, two-point correlation functions for fluctuations around such a geometry are derived analytically. It is found that the retarded correlators are stable in the sense that all quasinormal modes are situated in the lower half-plane of complex frequencies. Correlators in the longitudinal channel exhibit features that are reminiscent of a structure usually obtained in field theories that are logarithmic, i.e., that contain an indecomposable but non-diagonalizable highest-weight representation. This provides further evidence for conjecturing the model at hand as a candidate for a gravity dual of a logarithmic field theory with anisotropic scaling symmetry.
Ab-initio modelling of thermodynamics and kinetics of point defects in indium oxide
International Nuclear Information System (INIS)
Agoston, Peter; Klein, Andreas; Albe, Karsten; Erhart, Paul
2008-01-01
The electrical and optical properties of indium oxide films vary strongly with the processing parameters. In particular, the oxygen partial pressure and temperature determine properties like electrical conductivity, composition and transparency. Since this material owes its remarkable properties, like the intrinsic n-type conductivity, to its defect chemistry, it is important to understand both the equilibrium defect thermodynamics and the kinetics of the intrinsic point defects. In this contribution we present a defect model based on DFT total energy calculations using the GGA+U method. Further, the nudged elastic band method is employed in order to obtain a set of migration barriers for each defect species. Due to the complicated crystal structure of indium oxide, a kinetic Monte Carlo algorithm was implemented, which allows diffusion coefficients to be determined. The bulk tracer diffusion constant is predicted as a function of oxygen partial pressure, Fermi level and temperature for the pure material.
Point Defects in 3D and 1D Nanomaterials: The Model Case of Titanium Dioxide
International Nuclear Information System (INIS)
Knauth, Philippe
2010-01-01
Titanium dioxide is one of the most important oxides for applications in energy and environment, such as solar cells, photocatalysis, and lithium-ion batteries. In recent years, new forms of titanium dioxide with unusual structure and/or morphology have been developed, including nanocrystals, nanotubes and nanowires. We have studied in detail the point defect chemistry in nanocrystalline TiO2 powders and ceramics. There can be a change from predominant Frenkel to Schottky disorder, depending on the experimental conditions, e.g. temperature and oxygen partial pressure. We have also studied the local environment of various dopants with similar ion radius but different ion charge (Zn2+, Y3+, Sn4+, Zr4+, Nb5+) in TiO2 nanopowders and nanoceramics by Extended X-ray Absorption Fine Structure (EXAFS) spectroscopy. Interfacial segregation of acceptors was demonstrated, but donors and isovalent ions do not segregate. An electrostatic 'space charge' segregation model is applied, which explains the observed phenomena well.
Effects of positive potential in the catastrophe theory study of the point model for bumpy tori
Energy Technology Data Exchange (ETDEWEB)
Punjabi, A; Vahala, G [College of William and Mary, Williamsburg, VA (USA). Dept. of Physics
1985-02-01
With positive ambipolar potential, ion non-resonant neoclassical transport leads to increased particle confinement times. In certain regimes of filling pressure, microwave powers (ECRH and ICRH) and positive potential, new folds can now emerge from previously degenerate equilibrium surfaces, allowing for distinct C, T, and M modes of operation. A comparison of the equilibrium fold structure is also made between (i) equal particle and energy confinement times, and (ii) particle confinement times enhanced over the energy confinement time. The nonlinear time evolution of these point model equations is considered and confirms the delay-convention occurrences at the fold edges. It is clearly seen that the time-asymptotic equilibrium state is very sensitive not only to the values of the control parameters (neutral density, ambipolar electrostatic potential, electron and ion cyclotron power densities) but also to the initial conditions on the plasma density and the electron and ion temperatures.
Mass effects in three-point chronological current correlators in n-dimensional multifermion models
International Nuclear Information System (INIS)
Kucheryavyj, V.I.
1991-01-01
Three types of quantities associated with three-point chronological fermion-current correlators having arbitrary Lorentz and internal structure are calculated in n-dimensional multifermion models with different masses. The analysis of vector and axial-vector Ward identities for regular (finite) and dimensionally regularized values of these quantities is carried out. Quantum corrections to the canonical Ward identities are obtained. These corrections are in general homogeneous functions of zeroth order in the masses, and under certain definite conditions they reduce to the known axial-vector anomalies. The structure and properties of quantum corrections to the AVV and AAA correlators in four-dimensional space-time are investigated in detail.
On the realism of the re-engineered simple point charge water model
International Nuclear Information System (INIS)
Chialvo, A.A.
1996-01-01
The realism of the recently proposed high-temperature reparameterization of the simple point charge (SPC) water model [C. D. Berweger, W. F. van Gunsteren, and F. Mueller-Plathe, Chem. Phys. Lett. 232, 429 (1995)] is tested by comparing the simulated microstructure and dielectric properties to the available experimental data. The test indicates that the new parameterization fails dramatically to describe the microstructural and dielectric properties of water at high temperature; it predicts rather strong short-range site–site pair correlations, even stronger than those for water at ambient conditions, and a threefold smaller dielectric constant. Moreover, the resulting microstructure suggests that the high-temperature force-field parameters would predict a twofold higher critical density. The failure of the high-temperature parameterization is analyzed and some suggestions on alternative choices of the target properties for the weak-coupling scheme are discussed. Copyright 1996 American Institute of Physics.
An Extension of the Miller Equilibrium Model into the X-Point Region
Hill, M. D.; King, R. W.; Stacey, W. M.
2017-10-01
The Miller equilibrium model has been extended to better model the flux surfaces in the outer region of the plasma and scrape-off layer, including the poloidally non-uniform flux surface expansion that occurs in the X-point region(s) of diverted tokamaks. Equations for elongation and triangularity are modified to include a poloidally varying component and grad-r, which is used in the calculation of the poloidal magnetic field, is rederived. Initial results suggest that strong quantitative agreement with experimental flux surface reconstructions and strong qualitative agreement with poloidal magnetic fields can be obtained using this model. Applications are discussed. A major new application is the automatic generation of the computation mesh in the plasma edge, scrape-off layer, plenum and divertor regions for use in the GTNEUT neutral particle transport code, enabling this powerful analysis code to be routinely run in experimental analyses. Work supported by US DOE under DE-FC02-04ER54698.
Neural Modeling of Fuzzy Controllers for Maximum Power Point Tracking in Photovoltaic Energy Systems
Lopez-Guede, Jose Manuel; Ramos-Hernanz, Josean; Altın, Necmi; Ozdemir, Saban; Kurt, Erol; Azkune, Gorka
2018-06-01
One field in which electronic materials have an important role is energy generation, especially within the scope of photovoltaic energy. This paper deals with one of the most relevant enabling technologies within that scope, i.e., the algorithms for maximum power point tracking implemented in direct current to direct current converters, and their modeling through artificial neural networks (ANNs). More specifically, as a proof of concept, we have addressed the problem of modeling a fuzzy logic controller that has shown its performance in previous works, and more specifically the dimensionless duty cycle signal that controls a quadratic boost converter. We achieved a very accurate model since the obtained mean squared error is 3.47 × 10⁻⁶, the maximum error is 16.32 × 10⁻³ and the regression coefficient R is 0.99992, all for the test dataset. This neural implementation has obvious advantages such as a higher fault tolerance and a simpler implementation, dispensing with all the complex elements needed to run a fuzzy controller (fuzzifier, defuzzifier, inference engine and knowledge base) because, ultimately, ANNs are sums and products.
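The closing remark that "ANNs are sums and products" can be made concrete with a minimal sketch of such a network's forward pass, mapping a PV operating point (voltage, current) to a duty cycle. All weights and layer sizes below are invented for illustration; the paper's actual trained network and inputs are not reproduced here.

```python
import numpy as np

def mlp_duty_cycle(v_pv, i_pv, W1, b1, W2, b2):
    """Forward pass of a small MLP mapping PV voltage/current to a duty cycle.

    The network really is just sums and products, plus nonlinearities: a tanh
    hidden layer and a sigmoid that squashes the output into (0, 1), the valid
    duty-cycle range for a DC-DC converter.
    """
    x = np.array([v_pv, i_pv])
    h = np.tanh(W1 @ x + b1)          # hidden layer: weighted sums, then tanh
    z = W2 @ h + b2                   # output neuron (scalar)
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid -> duty cycle in (0, 1)

# Illustrative (untrained) weights: 2 inputs -> 3 hidden units -> 1 output.
W1 = np.array([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.2]])
b1 = np.zeros(3)
W2 = np.array([0.8, -0.5, 0.3])
b2 = 0.0

d = mlp_duty_cycle(18.0, 2.5, W1, b1, W2, b2)
```

In a real MPPT deployment the weights would come from fitting the fuzzy controller's input/output pairs, after which this handful of array operations replaces the fuzzifier, inference engine, knowledge base and defuzzifier.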
Directory of Open Access Journals (Sweden)
Yin Yanshu
2017-12-01
Full Text Available In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.
Economic-environmental modeling of point source pollution in Jefferson County, Alabama, USA.
Kebede, Ellene; Schreiner, Dean F; Huluka, Gobena
2002-05-01
This paper uses an integrated economic-environmental model to assess the point source pollution from major industries in Jefferson County, Northern Alabama. Industrial expansion generates employment, income, and tax revenue for the public sector; however, it is also often associated with the discharge of chemical pollutants. Jefferson County is one of the largest industrial counties in Alabama and experienced smog warnings and elevated ambient ozone concentrations during 1996-1999. Past studies of chemical discharge from industries have used models to assess the pollution impact of individual plants. This study, however, uses an extended Input-Output (I-O) economic model with pollution emission coefficients to assess direct and indirect pollutant emission for several major industries in Jefferson County. The major findings of the study are: (a) the principal emissions by the selected industries are volatile organic compounds (VOC), and these contribute to the ambient ozone concentration; (b) the combined direct and indirect emissions are significantly higher than the direct emissions alone for some industries, indicating that an isolated analysis will underestimate an industry's emissions; (c) while industries with low emission coefficients may appear preferable, they may also emit the most hazardous chemicals. This study is limited by its assumptions and by data availability; however, it provides a useful analytical tool for direct and cumulative emission estimation and generates insights into the complexity of industry choice.
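The extended I-O mechanism behind finding (b) can be sketched in a few lines: total output follows the Leontief relation x = (I − A)⁻¹ d, and applying emission coefficients to x rather than to final demand d captures the indirect emissions propagated through inter-industry purchases. The three-sector matrix, demands, and VOC coefficients below are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A (input i per unit
# output j) and final-demand vector d.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
d = np.array([100.0, 50.0, 80.0])

# Total output solves x = A x + d, i.e. x = (I - A)^{-1} d (Leontief inverse).
x = np.linalg.solve(np.eye(3) - A, d)

# Illustrative VOC emission coefficients (tons per unit of output).
e = np.array([0.02, 0.05, 0.01])

direct = e @ d   # emissions attributed to final demand alone
total = e @ x    # direct + indirect emissions via inter-industry linkages
```

Because x ≥ d component-wise, `total` always exceeds `direct`, which is exactly why an isolated plant-level analysis understates an industry's cumulative emissions.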
Different faces of chaos in FRW models with scalar fields-geometrical point of view
International Nuclear Information System (INIS)
Hrycyna, Orest; Szydlowski, Marek
2006-01-01
FRW cosmologies with conformally coupled scalar fields are investigated in a geometrical way by means of geodesics of the Jacobi metric. In this model of dynamics, trajectories in the configuration space are represented by geodesics. Because of the singular nature of the Jacobi metric on the boundary set ∂D of the domain of admissible motion, the geodesics change cone sectors several times (or an infinite number of times) in the neighborhood of the singular set ∂D. We show that this singular set contains interesting information about the dynamical complexity of the model. Firstly, this set can be used as a Poincaré surface for the construction of Poincaré sections, and the trajectories then have the recurrence property. We also investigate the distribution of the intersection points. Secondly, the full classification of periodic orbits in the configuration space is performed and the existence of unstable periodic orbits (UPOs) is demonstrated. Our general conclusion is that, although the presented model leads to several complications, like divergence of curvature invariants as a measure of sensitive dependence on initial conditions, some global results can be obtained and some additional physical insight is gained from using the conformal Jacobi metric. We also study the complex behavior of trajectories in terms of symbolic dynamics.
Plasmon point spread functions: How do we model plasmon-mediated emission processes?
Willets, Katherine A.
2014-02-01
A major challenge with studying plasmon-mediated emission events is the small size of plasmonic nanoparticles relative to the wavelength of light. Objects smaller than roughly half the wavelength of light will appear as diffraction-limited spots in far-field optical images, presenting a significant experimental challenge for studying plasmonic processes on the nanoscale. Super-resolution imaging has recently been applied to plasmonic nanosystems and allows plasmon-mediated emission to be resolved on the order of ~5 nm. In super-resolution imaging, a diffraction-limited spot is fit to some model function in order to calculate the position of the emission centroid, which represents the location of the emitter. However, the accuracy of the centroid position strongly depends on how well the fitting function describes the data. This Perspective discusses the commonly used two-dimensional Gaussian fitting function applied to super-resolution imaging of plasmon-mediated emission, then introduces an alternative model based on dipole point spread functions. The two fitting models are compared and contrasted for super-resolution imaging of nanoparticle scattering/luminescence, surface-enhanced Raman scattering, and surface-enhanced fluorescence.
Mitasova, H.; Hardin, E. J.; Kratochvilova, A.; Landa, M.
2012-12-01
Multitemporal data acquired by modern mapping technologies provide unique insights into processes driving land surface dynamics. These high resolution data also offer an opportunity to improve the theoretical foundations and accuracy of process-based simulations of evolving landforms. We discuss the development of a new generation of visualization and analytics tools for GRASS GIS designed for 3D multitemporal data from repeated lidar surveys and from landscape process simulations. We focus on data and simulation methods that are based on point sampling of continuous fields and lead to representation of evolving surfaces as series of raster map layers or voxel models. For multitemporal lidar data we present workflows that combine open source point cloud processing tools with GRASS GIS and custom python scripts to model and analyze dynamics of coastal topography (Figure 1), and we outline the development of a coastal analysis toolbox. The simulations focus on a particle sampling method for solving continuity equations and its application to geospatial modeling of landscape processes. In addition to water and sediment transport models, already implemented in GIS, the new capabilities under development combine OpenFOAM for wind shear stress simulation with a new module for aeolian sand transport and dune evolution simulations. Comparison of observed dynamics with the results of simulations is supported by a new, integrated 2D and 3D visualization interface that provides highly interactive and intuitive access to the redesigned and enhanced visualization tools. Several case studies will be used to illustrate the presented methods and tools, demonstrate the power of workflows built with FOSS, and highlight their interoperability. Figure 1. Isosurfaces representing the evolution of the shoreline and a z = 4.5 m contour between the years 1997-2011 at Cape Hatteras, NC, extracted from a voxel model derived from a series of lidar-based DEMs.
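The core step of such workflows, turning a point sampling of a continuous surface into a raster map layer, can be sketched as per-cell averaging of point elevations. This is a generic stand-in (function name and gridding rule are mine, not a GRASS GIS API): each lidar return is binned into a grid cell and the cell stores the mean z of its points.

```python
import numpy as np

def points_to_raster(xyz, cell, x0, y0, nx, ny):
    """Grid scattered (x, y, z) points into a mean-elevation raster.

    Points outside the nx-by-ny grid anchored at (x0, y0) are dropped;
    empty cells are returned as NaN.
    """
    cols = ((xyz[:, 0] - x0) // cell).astype(int)
    rows = ((xyz[:, 1] - y0) // cell).astype(int)
    keep = (cols >= 0) & (cols < nx) & (rows >= 0) & (rows < ny)
    cols, rows, z = cols[keep], rows[keep], xyz[keep, 2]
    total = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    np.add.at(total, (rows, cols), z)   # unbuffered accumulation per cell
    np.add.at(count, (rows, cols), 1)
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)

# Three synthetic returns on a 2x2 grid of 1 m cells.
pts = np.array([[0.5, 0.5, 1.0], [0.6, 0.6, 3.0], [1.5, 0.5, 2.0]])
dem = points_to_raster(pts, cell=1.0, x0=0.0, y0=0.0, nx=2, ny=2)
```

Stacking rasters produced this way for successive surveys yields the series of map layers (or, stacked in z, the voxel model) from which isosurfaces like Figure 1 are extracted.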
Høyer, Anne-Sophie; Vignoli, Giulio; Mejer Hansen, Thomas; Thanh Vu, Le; Keefer, Donald A.; Jørgensen, Flemming
2017-12-01
Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical workflow to build the training image and
Energy Technology Data Exchange (ETDEWEB)
Massad, Raia Silvia [Institut National de la Recherche Agronomique (INRA), Environnement et Grandes Cultures, 78850 Thiverval-Grignon (France)], E-mail: massad@grignon.inra.fr; Loubet, Benjamin; Tuzet, Andree; Cellier, Pierre [Institut National de la Recherche Agronomique (INRA), Environnement et Grandes Cultures, 78850 Thiverval-Grignon (France)
2008-08-15
The ammonia stomatal compensation point of plants is determined by leaf temperature, ammonium concentration ([NH{sub 4}{sup +}]{sub apo}) and pH of the apoplastic solution. The latter two depend on the adjacent cells' metabolism and on leaf inputs and outputs through the xylem and phloem. Until now only empirical models have been designed to model the ammonia stomatal compensation point, except the model of Riedo et al. (2002. Coupling soil-plant-atmosphere exchange of ammonia with ecosystem functioning in grasslands. Ecological Modelling 158, 83-110), which represents the exchanges between the plant's nitrogen pools. The first step to model the ammonia stomatal compensation point is to adequately model [NH{sub 4}{sup +}]{sub apo}. This [NH{sub 4}{sup +}]{sub apo} has been studied experimentally, but there are currently no process-based quantitative models describing its relation to plant metabolism and environmental conditions. This study summarizes the processes involved in determining the ammonia stomatal compensation point at the leaf scale and qualitatively evaluates the ability of existing whole-plant N and C models to include a model for [NH{sub 4}{sup +}]{sub apo}. - A model for the ammonia stomatal compensation point at the leaf scale was developed.
International Nuclear Information System (INIS)
Massad, Raia Silvia; Loubet, Benjamin; Tuzet, Andree; Cellier, Pierre
2008-01-01
The ammonia stomatal compensation point of plants is determined by leaf temperature, ammonium concentration ([NH₄⁺]apo) and pH of the apoplastic solution. The latter two depend on the adjacent cells' metabolism and on leaf inputs and outputs through the xylem and phloem. Until now only empirical models have been designed to model the ammonia stomatal compensation point, except the model of Riedo et al. (2002. Coupling soil-plant-atmosphere exchange of ammonia with ecosystem functioning in grasslands. Ecological Modelling 158, 83-110), which represents the exchanges between the plant's nitrogen pools. The first step to model the ammonia stomatal compensation point is to adequately model [NH₄⁺]apo. This [NH₄⁺]apo has been studied experimentally, but there are currently no process-based quantitative models describing its relation to plant metabolism and environmental conditions. This study summarizes the processes involved in determining the ammonia stomatal compensation point at the leaf scale and qualitatively evaluates the ability of existing whole-plant N and C models to include a model for [NH₄⁺]apo. - A model for the ammonia stomatal compensation point at the leaf scale was developed.
DEFF Research Database (Denmark)
Odgaard, Anders; Madsen, Frank; Kristensen, Per Wagner
2018-01-01
BACKGROUND: Controversy exists over the surgical treatment for severe patellofemoral osteoarthritis. We therefore wished to compare the outcome of patellofemoral arthroplasty (PFA) with TKA in a blinded randomized controlled trial. QUESTIONS/PURPOSES: In the first 2 years after surgery: (1) Does the overall gain in quality of life differ between the implants based on the area under the curve of patient-reported outcomes (PROs) versus time? (2) Do patients obtain a better quality of life at specific points in time after PFA than after TKA? (3) Do patients get a better range of movement after PFA than after TKA? (4) Does PFA result in more complications than TKA? METHODS: Patients were eligible if they had debilitating symptoms and isolated patellofemoral disease. One hundred patients were included from 2007 to 2014 and were randomized to PFA or TKA (blinded for the first year; blinded to patient…
Regulatory mark; Marco regulatorio
Energy Technology Data Exchange (ETDEWEB)
NONE
2009-10-15
This chapter is based on work performed in distinct phases. The first phase consisted of an analysis of the regulatory legislation in force in Brazil for the sugar-alcohol sector since the beginning of the 20th century. This analysis allowed the identification of gaps and of legal provisions related to the studied aspects that were considered problematic for the sector's expansion. In the second phase, related treaties and international agreements were studied, along with possible obstacles to Brazilian bioethanol exports to the international market. Initiatives were examined in the European Union, the United States of America, the Caribbean and countries of sub-Saharan Africa. In this phase, policies were identified related to incentives for, and adoption of, biofuels blended with gasoline in countries or groups of countries considered key for the consolidation of bioethanol as a world commodity.
Energy Technology Data Exchange (ETDEWEB)
Pradhan, Santosh K., E-mail: santosh@aerb.gov.in [Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai 400094 (India); Obaidurrahman, K. [Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai 400094 (India); Iyer, Kannan N. [Department of Mechanical Engineering, IIT Bombay, Mumbai 400076 (India); Gaikwad, Avinash J. [Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai 400094 (India)
2016-04-15
Highlights: • A multi-point kinetics model is developed for the RELAP5 system thermal-hydraulics code. • The model is validated against an extensive 3D kinetics code. • The RELAP5 multi-point kinetics formulation is used to investigate the critical break for LOCA in a PHWR. - Abstract: The point kinetics approach in the system code RELAP5 limits its use for many reactivity-induced transients, which involve asymmetric core behaviour. Development of fully coupled 3D core kinetics with system thermal-hydraulics is the ultimate requirement in this regard; however, coupling and validating a 3D kinetics module with a system code is cumbersome and also requires access to the source code. An intermediate approach with multi-point kinetics is appropriate and relatively easy to implement for the analysis of several asymmetric transients for large cores. The multi-point kinetics formulation is based on dividing the entire core into several regions and solving the ODEs describing kinetics in each region. These regions are interconnected by spatial coupling coefficients which are estimated from a diffusion theory approximation. This model offers the advantage that the associated ordinary differential equations (ODEs) governing the multi-point kinetics formulation can be solved using numerical methods to the desired level of accuracy, and thus allows a formulation based on user-defined control variables, i.e., without disturbing the source code and hence also avoiding the associated coupling issues. Euler's method has been used in the present formulation to solve the several coupled ODEs internally at each time step. The results have been verified against the inbuilt point-kinetics models of RELAP5 and validated against the 3D kinetics code TRIKIN. The model was used to identify the critical break in the RIH of a typical large PHWR core. The neutronic asymmetry produced in the core by the system-induced transient was effectively handled by the multi-point kinetics model, overcoming the limitation of the in-built point-kinetics model.
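The formulation described above — region-wise point-kinetics ODEs linked by spatial coupling coefficients and advanced with Euler's method — can be sketched for two regions with one delayed-neutron group each. All kinetics parameters and the coupling matrix K below are illustrative values, not RELAP5 or TRIKIN data.

```python
import numpy as np

def multipoint_kinetics(n0, c0, rho, beta, lam, Lam, K, dt, steps):
    """Explicit-Euler integration of a two-region multi-point kinetics model.

    n: region neutron densities; c: delayed-neutron precursor densities.
    K[i, j] is the spatial coupling coefficient feeding region j into i
    (rows sum to zero so coupling conserves neutrons between regions).
    """
    n, c = np.array(n0, float), np.array(c0, float)
    for _ in range(steps):
        dn = (rho - beta) / Lam * n + lam * c + K @ n
        dc = beta / Lam * n - lam * c
        n, c = n + dt * dn, c + dt * dc
    return n, c

beta, lam, Lam = 0.0065, 0.08, 1e-3        # illustrative kinetics parameters
K = np.array([[-0.5, 0.5], [0.5, -0.5]])   # symmetric spatial coupling (1/s)
rho = np.array([0.0, 0.0])                 # critical core, no insertion
c_eq = beta / (lam * Lam)                  # equilibrium precursor density
n, c = multipoint_kinetics([1.0, 1.0], [c_eq, c_eq],
                           rho, beta, lam, Lam, K, dt=1e-5, steps=2000)
```

Starting from the critical equilibrium, the densities stay flat; inserting reactivity into only one region (asymmetric rho) makes that region's density rise fastest, with the coupling term dragging the neighbour along — the asymmetric behaviour a single-point model cannot represent.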
International Nuclear Information System (INIS)
Narula, Kapil K.; Gosain, A.K.
2013-01-01
The mountainous Himalayan watersheds are important hydrologic systems responsible for much of the water supply in the Indian sub-continent. These watersheds are increasingly facing anthropogenic and climate-related pressures that impact spatial and temporal distribution of water availability. This study evaluates temporal and spatial distribution of water availability including groundwater recharge and quality (non-point nitrate loadings) for a Himalayan watershed, namely, the Upper Yamuna watershed (part of the Ganga River basin). The watershed has an area of 11 600 km² with elevation ranging from 6300 to 600 m above mean sea level. Soil and Water Assessment Tool (SWAT), a physically-based, time-continuous model, has been used to simulate the land phase of the hydrological cycle, to obtain streamflows, groundwater recharge, and nitrate (NO₃) load distributions in various components of runoff. The hydrological SWAT model is integrated with the MODular finite difference groundwater FLOW model (MODFLOW), and the Modular 3-Dimensional Multi-Species Transport model (MT3DMS), to obtain groundwater flow and NO₃ transport. Validation of the various modules of this integrated model has been done for sub-basins of the Upper Yamuna watershed. Results on surface runoff and groundwater levels obtained as outputs from simulation show a good comparison with the observed streamflows and groundwater levels (Nash–Sutcliffe and R² correlations greater than +0.7). Nitrate loading obtained after nitrification, denitrification, and NO₃ removal from unsaturated and shallow aquifer zones is combined with groundwater recharge. Results for nitrate modeling in groundwater aquifers are compared with observed NO₃ concentrations and are found to be in good agreement. The study further evaluates the sensitivity of water availability to climate change. Simulations have been made with the weather inputs of climate change scenarios A2, B2, and A1B for the end of the century. Water yield estimates
Microscopic saw mark analysis: an empirical approach.
Love, Jennifer C; Derrick, Sharon M; Wiersema, Jason M; Peters, Charles
2015-01-01
Microscopic saw mark analysis is a well-published and generally accepted qualitative analytical method. However, little research has focused on identifying and mitigating potential sources of error associated with the method. The presented study proposes the use of classification trees and random forest classifiers as an optimal, statistically sound approach to mitigate the potential for observer variability and outcome error in microscopic saw mark analysis. The statistical model was applied to 58 experimental saw marks created with four types of saws. The saw marks were made in fresh human femurs obtained through anatomical gift and were analyzed using a Keyence digital microscope. The statistical approach weighed the variables based on discriminatory value and produced decision trees with an associated outcome error rate of 8.62–17.82%. © 2014 American Academy of Forensic Sciences.
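The core of "weighing the variables based on discriminatory value" in tree-based classifiers is impurity reduction: a feature whose best single threshold yields the largest Gini gain ranks highest. The sketch below illustrates this on invented "saw mark" features (the feature names and data are hypothetical, not the study's Keyence measurements).

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split_gain(x, y):
    """Largest Gini gain achievable by thresholding a single feature."""
    base, best = gini(y), 0.0
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        w = len(left) / len(y)
        best = max(best, base - w * gini(left) - (1 - w) * gini(right))
    return best

# Synthetic marks from two saw types: "tooth_hop" separates the classes
# cleanly, "blade_drift" is pure noise.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 20)
tooth_hop = np.where(y == 0, 1.0, 2.0) + 0.1 * rng.standard_normal(40)
blade_drift = rng.standard_normal(40)
ranking = {"tooth_hop": best_split_gain(tooth_hop, y),
           "blade_drift": best_split_gain(blade_drift, y)}
```

A random forest repeats this ranking over bootstrapped samples and random feature subsets, which is what makes the resulting variable weights, and the quoted error rates, statistically defensible.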
DEFF Research Database (Denmark)
Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine
In order to move beyond simplified covariance based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions 'learned' from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori…
A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns
Dao, Ngocanh; Genton, Marc G.
2014-01-01
Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte
Energy Technology Data Exchange (ETDEWEB)
Punjabi, A; Vahala, G [College of William and Mary, Williamsburg, VA (USA). Dept. of Physics
1983-12-01
The point model for the toroidal core plasma in the ELMO Bumpy Torus (with neoclassical non-resonant electrons) is examined in the light of catastrophe theory. Even though the point model equations do not constitute a gradient dynamic system, the equilibrium surfaces are similar to those of the canonical cusp catastrophe. The point model is then extended to incorporate ion cyclotron resonance heating. A detailed parametric study of the equilibria is presented. Further, the nonlinear time evolution of these equilibria is studied, and it is observed that the point model obeys the delay convention (and hence hysteresis) and shows catastrophes at the fold edges of the equilibrium surfaces. Tentative applications are made to experimental results.
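The fold structure and hysteresis described above can be illustrated with the canonical cusp the abstract invokes: equilibria of x³ + a x + b = 0 for control parameters (a, b). Inside the cusp region there are three equilibrium branches (two stable folds and an unstable middle sheet); outside there is one, and sweeping b across a fold edge at fixed a < 0 produces the jump the delay convention predicts. This is a generic catastrophe-theory sketch, not the EBT point-model equations themselves.

```python
import numpy as np

def cusp_equilibria(a, b):
    """Real equilibria of the canonical cusp potential: x**3 + a*x + b = 0."""
    roots = np.roots([1.0, 0.0, a, b])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

inside = cusp_equilibria(-3.0, 0.0)   # inside the cusp: three equilibria
outside = cusp_equilibria(3.0, 0.0)   # outside the cusp: one equilibrium
```

Tracking which root the system sits on as b is slowly varied (staying on a branch until it disappears at a fold edge) reproduces the hysteresis loop seen in the nonlinear time evolution of the point model.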
Abrahamson, Matthew J.; Oaida, Bogdan; Erkmen, Baris
2013-01-01
This paper will discuss the OPALS pointing strategy, focusing on incorporation of ISS trajectory and attitude models to build pointing predictions. Methods to extrapolate an ISS prediction based on past data will be discussed and will be compared to periodically published ISS predictions and Two-Line Element (TLE) predictions. The prediction performance will also be measured against GPS states available in telemetry. The performance of the pointing products will be compared to the allocated values in the OPALS pointing budget to assess compliance with requirements.
Mark Kostabi wishes to make people happier / Kalev Mark Kostabi
Kostabi, Kalev Mark, 1960-
2008-01-01
Kalev Mark Kostabi on his interior design preferences, the differences between the home interiors of Americans and Italians, art as one element of interior design, and the design of his New York and Rome apartments
Directory of Open Access Journals (Sweden)
Carey Mather
2017-05-01
Full Text Available Limited adoption of mobile technology for informal learning and continuing professional development within Australian healthcare environments has been explained primarily as an issue of insufficient digital and ehealth literacy of healthcare professionals. This study explores nurse supervisors’ use of mobile technology for informal learning and continuing professional development both for their own professional practice, and in their role in modelling digital knowledge transfer, by facilitating the learning and teaching of nursing students in the workplace. A convenience sample of 27 nurse supervisors involved with guiding and supporting undergraduate nurses participated in one of six focus groups held in two states of Australia. Expanding knowledge emerged as the key theme of importance to this group of clinicians. Although nurse supervisors regularly browsed Internet sources for learning and teaching purposes, a mixed understanding of the mobile learning activities that could be included as informal learning or part of formal continuing professional development was detected. Participants need educational preparation and access to mobile learning opportunities to improve and maintain their digital and ehealth literacy to appropriately model digital professionalism with students. Implementation of mobile learning at point of care to enable digital knowledge transfer, augment informal learning for students and patients, and support continuing professional development opportunities is necessary. Embedding digital and ehealth literacy within nursing curricula will promote mobile learning as a legitimate nursing function and advance nursing practice.
Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model
Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef
2016-10-01
We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many training images, and which training images, should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second method relies on a rate-of-change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.
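The first selection strategy, distance-based clustering of overhead snapshots, can be sketched with a greedy farthest-point rule: repeatedly pick the snapshot most dissimilar from those already chosen, so the selected training images span the range of observed patterns. This is a simplified stand-in for the paper's clustering (function name and the Euclidean distance on flattened images are my assumptions).

```python
import numpy as np

def select_training_images(snapshots, k):
    """Greedily pick k representative snapshots by farthest-point sampling.

    Distances are Euclidean norms between flattened images; the running
    vector d holds each snapshot's distance to its nearest chosen image.
    """
    X = snapshots.reshape(len(snapshots), -1)
    chosen = [0]                                   # seed with the first snapshot
    d = np.linalg.norm(X - X[0], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(d))                    # most poorly represented
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return chosen

# Five toy 4x4 "snapshots": images 0, 1, 3, 4 are near-duplicates, image 2
# is the outlier pattern, so it should be selected second.
snaps = np.zeros((5, 4, 4))
snaps[1] += 1.0; snaps[2] += 5.0; snaps[3] += 1.1; snaps[4] += 0.1
picked = select_training_images(snaps, 2)
```

Each picked snapshot then serves as one training image for the multiple-point geostatistical simulations, with k controlling how much autogenic variability the prior can express.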
Gaussian mixed model in support of semiglobal matching leveraged by ground control points
Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li
2017-04-01
Semiglobal matching (SGM) has been widely applied to large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term based on GCPs is formulated by a Gaussian mixture model, which strengthens the relation between GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. Another term depends on pixel-wise confidence, and we further design a confidence updating equation based on three rules. With this confidence-based term, the assignment of disparity can be heuristically selected among disparity search ranges during the iteration process. Several iterations are sufficient to produce satisfactory results according to our experiments. Experimental results validate that the proposed method outperforms surface reconstruction, which is a representative variant of SGM and performs excellently on aerial images.
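The Gaussian-mixture data term can be sketched as a per-pixel cost that is low when a candidate disparity agrees with disparities observed at nearby GCPs and high otherwise. The weights, bandwidth, and function below are illustrative assumptions; the paper's exact term and confidence-update rules are not reproduced.

```python
import numpy as np

def gcp_cost(d_candidate, gcp_disparities, gcp_weights, sigma=1.0):
    """Negative log-likelihood of a candidate disparity under a Gaussian
    mixture centered on nearby ground-control-point disparities.

    gcp_weights could encode spatial proximity (e.g. inverse distance);
    sigma sets how strictly GCPs constrain the estimate.
    """
    g = gcp_weights * np.exp(-0.5 * ((d_candidate - gcp_disparities) / sigma) ** 2)
    likelihood = g.sum() / gcp_weights.sum()
    return -np.log(likelihood + 1e-12)   # epsilon guards log(0)

gcps = np.array([10.0, 10.5, 11.0])   # disparities at nearby GCPs (pixels)
w = np.array([1.0, 1.0, 0.5])         # hypothetical proximity weights

near = gcp_cost(10.4, gcps, w)        # consistent with the GCPs: cheap
far = gcp_cost(25.0, gcps, w)         # inconsistent: heavily penalized
```

Added to SGM's usual matching and smoothness costs, a term of this shape pulls the aggregated disparity toward GCP-consistent values, which is the mechanism the iterative scheme exploits.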
Quench dynamics near a quantum critical point: Application to the sine-Gordon model
International Nuclear Information System (INIS)
De Grandi, C.; Polkovnikov, A.; Gritsev, V.
2010-01-01
We discuss the quench dynamics near a quantum critical point focusing on the sine-Gordon model as a primary example. We suggest a unified approach to sudden and slow quenches, where the tuning parameter λ(t) changes in time as λ(t) ∼ υt^r, based on the adiabatic expansion of the excitation probability in powers of υ. We show that the universal scaling of the excitation probability can be understood through the singularity of the generalized adiabatic susceptibility χ_{2r+2}(λ), which for sudden quenches (r=0) reduces to the fidelity susceptibility. In turn this class of susceptibilities is expressed through the moments of the connected correlation function of the quench operator. We analyze the excitations created after a sudden quench of the cosine potential using a combined approach of form-factors expansion and conformal perturbation theory for the low-energy and high-energy sector, respectively. We find the general scaling laws for the probability of exciting the system, the density of excited quasiparticles, the entropy and the heat generated after the quench. In the two limits where the sine-Gordon model maps to hard-core bosons and free massive fermions we provide the exact solutions for the quench dynamics and discuss the finite temperature generalizations.
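In compact form, the quench protocol and the leading-order scaling described above read (schematically, following the abstract's notation; prefactors and conventions vary):

```latex
% Quench protocol and leading adiabatic expansion of the excitation probability.
\lambda(t) \sim \upsilon\, t^{r}, \qquad
P_{\mathrm{ex}} \approx \upsilon^{2}\,\chi_{2r+2}(\lambda) + O(\upsilon^{4}),
\qquad
\chi_{2}(\lambda) \equiv \chi_{F}(\lambda)
\quad \text{(fidelity susceptibility, sudden quench } r = 0\text{)}.
```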
The case for an internal dynamics model versus equilibrium point control in human movement.
Hinder, Mark R; Milner, Theodore E
2003-06-15
The equilibrium point hypothesis (EPH) was conceived as a means whereby the central nervous system could control limb movements by a relatively simple shift in equilibrium position without the need to explicitly compensate for task dynamics. Many recent studies have questioned this view with results that suggest the formation of an internal dynamics model of the specific task. However, supporters of the EPH have argued that these results are not incompatible with the EPH and that there is no reason to abandon it. In this study, we have tested one of the fundamental predictions of the EPH, namely, equifinality. Subjects learned to perform goal-directed wrist flexion movements while a motor provided assistance in proportion to the instantaneous velocity. It was found that the subjects stopped short of the target on the trials where the magnitude of the assistance was randomly decreased, compared to the preceding control trials (P = 0.003), i.e. equifinality was not achieved. This is contrary to the EPH, which predicts that final position should not be affected by external loads that depend purely on velocity. However, such effects are entirely consistent with predictions based on the formation of an internal dynamics model.
Energy Technology Data Exchange (ETDEWEB)
Erhart, P.
2006-07-01
The present dissertation deals with the modeling of zinc oxide on the atomic scale, employing both quantum mechanical and atomistic methods. The first part describes quantum mechanical calculations, based on density functional theory, of intrinsic point defects in ZnO. To begin with, the geometric and electronic structure of vacancies and oxygen interstitials is explored. In equilibrium, oxygen interstitials are found to adopt dumbbell and split-interstitial configurations in positive and negative charge states, respectively. Semi-empirical self-interaction corrections significantly improve the agreement between the experimental and calculated band structures; errors due to the limited size of the supercells can be corrected by employing finite-size scaling. The effect of both band structure corrections and finite-size scaling on defect formation enthalpies and transition levels is explored. Finally, transition paths and barriers for the migration of zinc as well as oxygen vacancies and interstitials are determined. The results make it possible to interpret diffusion experiments and provide a consistent basis for developing models for device simulation. In the second part, an interatomic potential for zinc oxide is derived. To this end, the Pontifix computer code, which allows fitting analytic bond-order potentials, is developed. The code is subsequently employed to obtain interatomic potentials for Zn-O, Zn-Zn, and O-O interactions. To demonstrate the applicability of the potentials, simulations of defect production by ion irradiation are carried out. (orig.)
Point spread function modeling and image restoration for cone-beam CT
International Nuclear Information System (INIS)
Zhang Hua; Shi Yikai; Huang Kuidong; Xu Zhe
2015-01-01
X-ray cone-beam computed tomography (CT) offers notable efficiency and precision and is widely used in medical imaging and industrial non-destructive testing, but inherent imaging degradation reduces the quality of CT images. To address the problems of projection image degradation and restoration in cone-beam CT, a point spread function (PSF) modeling method is proposed first. A general PSF model of cone-beam CT is established, and based on it, the PSF under arbitrary scanning conditions can be calculated directly for projection image restoration without additional measurement, which greatly improves the convenience of applying cone-beam CT. Secondly, a projection image restoration algorithm based on pre-filtering and pre-segmentation is proposed, which makes the edge contours in projection images and slice images clearer after restoration while keeping noise at a level equivalent to the original images. Finally, experiments verified the feasibility and effectiveness of the proposed methods. (authors)
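As a generic illustration of PSF-based projection restoration (a standard Wiener deconvolution, not the paper's pre-filtering and pre-segmentation algorithm), restoring an image degraded by a known PSF can be sketched as:

```python
import numpy as np

def wiener_restore(blurred, psf, k=0.01):
    """Restore an image degraded by a known PSF via Wiener filtering.

    A frequency-domain restoration step; `k` regularizes against noise
    amplification where the PSF spectrum is small.
    """
    H = np.fft.fft2(psf, s=blurred.shape)   # PSF spectrum (zero-padded)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)   # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

Given a degraded projection and the modeled PSF, this brings the image measurably closer to the unblurred original.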
CP^{N-1} models with a θ term and fixed point action
International Nuclear Information System (INIS)
Burkhalter, Rudolf; Imachi, Masahiro; Shinno, Yasuhiko; Yoneyama, Hiroshi
2001-01-01
The topological charge distribution P(Q) is calculated for lattice CP^{N-1} models. In order to suppress lattice cutoff effects, we employ a fixed point (FP) action. Through transformation of P(Q), we calculate the free energy F(θ) as a function of the θ parameter. For N=4, scaling behavior is observed for P(Q) and F(θ), as well as for the correlation lengths ξ(Q). For N=2, however, scaling behavior is not observed, as expected. For comparison, we also make a calculation for the CP^3 model with a standard action. We furthermore pay special attention to the behavior of P(Q) in order to investigate the dynamics of instantons. For this purpose, we carefully consider the behavior of γ_eff, the effective power of P(Q) (P(Q) ∼ exp(-C Q^{γ_eff})), which reflects the local behavior of P(Q) as a function of Q. We study γ_eff for two cases: the dilute gas approximation based on the Poisson distribution of instantons, and the Debye-Hückel approximation of instanton quarks. In both cases, we find behavior similar to that observed in numerical simulations. (author)
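For reference, the standard relations behind these quantities (normalization conventions vary between papers) can be written as:

```latex
% Free energy from the topological charge distribution, and the
% effective power characterizing the local behavior of P(Q).
e^{-V F(\theta)} \;\propto\; \sum_{Q} P(Q)\, e^{i\theta Q},
\qquad
P(Q) \;\sim\; \exp\!\left(-C\, Q^{\gamma_{\mathrm{eff}}}\right).
```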
Directory of Open Access Journals (Sweden)
D. D. Burkaltseva
2017-01-01
Full Text Available Abstract Purpose: the main goal of the article is to build a conceptual model for the organization of effective functioning of the points of economic and innovative growth of the region in modern conditions, taking into account regional and municipal limitations of internal and external nature, with the aim of ensuring economic security and effective interaction of subjects of the "business-power" system, taking into account the influence of institutional factors. Methods: the methodological basis of research in the article is the dialectical method of scientific cognition and the systemic and institutional approach to studying and building an organization for the effective functioning of the regional economy in order to ensure its economic security against internal and external threats. Results: the existing mechanism of "business-power" interaction is considered. The financial stability of economic entities of the Republic of Crimea is determined. The financial independence of the regional budget of the Republic of Crimea has been determined. The dynamics of financing of the Federal Target Program "Social and Economic Development of the Republic of Crimea and Sevastopol until 2020" has been revealed. The regional and municipal restrictions of internal and external nature, which constitute a threat to social and economic development, are indicated. Points of economic and innovative growth at the present stage, their advantages, and the stages of technical organization of their implementation have been determined. A conceptual model of building effective interaction between subjects of the "business-power" system is proposed, taking into account the influence of institutional factors. The conceptual model of organization of effective functioning of points of economic and innovative growth of the region, as a territorial socio-economic system, under modern conditions is constructed. Conclusions and Relevance: we propose to define four
AUGEREAU, V; DABLANC, L
2007-01-01
In this paper, we present an analysis of recent collection point/lockerbank experiments in Europe, including the history of some of the most notable experiments. Two 'models' are currently quite successful (Kiala relay points in France and Packstation locker banks in Germany), although they are quite different. As a first interpretation of these results, we propose that these two models be considered as complementary to one another.
Narula, Kapil K; Gosain, A K
2013-12-01
The mountainous Himalayan watersheds are important hydrologic systems responsible for much of the water supply in the Indian sub-continent. These watersheds are increasingly facing anthropogenic and climate-related pressures that impact the spatial and temporal distribution of water availability. This study evaluates the temporal and spatial distribution of water availability, including groundwater recharge and quality (non-point nitrate loadings), for a Himalayan watershed, namely, the Upper Yamuna watershed (part of the Ganga River basin). The watershed has an area of 11,600 km² with elevation ranging from 6300 to 600 m above mean sea level. The Soil and Water Assessment Tool (SWAT), a physically-based, time-continuous model, has been used to simulate the land phase of the hydrological cycle, to obtain streamflows, groundwater recharge, and nitrate (NO3) load distributions in various components of runoff. The hydrological SWAT model is integrated with the MODular finite difference groundwater FLOW model (MODFLOW) and the Modular 3-Dimensional Multi-Species Transport model (MT3DMS) to obtain groundwater flow and NO3 transport. Validation of the various modules of this integrated model has been done for sub-basins of the Upper Yamuna watershed. Simulated surface runoff and groundwater levels compare well with the observed streamflows and groundwater levels (Nash-Sutcliffe and R² correlations greater than +0.7). Nitrate loading obtained after nitrification, denitrification, and NO3 removal from the unsaturated and shallow aquifer zones is combined with groundwater recharge. Results for nitrate modeling in groundwater aquifers are compared with observed NO3 concentrations and are found to be in good agreement. The study further evaluates the sensitivity of water availability to climate change. Simulations have been made with the weather inputs of climate change scenarios A2, B2, and A1B for the end of the century. Water yield estimates under
Directory of Open Access Journals (Sweden)
Giuseppe Chesi
2012-01-01
Full Text Available Introduction: The type of patients being treated in our hospitals has changed significantly. Today's patients are much older, with more complicated, polypathological problems. As a result, hospital organization and management structures must also change, particularly in Internal Medicine. A widely discussed approach, organization according to “intensity of treatment,” could be an appropriate solution from an organizational viewpoint that would also satisfy these new demands. Materials and methods: With the aid of a questionnaire sent to internists working in the hospitals of Italy's Emilia-Romagna region and a review of the relevant medical literature, we defined structural, organizational, technological, managerial, and staffing characteristics to better determine and classify this model. We analyzed questionnaire responses of 31 internists heading operative units in their hospitals, a relatively homogeneous subgroup with experience in organizing and managing healthcare as well as its clinical aspects. Results: Analysis of these questionnaires revealed important points concerning the model: (1) an accurate identification of the medical care on which to base the model; (2) a well-defined strategy for differentiated allocation of staff to structural and technological areas depending on the level of medical care provided in the area; (3) an accurate definition of the types and features of patients targeted by each level of medical care; (4) an early exchange (starting from the patient's arrival in the Emergency Department) of information and medical knowledge among Emergency Department physicians and those present during the initial stages of hospitalization; (5) a precise definition of responsibilities in the different areas, of the operative and collaborative stages among different physicians and medical staff, and of the different disciplines involved in the process. Conclusions: Among the physicians responsible for managing complex areas of Internal Medicine in Emilia
Characterization of the TIP4P-Ew water model: vapor pressure and boiling point.
Horn, Hans W; Swope, William C; Pitera, Jed W
2005-11-15
The liquid-vapor-phase equilibrium properties of the previously developed TIP4P-Ew water model have been studied using thermodynamic integration free-energy simulation techniques in the temperature range of 274-400 K. We stress that free-energy results from simulations need to be corrected in order to be compared to the experiment. This is due to the fact that the thermodynamic end states accessible through simulations correspond to fictitious substances (classical rigid liquids and classical rigid ideal gases) while experiments operate on real substances (liquids and real gases, with quantum effects). After applying analytical corrections the vapor pressure curve obtained from simulated free-energy changes is in excellent agreement with the experimental vapor pressure curve. The boiling point of TIP4P-Ew water under ambient pressure is found to be at 370.3 ± 1.9 K, about 7 K higher than the boiling point of TIP4P water (363.7 ± 5.1 K; from simulations that employ finite range treatment of electrostatic and Lennard-Jones interactions). This is in contrast to the approximately +15 K by which the temperature of the density maximum and the melting temperature of TIP4P-Ew are shifted relative to TIP4P, indicating that the temperature range over which the liquid phase of TIP4P-Ew is stable is narrower than that of TIP4P and resembles more closely that of real water. The quality of the vapor pressure results highlights the success of TIP4P-Ew in describing the energetic and entropic aspects of intermolecular interactions in liquid water.
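Once a vapor-pressure curve is in hand, the boiling point can be read off by Clausius-Clapeyron-style interpolation. A minimal sketch, assuming ln p is roughly linear in 1/T over the fitted range (this is a generic post-processing step, not the paper's free-energy machinery):

```python
import numpy as np

def boiling_point(T, p, p_target=101.325):
    """Locate the boiling point by fitting ln(p) against 1/T.

    T in kelvin, p in kPa; assumes Clausius-Clapeyron behaviour,
    i.e. ln p ~ slope/T + intercept, and solves for T at p_target
    (101.325 kPa = ambient pressure).
    """
    x, y = 1.0 / np.asarray(T), np.log(np.asarray(p))
    slope, intercept = np.polyfit(x, y, 1)         # ln p = slope/T + intercept
    return slope / (np.log(p_target) - intercept)  # T where p = p_target
```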
Modelling the transport of solid contaminants originated from a point source
Salgueiro, Dora V.; Conde, Daniel A. S.; Franca, Mário J.; Schleiss, Anton J.; Ferreira, Rui M. L.
2017-04-01
The solid phases of natural flows can comprise an important repository for contaminants in aquatic ecosystems and can propagate as turbidity currents generating a stratified environment. Contaminants can be desorbed under specific environmental conditions, becoming re-suspended, with a potential impact on the aquatic biota. Forecasting the distribution of the contaminated turbidity current is thus crucial for a complete assessment of environmental exposure. In this work we validate the ability of the model STAV-2D, developed at CERIS (IST), to simulate stratified flows such as those resulting from turbidity currents in complex geometrical environments. The validation involves not only flow phenomena inherent to flows generated by density imbalance but also convective effects brought about by the complex geometry of the water basin where the current propagates. This latter aspect is of paramount importance since, in real applications, currents may propagate in semi-confined geometries in plan view, generating important convective accelerations. Velocity fields and mass distributions obtained from experiments carried out at CERIS (IST) are used as validation data for the model. The experimental set-up comprises a point source in a rectangular basin with a wall placed perpendicularly to the outer walls. This generates a complex 2D flow with an advancing wave front and shocks due to flow reflection from the walls. STAV-2D is based on the depth- and time-averaged mass and momentum equations for mixtures of water and sediment, understood as continua. It is closed in terms of flow resistance and capacity bedload discharge by a set of classic closure models and a specific high-concentration formulation. The two-layer model is derived from layer-averaged Navier-Stokes equations, resulting in a system of layer-specific non-linear shallow-water equations, solved through explicit first- or second-order schemes. According to the experimental data for mass distribution, the
Distinguishing butchery cut marks from crocodile bite marks through machine learning methods.
Domínguez-Rodrigo, Manuel; Baquedano, Enrique
2018-04-10
All models of evolution of human behaviour depend on the correct identification and interpretation of bone surface modifications (BSM) on archaeofaunal assemblages. Crucial evolutionary features, such as the origin of stone tool use, meat-eating, food-sharing, cooperation and sociality, can only be addressed through confident identification and interpretation of BSM, and more specifically, cut marks. Recently, it has been argued that linear marks with the same properties as cut marks can be created by crocodiles, thereby questioning whether secure cut mark identifications can be made in the Early Pleistocene fossil record. Powerful classification methods based on multivariate statistics and machine learning (ML) algorithms have previously successfully discriminated cut marks from most other potentially confounding BSM. However, crocodile-made marks were marginal to or played no role in these comparative analyses. Here, for the first time, we apply state-of-the-art ML methods to crocodile linear BSM and experimental butchery cut marks, showing that the combination of multivariate taphonomy and ML methods provides accurate identification of BSM, including cut and crocodile bite marks. This enables empirically-supported hominin behavioural modelling, provided that these methods are applied to fossil assemblages.
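A minimal stand-in for the ML classification step is sketched below: logistic regression trained on two synthetic mark features. The features and data here are invented for illustration; the actual analyses use richer multivariate BSM variables and stronger learners.

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    """Minimal logistic-regression classifier trained by gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        g = p - y                               # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(X, w, b):
    """Label marks: 1 (e.g. cut mark) if the decision function is positive."""
    return (X @ w + b > 0).astype(int)
```

On well-separated feature distributions such a linear classifier already discriminates the two mark classes with high accuracy, which is the qualitative point the abstract makes about multivariate BSM data.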
Gézero, L.; Antunes, C.
2017-05-01
The digital terrain models (DTM) assume an essential role in all types of road maintenance, water supply and sanitation projects. The demand for such information is greater in developing countries, where the lack of infrastructure is higher. In recent years, the use of Mobile LiDAR Systems (MLS) has proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can be a solution for obtaining the data needed to produce DTM in remote areas, due mainly to the safety, precision, and speed of acquisition and the detail of the information gathered. However, point cloud filtering and algorithms to separate "terrain points" from "non-terrain points" quickly and consistently remain a challenge that has caught the interest of researchers. This work presents a method to create the DTM from point clouds collected by MLS. The method is based on two interactive steps. The first step reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step is based on the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large-scale DTM production in remote areas.
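The first, cloud-reduction step can be approximated by a common ground-filtering heuristic: keep the lowest point per grid cell. The fixed cell size below is a simplification (the paper adapts point spacing to terrain variation); the kept points could then be triangulated, e.g. with scipy.spatial.Delaunay, as in the second step.

```python
import numpy as np

def reduce_to_terrain_candidates(points, cell=1.0):
    """Keep the lowest point in each planimetric grid cell.

    points: (n, 3) array of x, y, z from the MLS cloud. Per-cell minima
    are a standard proxy for ground points, since vegetation, vehicles,
    and buildings sit above the terrain surface.
    """
    ij = np.floor(points[:, :2] / cell).astype(int)
    keep = {}
    for idx, key in enumerate(map(tuple, ij)):
        # Retain only the lowest z seen so far in this cell.
        if key not in keep or points[idx, 2] < points[keep[key], 2]:
            keep[key] = idx
    return points[sorted(keep.values())]
```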
NotaMark industrial laser marking system: a new security marking technology
Moreau, Vincent G.
2004-06-01
Up until now, the only variable alphanumeric data which could be added to banknotes was the number, applied by means of impact typographical numbering boxes. As an additional process or an alternative to this mechanical method, a non-contact laser marking process can be used offering high quality and greater levels of flexibility. For this purpose KBA-GIORI propose an exclusive laser marking solution called NotaMark. The laser marking process NotaMark is the ideal solution for applying variable data and personalizing banknotes (or any other security documents) with a very high resolution, for extremely large production volumes. A completely integrated solution has been developed comprised of laser light sources, marking head units, and covers and extraction systems. NotaMark allows the marking of variable data by removing locally and selectively, specific printed materials leaving the substrate itself untouched. A wide range of materials has already been tested extensively. NotaMark is a new security feature which is easy to identify and difficult to counterfeit, and which complies with the standard mechanical and chemical resistance tests in the security printing industry as well as with other major soiling tests. The laser marking process opens up a whole new range of design possibilities and can be used to create a primary security feature such as numbering, or to enhance the value of existing features.
International Nuclear Information System (INIS)
Miller, W.H.; Hase, W.L.; Darling, C.L.
1989-01-01
A simple model is proposed for correcting problems with zero point energy in classical trajectory simulations of dynamical processes in polyatomic molecules. The ''problems'' referred to are that classical mechanics allows the vibrational energy in a mode to decrease below its quantum zero point value, and since the total energy is conserved classically this can allow too much energy to pool in other modes. The proposed model introduces hard sphere-like terms in action-angle variables that prevent the vibrational energy in any mode from falling below its zero point value. The algorithm which results is quite simple in terms of the cartesian normal modes of the system: if the energy in a mode k, say, decreases below its zero point value at time t, then at this time the momentum P_k for that mode has its sign changed, and the trajectory continues. This is essentially a time reversal for mode k (only!), and it conserves the total energy of the system. One can think of the model as supplying impulsive ''quantum kicks'' to a mode whose energy attempts to fall below its zero point value, a kind of ''Planck demon'' analogous to a Brownian-like random force. The model is illustrated by application to a model of CH overtone relaxation.
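The momentum-reversal step described above is simple to state in code. A sketch for cartesian normal modes with harmonic mode energies assumed (the function name is ours):

```python
import numpy as np

def zpe_guard(P, Q, omega, hbar=1.0):
    """Apply the momentum-reversal 'quantum kick' to offending modes.

    P, Q: momenta and coordinates of the cartesian normal modes.
    omega: mode frequencies. If a mode's harmonic energy falls below its
    zero-point value hbar*omega/2, that mode's momentum sign is flipped
    (a time reversal for that mode only), which leaves the total energy
    of the system unchanged.
    """
    E = 0.5 * P**2 + 0.5 * omega**2 * Q**2   # per-mode harmonic energy
    low = E < 0.5 * hbar * omega             # below the zero-point value?
    P = np.where(low, -P, P)
    return P, low
```

In a full trajectory this check would run each time step; the flip reverses the offending mode's motion so that, through its coupling to the other modes, its energy is driven back above the zero-point value.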
Energy Technology Data Exchange (ETDEWEB)
Walder, Brennan J.; Davis, Michael C.; Grandinetti, Philip J. [Department of Chemistry, Ohio State University, 100 West 18th Avenue, Columbus, Ohio 43210 (United States); Dey, Krishna K. [Department of Physics, Dr. H. S. Gour University, Sagar, Madhya Pradesh 470003 (India); Baltisberger, Jay H. [Division of Natural Science, Mathematics, and Nursing, Berea College, Berea, Kentucky 40403 (United States)
2015-01-07
A new two-dimensional Nuclear Magnetic Resonance (NMR) experiment to separate and correlate the first-order quadrupolar and chemical/paramagnetic shift interactions is described. This experiment, which we call the shifting-d echo experiment, allows a more precise determination of tensor principal component values and their relative orientation. It is designed using the recently introduced symmetry pathway concept. A comparison of the shifting-d experiment with earlier proposed methods is presented and experimentally illustrated in the case of the ²H (I = 1) paramagnetic shift and quadrupolar tensors of CuCl₂·2D₂O. The benefits of the shifting-d echo experiment over other methods are a factor of two improvement in sensitivity and the suppression of major artifacts. From the 2D lineshape analysis of the shifting-d spectrum, the ²H quadrupolar coupling parameters are ⟨C_q⟩ = 118.1 kHz and ⟨η_q⟩ = 0.88, and the ²H paramagnetic shift tensor anisotropy parameters are ⟨ζ_P⟩ = -152.5 ppm and ⟨η_P⟩ = 0.91. The orientation of the quadrupolar coupling principal axis system (PAS) relative to the paramagnetic shift anisotropy PAS is given by (α, β, γ) = (π/2, π/2, 0). Using a simple ligand-hopping model, the tensor parameters in the absence of exchange are estimated. On the basis of this analysis, the instantaneous principal components and orientation of the quadrupolar coupling are found to be in excellent agreement with previous measurements. A new point dipole model for predicting the paramagnetic shift tensor is proposed, yielding significantly better agreement than previously used models. In the new model, the dipoles are displaced from the nuclei at positions associated with high electron density in the singly occupied molecular orbital predicted from ligand field theory.
Valenza, G; Romigi, A; Citi, L; Placidi, F; Izzi, F; Albanese, M; Scilingo, E P; Marciani, M G; Duggento, A; Guerrisi, M; Toschi, N; Barbieri, R
2016-08-01
Symptoms of temporal lobe epilepsy (TLE) are frequently associated with autonomic dysregulation, whose underlying biological processes are thought to strongly contribute to sudden unexpected death in epilepsy (SUDEP). While abnormal cardiovascular patterns commonly occur during ictal events, putative patterns of autonomic cardiac effects during pre-ictal (PRE) periods (i.e. periods preceding seizures) are still unknown. In this study, we investigated TLE-related heart rate variability (HRV) through instantaneous, nonlinear estimates of cardiovascular oscillations during inter-ictal (INT) and PRE periods. ECG recordings from 12 patients with TLE were processed to extract standard HRV indices, as well as indices of instantaneous HRV complexity (dominant Lyapunov exponent and entropy) and higher-order statistics (bispectra) obtained through definition of inhomogeneous point-process nonlinear models, employing Volterra-Laguerre expansions of linear, quadratic, and cubic kernels. Experimental results demonstrate that the best INT vs. PRE classification performance (balanced accuracy: 73.91%) was achieved only when retaining the time-varying, nonlinear, and non-stationary structure of heartbeat dynamical features. The proposed approach opens novel important avenues in predicting ictal events using information gathered from cardiovascular signals exclusively.
Andrade, Maria Izabel Siqueira de; Oliveira, Juliana Souza; Leal, Vanessa Sá; Lima, Niedja Maria da Silva; Costa, Emília Chagas; Aquino, Nathalia Barbosa de; Lira, Pedro Israel Cabral de
2016-06-01
To identify cutoff points of the Homeostatic Model Assessment for Insulin Resistance (HOMA-IR) index established for adolescents and discuss their applicability for the diagnosis of insulin resistance in Brazilian adolescents. A systematic review was performed in the PubMed, Lilacs and SciELO databases, using the following descriptors: "Adolescents", "insulin resistance" and "ROC curve". Original articles carried out with adolescents, published between 2005 and 2015 in Portuguese, English or Spanish, which included ROC curve analysis to determine the HOMA-IR cutoff were included. A total of 184 articles were identified, and after the study selection phases were applied, seven articles were selected for the review. All selected studies established their cutoffs using a ROC curve, with the lowest observed cutoff being 1.65 for girls and 1.95 for boys and the highest being 3.82 for girls and 5.22 for boys. Of the studies analyzed, one proposed an externally validated cutoff, recommending the use of HOMA-IR >2.5 for both genders. The HOMA-IR index constitutes a reliable method for the detection of insulin resistance in adolescents, as long as it uses cutoffs adequate for the reality of the study population, allowing early diagnosis of insulin resistance and enabling multidisciplinary interventions aimed at health promotion in this population. Copyright © 2015 Sociedade de Pediatria de São Paulo. Publicado por Elsevier Editora Ltda. All rights reserved.
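For reference, the HOMA-IR index itself is computed from fasting glucose and insulin by a standard formula; a minimal sketch applying the externally validated cutoff mentioned above (function names are ours):

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    """Homeostatic Model Assessment of insulin resistance.

    Standard formula: glucose (mmol/L) x insulin (uU/mL) / 22.5.
    (With glucose in mg/dL, divide by 405 instead.)
    """
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

def flag_insulin_resistance(homa, cutoff=2.5):
    """Apply a cutoff; 2.5 is the externally validated value cited above,
    but the review stresses population-specific cutoffs (1.65 to 5.22)."""
    return homa > cutoff
```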
Improving access in gastroenterology: The single point of entry model for referrals
Novak, Kerri L; Van Zanten, Sander Veldhuyzen; Pendharkar, Sachin R
2013-01-01
In 2005, a group of academic gastroenterologists in Calgary (Alberta) adopted a centralized referral intake system known as central triage. This system provided a single point of entry model (SEM) for referrals rather than the traditional system of individual practitioners managing their own referrals and queues. The goal of central triage was to improve wait times and referral management. In 2008, a similar system was developed in Edmonton at the University of Alberta Hospital (Edmonton, Alberta). SEMs have subsequently been adopted by numerous subspecialties throughout Alberta. There are many benefits of SEMs including improved access and reduced wait times. Understanding and measuring complex patient flow systems is key to improving access, and centralized intake systems provide an opportunity to better understand total demand and system bottlenecks. This knowledge is particularly important for specialties such as gastroenterology (GI), in which demand exceeds supply. While it is anticipated that SEMs will reduce wait times for GI care in Canada, the lack of sufficient resources to meet the demand for GI care necessitates additional strategies. PMID:24040629
Simulation model of ANN based maximum power point tracking controller for solar PV system
Energy Technology Data Exchange (ETDEWEB)
Rai, Anil K.; Singh, Bhupal [Department of Electrical and Electronics Engineering, Ajay Kumar Garg Engineering College, Ghaziabad 201009 (India); Kaushika, N.D.; Agarwal, Niti [School of Research and Development, Bharati Vidyapeeth College of Engineering, A-4 Paschim Vihar, New Delhi 110063 (India)
2011-02-15
In this paper the simulation model of an artificial neural network (ANN) based maximum power point tracking controller has been developed. The controller consists of an ANN tracker and the optimal control unit. The ANN tracker estimates the voltages and currents corresponding to the maximum power delivered by the solar PV (photovoltaic) array for variable cell temperature and solar radiation. The cell temperature is considered as a function of ambient air temperature, wind speed and solar radiation. The tracker is trained employing a set of 124 patterns using the back propagation algorithm. The mean square error between the tracker output and target values is set to be of the order of 10^-5, and the learning process converges successfully after 1281 epochs. The accuracy of the ANN tracker has been validated by employing different test data sets. The control unit uses the estimates of the ANN tracker to adjust the duty cycle of the chopper to the optimum value needed for maximum power transfer to the specified load. (author)
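The final control step, mapping the ANN tracker's estimates to a chopper duty cycle, can be sketched under the assumption of an ideal buck converter driving a resistive load (the abstract says only "chopper", so the topology and function name here are assumptions):

```python
def duty_cycle_for_mpp(v_mpp, i_mpp, r_load):
    """Chopper duty cycle for maximum power transfer (ideal buck assumed).

    An ideal buck converter reflects the load to its input as
    R_in = r_load / D**2; choosing D so that R_in equals the array's
    maximum-power-point resistance v_mpp / i_mpp moves the operating
    point to the ANN tracker's estimate.
    """
    d = (r_load * i_mpp / v_mpp) ** 0.5
    return min(d, 1.0)   # duty cycle cannot exceed unity
```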
Center variation in the use of nonstandardized model for end-stage liver disease exception points.
Goldberg, David S; Makar, George; Bittermann, Therese; French, Benjamin
2013-12-01
The Model for End-Stage Liver Disease (MELD) score is an imperfect prognosticator of waitlist dropout, so transplant centers may apply for exception points to increase a waitlist candidate's priority on the waitlist. Exception applications are categorized as recognized exceptional diagnoses (REDs; eg, hepatocellular carcinoma) and non-REDs (eg, cholangitis). Although prior work has demonstrated regional variation in the use of exceptions, no work has examined the between-center variability. We analyzed all new waitlist candidates from February 27, 2002 to June 3, 2011 to explore variations in the use of non-REDs, for which no strict exception criteria exist. There were 58,641 new waitlist candidates, and 4356 (7.4%) applied for a non-RED exception. The number of applications increased steadily over time, as did the approval rates for such applications; monitoring of between-center variation is needed to ensure the appropriate and equitable use of non-RED exceptions. © 2013 American Association for the Study of Liver Diseases.
Common data model access; a unified layer to access data from data analysis point of view
International Nuclear Information System (INIS)
Poirier, S.; Buteau, A.; Ounsy, M.; Rodriguez, C.; Hauser, N.; Lam, T.; Xiong, N.
2012-01-01
For almost 20 years, the scientific community of neutron and synchrotron institutes has been dreaming of a common data format for exchanging experimental results and applications for reducing and analyzing the data. Using HDF5 as a data container has become the standard in many facilities. The big issue is the standardization of the data organization (schema) within the HDF5 container. By introducing a new level of indirection for data access, the Common-Data-Model-Access (CDMA) framework proposes a solution and allows separation of responsibilities between data reduction developers and the institute. Data reduction developers are responsible for data reduction code; the institute provides a plug-in to access the data. The CDMA is a core API that accesses data through a data format plug-in mechanism and scientific application definitions (sets of keywords) coming from a consensus between scientists and institutes. Using an innovative 'mapping' system between application definitions and physical data organizations, the CDMA allows data reduction applications to be developed independently of both the data file container and its schema. Each institute develops a data access plug-in for its own data file formats along with the mapping between application definitions and its data files. Thus data reduction applications can be developed from a strictly scientific point of view and are immediately able to process data acquired from several institutes. (authors)
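The level of indirection the abstract describes can be sketched as below. All class names, keywords, and paths here are hypothetical illustrations of the idea, not the actual CDMA API: reduction code asks for an agreed keyword, and a per-facility mapping resolves it to that facility's physical data layout.

```python
class DataPlugin:
    """A facility-provided plug-in: knows the physical layout of its own files."""
    def __init__(self, mapping, datasets):
        self.mapping = mapping      # keyword -> path inside this facility's files
        self.datasets = datasets    # dict standing in for the HDF5 container

    def read(self, path):
        return self.datasets[path]

class CommonDataModel:
    """Reduction code only ever asks for keywords from the application definition."""
    def __init__(self, plugin):
        self.plugin = plugin

    def get(self, keyword):
        path = self.plugin.mapping[keyword]   # indirection: keyword -> physical path
        return self.plugin.read(path)

# Two facilities storing the same quantity under different schemas:
facility_a = DataPlugin({"monochromator_energy": "/entry/beamline/mono/energy"},
                        {"/entry/beamline/mono/energy": 12.4})
facility_b = DataPlugin({"monochromator_energy": "/data/instrument/energy_keV"},
                        {"/data/instrument/energy_keV": 12.4})

# The same reduction code works against both layouts unchanged:
for plugin in (facility_a, facility_b):
    energy = CommonDataModel(plugin).get("monochromator_energy")
```

The design point is that only the mapping, supplied with each institute's plug-in, ever mentions physical paths; the scientific code stays schema-agnostic.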
Improving Access in Gastroenterology: The Single Point of Entry Model for Referrals
Directory of Open Access Journals (Sweden)
Kerri L Novak
2013-01-01
Full Text Available In 2005, a group of academic gastroenterologists in Calgary (Alberta) adopted a centralized referral intake system known as central triage. This system provided a single point of entry model (SEM) for referrals rather than the traditional system of individual practitioners managing their own referrals and queues. The goal of central triage was to improve wait times and referral management. In 2008, a similar system was developed in Edmonton at the University of Alberta Hospital (Edmonton, Alberta). SEMs have subsequently been adopted by numerous subspecialties throughout Alberta. There are many benefits of SEMs including improved access and reduced wait times. Understanding and measuring complex patient flow systems is key to improving access, and centralized intake systems provide an opportunity to better understand total demand and system bottlenecks. This knowledge is particularly important for specialties such as gastroenterology (GI), in which demand exceeds supply. While it is anticipated that SEMs will reduce wait times for GI care in Canada, the lack of sufficient resources to meet the demand for GI care necessitates additional strategies.
Improving access in gastroenterology: the single point of entry model for referrals.
Novak, Kerri; Veldhuyzen Van Zanten, Sander; Pendharkar, Sachin R
2013-11-01
In 2005, a group of academic gastroenterologists in Calgary (Alberta) adopted a centralized referral intake system known as central triage. This system provided a single point of entry model (SEM) for referrals rather than the traditional system of individual practitioners managing their own referrals and queues. The goal of central triage was to improve wait times and referral management. In 2008, a similar system was developed in Edmonton at the University of Alberta Hospital (Edmonton, Alberta). SEMs have subsequently been adopted by numerous subspecialties throughout Alberta. There are many benefits of SEMs including improved access and reduced wait times. Understanding and measuring complex patient flow systems is key to improving access, and centralized intake systems provide an opportunity to better understand total demand and system bottlenecks. This knowledge is particularly important for specialties such as gastroenterology (GI), in which demand exceeds supply. While it is anticipated that SEMs will reduce wait times for GI care in Canada, the lack of sufficient resources to meet the demand for GI care necessitates additional strategies.
New approach to accuracy verification of 3D surface models: An analysis of point cloud coordinates.
Lee, Wan-Sun; Park, Jong-Kyoung; Kim, Ji-Hwan; Kim, Hae-Young; Kim, Woong-Chul; Yu, Chin-Ho
2016-04-01
The precision of two types of surface digitization devices, i.e., a contact probe scanner and an optical scanner, and the trueness of two types of stone replicas, i.e., one without an imaging powder (SR/NP) and one with an imaging powder (SR/P), were evaluated using a computer-aided analysis. A master die was fabricated from stainless steel. Ten impressions were taken, and ten stone replicas were prepared from Type IV stone (Fujirock EP, GC, Leuven, Belgium). The precision of the two types of scanners was analyzed using the root mean square (RMS), measurement error (ME), and limits of agreement (LoA) at each coordinate. The trueness of the stone replicas was evaluated using the total deviation. Student's t-test was applied to compare the discrepancies between the CAD reference models of the master die (m-CRM) and the point clouds for the two types of stone replicas (α=.05). The RMS values for the precision were 1.58, 1.28, and 0.98 μm along the x-, y-, and z-axes for the contact probe scanner and 1.97, 1.32, and 1.33 μm along the x-, y-, and z-axes for the optical scanner, respectively. A comparison with the m-CRM revealed a trueness of 7.10 μm for SR/NP and 8.65 μm for SR/P. The precision at each coordinate (x-, y-, and z-axes) was revealed to be higher than that assessed by the previous method (overall offset differences). A comparison between the m-CRM and 3D surface models of the stone replicas revealed a greater dimensional change in SR/P than in SR/NP. Copyright © 2015 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
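A per-axis RMS of the kind reported above can be computed as in the sketch below. This is one plausible reading of the analysis, assuming repeated scans have already been registered so that corresponding points are matched; the data are invented.

```python
import numpy as np

def per_axis_rms(point_clouds):
    """RMS deviation per coordinate axis across repeated scans of one object.

    point_clouds: array of shape (n_scans, n_points, 3), with scans already
    aligned so that the i-th point corresponds across scans.
    """
    clouds = np.asarray(point_clouds, dtype=float)
    mean_cloud = clouds.mean(axis=0)               # per-point mean position
    dev = clouds - mean_cloud                      # each scan's deviation
    return np.sqrt((dev ** 2).mean(axis=(0, 1)))   # one RMS value per axis

# Three repeated "scans" of two points, jittered along x only:
scans = [
    [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]],
    [[0.2, 0.0, 0.0], [1.2, 1.0, 1.0]],
    [[-0.2, 0.0, 0.0], [0.8, 1.0, 1.0]],
]
rms = per_axis_rms(scans)   # nonzero on x, zero on y and z
```

Reporting one RMS value per axis, as the study does, localizes the scatter to specific coordinate directions instead of collapsing it into a single overall offset.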
Combining DFT, Cluster Expansions, and KMC to Model Point Defects in Alloys
Modine, N. A.; Wright, A. F.; Lee, S. R.; Foiles, S. M.; Battaile, C. C.; Thomas, J. C.; van der Ven, A.
In an alloy, defect energies are sensitive to the occupations of nearby atomic sites, which leads to a distribution of defect properties. When radiation-induced defects diffuse from their initially non-equilibrium locations, this distribution becomes time-dependent. The defects can become trapped in energetically favorable regions of the alloy, leading to a diffusion rate that slows dramatically with time. Density Functional Theory (DFT) allows the accurate determination of ground state and transition state energies for a defect in a particular alloy environment but requires thousands of processing hours for each such calculation. Kinetic Monte-Carlo (KMC) can be used to model defect diffusion and the changing distribution of defect properties but requires energy evaluations for millions of local environments. We have used the Cluster Expansion (CE) formalism to "glue" together these seemingly incompatible methods. The occupation of each alloy site is represented by an Ising-like variable, and products of these variables are used to expand quantities of interest. Once a CE is fit to a training set of DFT energies, it allows very rapid evaluation of the energy for an arbitrary configuration, while maintaining the accuracy of the underlying DFT calculations. These energy evaluations are then used to drive our KMC simulations. We will demonstrate the application of our DFT/CE/KMC approach to model thermal and carrier-induced diffusion of intrinsic point defects in III-V alloys. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE.
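The "glue" idea can be sketched in miniature. Below, a pair cluster expansion over Ising-like site variables gives cheap energies for a 1-D binary chain, and a Metropolis sampler (a simple stand-in for the full KMC engine) consumes those energies; the effective cluster interactions J0, J1, J2 are invented, whereas in the real workflow they would be fit to DFT training data.

```python
import math
import random

# Illustrative 1-D binary alloy: site occupations are Ising variables s_i in {-1, +1}.
# Pair cluster expansion: E(config) = J0 + J1 * sum_i s_i + J2 * sum_i s_i * s_{i+1}.
# These coefficients are made up; real ECIs come from fitting DFT energies.
J0, J1, J2 = 0.0, 0.05, -0.1

def ce_energy(sites):
    """Rapid CE energy evaluation standing in for a fresh DFT calculation."""
    n = len(sites)
    point = sum(sites)
    pair = sum(sites[i] * sites[(i + 1) % n] for i in range(n))  # periodic chain
    return J0 + J1 * point + J2 * pair

def metropolis_step(sites, beta, rng):
    """One spin-flip trial move accepted with the Metropolis criterion."""
    i = rng.randrange(len(sites))
    trial = list(sites)
    trial[i] = -trial[i]
    dE = ce_energy(trial) - ce_energy(sites)
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        return trial
    return sites

rng = random.Random(0)
sites = [rng.choice([-1, 1]) for _ in range(20)]
for _ in range(1000):
    sites = metropolis_step(sites, beta=10.0, rng=rng)
```

The point of the construction is speed: each `ce_energy` call costs microseconds, so the sampler can visit the millions of local environments a KMC diffusion simulation requires without rerunning DFT.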
A heuristic model for computational prediction of human branch point sequence.
Wen, Jia; Wang, Jue; Zhang, Qing; Guo, Dianjing
2017-10-24
Pre-mRNA splicing is the removal of introns from precursor mRNAs (pre-mRNAs) and the concurrent ligation of the flanking exons to generate mature mRNA. This process is catalyzed by the spliceosome, where the splicing factor 1 (SF1) specifically recognizes the seven-nucleotide branch point sequence (BPS) and the U2 snRNP later displaces the SF1 and binds to the BPS. In mammals, the degeneracy of BPS motifs together with the lack of a large set of experimentally verified BPSs complicates the task of BPS prediction in silico. In this paper, we develop a simple yet efficient heuristic model for human BPS prediction based on a novel scoring scheme, which quantifies the splicing strength of putative BPSs. The candidate BPS is restricted exclusively to a defined BPS search region to avoid the influence of other elements in the intron, and the prediction accuracy is therefore improved. Moreover, using two types of relative frequencies for human BPS prediction, we demonstrate that our model outperforms other current implementations on experimentally verified human introns. We propose that the binding energy contributes to the molecular recognition involved in human pre-mRNA splicing. In addition, a genome-wide human BPS prediction is carried out. The characteristics of the predicted BPSs are in accordance with experimentally verified human BPSs, and the branch site positions relative to the 3'ss and the 5' end of the shortened AGEZ are consistent with previously published results. A web server for the BPS predictor is freely available at http://biocomputer.bio.cuhk.edu.hk/BPS .
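The general shape of such a scoring scheme can be sketched as a position-specific log-odds scan restricted to a search region. The frequencies below are invented for illustration and are not the paper's trained values; the sketch only shows how relative frequencies at each of the seven positions can be turned into a "splicing strength" score.

```python
import math

# Invented position-specific relative frequencies for a 7-nt BPS-like motif;
# the branch-point adenine (fourth position) is modeled as near-invariant.
FREQ = [
    {"A": 0.1, "C": 0.3, "G": 0.1, "T": 0.5},
    {"A": 0.1, "C": 0.2, "G": 0.1, "T": 0.6},
    {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3},
    {"A": 0.9, "C": 0.03, "G": 0.04, "T": 0.03},  # branch-point adenine
    {"A": 0.1, "C": 0.4, "G": 0.1, "T": 0.4},
    {"A": 0.1, "C": 0.3, "G": 0.1, "T": 0.5},
    {"A": 0.1, "C": 0.3, "G": 0.1, "T": 0.5},
]
BACKGROUND = 0.25  # uniform base composition assumed for the background

def splice_score(seq7):
    """Log-odds 'splicing strength' of one 7-nt candidate BPS."""
    assert len(seq7) == 7
    return sum(math.log2(FREQ[i][b] / BACKGROUND) for i, b in enumerate(seq7))

def best_bps(search_region):
    """Scan only the defined BPS search region and return the top candidate."""
    candidates = [(splice_score(search_region[i:i + 7]), i)
                  for i in range(len(search_region) - 6)]
    score, pos = max(candidates)
    return search_region[pos:pos + 7], score

motif, score = best_bps("GGGTTTAATCCC")
```

Limiting the scan to the search region, as the paper does, keeps unrelated intronic elements from contributing spuriously high-scoring windows.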