Minimization of Deterministic Fuzzy Tree Automata
Directory of Open Access Journals (Sweden)
S. Moghari
2014-03-01
Full Text Available Until now, some methods for minimizing deterministic fuzzy finite tree automata (DFFTA) and weighted tree automata have been established by researchers. Those methods are language preserving, but the behavior of the original automaton and the minimized one may differ. This paper considers both language preservation and behavior preservation in the minimization process. We derive Myhill-Nerode-type theorems corresponding to each proposed method and introduce PTIME algorithms for behavioral and linguistic minimization. The proposed minimization algorithms consist of two main steps. The first step finds dependencies between equivalences of states, according to the set of transition rules of the DFFTA, and builds a merging dependency graph (MDG). The second step refines the MDG and builds the minimization equivalency set (MES). Additionally, behavior-preserving minimization of a DFFTA requires a pre-processing step, called normalization, that modifies the fuzzy membership grades of rules and final states.
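The MDG/MES construction above is specific to fuzzy tree automata and is not reproduced here. As a crisp, string-automaton analogue of the underlying Myhill-Nerode state equivalence, the following is a minimal partition-refinement sketch on an ordinary DFA; all names are illustrative, and this is not the authors' algorithm.

```python
# Moore-style partition refinement: the crisp string-automaton analogue of
# the Myhill-Nerode state equivalence underlying DFFTA minimization.
# Illustrative sketch only; the paper's MDG/MES construction for fuzzy
# *tree* automata is more involved.

def minimize_dfa(states, alphabet, delta, finals):
    """Return the number of Myhill-Nerode equivalence classes of states."""
    # Initial partition: accepting vs non-accepting states.
    partition = [set(finals), set(states) - set(finals)]
    partition = [b for b in partition if b]
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Split a block when two of its states transition into
            # different blocks on some input symbol.
            groups = {}
            for s in block:
                key = tuple(
                    next(i for i, b in enumerate(partition) if delta[s, a] in b)
                    for a in alphabet
                )
                groups.setdefault(key, set()).add(s)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return len(partition)
```

On a three-state DFA over {a} in which two final states loop into each other, the two final states merge and the minimized automaton has two states.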
Nonlinear Markov processes: Deterministic case
Energy Technology Data Exchange (ETDEWEB)
Frank, T.D. [Center for the Ecological Study of Perception and Action, Department of Psychology, University of Connecticut, 406 Babbidge Road, Storrs, CT 06269 (United States)], E-mail: till.frank@uconn.edu
2008-10-06
Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basin of attraction of stationary distribution.
Nonlinear Markov processes: Deterministic case
International Nuclear Information System (INIS)
Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basin of attraction of stationary distributions.
Sochi, Taha
2016-09-01
Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton, and global) are investigated in conjunction with the energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring, and Casson. The results obtained from all these algorithms for all these types of fluid agree very well with the analytically derived solutions obtained from traditional methods based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
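A minimal sketch of the energy minimization idea described above, assuming Newtonian Poiseuille flow in two hypothetical parallel ducts (the viscosity, dimensions, and flow rate below are invented, and this is not the paper's solver): minimizing the total viscous dissipation over the flow split recovers the equal-pressure-drop solution that conservation principles give.

```python
# Energy-minimization sketch for two parallel Newtonian ducts in the
# Poiseuille regime. All numerical values are hypothetical. Minimizing
# total viscous dissipation over the flow split reproduces the
# equal-pressure-drop solution obtained from conservation principles.
import math

mu = 1.0e-3            # dynamic viscosity [Pa s] (water-like, illustrative)
Q_total = 2.0e-6       # total volumetric flow [m^3/s]
ducts = [(0.05, 1.0e-3), (0.10, 2.0e-3)]  # (length [m], radius [m])

def resistance(L, R):
    # Poiseuille resistance: dP = k * Q
    return 8.0 * mu * L / (math.pi * R**4)

k1, k2 = (resistance(L, R) for L, R in ducts)

def dissipation(q1):
    # Total viscous dissipation E = sum_i dP_i * Q_i = sum_i k_i * Q_i^2
    q2 = Q_total - q1
    return k1 * q1**2 + k2 * q2**2

# Golden-section search for the dissipation-minimizing flow split.
lo, hi = 0.0, Q_total
phi = (math.sqrt(5) - 1) / 2
for _ in range(200):
    a = hi - phi * (hi - lo)
    b = lo + phi * (hi - lo)
    if dissipation(a) < dissipation(b):
        hi = b
    else:
        lo = a
q1 = 0.5 * (lo + hi)
q2 = Q_total - q1
# At the minimum, the pressure drops across both ducts coincide.
dp1, dp2 = k1 * q1, k2 * q2
```

The design point is that the minimizer satisfies k1*q1 = k2*q2, i.e. equal pressure drops, exactly what the conservation-based analysis yields for parallel ducts.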
Sochi, Taha
2014-01-01
Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton, and Global) are investigated in conjunction with the energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring, and Casson. The results obtained from all these algorithms for all these types of fluid agree very well with the analytically derived solutions obtained from traditional methods based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of flow dynamics systems. The investigation also enriches the methods of Computational Fluid Dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
Stochastic and deterministic multiscale models for systems biology: an auxin-transport case study
Directory of Open Access Journals (Sweden)
King John R
2010-03-01
Full Text Available Abstract. Background: Stochastic and asymptotic methods are powerful tools in developing multiscale systems biology models; however, little has been done in this context to compare the efficacy of these methods. The majority of current systems biology modelling research, including that of auxin transport, uses numerical simulations to study the behaviour of large systems of deterministic ordinary differential equations, with little consideration of alternative modelling frameworks. Results: In this case study, we solve an auxin-transport model using analytical methods, deterministic numerical simulations and stochastic numerical simulations. Although the three approaches in general predict the same behaviour, the approaches provide different information that we use to gain distinct insights into the modelled biological system. We show in particular that the analytical approach readily provides straightforward mathematical expressions for the concentrations and transport speeds, while the stochastic simulations naturally provide information on the variability of the system. Conclusions: Our study provides a constructive comparison which highlights the advantages and disadvantages of each of the considered modelling approaches. This will prove helpful to researchers when weighing up which modelling approach to select. In addition, the paper goes some way to bridging the gap between these approaches, which in the future we hope will lead to integrative hybrid models.
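The deterministic-versus-stochastic comparison the abstract describes can be illustrated on a much simpler system than the auxin model: a birth-death process 0 -> X (rate k), X -> 0 (rate g*n). The ODE gives the steady state k/g directly; a Gillespie simulation recovers the same mean while also exposing the variability. The rates below are invented for illustration.

```python
# A minimal illustration (not the paper's auxin model): compare a
# deterministic ODE with a Gillespie stochastic simulation for the
# birth-death process  0 -> X (rate k),  X -> 0 (rate g*n).
import math, random

k, g = 50.0, 1.0          # illustrative rates; steady state n* = k/g = 50

def ode_steady_state(n0=0.0, dt=1e-3, t_end=20.0):
    # Forward-Euler integration of dn/dt = k - g*n.
    n = n0
    for _ in range(int(t_end / dt)):
        n += dt * (k - g * n)
    return n

def gillespie_mean(n0=0, t_end=500.0, t_burn=50.0, seed=1):
    # Time-averaged copy number after a burn-in period.
    rng = random.Random(seed)
    t, n = 0.0, n0
    acc, acc_t = 0.0, 0.0
    while t < t_end:
        a_birth, a_death = k, g * n
        a_total = a_birth + a_death
        tau = -math.log(rng.random()) / a_total   # exponential waiting time
        if t > t_burn:
            acc += n * min(tau, t_end - t)
            acc_t += min(tau, t_end - t)
        t += tau
        n += 1 if rng.random() * a_total < a_birth else -1
    return acc / acc_t
```

Both approaches settle near 50; the stochastic trajectory additionally shows the Poisson-like fluctuations around that mean, which the ODE cannot provide.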
A Case for Dynamic Reverse-code Generation to Debug Non-deterministic Programs
Directory of Open Access Journals (Sweden)
Jooyong Yi
2013-09-01
Full Text Available Backtracking (i.e., reverse execution) helps the user of a debugger to naturally think backwards along the execution path of a program, and thinking backwards makes it easy to locate the origin of a bug. So far backtracking has been implemented mostly by state saving or by checkpointing. These implementations, however, inherently do not scale. Meanwhile, a more recent backtracking method based on reverse-code generation seems promising, because executing reverse code can restore the previous states of a program without state saving. Two methods for generating reverse code can be found in the literature: (a) static reverse-code generation, which pre-generates reverse code through static analysis before starting a debugging session, and (b) dynamic reverse-code generation, which generates reverse code by applying dynamic analysis on the fly during a debugging session. In particular, we espoused the latter in our previous work to accommodate non-determinism of a program caused by, e.g., multi-threading. To demonstrate the usefulness of our dynamic reverse-code generation, this article presents a case study of various backtracking methods including ours. We compare the memory usage of various backtracking methods in a simple but nontrivial example, a bounded-buffer program. In the case of non-deterministic programs such as this bounded-buffer program, our dynamic reverse-code generation outperforms the existing backtracking methods in terms of memory efficiency.
McArt, J A A; Nydam, D V; Overton, M W
2015-03-01
The purpose of this study was to develop a deterministic economic model to estimate (1) the component cost per case of hyperketonemia (HYK) and (2) the total cost per case of HYK when accounting for costs related to HYK-attributed diseases. Data from the current literature were used to model the incidence and risks of HYK (defined as a blood β-hydroxybutyrate concentration ≥1.2 mmol/L), displaced abomasa (DA), metritis, disease associations, milk production, culling, and reproductive outcomes. The component cost of HYK was estimated based on 1,000 calvings per year; the incidence of HYK in primiparous and multiparous animals; the percent of animals receiving clinical treatment; the direct costs of diagnostics, therapeutics, labor, and death loss; and the indirect costs of future milk production losses, future culling losses, and reproduction losses. Costs attributable to DA and metritis were estimated based on the incidence of each disease in the first 30 DIM; the number of cases of each disease attributable to HYK; the direct costs of diagnostics, therapeutics, discarded milk during treatment and the withdrawal period, veterinary service (DA only), and death loss; and the indirect costs of future milk production losses, future culling losses, and reproduction losses. The component cost per case of HYK was estimated at $134 and $111 for primiparous and multiparous animals, respectively; the average component cost per case of HYK was estimated to be $117. Thirty-four percent of the component cost of HYK was due to future reproductive losses, 26% to death loss, 26% to future milk production losses, 8% to future culling losses, 3% to therapeutics, 2% to labor, and 1% to diagnostics. The total cost per case of HYK was estimated at $375 and $256 for primiparous and multiparous animals, respectively; the average total cost per case of HYK was $289. Forty-one percent of the total cost of HYK was due to the component cost of HYK, 33% to costs
Energy Technology Data Exchange (ETDEWEB)
Krepki, R.; Obermayer, K. [Technische Univ. Berlin (DE). Forschungsgruppe Computergestuetzte Informationssysteme (CIS); Pu, Y.; Meng, H. [State Univ. of New York, Buffalo, NY (United States). Dept. of Mechanical and Aerospace Engineering
2000-12-01
Recently we have presented a new particle tracking algorithm for the interrogation of 2D-PTV data [Kuzmanowski et al. (1998); Stellmacher and Obermayer (2000) Exp Fluids 28: 506-518], which estimates particle correspondences and local flow-field parameters simultaneously. The new method is based on an algorithm recently proposed by Gold et al. [Pattern Recognition (1998) 31:1019-1031], and has two advantages: (1) it allows not only the local velocity but also other local components of the flow field, such as rotation and shear, to be determined; and (2) it allows flow-field parameters to be reliably determined in regions of high velocity gradients (e.g., vortices or shear flow). In this contribution we extend this algorithm to the interrogation of 3D holographic particle image velocimetry (PIV) data. Benchmarks against cross-correlation and nearest-neighbor methods show that the algorithm retains the superior performance we observed in the 2D case. Because PTV methods scale with the square of the number of particles rather than exponentially with the dimension of the interrogation cell, the new method is much faster than cross-correlation-based methods without sacrificing accuracy, and it is well adapted to the low particle seeding densities of holographic PIV methods. (orig.)
Obendorf, Hartmut
2009-01-01
The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.
Minimally Invasive Surgical Treatment of Acute Epidural Hematoma: Case Series
2016-01-01
Background and Objective. Although minimally invasive surgical treatment of acute epidural hematoma attracts increasing attention, no generalized indications for the surgery have been adopted. This study aimed to evaluate the effects of minimally invasive surgery in acute epidural hematoma with various hematoma volumes. Methods. Minimally invasive puncture and aspiration surgery were performed in 59 cases of acute epidural hematoma with various hematoma volumes (13–145 mL); postoperative follow-up was 3 months. Clinical data, including surgical trauma, surgery time, complications, and outcome of hematoma drainage, recovery, and Barthel index scores, were assessed, as well as treatment outcome. Results. Surgical trauma was minimal and surgery time was short (10–20 minutes); no anesthesia accidents or surgical complications occurred. Two patients died. Drainage was completed within 7 days in the remaining 57 cases. Barthel index scores of ADL were ≤40 (n = 1), 41–60 (n = 1), and >60 (n = 55); scores of 100 were obtained in 48 cases, with no dysfunctions. Conclusion. Satisfactory results can be achieved with minimally invasive surgery in treating acute epidural hematoma with hematoma volumes ranging from 13 to 145 mL. For patients with hematoma volume >50 mL and even cerebral herniation, flexible application of minimally invasive surgery would help improve treatment efficacy. PMID:27144170
Probabilistic and Deterministic Seismic Hazard Assessment: A Case Study in Babol
Directory of Open Access Journals (Sweden)
H.R. Tavakoli
2013-01-01
Full Text Available The risk of earthquake ground-motion parameters in the seismic design of structures, and the vulnerability and risk assessment of these structures against earthquake damage, are important. The damages caused by earthquakes are assessed through earthquake engineering and seismology in terms of their social and economic consequences. This paper determines the seismic hazard in Babol via deterministic and probabilistic methods. The deterministic and probabilistic methods are practical tools for mutual control of the results and for overcoming the weaknesses of either approach alone. In the deterministic approach, the strong-motion parameters are estimated for the maximum credible earthquake, assumed to occur at the closest possible distance from the site of interest, without considering the likelihood of its occurrence during a specified exposure period. In the probabilistic approach, on the other hand, the effects of all earthquakes expected to occur at different locations during a specified life period are integrated, with the associated uncertainties and randomness taken into account. The calculated bedrock horizontal and vertical peak ground accelerations (PGA) of the study area for different return periods are presented.
International Nuclear Information System (INIS)
The concept of fractal geometry has proved to be a useful and fruitful tool for the description of complex systems in various fields of science and technology. Among the diverse types of fractals there are the deterministic ones that enable an easy explanation of basic features of fractal behaviour. (author)
Barbouchi, Meriem; Chokmani, Karem; Ben Aissa, Nadhira; Lhissou, Rachid; El Harti, Abderrazak; Abdelfattah, Riadh
2013-04-01
Soil salinization hazard in semi-arid regions such as central Morocco is increasingly affecting arable lands, due to the combined effects of anthropogenic activities (development of irrigation) and climate change (multiplying drought episodes). In a rational strategy against this hazard, salinity mapping is a key step to ensure effective spatiotemporal monitoring. The objective of this study is to test the effectiveness of a geostatistical approach in mapping soil salinity compared to simpler deterministic interpolation methods. Three soil salinity sampling campaigns (27 September, 24 October, and 19 November 2011) were conducted over the irrigated area of the Tadla plain, situated between the High and Middle Atlas in central Morocco. Each campaign consisted of 38 surface soil samples (upper 5 cm). From each sample the electrical conductivity (EC) was determined in saturated paste extract and used subsequently as a proxy of soil salinity. The potential of a deterministic interpolation method (IDW) and a geostatistical technique (Ordinary Kriging) in mapping surface soil salinity was evaluated in a GIS environment through cross-validation. Field measurements showed that soil salinity was generally low, except during the second campaign, when a significant increase in EC values was recorded. Interpolation results showed better performance with the geostatistical approach than with the deterministic one. Indeed, for all the campaigns, cross-validation yielded lower RMSE and bias for Kriging than for IDW. However, the performance of the two methods depended on the range and the structure of the spatial variability of salinity. Indeed, Kriging showed better accuracy for the second campaign in comparison with the two others. This could be explained by the wider range of soil salinity values during this campaign, which resulted in a greater range of spatial dependence and better modeling of the spatial variability of salinity, which was
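The deterministic side of the comparison above can be sketched compactly: inverse-distance-weighted (IDW) interpolation scored by leave-one-out cross-validation, the same validation device used to rank the interpolators. The sample coordinates and EC values below are synthetic, not the Tadla data.

```python
# Sketch of IDW interpolation with leave-one-out cross-validation.
# The (x, y, EC) samples are synthetic stand-ins for surface-soil
# electrical-conductivity measurements.
import math

samples = [(0, 0, 1.2), (1, 0, 1.4), (0, 1, 1.1), (1, 1, 1.6),
           (2, 1, 2.0), (1, 2, 1.8), (2, 2, 2.3), (3, 2, 2.6)]

def idw(x, y, pts, power=2.0):
    """Inverse-distance-weighted estimate at (x, y)."""
    num = den = 0.0
    for sx, sy, sv in pts:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return sv  # IDW interpolates exactly at a sample point
        w = d2 ** (-power / 2.0)
        num += w * sv
        den += w
    return num / den

def loo_rmse(pts):
    """Leave-one-out cross-validation RMSE, used to compare interpolators."""
    err2 = 0.0
    for i, (x, y, v) in enumerate(pts):
        rest = pts[:i] + pts[i + 1:]
        err2 += (idw(x, y, rest) - v) ** 2
    return math.sqrt(err2 / len(pts))
```

Kriging would be scored with the same leave-one-out loop, so the two RMSE values are directly comparable, which is exactly the comparison the study reports.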
Coefficient of reversibility and two particular cases of deterministic many body systems
International Nuclear Information System (INIS)
We discuss the importance of a new measure of chaos in the study of nonlinear dynamic systems, the coefficient of reversibility. This is defined as the probability of returning to the same point of phase space. It is very interesting to compare this coefficient with other measures such as the fractal dimension or the Lyapunov exponent. We have also studied two very interesting many-body systems, both having an arbitrary number of particles but a deterministic evolution. One system is composed of n particles initially at rest, having the same mass and interacting through harmonic two-particle forces; the other is composed of two types of particles (with masses m1 and m2) initially at rest, also interacting through harmonic two-particle forces.
Directory of Open Access Journals (Sweden)
Sukanti Rout
2015-04-01
Full Text Available In this study an updated deterministic seismic hazard contour map of Bhubaneswar (20°12'0"N to 20°23'0"N latitude and 85°44'0"E to 85°54'0"E longitude), one of the major cities of India, of touristic importance, has been prepared in the form of spectral acceleration values. For assessing the seismic hazard, the study area has been divided into small grids of size 30˝×30˝ (approximately 1.0 km×1.0 km), and the hazard parameters in terms of spectral acceleration at bedrock level (PGA) are calculated at the center of each of these grid cells by considering the regional seismotectonic activity within a 400 km radius around the city center. The maximum credible earthquake, with a moment magnitude of 7.2, has been used for calculating the hazard parameters, resulting in a PGA value of 0.017g towards the northeast side of the city and a corresponding maximum spectral acceleration of 0.0501g for a predominant period of 0.05 s at bedrock level.
Using CSP To Improve Deterministic 3-SAT
Kutzkov, Konstantin
2010-01-01
We show how one can use certain deterministic algorithms for higher-value constraint satisfaction problems (CSPs) to speed up deterministic local search for 3-SAT. This way, we improve the deterministic worst-case running time for 3-SAT to O(1.439^n).
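The deterministic local search referred to above searches Hamming balls around candidate assignments; the full algorithms cover {0,1}^n with such balls via covering codes. The sketch below shows only the core ball-search subroutine, not the covering-code machinery or the CSP speed-up: at each unsatisfied clause, deterministically try flipping each of its (at most) three literals.

```python
# Core subroutine of deterministic local search for 3-SAT: search the
# Hamming ball of radius r around an assignment. A clause is a tuple of
# signed variable numbers; an assignment maps each variable to +1/-1.

def unsatisfied_clause(formula, assignment):
    """Return some unsatisfied clause, or None if all are satisfied."""
    for clause in formula:
        if not any((assignment[abs(l)] > 0) == (l > 0) for l in clause):
            return clause
    return None

def ball_search(formula, assignment, radius):
    """Satisfying assignment within Hamming distance `radius`, or None."""
    clause = unsatisfied_clause(formula, assignment)
    if clause is None:
        return assignment
    if radius == 0:
        return None
    for lit in clause:          # deterministic: try all (<= 3) flips
        flipped = dict(assignment)
        flipped[abs(lit)] = -flipped[abs(lit)]
        result = ball_search(formula, flipped, radius - 1)
        if result is not None:
            return result
    return None
```

Because each recursion level branches at most 3 ways, searching a ball of radius r costs O(3^r), which is the quantity the covering-code analysis then trades off against the number of balls.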
Asinari, P.
2011-03-01
The Boltzmann equation is one of the most powerful paradigms for explaining transport phenomena in fluids. Since the early fifties, it has received a lot of attention due to the aerodynamic requirements of high-altitude vehicles, vacuum technology requirements and, nowadays, micro-electro-mechanical systems (MEMS). Because of the intrinsic mathematical complexity of the problem, Boltzmann himself started his work by considering first the case where the distribution function does not depend on space (the homogeneous case), but only on time and the magnitude of the molecular velocity (isotropic collisional integral). The interest in the homogeneous isotropic Boltzmann equation goes beyond simple dilute gases. In so-called econophysics, a Boltzmann-type model is sometimes introduced for studying the distribution of wealth in a simple market. Another recent application of the homogeneous isotropic Boltzmann equation is opinion formation modeling in quantitative sociology, also called socio-dynamics or sociophysics. The present work [1] aims to improve the deterministic method for solving the homogeneous isotropic Boltzmann equation proposed by Aristov [2] with two ideas: (a) the homogeneous isotropic problem is reformulated first in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation during microscopic collisions) and (b) a DVM-like correction (where DVM stands for Discrete Velocity Model) is adopted for improving the relaxation rates (this allows one to satisfy the conservation laws exactly at the macroscopic level, which is particularly important for describing the late dynamics of the relaxation towards equilibrium).
International Nuclear Information System (INIS)
The title subject is easily explained. The deterministic effect was defined in ICRP Recommendation 1990. The effect arises from tissue injury caused by the death of stem cells induced by acute high-dose radiation, and leads to sterility in the germ line and to organ disorders in somatic cells. Clinically, the effect is unobservable below a threshold dose at which the number of dead stem cells is small. The threshold is practically defined as the dose at which the clinical symptom is observable in 1% of exposed humans (ICRP 2008). Restricting the exposed dose to less than the threshold is important from the standpoint of radiation protection. For practical risk assessment, a total low dose of 200 mSv (regardless of dose rate), a dose rate of 0.1 mSv/min (regardless of total dose), and a dose and dose-rate effectiveness factor (DDREF) of 3 are defined (UNSCEAR 1993). Dividing stem cells are sensitive to radiation, and the threshold varies with the population of those cells in each organ: e.g., the acute threshold doses for the testicle are 0.15 Gy and 3.5-6.0 Gy for temporary and complete infertility, respectively; for the ovary, 2.5-6.0 Gy for complete infertility; for the lens, 5.0 Gy for cataract; and for bone marrow, 0.5 Gy for hematopoietic reduction. Fetal exposure during organogenesis (3rd-8th week of gestation) results in malformation with a threshold of 0.1-0.2 Gy, and exposure later than the 9th week results in lowered IQ and mental retardation of offspring at 0.1 Gy. Death of stem cells is not specific to radiation, as it also occurs through anoxia and virus infection. Skin is sensitive to radiation because its stem cells reside in the epidermal basal layer, and thus it tends to be injured even by IVR (interventional radiology). Exposed cells/tissues also undergo the stochastic effect even when the deterministic effect is not evidently apparent, which is conceivably related to secondary cancer formation after radiotherapy. (T.T.)
Kimmeier, Francesco; Bouzelboudjen, Mahmoud; Ababou, Rachid; Ribeiro, Luis
2014-01-01
In the framework of waste storage in geological formations at shallow or greater depths, and of accidental pollution, the numerical simulation of groundwater flow and contaminant transport represents an important instrument to predict and quantify the pollution as a function of time and space. The numerical simulation problem, and the required hydrogeologic data, are often approached in a deterministic fashion. However, deterministic models do not allow one to evaluate the uncertainty of the results. Fur...
A NEW CASE FOR IMAGE COMPRESSION USING LOGIC FUNCTION MINIMIZATION
Directory of Open Access Journals (Sweden)
Behrouz Zolfaghari
2011-05-01
Full Text Available The sum of minterms is a canonical form for representing logic functions. There are classical methods, such as the Karnaugh map or Quine-McCluskey tabulation, for minimizing a sum of products. This minimization reduces the minterms to smaller products called implicants. If minterms are represented by bit strings, the bit strings shrink through the minimization process. This can be considered a kind of data compression, provided there is a way to retrieve the original bit strings from the compressed strings. This paper proposes, implements, and evaluates an image compression method called YALMIC (Yet Another Logic Minimization Based Image Compression), which depends on logic function minimization. The method treats adjacent pixels of the image as disjoint minterms of a logic function and compresses 24-bit color images by minimizing the function. We compare the compression ratio of the proposed method to those of existing methods and show that YALMIC improves the compression ratio by about 25% on average.
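The reduction that makes the bit strings shrink is the classical Quine-McCluskey combining step: two minterms differing in exactly one bit merge into a shorter implicant with a dash at that position. The sketch below shows that core step, not the YALMIC codec itself.

```python
# Core step of Quine-McCluskey minimization: merge implicants (strings
# over '0', '1', '-') that differ in exactly one non-dash position.
# Illustrative sketch of the reduction idea, not the YALMIC codec.

def combine(term_a, term_b):
    """Merge two implicants differing in exactly one bit, else None."""
    diff = [i for i, (a, b) in enumerate(zip(term_a, term_b)) if a != b]
    if len(diff) != 1 or term_a[diff[0]] == '-' or term_b[diff[0]] == '-':
        return None
    i = diff[0]
    return term_a[:i] + '-' + term_a[i + 1:]

def one_pass(terms):
    """One round of pairwise merging; repeating rounds yields prime implicants."""
    merged, used = set(), set()
    for i, a in enumerate(terms):
        for b in terms[i + 1:]:
            m = combine(a, b)
            if m is not None:
                merged.add(m)
                used.update((a, b))
    # Terms that merged with nothing survive unchanged.
    merged.update(t for t in terms if t not in used)
    return sorted(merged)
```

Each successful merge replaces two n-character minterms with one, which is the shrinkage the compression scheme exploits (together with bookkeeping that lets the original minterms be reconstructed).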
Deterministic multidimensional nonuniform gap sampling
Worley, Bradley; Powers, Robert
2015-12-01
Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities.
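The idea of replacing random gaps by their average behavior can be sketched as follows: instead of drawing each gap from a Poisson distribution whose mean follows a sine weight (as in Poisson-gap sampling), use the sine-shaped expected gap directly, making the schedule fully reproducible. The bisection used below to fit exactly n points onto the grid is an illustrative device, not the authors' exact recipe.

```python
# Deterministic gap sampling sketch: sine-weighted gaps (dense early,
# sparse late), with the sine amplitude bisected so that n points span
# a Nyquist grid of the given size. Illustrative, not the published recipe.
import math

def build(n, lam):
    """Round the cumulative sine-weighted gap positions onto integers."""
    pts, pos = [], 0.0
    for i in range(n):
        pts.append(int(round(pos)))
        # mean gap of 1 plus a sine ramp toward the end of the schedule
        pos += 1.0 + lam * math.sin(math.pi * (i + 0.5) / (2 * n))
    return pts

def deterministic_gap_schedule(n, grid):
    """Pick n of `grid` points with deterministically growing gaps."""
    assert n <= grid
    lam_lo, lam_hi = 0.0, float(grid)
    for _ in range(60):   # bisect the amplitude so the schedule spans the grid
        lam = 0.5 * (lam_lo + lam_hi)
        if build(n, lam)[-1] >= grid:
            lam_hi = lam
        else:
            lam_lo = lam
    return build(n, lam_lo)   # lam_lo always keeps the last point on the grid
```

Because every gap increment exceeds 1, the rounded points are strictly increasing and never collide, and running the function twice trivially gives the identical schedule, the determinism the abstract emphasizes.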
Inferring deterministic causal relations
Daniusis, Povilas; Janzing, Dominik; Mooij, Joris; Zscheischler, Jakob; Steudel, Bastian; Zhang, Kun; Schoelkopf, Bernhard
2012-01-01
We consider two variables that are related to each other by an invertible function. While it has previously been shown that the dependence structure of the noise can provide hints to determine which of the two variables is the cause, we presently show that even in the deterministic (noise-free) case, there are asymmetries that can be exploited for causal inference. Our method is based on the idea that if the function and the probability density of the cause are chosen independently, then the distribution of the effect will, in a certain sense, depend on the function.
Minimally Invasive Approach to Eliminate Pyogenic Granuloma: A Case Report
Chandrashekar, B.
2012-01-01
Pyogenic granuloma is one of the inflammatory hyperplasias seen in the oral cavity. The term is a misnomer because it is not related to infection; the lesion arises in response to various stimuli such as low-grade local irritation, traumatic injury, or hormonal factors. It is most commonly seen in females in their second decade of life due to the vascular effects of hormones. Although excisional surgery is the treatment of choice for it, this paper presents the safest and most minimally invasive procedure...
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro; Sørensen, Troels Bjerre
2008-01-01
We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games.
Minimally Invasive Approach to Eliminate Pyogenic Granuloma: A Case Report
Directory of Open Access Journals (Sweden)
B. Chandrashekar
2012-01-01
Full Text Available Pyogenic granuloma is one of the inflammatory hyperplasias seen in the oral cavity. The term is a misnomer because it is not related to infection; the lesion arises in response to various stimuli such as low-grade local irritation, traumatic injury, or hormonal factors. It is most commonly seen in females in their second decade of life due to the vascular effects of hormones. Although excisional surgery is the treatment of choice for it, this paper presents the safest and most minimally invasive procedure for the regression of pyogenic granuloma.
Minimally invasive approach to eliminate pyogenic granuloma: a case report.
Chandrashekar, B
2012-01-01
Pyogenic granuloma is one of the inflammatory hyperplasias seen in the oral cavity. The term is a misnomer because it is not related to infection; the lesion arises in response to various stimuli such as low-grade local irritation, traumatic injury, or hormonal factors. It is most commonly seen in females in their second decade of life due to the vascular effects of hormones. Although excisional surgery is the treatment of choice for it, this paper presents the safest and most minimally invasive procedure for the regression of pyogenic granuloma. PMID:22567459
Modeling of deterministic chaotic systems
International Nuclear Information System (INIS)
The success of deterministic modeling of a physical system relies on whether the solution of the model would approximate the dynamics of the actual system. When the system is chaotic, situations can arise where periodic orbits embedded in the chaotic set have distinct number of unstable directions and, as a consequence, no model of the system produces reasonably long trajectories that are realized by nature. We argue and present physical examples indicating that, in such a case, though the model is deterministic and low dimensional, statistical quantities can still be reliably computed. copyright 1999 The American Physical Society
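The claim that statistical quantities remain computable even when individual model trajectories are not shadowed by true ones can be illustrated on the simplest chaotic system: the fully chaotic logistic map x -> 4x(1-x), whose Lyapunov exponent is exactly ln 2 regardless of the (exponentially diverging) individual orbits. This toy example is ours, not one of the paper's physical examples.

```python
# Statistical quantities of a deterministic chaotic system: estimate the
# Lyapunov exponent of the logistic map x -> 4x(1-x) by averaging
# log |f'(x)| along a long orbit. The exact value is ln 2.
import math

def logistic_lyapunov(x0=0.2, n_burn=1000, n_iter=200000):
    x = x0
    for _ in range(n_burn):                   # discard the transient
        x = 4.0 * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs(4.0 - 8.0 * x))   # log |f'(x)|, f'(x) = 4 - 8x
        x = 4.0 * x * (1.0 - x)
    return acc / n_iter
```

Any two nearby orbits separate by a factor of about e^{ln 2} = 2 per iteration, so pointwise prediction fails quickly, yet the orbit average above converges to the exact exponent, the kind of reliable statistical quantity the abstract refers to.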
Minimally invasive approaches in pancreatic pseudocyst: a Case report
Directory of Open Access Journals (Sweden)
Rohollah Y
2009-09-01
Full Text Available Background: Given the importance of the postoperative period, admission duration, postoperative pain, and an acceptable rate of complications, minimally invasive endoscopic approaches to pancreatic pseudocyst management have become more popular, but the best choice of procedure and the criteria for patient selection are currently not completely established. During the past decade endoscopic procedures have become the first choice in most authors' therapeutic plans; however, open surgery remains the gold standard in pancreatic pseudocyst treatment. Methods: We present here a patient with a pancreatic pseudocyst unresponsive to conservative management who underwent endoscopic intervention before the 6th week, and review the current literature to depict a scheme for management navigation. Results: A 16-year-old male patient presented with two episodes of acute pancreatitis with abdominal pain, nausea, and vomiting. Hyperamylasemia, pancreatic ascites, and a pseudocyst were found in our preliminary investigation. Despite optimal conservative management, including NPO (nil per os) and total parenteral nutrition, after four weeks the clinical and para-clinical findings deteriorated. Therefore, ERCP and trans-papillary cannulation with placement of a 7 Fr stent was
Institute of Scientific and Technical Information of China (English)
None
2007-01-01
A 43-year-old Chinese patient with a history of psoriasis developed fulminant ulcerative colitis after immunosuppressive therapy for steroid-resistant minimal change disease was stopped. Minimal change disease in association with inflammatory bowel disease is a rare condition. Here we report a case showing an association between ulcerative colitis, minimal change disease, and psoriasis. The possible pathological link between the 3 diseases is discussed.
Inferring deterministic causal relations
Daniusis, Povilas; Mooij, Joris; Zscheischler, Jakob; Steudel, Bastian; Zhang, Kun; Schoelkopf, Bernhard
2012-01-01
We consider two variables that are related to each other by an invertible function. While it has previously been shown that the dependence structure of the noise can provide hints to determine which of the two variables is the cause, we presently show that even in the deterministic (noise-free) case, there are asymmetries that can be exploited for causal inference. Our method is based on the idea that if the function and the probability density of the cause are chosen independently, then the distribution of the effect will, in a certain sense, depend on the function. We provide a theoretical analysis of this method, showing that it also works in the low noise regime, and link it to information geometry. We report strong empirical results on various real-world data sets from different domains.
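The asymmetry the abstract describes can be made concrete with a minimal slope-based sketch in the spirit of information-geometric causal inference; the cubic mechanism, the uniform cause, and the function names below are illustrative assumptions, not the authors' exact estimator.

```python
import numpy as np

def igci_slope_score(x, y):
    """Estimate E[log |f'(x)|] from finite differences of sorted samples."""
    idx = np.argsort(x)
    dx = np.diff(x[idx])
    dy = np.diff(y[idx])
    keep = (dx != 0) & (dy != 0)
    return np.mean(np.log(np.abs(dy[keep] / dx[keep])))

def infer_direction(x, y):
    # Normalize both variables to [0, 1] so the two scores are comparable;
    # the smaller score marks the hypothesized cause.
    x = (x - x.min()) / (x.max() - x.min())
    y = (y - y.min()) / (y.max() - y.min())
    return "X->Y" if igci_slope_score(x, y) < igci_slope_score(y, x) else "Y->X"

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 2000)   # cause: uniform density (assumed)
y = x ** 3                        # deterministic, invertible, noise-free mechanism
print(infer_direction(x, y))
```

Because the cause's density and the mechanism are chosen independently, the log-slope score is lower in the causal direction, even with no noise at all.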
Optimal Deterministic Auctions with Correlated Priors
Papadimitriou, Christos; Pierrakos, George
2010-01-01
We revisit the problem of designing the profit-maximizing single-item auction, solved by Myerson in his seminal paper for the case in which bidder valuations are independently distributed. We focus on general joint distributions, seeking the optimal deterministic incentive compatible auction. We give a geometric characterization of the optimal auction, resulting in a duality theorem and an efficient algorithm for finding the optimal deterministic auction in the two-bidder case and an NP-compl...
LENUS (Irish Health Repository)
Fanning, D M
2009-02-03
INTRODUCTION: We report the first described case of minimal deviation adenocarcinoma of the uterine cervix in the setting of a female renal cadaveric transplant recipient. MATERIALS AND METHODS: A retrospective review of this clinical case was performed. CONCLUSION: This rare cancer represents only about 1% of all cervical adenocarcinoma.
Sandon, Luiz Henrique Dias; Choi, Gun; Park, EunSoo; Lee, Hyung-Chang
2016-01-01
Background Thoracic disc surgeries make up only a small number of all spine surgeries performed, but they can have a considerable number of postoperative complications. Numerous approaches have been developed and studied in an attempt to reduce the morbidity associated with the procedure; however, we still encounter cases that develop serious and unexpected outcomes. Case Presentation This case report presents a patient with abducens nerve palsy after minimally invasive surgery for thoracic d...
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well-known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
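The core idea, replacing many statistical model runs with derivative-based propagation, can be sketched as follows; the two-input response function and all numbers are hypothetical stand-ins, not the paper's actual borehole problem.

```python
import numpy as np

# Hypothetical smooth response standing in for the borehole flow model.
def response(k, h):
    return k * np.sqrt(h)

k0, h0 = 2.0, 9.0          # reference (nominal) inputs, illustrative values
sig_k, sig_h = 0.2, 0.5    # input standard deviations, illustrative values

# Derivative data of the kind produced by direct/adjoint sensitivity analysis.
dk = np.sqrt(h0)               # d(response)/dk at the reference point
dh = k0 / (2 * np.sqrt(h0))    # d(response)/dh at the reference point

# DUA-style first-order propagation: one reference run plus derivatives.
mean_dua = response(k0, h0)
std_dua = np.sqrt((dk * sig_k) ** 2 + (dh * sig_h) ** 2)

# Conventional statistical approach: many model runs.
rng = np.random.default_rng(1)
samples = response(rng.normal(k0, sig_k, 100_000), rng.normal(h0, sig_h, 100_000))
print(f"DUA:         mean={mean_dua:.3f}  std={std_dua:.3f}")
print(f"Monte Carlo: mean={samples.mean():.3f}  std={samples.std():.3f}")
```

For a mildly nonlinear response the single-run derivative propagation already tracks the 100,000-run Monte Carlo moments closely, which is the trade-off the abstract quantifies.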
Dark matter as a Bose-Einstein Condensate: the relativistic non-minimally coupled case
Energy Technology Data Exchange (ETDEWEB)
Bettoni, Dario; Colombo, Mattia; Liberati, Stefano, E-mail: bettoni@sissa.it, E-mail: mattia.colombo@studenti.unitn.it, E-mail: liberati@sissa.it [SISSA, Via Bonomea 265, Trieste, 34136 (Italy)
2014-02-01
Bose-Einstein Condensates have been recently proposed as dark matter candidates. In order to characterize the phenomenology associated to such models, we extend previous investigations by studying the general case of a relativistic BEC on a curved background including a non-minimal coupling to curvature. In particular, we discuss the possibility of a two phase cosmological evolution: a cold dark matter-like phase at the large scales/early times and a condensed phase inside dark matter halos. During the first phase dark matter is described by a minimally coupled weakly self-interacting scalar field, while in the second one dark matter condensates and, we shall argue, develops as a consequence the non-minimal coupling. Finally, we discuss how such non-minimal coupling could provide a new mechanism to address cold dark matter paradigm issues at galactic scales.
Minimal TestCase Generation for Object-Oriented Software with State Charts
Ranjita Kumari Swain; Prafulla Kumar Behera; Durga Prasad Mohapatra
2012-01-01
Today statecharts are a de facto standard in industry for modeling system behavior. Test data generation is one of the key issues in software testing. This paper proposes a reduction approach to test data generation for state-based software testing. First, a state transition graph is derived from the statechart diagram. Then, all the required information is extracted from the statechart diagram and test cases are generated. Lastly, the set of test cases is minimized by calcu...
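The pipeline the abstract outlines (transition graph, candidate test cases, then minimization) can be sketched with a toy vending-machine statechart and a greedy set-cover step; both are assumptions for illustration, and the paper's own reduction may differ.

```python
# A hypothetical statechart flattened into its state transition graph:
# (state, event) -> next state.
transitions = {
    ("idle", "coin"): "ready",
    ("ready", "select"): "dispensing",
    ("dispensing", "done"): "idle",
    ("ready", "cancel"): "idle",
}

def covered(seq, start="idle"):
    """Return the set of transitions exercised by one event sequence."""
    state, hit = start, set()
    for event in seq:
        nxt = transitions.get((state, event))
        if nxt is None:
            break                      # invalid event: the test stops here
        hit.add((state, event))
        state = nxt
    return hit

# Candidate test cases generated from the graph (written by hand here).
candidates = [
    ("coin", "select", "done"),
    ("coin", "cancel"),
    ("coin", "select", "done", "coin", "cancel"),
]

# Greedy set-cover minimization: repeatedly keep the candidate that
# covers the most still-uncovered transitions.
need, suite = set(transitions), []
while need:
    best = max(candidates, key=lambda s: len(covered(s) & need))
    if not covered(best) & need:
        break
    suite.append(best)
    need -= covered(best)
print(len(suite), sorted(need))
```

Here the single long sequence covers all four transitions, so the minimized suite keeps one test case instead of three.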
The cointegrated vector autoregressive model with general deterministic terms
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Morten Ørregaard
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t)= Z(t) + Y(t), where Z(t) belongs to a large class...
Deterministic dense coding with partially entangled states
International Nuclear Information System (INIS)
The utilization of a d-level partially entangled state, shared by two parties wishing to communicate classical information without errors over a noiseless quantum channel, is discussed. We analytically construct deterministic dense coding schemes for certain classes of nonmaximally entangled states, and numerically obtain schemes in the general case. We study the dependency of the maximal alphabet size of such schemes on the partially entangled state shared by the two parties. Surprisingly, for d > 2 it is possible to have deterministic dense coding with less than one ebit. In this case the number of alphabet letters that can be communicated by a single particle is between d and 2d. In general, we numerically find that the maximal alphabet size is any integer in the range [d, d²] with the possible exception of d² − 1. We also find that states with less entanglement can have a greater deterministic communication capacity than other, more entangled states.
Deterministic hierarchical networks
Barrière, L.; Comellas, F.; Dalfó, C.; Fiol, M. A.
2016-06-01
It has been shown that many networks associated with complex systems are small-world (they have both a large local clustering coefficient and a small diameter) and also scale-free (the degrees are distributed according to a power law). Moreover, these networks are very often hierarchical, as they describe the modularity of the systems that are modeled. Most of the studies for complex networks are based on stochastic methods. However, a deterministic method, with an exact determination of the main relevant parameters of the networks, has proven useful. Indeed, this approach complements and enhances the probabilistic and simulation techniques and, therefore, it provides a better understanding of the modeled systems. In this paper we find the radius, diameter, clustering coefficient and degree distribution of a generic family of deterministic hierarchical small-world scale-free networks that has been considered for modeling real-life complex systems.
Dakwar, Elias; Rifkin, Stephen I; Volcan, Ildemaro J; Goodrich, J Allan; Uribe, Juan S
2011-06-01
Minimally invasive spine surgery is increasingly used to treat various spinal pathologies with the goal of minimizing destruction of the surrounding tissues. Rhabdomyolysis (RM) is a rare but known complication of spine surgery, and acute renal failure (ARF) is in turn a potential complication of severe RM. The authors report the first known case series of RM and ARF following minimally invasive lateral spine surgery. The authors retrospectively reviewed data in all consecutive patients who underwent a minimally invasive lateral transpsoas approach for interbody fusion with the subsequent development of RM and ARF at 2 institutions between 2006 and 2009. Demographic variables, patient home medications, preoperative laboratory values, and anesthetic used during the procedure were reviewed. All patient data were recorded including the operative procedure, patient positioning, postoperative hospital course, operative time, blood loss, creatine phosphokinase (CPK), creatinine, duration of hospital stay, and complications. Five of 315 consecutive patients were identified with RM and ARF after undergoing minimally invasive lateral transpsoas spine surgery. There were 4 men and 1 woman with a mean age of 66 years (range 60-71 years). The mean body mass index was 31 kg/m² and ranged from 25 to 40 kg/m². Nineteen interbody levels had been fused, with a range of 3-6 levels per patient. The mean operative time was 420 minutes and ranged from 315 to 600 minutes. The CPK ranged from 5000 to 56,000 U/L, with a mean of 25,861 U/L. Two of the 5 patients required temporary hemodialysis, while 3 required only aggressive fluid resuscitation. The mean duration of the hospital stay was 12 days, with a range of 3-25 days. Rhabdomyolysis is a rare but known potential complication of spine surgery. The authors describe the first case series associated with the minimally invasive lateral approach. Surgeons must be aware of the possibility of postoperative RM and ARF, particularly in
The human ECG nonlinear deterministic versus stochastic aspects
Kantz, H; Kantz, Holger; Schreiber, Thomas
1998-01-01
We discuss aspects of randomness and of determinism in electrocardiographic signals. In particular, we take a critical look at attempts to apply methods of nonlinear time series analysis derived from the theory of deterministic dynamical systems. We will argue that deterministic chaos is not a likely explanation for the short-time variability of the inter-beat interval times, except for certain pathologies. Conversely, densely sampled full ECG recordings possess properties typical of deterministic signals. In the latter case, methods of deterministic nonlinear time series analysis can yield new insights.
Trefan, Gyorgy
1993-01-01
The goal of this thesis is to contribute to the ambitious program of founding statistical physics on chaos. We build a deterministic model of Brownian motion and provide a microscopic derivation of the Fokker-Planck equation. Since the Brownian motion of a particle is the result of the competing processes of diffusion and dissipation, we create a model where both diffusion and dissipation originate from the same deterministic mechanism--the deterministic interaction of that particle with its environment. We show that standard diffusion which is the basis of the Fokker-Planck equation rests on the Central Limit Theorem, and, consequently, on the possibility of deriving it from a deterministic process with a quickly decaying correlation function. The sensitive dependence on initial conditions, one of the defining properties of chaos, ensures this rapid decay. We carefully address the problem of deriving dissipation from the interaction of a particle with a fully deterministic nonlinear bath, that we term the booster. We show that the solution of this problem essentially rests on the linear response of a booster to an external perturbation. This raises a long-standing problem concerned with Kubo's Linear Response Theory and the strong criticism against it by van Kampen. Kubo's theory is based on a perturbation treatment of the Liouville equation, which, in turn, is expected to be totally equivalent to a first-order perturbation treatment of single trajectories. Since the boosters are chaotic, and chaos is essential to generate diffusion, the single trajectories are highly unstable and do not respond linearly to weak external perturbation. We adopt chaotic maps as boosters of a Brownian particle, and therefore address the problem of the response of a chaotic booster to an external perturbation. We notice that a fully chaotic map is characterized by an invariant measure which is a continuous function of the control parameters of the map
Reisch, Robert; Koechlin, Nicolas O; Marcus, Hani J
2016-09-01
Despite their predominantly histologically benign nature, intradural tumors may become symptomatic by virtue of their space-occupying effect, causing severe neurological deficits. The gold standard treatment is total excision of the lesion; however, extended dorsal and dorsolateral approaches may cause late complications due to iatrogenic destruction of the posterolateral elements of the spine. In this article, we describe our concept of minimally invasive spinal tumor surgery. Two illustrative cases demonstrate the feasibility and safety of keyhole fenestrations exposing the spinal canal. PMID:25336048
Deterministic manufacturing of large sapphire windows
Lambropoulus, Teddy; Fess, Ed; DeFisher, Scott
2013-06-01
There is a need for precisely figured large sapphire windows with dimensions of up to 20 inches with thicknesses of 0.25 inches that will operate in the 1- to 5-micron wavelength range. In an effort to reduce manufacturing cost during grinding and polishing, OptiPro Systems is developing technologies that provide an optimized deterministic approach to making them. This development work is focusing on two main areas of research. The first is optimizing existing technologies, like deterministic microgrinding and UltraForm Finishing (UFF), for shaping operations and precision controlled sub-aperture polishing. The second area of research consists of a new large aperture deterministic polishing process currently being developed at OptiPro called UltraSmooth Finishing (USF). The USF process utilizes deterministic control with a large aperture polishing tool. This presentation will discuss the challenges associated with manufacturing large sapphire windows and present results on the work that is being performed to minimize manufacturing costs associated with them.
Baum, Rex L.; Godt, Jonathan W.; De Vita, P.; Napolitano, E.
2012-01-01
Rainfall-induced debris flows involving ash-fall pyroclastic deposits that cover steep mountain slopes surrounding the Somma-Vesuvius volcano are natural events and a source of risk for urban settlements located at footslopes in the area. This paper describes experimental methods and modelling results of shallow landslides that occurred on 5–6 May 1998 in selected areas of the Sarno Mountain Range. Stratigraphical surveys carried out in initiation areas show that ash-fall pyroclastic deposits are discontinuously distributed along slopes, with total thicknesses that vary from a maximum value on slopes inclined less than 30° to near zero thickness on slopes inclined greater than 50°. This distribution of cover thickness influences the stratigraphical setting and leads to downward thinning and the pinching out of pyroclastic horizons. Three engineering geological settings were identified in which most of the initial landslides that triggered the May 1998 debris flows occurred; these can be classified as (1) knickpoints, characterised by a downward progressive thinning of the pyroclastic mantle; (2) rocky scarps that abruptly interrupt the pyroclastic mantle; and (3) road cuts in the pyroclastic mantle that occur in a critical range of slope angle. Detailed topographic and stratigraphical surveys coupled with field and laboratory tests were conducted to define the geometric, hydraulic and mechanical features of pyroclastic soil horizons in the source areas and to carry out hydrological numerical modelling of hillslopes under different rainfall conditions. The slope stability for three representative cases was calculated considering the real sliding surface of the initial landslides and the pore pressures during the infiltration process. The hydrological modelling of hillslopes demonstrated a localised increase of pore pressure, up to saturation, where pyroclastic horizons with higher hydraulic conductivity pinch out and the thickness of pyroclastic mantle reduces or is
White sponge naevus with minimal clinical and histological changes: report of three cases.
Lucchese, Alberta; Favia, Gianfranco
2006-05-01
White sponge naevus (WSN) is a rare autosomal dominant disorder that predominantly affects non-cornified stratified squamous epithelia: oral mucosa, oesophagus, anogenital area. It has been shown to be related to keratin defects, because of mutations in the genes encoding mucosal-specific keratins K4 and K13. We illustrate three cases diagnosed as WSN, following the clinical and histological criteria, with unusual appearance. They presented with minimal clinical and histological changes that could be misleading in the diagnosis. The patients showed diffuse irregular plaques with a range of presentations from white to rose coloured mucosae involving the entire oral cavity. In one case the lesion was also present in the vaginal area. The histological findings included epithelial thickening, parakeratosis and extensive vacuolization of the suprabasal keratinocytes, confirming WSN diagnosis. Clinical presentation and histopathology of WSN are discussed in relation to the differential diagnosis of other oral leukokeratoses. PMID:16630298
Fighting with deterministic disturbances
Tibabishev, V N
2011-01-01
We consider the problem of interference mitigation in identifying the dynamics of multidimensional control systems, within the class of linear stationary models, from single realizations of the observed signals. The concept of uncorrelated processes is not verifiable. We introduce the concept of system components of a signal measured on a semiring. Signal properties are defined for systems of sets of linearly dependent and linearly independent measured signals. A frequency method is found for dealing with noise on the set of deterministic functions. An example is considered: identifying the dynamic characteristics of an aircraft from data obtained during one automatic landing.
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2012-01-01
Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison...
Deterministic Global Optimization
Scholz, Daniel
2012-01-01
This monograph deals with a general class of solution approaches in deterministic global optimization, namely the geometric branch-and-bound methods which are popular algorithms, for instance, in Lipschitzian optimization, d.c. programming, and interval analysis.It also introduces a new concept for the rate of convergence and analyzes several bounding operations reported in the literature, from the theoretical as well as from the empirical point of view. Furthermore, extensions of the prototype algorithm for multicriteria global optimization problems as well as mixed combinatorial optimization
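A minimal geometric branch-and-bound in the Lipschitzian setting the abstract mentions can be sketched as follows; the bounding operation f(mid) − L·width/2 is one standard choice, and the test function and constants are assumptions.

```python
import heapq

def minimize(f, a, b, L, tol=1e-6):
    """Geometric branch-and-bound for a Lipschitz-L function on [a, b]:
    each interval gets the lower bound f(mid) - L * width / 2, and
    intervals whose bound cannot beat the incumbent are pruned."""
    mid = (a + b) / 2
    best_x, best_f = mid, f(mid)
    heap = [(best_f - L * (b - a) / 2, a, b)]      # (lower bound, lo, hi)
    while heap:
        lower, lo, hi = heapq.heappop(heap)
        if lower > best_f - tol:                   # prune: box cannot win
            continue
        m = (lo + hi) / 2
        for l, h in ((lo, m), (m, hi)):            # branch: split the box
            c = (l + h) / 2
            fc = f(c)
            if fc < best_f:
                best_x, best_f = c, fc             # update the incumbent
            heapq.heappush(heap, (fc - L * (h - l) / 2, l, h))
    return best_x, best_f

x, fx = minimize(lambda t: (t - 0.7) ** 2, 0.0, 2.0, L=4.0)
print(x, fx)
```

Termination follows because once L·width/2 drops below tol, every box's lower bound exceeds the incumbent minus tol and is pruned; the rate-of-convergence analysis mentioned in the abstract studies exactly how fast such bounds tighten.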
Deterministic Walks in Random Media
International Nuclear Information System (INIS)
Deterministic walks over a random set of N points in one and two dimensions (d = 1, 2) are considered. Points ('cities') are randomly scattered in R^d following a uniform distribution. A walker ('tourist'), at each time step, goes to the nearest-neighbor city that has not been visited in the past τ steps. Each initial city leads to a different trajectory composed of a transient part and a final p-cycle attractor. Transient times (for d = 1, 2) follow an exponential law with a τ-dependent decay time, but the density of p-cycles can be approximately described by D(p) ∝ p^(−α(τ)). For τ ≫ 1 and τ/N ≪ 1, the exponent is independent of τ. Some analytical results are given for the d = 1 case.
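The walk is easy to simulate. The sketch below assumes Euclidean distances in d = 2 and τ ≥ 1, and detects the transient and the p-cycle by recording the walker's finite memory state; for τ = 1 the attractor is always a 2-cycle between a mutually nearest pair of cities.

```python
import numpy as np

def tourist_walk(points, start, tau):
    """Deterministic 'tourist' walk: at each step move to the nearest city
    not among the last tau cities visited (current city included; tau >= 1).
    Returns (transient_length, attractor_cycle_length)."""
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    recent = (start,)   # memory window, newest city last
    seen = {}           # memory state -> time step (dynamics is deterministic)
    t = 0
    while recent not in seen:
        seen[recent] = t
        here = recent[-1]
        nxt = next(c for c in np.argsort(dist[here]) if c not in recent)
        recent = (recent + (nxt,))[-tau:]
        t += 1
    return seen[recent], t - seen[recent]

rng = np.random.default_rng(7)
cities = rng.uniform(size=(40, 2))
transient, cycle = tourist_walk(cities, start=0, tau=1)
print(transient, cycle)
```

Because the memory window fully determines the next move, the first repeated window marks the end of the transient and the length of the p-cycle attractor.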
Minimally invasive surgery for superior mesenteric artery syndrome: A case report.
Yao, Si-Yuan; Mikami, Ryuichi; Mikami, Sakae
2015-12-01
Superior mesenteric artery (SMA) syndrome is defined as a compression of the third portion of the duodenum by the abdominal aorta and the overlying SMA. SMA syndrome associated with anorexia nervosa has been recognized, mainly among young female patients. The excessive weight loss owing to the eating disorder sometimes results in a reduced aorto-mesenteric angle and causes duodenal obstruction. Conservative treatment, including psychiatric and nutritional management, is recommended as initial therapy. If conservative treatment fails, surgery is often required. Currently, traditional open bypass surgery has been replaced by laparoscopic duodenojejunostomy as a curative surgical approach. However, a single incision laparoscopic approach is rarely performed. A 20-year-old female patient with a diagnosis of anorexia nervosa and SMA syndrome was prepared for surgery after failed conservative management. As the patient had body image concerns, a single incision laparoscopic duodenojejunostomy was performed to achieve minimal scarring. As a result, good perioperative outcomes and cosmetic results were achieved. We report the first case of a young patient with SMA syndrome who was successfully treated by single incision laparoscopic duodenojejunostomy. This minimally invasive surgery would be beneficial for other patients with SMA syndrome associated with anorexia nervosa, in terms of both surgical and cosmetic outcomes. PMID:26668518
Deterministic analyses of severe accident issues
International Nuclear Information System (INIS)
Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena alongside of probability methods to evaluate risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that with appropriate system modifications, and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core-melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents
The deterministic and statistical Burgers equation
Fournier, J.-D.; Frisch, U.
Fourier-Lagrangian representations of the UV-region inviscid-limit solutions of the equations of Burgers (1939) are developed for deterministic and random initial conditions. The Fourier-mode amplitude behavior of the deterministic case is characterized by complex singularities with fast decrease, power-law preshocks with k indices of about -4/3, and shocks with k to the -1. In the random case, shocks are associated with a k to the -2 spectrum which overruns the smaller wavenumbers and appears immediately under Gaussian initial conditions. The use of the Hopf-Cole solution in the random case is illustrated in calculations of the law of energy decay by a modified Kida (1979) method. Graphs and diagrams of the results are provided.
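The inviscid-limit construction via the Hopf-Cole solution can be sketched numerically: u(x, t) = (x − a*)/t, where a* minimizes Ψ0(a) + (x − a)²/(2t) and Ψ0 is the antiderivative of the initial velocity. The sinusoidal initial condition and the grid are assumptions for illustration, not the paper's setup.

```python
import numpy as np

# Inviscid-limit Hopf-Cole construction: u(x, t) = (x - a_star) / t, where
# a_star minimizes S(a) = Psi0(a) + (x - a)**2 / (2 * t) and Psi0' = u0.
a = np.linspace(-10.0, 10.0, 20001)
u0 = np.sin(a)                                   # assumed initial velocity
da = a[1] - a[0]
psi0 = np.concatenate(([0.0], np.cumsum((u0[1:] + u0[:-1]) / 2) * da))

def u(x, t):
    s = psi0 + (x - a) ** 2 / (2 * t)
    return (x - a[np.argmin(s)]) / t             # velocity carried by a_star

# Before the first shock (t < 1 here) this reproduces the characteristics
# solution u = u0(a_star) with x = a_star + u0(a_star) * t; after shock
# formation the global minimizer jumps, producing the k^(-1) shocks above.
print(u(0.5, 0.5))
```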
Deterministically delayed pseudofractal networks
International Nuclear Information System (INIS)
On the basis of pseudofractal networks (PFNs), we propose a family of delayed pseudofractal networks (DPFNs) with a special feature that newly added edges delay producing new nodes, differing from the evolution algorithms of PFNs where all existing edges simultaneously generate new nodes. We obtain analytical formulae for degree distribution, clustering coefficient (C) and average path length (APL). We compare DPFNs and PFNs, and show that the exponent of the degree distribution of DPFNs is smaller than that of PFNs, meaning that the heterogeneity of this kind of delayed network is higher. Compared to PFNs, small-world features of DPFNs are more prominent (larger C and smaller APL). We also find that the delay strengthens the scale-free and small-world characteristics of DPFNs. In addition, we calculate and compare the mean first passage time (MFPT) numerically, revealing that the MFPT of DPFNs is shorter. Our study may help with a deeper understanding of various deterministically growing delayed networks
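For contrast with the delayed variant, the base pseudofractal growth rule (every existing edge spawns a new node each generation) can be sketched and checked against the closed-form counts N_t = 3(3^t + 1)/2 and E_t = 3^(t+1); the code below is an illustrative reconstruction, not the authors' implementation.

```python
from collections import Counter

def pseudofractal(generations):
    """Grow a pseudofractal network: each generation, every existing edge
    spawns one new node linked to both of that edge's endpoints."""
    edges = [(0, 1), (1, 2), (0, 2)]   # generation 0: a triangle
    n = 3
    for _ in range(generations):
        for u, v in list(edges):       # snapshot: only pre-existing edges spawn
            edges += [(u, n), (v, n)]
            n += 1
    return n, edges

t = 4
n, edges = pseudofractal(t)
deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
nodes = 3 * (3 ** t + 1) // 2          # closed form: N_t = 3 * (3^t + 1) / 2
print(n == nodes, len(edges) == 3 ** (t + 1), max(deg.values()))
```

Each node's degree doubles every generation (every incident edge spawns one new neighbor), so the hubs of generation t have degree 2^(t+1); delaying node production, as in the DPFNs above, is what reshapes this degree distribution.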
Minimal access direct spondylolysis repair using a pedicle screw-rod system: a case series
Directory of Open Access Journals (Sweden)
Mohi Eldin Mohamed
2012-11-01
Full Text Available Abstract Introduction Symptomatic spondylolysis is always challenging to treat because the pars defect causing the instability needs to be stabilized while segmental fusion needs to be avoided. Direct repair of the pars defect is ideal in cases of spondylolysis in which posterior decompression is not necessary. We report clinical results using segmental pedicle-screw-rod fixation with bone grafting in patients with symptomatic spondylolysis, a modification of a technique first reported by Tokuhashi and Matsuzaki in 1996. We also describe the surgical technique, assess the fusion and analyze the outcomes of patients. Case presentation At Cairo University Hospital, eight out of twelve Egyptian patients’ acute pars fractures healed after conservative management. Of those, two young male patients underwent an operative procedure for chronic low back pain secondary to pars defect. Case one was a 25-year-old Egyptian man who presented with a one-year history of axial low back pain, not radiating to the lower limbs, after falling from height. Case two was a 29-year-old Egyptian man who presented with a one-year history of axial low back pain and a one-year history of mild claudication and infrequent radiation to the leg, never below the knee. Utilizing a standardized mini-access fluoroscopically-guided surgical protocol, fixation was established with two titanium pedicle screws placed into both pedicles, at the same level as the pars defect, without violating the facet joint. The cleaned pars defect was grafted; a curved titanium rod was then passed under the base of the spinous process of the affected vertebra, bridging the loose fragment, and attached to the pedicle screw heads, to uplift the spinous process, followed by compression of the defect. The patients were discharged three days after the procedure, with successful fusion at one-year follow-up. No rod breakage or implant-related complications were reported. Conclusions Where there is no
Wang, Wei-Lien; Torres-Cabala, Carlos; Curry, Jonathan L; Ivan, Doina; McLemore, Michael; Tetzlaff, Michael; Zembowicz, Artur; Prieto, Victor G; Lazar, Alexander J
2015-06-01
Atypical fibroxanthoma (AFX) is a dermal mesenchymal neoplasm arising in sun-damaged skin, primarily of the head and neck region of older men. Conservative excision cures most. However, varying degrees of subcutaneous involvement can lead to a more aggressive course and rare metastases. Thus, AFX involving the subcutis are termed pleomorphic dermal sarcomas or other monikers by some to recognize the more threatening natural history. We reviewed cases of "metastatic AFX" from our institution and from the files of a consultative dermatopathology practice. Nine of 152 patients with AFX were identified at a single institution (2000-2011). Two additional patients were identified from the files of a consultative practice. Clinical, radiological, and pathological features were reviewed and cases with histologically verified metastases identified. Median age was 67 (range, 45-91) years, all male, and involving the head and neck region. Two cases had no documented involvement of the subcutis, and 2 cases had only superficial subcutis involvement. Median time to metastases was 13 (range, 8-49) months. Three patients developed solitary regional lymph node metastases while 8 had widespread metastases. Five patients developed local recurrence within 8 months, and all 5 developed widespread metastasis. With median follow-up of 26 (range, 10-145) months, 6 died of disease (median, 19 months; range, 10-35 months), 4 were alive and well, and 1 was alive with disease. AFX has very rare metastatic potential, even those without or with minimal subcutis involvement, and can lead to mortality. Most metastasis and local recurrence occurred within 1 year of presentation. Solitary regional metastases were associated with better outcomes than those with multiple distant metastases. Patients with repeated local recurrences portended more aggressive disease including development of distant metastases. PMID:25590287
Deterministic behavioural models for concurrency
DEFF Research Database (Denmark)
Sassone, Vladimiro; Nielsen, Mogens; Winskel, Glynn
1993-01-01
This paper offers three candidates for a deterministic, noninterleaving, behaviour model which generalizes Hoare traces to the noninterleaving situation. The three models are all proved equivalent in the rather strong sense of being equivalent as categories. The models are: deterministic labelled...
An Approach to Composition Based on a Minimal Techno Case Study
Bougaïeff, Nicolas
2013-01-01
This dissertation examines key issues relating to minimal techno, a sub-genre of electronic dance music (EDM) that emerged in the early 1990s. These key issues are the aesthetics, composition, performance, and technology of minimal techno, as well as the economics of EDM production. The study aims to answer the following question: What is the musical and social significance of minimal techno production and performance? The study is conducted in two parts. The history of minimal music is ...
International Nuclear Information System (INIS)
Automated gauging stations are used to monitor the hydro-ecological effects of nuclear power stations. These stations continuously measure four physico-chemical parameters: water temperature, dissolved oxygen content, pH and electrical conductivity. Every hour, they provide the results of water quality measurements on samples taken upstream, downstream and at the site of the power plants. This work proposes a series of tools for critically analysing and validating the collected data. They should provide a means of detecting the abnormal values, discontinuities and recording drifts most frequently observed. Using conventional statistical tests, the procedure developed compares the measured value with other information: other measurements or model forecasts. The models are based either on the internal properties of the time-dependent series of each variable considered, or on relationships with external hydro-meteorological variables: air temperature, solar radiation and flow rate. These links can be expressed either by a totally or partially deterministic model, or by a statistical model, both of which require prior calibration using past data. In particular, the models of Box and Jenkins, neural networks and deterministic models such as CALNAT or an adaptation of Biomox (EDF-Chatou) have been used. These methods and tools were developed and applied with a cross-validation procedure covering five years of data records for the river Loire at Dampierre (1990-1994). (author)
Ghirardini, G; Mohamed, M; Bartolamasi, A; Malmusi, S; Dalla Vecchia, E; Algeri, I; Zanni, A; Renzi, A; Cavicchioni, O; Braconi, A; Pazzoni, F; Alboni, C
2013-01-01
The objective of our study was to evaluate the surgical outcome of minimally invasive vaginal hysterectomy (MIVH) using the bipolar vessel sealing system (BVSS; BiClamp®). The design was a retrospective analysis (Canadian Task Force Classification II-3). The setting was a secondary care hospital. Records of patients who underwent vaginal hysterectomy for benign indications in our centre between November 2005 and March 2011 were reviewed. Patients' demographic data, indications for surgery, history of previous surgery, duration of surgery, blood loss (postoperative hemoglobin drop, '∆Hb'), perioperative complications, and length of inpatient stay were collected from the medical records. The intervention was vaginal hysterectomy using BVSS (BiClamp®). Results showed that the mean duration of surgery was 48.9 ± 15.3 min (95% CI, 49.2-52.5). The mean duration of hospital stay was 3.2 ± 1.2 days (95% CI, 2.8-3.2). The mean ∆Hb was 1.4 ± 1.8 g/dl. Overall, conversion to laparotomy was required in three cases (0.6%). Only one haemoperitoneum occurred (0.2%), and this was the only case that required blood transfusion. The main indication for VH was uterine prolapse in 52.0% (n = 260) of cases; uterine fibroids in 37.4% (n = 187); adenomyosis uteri in 4.2% (n = 21); cervical dysplasia in 22 patients (4.4%); and in 2% (n = 10) of patients, endometrial hyperplasia and other pathologies were the indications. It was concluded that electrosurgical bipolar vessel sealing (BiClamp®) can provide a safe and feasible alternative to sutures in vaginal hysterectomy, resulting in reduced operative time and blood loss, with acceptable surgical outcomes. PMID:23259887
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody J. H.
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
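For contrast with the density-based deterministic approximation, the baseline stochastic EnKF analysis step can be sketched as follows. This is a generic illustration with a linear observation operator; all names, dimensions, and parameters here are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, y, H, R):
    """One stochastic EnKF analysis step for an observation y = H x + noise."""
    N = ensemble.shape[1]                              # ensemble size
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    C = X @ X.T / (N - 1)                              # sample covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)       # Kalman gain
    # perturbed observations, as in the standard stochastic EnKF
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return ensemble + K @ (Y - H @ ensemble)

# toy problem: 2-D Gaussian prior, observe the first component only
ens = rng.normal(size=(2, 500))
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
post = enkf_update(ens, np.array([1.0]), H, R)
```

With a standard-normal prior and observation noise variance 0.1, the posterior mean of the observed component is pulled most of the way toward the observation, while the unobserved component is nearly unchanged.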
International Nuclear Information System (INIS)
Purpose: To determine patient-specific absorbed peak doses to skin, eye lens, brain parenchyma, and cranial red bone marrow (RBM) of adult individuals subjected to low-dose brain perfusion CT studies on a 256-slice CT scanner, and investigate the effect of patient head size/shape, head position during the examination and bowtie filter used on peak tissue doses. Methods: The peak doses to eye lens, skin, brain, and RBM were measured in 106 individual-specific adult head phantoms subjected to the standard low-dose brain perfusion CT on a 256-slice CT scanner using a novel Monte Carlo simulation software dedicated for patient CT dosimetry. Peak tissue doses were compared to corresponding thresholds for induction of cataract, erythema, cerebrovascular disease, and depression of hematopoiesis, respectively. The effects of patient head size/shape, head position during acquisition and bowtie filter used on resulting peak patient tissue doses were investigated. The effect of eye-lens position in the scanned head region was also investigated. The effect of miscentering and use of narrow bowtie filter on image quality was assessed. Results: The mean peak doses to eye lens, skin, brain, and RBM were found to be 124, 120, 95, and 163 mGy, respectively. The effect of patient head size and shape on peak tissue doses was found to be minimal since maximum differences were less than 7%. Patient head miscentering and bowtie filter selection were found to have a considerable effect on peak tissue doses. The peak eye-lens dose saving achieved by elevating head by 4 cm with respect to isocenter and using a narrow wedge filter was found to approach 50%. When the eye lies outside of the primarily irradiated head region, the dose to eye lens was found to drop to less than 20% of the corresponding dose measured when the eye lens was located in the middle of the x-ray beam. Positioning head phantom off-isocenter by 4 cm and employing a narrow wedge filter results in a moderate reduction of
Energy Technology Data Exchange (ETDEWEB)
Perisinakis, Kostas; Seimenis, Ioannis; Tzedakis, Antonis; Papadakis, Antonios E.; Damilakis, John [Department of Medical Physics, Faculty of Medicine, University of Crete, P.O. Box 2208, Heraklion 71003, Crete (Greece); Medical Diagnostic Center ' Ayios Therissos,' P.O. Box 28405, Nicosia 2033, Cyprus and Department of Medical Physics, Medical School, Democritus University of Thrace, Panepistimioupolis, Dragana 68100, Alexandroupolis (Greece); Department of Medical Physics, University Hospital of Heraklion, P.O. Box 1352, Heraklion 71110, Crete (Greece); Department of Medical Physics, Faculty of Medicine, University of Crete, P.O. Box 2208, Heraklion 71003, Crete (Greece)
2013-01-15
Purpose: To determine patient-specific absorbed peak doses to skin, eye lens, brain parenchyma, and cranial red bone marrow (RBM) of adult individuals subjected to low-dose brain perfusion CT studies on a 256-slice CT scanner, and investigate the effect of patient head size/shape, head position during the examination and bowtie filter used on peak tissue doses. Methods: The peak doses to eye lens, skin, brain, and RBM were measured in 106 individual-specific adult head phantoms subjected to the standard low-dose brain perfusion CT on a 256-slice CT scanner using a novel Monte Carlo simulation software dedicated for patient CT dosimetry. Peak tissue doses were compared to corresponding thresholds for induction of cataract, erythema, cerebrovascular disease, and depression of hematopoiesis, respectively. The effects of patient head size/shape, head position during acquisition and bowtie filter used on resulting peak patient tissue doses were investigated. The effect of eye-lens position in the scanned head region was also investigated. The effect of miscentering and use of narrow bowtie filter on image quality was assessed. Results: The mean peak doses to eye lens, skin, brain, and RBM were found to be 124, 120, 95, and 163 mGy, respectively. The effect of patient head size and shape on peak tissue doses was found to be minimal since maximum differences were less than 7%. Patient head miscentering and bowtie filter selection were found to have a considerable effect on peak tissue doses. The peak eye-lens dose saving achieved by elevating head by 4 cm with respect to isocenter and using a narrow wedge filter was found to approach 50%. When the eye lies outside of the primarily irradiated head region, the dose to eye lens was found to drop to less than 20% of the corresponding dose measured when the eye lens was located in the middle of the x-ray beam. Positioning head phantom off-isocenter by 4 cm and employing a narrow wedge filter results in a moderate reduction of
Submicroscopic Deterministic Quantum Mechanics
Krasnoholovets, V
2002-01-01
So-called hidden variables, introduced into quantum mechanics by de Broglie and Bohm, have changed their initial enigmatic meaning and acquired quite reasonable outlines of real and measurable characteristics. The starting viewpoint was the following: all the phenomena that we observe in the quantum world should reflect structural properties of real space. Thus the scale 10^{-28} cm, at which the three fundamental interactions (electromagnetic, weak, and strong) intersect, has been treated as the size of a building block of space. The appearance of a massive particle is associated with a local deformation of the cellular space, i.e. deformation of a cell. The mechanics of a moving particle that has been constructed is deterministic by nature and shows that the particle interacts with cells of the space, creating elementary excitations called "inertons". Further study has disclosed that inertons are a substructure of the matter waves which are described by the orthodox wave \psi-function formalism. The c...
Height-Deterministic Pushdown Automata
DEFF Research Database (Denmark)
Nowotka, Dirk; Srba, Jiri
We define the notion of height-deterministic pushdown automata, a model where for any given input string the stack heights during any (nondeterministic) computation on the input are a priori fixed. Different subclasses of height-deterministic pushdown automata, strictly containing the class of...... regular languages and still closed under boolean language operations, are considered. Several of such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata...
Operational State Complexity of Deterministic Unranked Tree Automata
Directory of Open Access Journals (Sweden)
Xiaoxue Piao
2010-08-01
Full Text Available We consider the state complexity of basic operations on tree languages recognized by deterministic unranked tree automata. For the operations of union and intersection the upper and lower bounds of both weakly and strongly deterministic tree automata are obtained. For tree concatenation we establish a tight upper bound that is of a different order than the known state complexity of concatenation of regular string languages. We show that (n+1)((m+1)2^n - 2^(n-1) - 1) vertical states are sufficient, and necessary in the worst case, to recognize the concatenation of tree languages recognized by (strongly or weakly) deterministic automata with, respectively, m and n vertical states.
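As a quick numeric sanity check, the vertical-state bound from the abstract can be evaluated directly. The helper below is purely illustrative; m and n stand for the vertical state counts of the two automata being concatenated.

```python
def concat_bound(m: int, n: int) -> int:
    # (n+1) * ((m+1) * 2^n - 2^(n-1) - 1) vertical states, per the abstract;
    # note the extra factor of (n+1) compared with the classical
    # m*2^n - 2^(n-1) bound for concatenation of DFAs on strings
    return (n + 1) * ((m + 1) * 2**n - 2**(n - 1) - 1)

print(concat_bound(3, 4))   # small example: m = 3, n = 4
print(concat_bound(1, 1))
```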
Deterministic methods in radiation transport
Energy Technology Data Exchange (ETDEWEB)
Rice, A.F.; Roussin, R.W. (eds.)
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
Deterministic methods in radiation transport
International Nuclear Information System (INIS)
The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community
Regret Bounds for Deterministic Gaussian Process Bandits
de Freitas, Nando; Zoghi, Masrour
2012-01-01
This paper analyses the problem of Gaussian process (GP) bandits with deterministic observations. The analysis uses a branch and bound algorithm that is related to the UCB algorithm of (Srinivas et al., 2010). For GPs with Gaussian observation noise, with variance strictly greater than zero, (Srinivas et al., 2010) proved that the regret vanishes at the approximate rate of $O(\\frac{1}{\\sqrt{t}})$, where t is the number of observations. To complement their result, we attack the deterministic case and attain a much faster exponential convergence rate. Under some regularity assumptions, we show that the regret decreases asymptotically according to $O(e^{-\\frac{\\tau t}{(\\ln t)^{d/4}}})$ with high probability. Here, d is the dimension of the search space and $\\tau$ is a constant that depends on the behaviour of the objective function near its global maximum.
Deterministic Real-time Thread Scheduling
Yun, Heechul; Sha, Lui
2011-01-01
Race conditions are timing-sensitive problems. A significant source of timing variation comes from nondeterministic hardware interactions such as cache misses. While data race detectors and model checkers can check for races, the enormous state space of complex software makes it difficult to identify all of them, and residual implementation errors remain a big challenge. In this paper, we propose deterministic real-time scheduling methods to address scheduling nondeterminism in uniprocessor systems. The main idea is to use timing-insensitive deterministic events, e.g., an instruction counter, in conjunction with a real-time clock to schedule threads. By introducing the concept of Worst Case Executable Instructions (WCEI), we guarantee both determinism and real-time performance.
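The instruction-counter idea can be illustrated with a toy deterministic scheduler: each thread runs for a fixed instruction budget, so the interleaving is a pure function of the programs themselves, never of wall-clock timing. This is a hypothetical sketch of the principle, not the paper's WCEI mechanism.

```python
def run(threads, budget):
    """Round-robin threads by a fixed 'instruction' budget.

    threads: list of generators, each yielding one logical instruction per step.
    The resulting trace is fully deterministic: it depends only on the
    generators and the budget, not on timing.
    """
    trace = []
    active = list(enumerate(threads))
    while active:
        still_running = []
        for tid, th in active:
            for _ in range(budget):          # deterministic time slice
                try:
                    trace.append((tid, next(th)))
                except StopIteration:
                    break                    # thread finished mid-slice
            else:
                still_running.append((tid, th))
        active = still_running
    return trace

t1 = (f"a{i}" for i in range(5))
t2 = (f"b{i}" for i in range(3))
trace = run([t1, t2], budget=2)
```

Running the same two programs always yields the same interleaving, which is exactly the property that makes races reproducible.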
Self-organized criticality in deterministic systems with disorder
Rios, Paolo De Los; Valleriani, Angelo; Vega, Jose Luis
1997-01-01
Using the Bak-Sneppen model of biological evolution as our paradigm, we investigate in which cases noise can be substituted with a deterministic signal without destroying Self-Organized Criticality (SOC). If the deterministic signal is chaotic the universality class is preserved; some non-universal features, such as the threshold, depend on the time correlation of the signal. We also show that, if the signal introduced is periodic, SOC is preserved but in a different universality class, as lo...
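A minimal sketch of this substitution: the Bak-Sneppen update driven by a chaotic logistic map instead of a random number generator. This is illustrative only; the paper's precise signal and measured observables differ.

```python
# Bak-Sneppen model on a ring of N sites: repeatedly replace the
# minimum-fitness site and its two neighbours with fresh values.
# Here the "noise" source is the fully chaotic logistic map x -> 4x(1-x).
N, steps = 200, 20_000
state = 0.7                                  # logistic-map state

def chaos():
    global state
    state = 4.0 * state * (1.0 - state)
    return state

fitness = [chaos() for _ in range(N)]
for _ in range(steps):
    i = min(range(N), key=fitness.__getitem__)        # least-fit site
    for j in ((i - 1) % N, i, (i + 1) % N):
        fitness[j] = chaos()

# after the transient, very low fitness values are rare: the dynamics
# preferentially removes them, pushing most sites above a threshold
frac_low = sum(f < 0.1 for f in fitness) / N
```

The interesting question studied in the paper is whether this deterministic drive preserves the self-organized critical state, and in which universality class.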
Minimally Invasive Antral Membrane Balloon Elevation (MIAMBE): A 3-case report
Directory of Open Access Journals (Sweden)
Roberto Arroyo
2013-12-01
Full Text Available ABSTRACT Long-standing partial edentulism in the posterior segment of an atrophic maxilla poses a challenging treatment. Sinus elevation via the Caldwell-Luc approach has several anatomical restrictions, post-operative discomfort, and the need for complex surgical techniques. The osteotome approach is a significantly safe and efficient technique; as a variation of this technique, "minimally invasive antral membrane balloon elevation" (MIAMBE) has been developed, which uses a hydraulic system. We present three cases in which the MIAMBE system was used for tooth replacement in the posterior maxilla. This procedure appears to be a relatively simple and safe solution for the insertion of endo-osseous implants in the posterior atrophic maxilla.
Directory of Open Access Journals (Sweden)
Stephen Faddegon
2013-04-01
Full Text Available Background and Purpose Horseshoe kidney is an uncommon renal anomaly often associated with ureteropelvic junction (UPJ) obstruction. Advanced minimally invasive surgical (MIS) reconstructive techniques including laparoscopic and robotic surgery are now being utilized in this population. However, fewer than 30 cases of MIS UPJ reconstruction in horseshoe kidneys have been reported. We herein report our experience with these techniques in the largest series to date. Materials and Methods We performed a retrospective chart review of nine patients with UPJ obstruction in horseshoe kidneys who underwent MIS repair at our institution between March 2000 and January 2012. Four underwent laparoscopic, two robotic, and one laparoendoscopic single-site (LESS) dismembered pyeloplasty. An additional two pediatric patients underwent robotic Hellstrom repair. Perioperative outcomes and treatment success were evaluated. Results Median patient age was 18 years (range 2.5-62 years). Median operative time was 136 minutes (range 109-230 min) and there were no perioperative complications. After a median follow-up of 11 months, clinical (symptomatic) success was 100%, while radiographic success based on MAG-3 renogram was 78%. The two failures were defined by prolonged t1/2 drainage, but neither patient has required salvage therapy as they remain asymptomatic with stable differential renal function. Conclusions MIS repair of UPJ obstruction in horseshoe kidneys is feasible and safe. Although excellent short-term clinical success is achieved, radiographic success may be lower than MIS pyeloplasty in heterotopic kidneys, possibly due to inherent differences in anatomy. Larger studies are needed to evaluate MIS pyeloplasty in this population.
Minimally invasive transforaminal lumbar interbody fusion: Results of 23 consecutive cases
Directory of Open Access Journals (Sweden)
Amit Jhala
2014-01-01
Conclusion: The study demonstrates a good clinicoradiological outcome of minimally invasive TLIF. It is also superior in terms of postoperative back pain, blood loss, hospital stay, recovery time as well as medication use.
Minimally invasive video-assisted thyroidectomy: experience of 200 cases in a single center
Haitao, Zheng; Jie, Xu; Lixin, Jiang
2014-01-01
Introduction Minimally invasive techniques in thyroid surgery including video-assisted technique originally described by Miccoli have been accepted in several continents for more than 10 years. Aim To analyze our preliminary results from minimally invasive video-assisted thyroidectomy (MIVAT) and to evaluate the feasibility and effects of this method in a general department over a 4-year period. Material and methods Initial experience was presented based on a series of 200 patients selected f...
Directory of Open Access Journals (Sweden)
Yunzhi ZHOU
2010-01-01
Full Text Available Background and objective TACE, Ar-He targeted cryosurgery, and radioactive seed implantation are the main minimally invasive methods in the treatment of lung cancer. This article summarizes post-treatment quality of life, clinical efficacy, and survival, and analyzes the advantages and shortcomings of each method so as to evaluate the clinical effect of multiple minimally invasive treatments for non-small cell lung cancer. Methods All 139 cases were non-small cell lung cancer patients confirmed by pathology, followed up retrospectively from July 2006 to July 2009; all had been deemed inoperable by comprehensive evaluation. Different combinations of minimally invasive treatments were selected according to the blood supply, size, and location of the lesion. Among the 139 cases, there were 102 cases of primary tumor and 37 cases of metastasis to the mediastinum, lung, and chest wall. Of these, 71 cases with abundant blood supply received a combination of superselective target-artery chemotherapy, Ar-He targeted cryoablation, and radiochemotherapy with seed implantation; 48 cases with poor blood supply received Ar-He targeted cryoablation alone; and 20 cases with poor blood supply received a combination of Ar-He targeted cryoablation and radiochemotherapy with seed implantation. Pre- and post-treatment KPS scores, imaging data, and follow-up results were then analyzed. Results The KPS score increased by a mean of 20.01 after treatment. Over 3 years of follow-up there were 44 cases of CR, 87 cases of PR, 3 cases of NC, and 5 cases of PD, for an efficiency of 94.2%. Ninety-nine cases survived 1 year (71.2%), 43 cases survived 2 years (30.2%), 4 cases survived over 3 years, and the median survival was 19 months; average survival was (16 ± 1.5) months. There were no severe complications, such as spinal cord injury or vessel and pericardial aspiration. Conclusion Minimally invasive technique is a highly successful, minimally invasive, and effective method with mild complications
Waste minimization in a research and development environment - a case history
International Nuclear Information System (INIS)
Brookhaven National Laboratory (BNL) research and development activities generate small and variable waste streams that present a unique minimization challenge. This paper describes how B&V Waste Science and Technology Corp. successfully planned and organized an assessment of these waste streams. It describes the procedures chosen to collect and evaluate data and the procedure adopted to determine the feasibility of waste minimization methods and program elements. The paper gives a brief account of the implementation of the assessment and summarizes the assessment results and recommendations. Also, the paper briefly describes a manual developed to train staff on materials handling and storage methods and a general information brochure to educate employees and visiting researchers. Both documents covered handling, storage, and disposal procedures that could be used to eliminate or minimize hazardous waste discharges to the environment
Deterministic computation of functional integrals
International Nuclear Information System (INIS)
A new method of numerical integration in functional spaces is described. This method is based on the rigorous definition of a functional integral in a complete separable metric space and on the use of approximation formulas which we constructed for this kind of integral. The method is applicable to the solution of some partial differential equations and to the calculation of various characteristics in quantum physics. No preliminary discretization of space and time is required in this method, nor are simplifying assumptions such as semi-classical or mean-field approximations, collective excitations, or the introduction of ''short-time'' propagators necessary in our approach. The constructed approximation formulas satisfy the condition of being exact on a given class of functionals, namely polynomial functionals of a given degree. The employment of these formulas replaces the evaluation of a functional integral by computation of an ''ordinary'' (Riemannian) integral of low dimension, thus allowing the use of the more preferable deterministic algorithms (normally, Gaussian quadratures) in computations rather than the traditional stochastic (Monte Carlo) methods which are commonly used for the problem under consideration. The results of applying the method to computation of the Green function of the Schroedinger equation in imaginary time, as well as the study of some models of Euclidean quantum mechanics, are presented. A comparison with the results of other authors shows that our method gives a significant (by an order of magnitude) economy of computer time and memory versus other known methods while providing results with the same or better accuracy. A functional measure of the Gaussian type is considered and some of its particular cases, namely the conditional Wiener measure in quantum statistical mechanics and a functional measure in a Schwartz distribution space in two-dimensional quantum field theory, are studied in detail. Numerical examples demonstrating the
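The deterministic-vs-stochastic trade-off at the heart of the method is visible already in one dimension: a Gaussian quadrature rule, exact on polynomials up to a given degree, beats plain Monte Carlo by many orders of magnitude for smooth integrands. This is a generic illustration, not the paper's functional-integral construction.

```python
import math
import numpy as np

# Expectation E[cos(X)] for X ~ N(0, 1); the exact value is exp(-1/2).
f = np.cos
exact = math.exp(-0.5)

# Deterministic: 20-node Gauss-Hermite quadrature for weight exp(-x^2/2)
# (probabilists' convention; the weights sum to sqrt(2*pi)).
nodes, weights = np.polynomial.hermite_e.hermegauss(20)
quad = np.dot(weights, f(nodes)) / math.sqrt(2 * math.pi)

# Stochastic: plain Monte Carlo with 10,000 samples.
rng = np.random.default_rng(1)
mc = f(rng.normal(size=10_000)).mean()

print(abs(quad - exact), abs(mc - exact))
```

For this smooth integrand the quadrature error is near machine precision, while the Monte Carlo error decays only like 1/sqrt(n); this is the kind of gain the paper reports when replacing Monte Carlo with Gaussian quadratures on low-dimensional integrals.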
Deterministic dynamic behaviour
International Nuclear Information System (INIS)
The dynamic load, as the second given quantity in a dynamic analysis, is as a rule less problematic than structure mapping, except in those cases where it cannot be specified completely independently for the structure but interacts with, or is influenced by, the non-structural environment. In these cases, one should always check whether the study can be simplified by separate investigations of the two types of problems. The determination of the system response from the given quantities 'structure' and 'load' is the central function of dynamic analysis, although the importance and problems of the mapping steps should not be underestimated. The paper focuses on some aspects of this problem. The available methods are classified as modal and non-modal (direct) methods. In the former, the eigenvectors of the system are used as generalized coordinates, while in the latter the degrees of freedom describing the model are used. The criteria for assessing methods of calculation are the accuracy and numerical stability of their solutions as well as their simplicity of use. Response quantities are represented (and calculated) in the form of response-time functions, frequency response functions, spectral density functions (or stochastic parameters derived from these), and response spectra. The multitude of problems in dynamic studies also requires a multitude of possible approaches, and the selection of the most appropriate method for a given case is no slight task. (orig./GL)
Savaş Karyağar; Karyağar, Sevda S; Orhan Yalçın; Enis Yüney; Mehmet Mülazımoğlu; Tevfik Özpaçacı; Oğuzhan Karatepe; Yaşar Özdenkaya
2013-01-01
Objective: In this study, our aim was to evaluate the efficiency of gamma probe guided minimally invasive parathyroidectomy (GP-MIP), conducted without intra-operative quick parathyroid hormone (QPTH) measurement, in cases of solitary parathyroid adenoma (SPA) detected with USG and dual-phase 99mTc-MIBI parathyroid scintigraphy (PS) in the preoperative period. Material and Methods: This clinical study was performed in 31 SPA patients (27 female, 4 male; mean age 51 ± 11 years) between Febru...
International Nuclear Information System (INIS)
Objective: To investigate the effectiveness, technical points, and complications of minimally invasive treatment for iatrogenic intravenous foreign bodies. Methods: Five patients with iatrogenic intravenous foreign bodies due to fracture or migration of a venous catheter were enrolled in this study. By using a grasping device, inserted into the target vein via the right femoral vein, the foreign bodies within the venous system were successfully retrieved. Results: The vascular foreign bodies were successfully removed in all five patients, a success rate of 100%. No operation-related complications, such as vascular rupture or pulmonary embolism, occurred. Conclusion: As a minimally invasive technique, the use of a grasping device to remove iatrogenic vascular foreign bodies has a high success rate; thus, major surgical procedures can be avoided. (authors)
Minimally invasive video-assisted thyroidectomy: seven-year experience with 240 cases
Barczyński, Marcin; Konturek, Aleksander; Stopa, Małgorzata; Papier, Aleksandra; Nowak, Wojciech
2012-01-01
Introduction Minimally invasive video-assisted thyroidectomy (MIVAT) has gained acceptance in recent years as an alternative to conventional thyroid surgery. Aim Assessment of our 7-year experience with MIVAT. Material and methods A retrospective study of 240 consecutive patients who underwent MIVAT at our institution between 01/2004 and 05/2011 was conducted. The inclusion criterion was a single thyroid nodule below 30 mm in diameter within the thyroid of 25 ml or less in volume. The exclusi...
del Vecchio, Jorge Javier; Ghioldi, Mauricio; Raimondi, Nicolás; De Elias, Manuel
2016-01-01
Fracture dislocations involving the Lisfranc joint are rare; they represent only 0.2% of all the fractures. There is no consensus about the surgical management of these lesions in the medical literature. However, both anatomical reduction and tarsometatarsal stabilization are essential for a good outcome. In this clinical study, five consecutive patients with a diagnosis of Lisfranc low-energy lesion were treated with a novel surgical technique characterized by minimal osteosynthesis performed through a minimally invasive approach. According to the radiological criteria established, the joint reduction was anatomical in four patients, almost anatomical in one patient (#4), and nonanatomical in none of the patients. At the final follow-up, the AOFAS score for the midfoot was 96 points (range, 95–100). The mean score according to the VAS (Visual Analog Scale) at the end of the follow-up period was 1.4 points over 10 (range, 0–3). The surgical technique described in this clinical study is characterized by the use of implants with the utilization of a novel approach to reduce joint and soft tissue damage. We performed a closed reduction and minimally invasive stabilization with a bridge plate and a screw after achieving a closed anatomical reduction. PMID:27340569
Zietek, Pawel; Karaczun, Maciej; Kruk, Bartosz; Szczypior, Karina
2016-01-01
Achilles injury is a common musculoskeletal disorder. Bilateral rupture of the Achilles tendon, however, is much less common and usually occurs spontaneously. Complete, traumatic, and bilateral ruptures are rare and typically require long periods of immobilization before the patient can return to full weightbearing. A 52-year-old male was hospitalized for bilateral traumatic rupture to both Achilles tendons. No risk factors for tendon rupture were found. Blood samples revealed no peripheral blood pathologic features. Both tendons were repaired with percutaneous, minimally invasive surgery using the Achillon(®) tendon suture system. Rehabilitation was begun 4 weeks later. An ankle-foot orthosis was prescribed to provide ankle support with an adjustable range of movement, and active plantar flexion was set at 0° to 30°. The patient remained non-weightbearing with the ankle-foot orthosis device and performed active range-of-motion exercises. At 8 weeks after surgery, we recommended that he begin walking with partial weightbearing using a foot-tibial orthosis with the range of motion set to 45° plantar flexion and 15° dorsiflexion. At 10 weeks postoperatively, he was encouraged to return to full weightbearing on both feet. Beginning rehabilitation as soon as possible after minimally invasive surgery, compared with 6 weeks of immobilization after surgery, provided a rapid resumption to full weightbearing. We emphasize the clinical importance of a safe, simple treatment program that can be followed for a patient with damage to the Achilles tendons. To our knowledge, ours is the first report of minimally invasive repair of bilateral simultaneous traumatic rupture of the Achilles tendon. PMID:26002678
Directory of Open Access Journals (Sweden)
Mika Oki
2011-10-01
Full Text Available BACKGROUND: Dengue infection is endemic in many regions throughout the world. While insecticide fogging targeting the vector mosquito Aedes aegypti is a major control measure against dengue epidemics, the impact of this method remains controversial. A previous mathematical simulation study indicated that insecticide fogging minimized cases when conducted soon after peak disease prevalence, although the impact was minimal, possibly because seasonality and population immunity were not considered. Periodic outbreak patterns are also highly influenced by seasonal climatic conditions. Thus, these factors are important considerations when assessing the effect of vector control against dengue. We used mathematical simulations to identify the appropriate timing of insecticide fogging, considering seasonal change of vector populations, and to evaluate its impact on reducing dengue cases with various levels of transmission intensity. METHODOLOGY/PRINCIPAL FINDINGS: We created the Susceptible-Exposed-Infectious-Recovered (SEIR) model of dengue virus transmission. Mosquito lifespan was assumed to change seasonally and the optimal timing of insecticide fogging to minimize dengue incidence under various lengths of the wet season was investigated. We also assessed whether insecticide fogging was equally effective at higher and lower endemic levels by running simulations over a 500-year period with various transmission intensities to produce an endemic state. In contrast to the previous study, the optimal application of insecticide fogging was between the onset of the wet season and the prevalence peak. Although it has less impact in areas that have higher endemicity and longer wet seasons, insecticide fogging can prevent a considerable number of dengue cases if applied at the optimal time. CONCLUSIONS/SIGNIFICANCE: The optimal timing of insecticide fogging and its impact on reducing dengue cases were greatly influenced by seasonality and the level of transmission intensity.
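The timing experiment described in this abstract can be sketched with a minimal forward-Euler SEIR simulation. All parameter values below (transmission rate, incubation and recovery periods, seasonal amplitude, fogging effect and duration) are hypothetical stand-ins, not the paper's calibration:

```python
import math

def simulate_seir(days=365, fog_day=None, fog_kill=0.8):
    """Forward-Euler SEIR sketch with a seasonally varying transmission rate.

    A fogging campaign starting on `fog_day` cuts transmission by `fog_kill`
    for 14 days -- a crude stand-in for killing adult vectors. All parameter
    values are hypothetical.
    """
    s, e, i, r = 0.99, 0.0, 0.01, 0.0            # fractions of the host population
    beta0, sigma, gamma = 0.4, 1 / 5.5, 1 / 7.0  # transmission, incubation, recovery
    history = []
    for t in range(days):
        season = 1.0 + 0.3 * math.sin(2 * math.pi * t / 365)  # wet-season boost
        beta = beta0 * season
        if fog_day is not None and fog_day <= t < fog_day + 14:
            beta *= 1.0 - fog_kill
        new_inf = beta * s * i
        s, e, i, r = (s - new_inf,
                      e + new_inf - sigma * e,
                      i + sigma * e - gamma * i,
                      r + gamma * i)
        history.append(i)
    return history

base = simulate_seir()
fogged = simulate_seir(fog_day=60)   # fog between season onset and the peak
print(max(base), max(fogged))
```

Sweeping `fog_day` across the season and comparing peak or cumulative prevalence reproduces, in miniature, the kind of optimal-timing analysis the abstract describes.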
DEVELOPMENT OF OPTIMIZATION STRATEGIES COMBINING RANDOM AND DETERMINISTIC METHODS
Directory of Open Access Journals (Sweden)
Mimoun Younes
2012-01-01
Full Text Available The optimal allocation of powers is one of the main functions of the operation and control of electrical energy production. The overall objective is to determine the optimal output of production units in order to minimize production cost while the system operates within its safe limits. This article proposes a hybridization of deterministic and stochastic approaches (the Davidon-Fletcher-Powell method and a genetic algorithm) to improve the optimization of the cost function.
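Such a hybridization can be sketched on a toy economic-dispatch problem: a stochastic genetic phase explores globally, then a deterministic descent polishes the best candidate. The three-unit quadratic cost curves, the 850 MW demand, and the coordinate-descent stand-in for Davidon-Fletcher-Powell are all illustrative assumptions, not the article's actual formulation:

```python
import random

random.seed(1)

# Toy quadratic fuel-cost curves (a, b, c) for three generating units;
# coefficients and the 850 MW demand are hypothetical.
COST = [(0.008, 7.0, 200.0), (0.009, 6.3, 180.0), (0.007, 6.8, 140.0)]
DEMAND = 850.0
PENALTY = 1e3  # per-MW penalty for violating the power balance

def total_cost(p):
    fuel = sum(a * x * x + b * x + c for (a, b, c), x in zip(COST, p))
    return fuel + PENALTY * abs(sum(p) - DEMAND)

def genetic_search(pop_size=40, gens=200):
    """Stochastic phase: evolve candidate dispatches toward low cost."""
    pop = [[random.uniform(0.0, 600.0) for _ in COST] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=total_cost)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            u, v = random.sample(parents, 2)
            child = [(x + y) / 2 + random.gauss(0.0, 5.0) for x, y in zip(u, v)]
            children.append([max(0.0, x) for x in child])
        pop = parents + children
    return min(pop, key=total_cost)

def local_descent(p, step=1.0):
    """Deterministic phase: coordinate (pattern) descent polishing the GA
    result -- a simple stand-in for the Davidon-Fletcher-Powell method."""
    best, fbest = list(p), total_cost(p)
    while step > 1e-4:
        improved = False
        for i in range(len(best)):
            for d in (step, -step):
                trial = list(best)
                trial[i] = max(0.0, trial[i] + d)
                f = total_cost(trial)
                if f < fbest:
                    best, fbest, improved = trial, f, True
        if not improved:
            step /= 2.0
    return best, fbest

coarse = genetic_search()
refined, fref = local_descent(coarse)
print(round(fref, 2), [round(x, 1) for x in refined])
```

The design point is the division of labor: the genetic phase needs no derivatives and escapes local basins, while the deterministic phase converges quickly once a good basin is found.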
Minimal access surgery in Castleman disease in a child, a case report
Directory of Open Access Journals (Sweden)
Jan F. Svensson
2015-07-01
Full Text Available This case report describes a child with Castleman disease. We present an overview of the disease, the investigation leading to the diagnosis, the laparoscopic approach for surgical treatment, and the follow-up. This rare entity must be considered in cases of long-standing abdominal pain; cross-sectional imaging is beneficial, and we support the use of laparoscopic intervention in the treatment of unifocal abdominal Castleman disease.
Case study: a minimally invasive approach to the treatment of Klippel-Trenaunay syndrome.
Latessa, Victoria; Frasier, Krista
2007-12-01
Klippel-Trenaunay syndrome (KTS) is a congenital developmental disorder characterized by port-wine stain, venous abnormalities, and soft tissue and bony deformities of the affected extremity. It is usually diagnosed in early childhood and has many long-term sequelae. Patients not only have physical health problems but also must learn to cope with psychosocial factors that will affect their self-esteem and interpersonal relationships. This article describes the syndrome of KTS and the minimally invasive techniques used in the treatment of superficial varicosities in patients with reasonably mild KTS and an intact deep venous system. Treating the varicosities relatively early to avoid the long-term complications of chronic venous insufficiency may improve quality of life, maintain limb function, and decrease the risk of long-term venous complications. PMID:18036494
International Nuclear Information System (INIS)
Monte Carlo methods are typically used for simulating radiation fields around gamma-ray spectrometers and pulse-height tallies within those spectrometers. Deterministic codes that discretize the linear Boltzmann transport equation can offer significant advantages in computational efficiency for calculating radiation fields, but stochastic codes remain the most dependable tools for calculating the response within spectrometers. For a deterministic field solution to become useful to radiation detection analysts, it must be coupled to a method for calculating spectrometer response functions. This coupling is done in the RADSAT toolbox. Previous work has been successful using a Monte Carlo boundary sphere around a handheld detector. It is desirable to extend this coupling to larger detector systems such as the portal monitors now being used to screen vehicles crossing borders. Challenges to providing an accurate Monte Carlo boundary condition from the deterministic field solution include the greater possibility of large radiation gradients along the detector and the detector itself perturbing the field solution, unlike smaller detector systems. The method of coupling the deterministic results to a stochastic code for large detector systems can be described as spatially defined rectangular patches that minimize gradients. The coupled method was compared to purely stochastic simulation data of identical problems, showing the methods produce consistent detector responses while the purely stochastic run times are substantially longer in some cases, such as highly shielded geometries. For certain cases, this method has the ability to faithfully emulate large sensors in a more reasonable amount of time than other methods.
Deterministic Soluble Model of Coarsening
Frachebourg, L.; Krapivsky, P. L.
1996-01-01
We investigate a 3-phase deterministic one-dimensional phase ordering model in which interfaces move ballistically and annihilate upon colliding. We determine analytically the autocorrelation function A(t). This is done by computing generalized first-passage type probabilities P_n(t) which measure the fraction of space crossed by exactly n interfaces during the time interval (0,t), and then expressing the autocorrelation function via P_n's. We further reveal the spatial structure of the syste...
Analysis of FBC deterministic chaos
Energy Technology Data Exchange (ETDEWEB)
Daw, C.S.
1996-06-01
It has recently been discovered that the performance of a number of fossil energy conversion devices such as fluidized beds, pulsed combustors, steady combustors, and internal combustion engines is affected by deterministic chaos. It is now recognized that understanding and controlling the chaotic elements of these devices can lead to significantly improved energy efficiency and reduced emissions. Application of these techniques to key fossil energy processes is expected to provide important competitive advantages for U.S. industry.
Deterministic Small-World Networks
Comellas, Francesc; Sampels, Michael
2001-01-01
Many real life networks, such as the World Wide Web, transportation systems, biological or social networks, achieve both a strong local clustering (nodes have many mutual neighbors) and a small diameter (maximum distance between any two nodes). These networks have been characterized as small-world networks and modeled by the addition of randomness to regular structures. We show that small-world networks can be constructed in a deterministic way. This exact approach permits a direct calculatio...
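The claim that small-world properties need no randomness can be illustrated with a toy deterministic construction: a ring lattice (which supplies local clustering) plus a single hub adjacent to every node (which caps the diameter at 2). This is an illustration of the idea only, not necessarily the authors' exact graphs:

```python
from collections import deque
from itertools import combinations

def ring_plus_hub(n, k=2):
    """Deterministic toy construction: a ring lattice (each node joined to its
    k nearest neighbors on each side -> local clustering) plus one hub node
    adjacent to every ring node (-> diameter at most 2)."""
    adj = {v: set() for v in range(n + 1)}   # node n acts as the hub
    for v in range(n):
        for d in range(1, k + 1):
            adj[v].add((v + d) % n)
            adj[(v + d) % n].add(v)
        adj[v].add(n)
        adj[n].add(v)
    return adj

def diameter(adj):
    """Largest BFS eccentricity over all nodes."""
    worst = 0
    for src in adj:
        dist, q = {src: 0}, deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        worst = max(worst, max(dist.values()))
    return worst

def clustering(adj, v):
    """Fraction of neighbor pairs of v that are themselves adjacent."""
    nb = list(adj[v])
    if len(nb) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nb, 2) if b in adj[a])
    return 2.0 * links / (len(nb) * (len(nb) - 1))

g = ring_plus_hub(100, k=2)
avg_c = sum(clustering(g, v) for v in range(100)) / 100
print(diameter(g), avg_c)   # small diameter, high local clustering
```

Every quantity here is exactly computable, which is the point of the deterministic approach: no ensemble averaging is needed.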
Institute of Scientific and Technical Information of China (English)
Yun Niu; Tieju Liu; Xuchen Cao; Xiumin Ding; Li Wei; Yuxia Gao; Jun Liu
2009-01-01
OBJECTIVE To evaluate core needle biopsy (CNB) as a minimally invasive method to examine breast lesions and discuss the clinical significance of subsequent immunohistochemistry (IHC) analysis. METHODS The clinical data and pathological results of 235 patients with breast lesions, who received CNB before surgery, were analyzed and compared. Based on the results of CNB done before surgery, 87 of the 204 patients diagnosed with invasive carcinoma were subjected to immunodetection for p53, c-erbB-2, ER and PR. The morphological change of cancer tissues in response to chemotherapy was also evaluated. RESULTS Of the 235 cases receiving CNB examination, 204 were diagnosed as invasive carcinoma, a rate 100% consistent with the surgical diagnosis. Sixty percent of the cases diagnosed as non-invasive carcinoma by CNB were found to have invading elements in surgical specimens, and similarly, 50% of the cases diagnosed as atypical ductal hyperplasia by CNB were confirmed to be carcinoma by the subsequent excision biopsy. There was no significant difference between the CNB biopsy and regular surgical samples in the positive rate of immunohistochemistry analysis (p53, c-erbB-2, ER and PR; P > 0.05). However, there was a significant difference in the expression rate of p53 and c-erbB-2 between the cases with and without morphological change in response to chemotherapy (P < 0.05). In most cases positive for p53 and c-erbB-2, there was no obvious morphological change after chemotherapy. CONCLUSION CNB is a cost-effective diagnostic method with minimal invasion for breast lesions, although it still has some limitations. Immunodetection on CNB tissue is expected to have great significance in clinical applications.
Directory of Open Access Journals (Sweden)
Reddy
2015-07-01
Full Text Available CONTEXT: The approximate incidence of periprosthetic supracondylar femur fractures after total knee arthroplasty ranges from 0.3 to 2.5 percent. Various methods of treatment of these fractures have been suggested in the past, such as conservative management, open reduction and plate fixation, and intramedullary nailing. However, there were complications like pain, stiffness, infection and delayed union. Minimally invasive plate osteosynthesis (MIPO) is a relatively new technique in the treatment of distal femoral fractures, as it preserves the periosteal blood supply and bone perfusion and minimizes soft tissue dissection. AIM: To evaluate the effectiveness of the MIPO technique in the treatment of periprosthetic distal femoral fracture. SETTINGS AND DESIGN: In this study, we present a case report of a 54-year-old female patient who sustained a type 2 (Rorabeck et al. classification) periprosthetic distal femoral fracture after TKA. Her fracture fixation was done with distal femoral locking plates using a minimally invasive technique. METHODS AND MATERIAL: We evaluated the clinical (using the Oxford knee scoring system) and radiological outcomes of the patient until six months postoperatively. Radiologically, the fracture showed complete union and she regained her full range of knee motion by the end of three months. CONCLUSION: We conclude that MIPO can be considered an effective surgical treatment option in the management of periprosthetic distal femoral fractures after TKA.
Recovery From Vegetative State to Minimally Conscious State: A Case Report.
Jang, SungHo; Kim, SeongHo; Lee, HanDo
2016-05-01
In this study, we attempted to demonstrate the change of the ascending reticular activating system (ARAS) concurrent with the recovery from a vegetative state (VS) to a minimally conscious state (MCS) in a patient with brain injury. A 54-year-old male patient had suffered from head trauma and underwent cardiopulmonary resuscitation immediately after head trauma. At 10 months after onset, the patient exhibited impaired consciousness, with a Coma Recovery Scale-Revised (CRS-R) score of 7 (auditory function: 1, visual function: 2, motor function: 1, verbal function: 1, communication: 0, and arousal: 2) and underwent a ventriculoperitoneal shunt operation for hydrocephalus. After the operation, he began comprehensive rehabilitative therapy. At post-op 2 and 8 weeks, his CRS-R score had recovered to 15 (3/3/4/1/1/3) and 17 (3/3/4/2/2/3), respectively. In terms of configuration on diffusion tensor tractography (DTT), there was no significant change in the lower portion of the ARAS. Regarding the change of neural connectivity of the thalamic intralaminar nucleus, increased neural connectivities to the hypothalamus, basal forebrain, prefrontal cortex, anterior cingulate cortex, and parietal cortex were observed in both hemispheres on post-op DTTs compared with pre-op DTT. We report on a patient with brain injury who showed change of the ARAS concurrent with the recovery from a VS to an MCS. PMID:26829084
Streamflow disaggregation: a nonlinear deterministic approach
Directory of Open Access Journals (Sweden)
B. Sivakumar
2004-01-01
Full Text Available This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase-space for representing the transformation dynamics; and (2) use of a local approximation (nearest neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreements for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. A further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and a small number of neighbors (less than 50), suggesting the possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
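The two steps above (reconstruction, then local approximation by nearest neighbors) can be sketched on synthetic data. The flow series, its magnitude-dependent day-1/day-2 split, and the even-split baseline below are illustrative assumptions, not the Mississippi River data:

```python
import random

random.seed(0)

# Synthetic two-day hydrographs: the day-1/day-2 split depends on flow
# magnitude -- a crude stand-in for rising vs. receding limbs. Entirely
# synthetic; the paper uses observed Mississippi River flows.
def make_pair():
    d1 = random.uniform(10.0, 100.0)
    ratio = (0.7 if d1 < 55.0 else 1.4) + random.gauss(0.0, 0.05)
    d2 = d1 * ratio
    return d1 + d2, (d1, d2)

pairs = [make_pair() for _ in range(500)]
train, test = pairs[:400], pairs[400:]

def disaggregate(total, train, k=10):
    """Local approximation: average the day-1 fraction over the k training
    totals nearest to `total` in the (here one-dimensional) phase space."""
    nearest = sorted(train, key=lambda p: abs(p[0] - total))[:k]
    frac = sum(p[1][0] / p[0] for p in nearest) / k
    return total * frac, total * (1.0 - frac)

# Compare against a naive 50/50 split of each two-day total.
nn_err = sum(abs(disaggregate(t, train)[0] - d1) for t, (d1, _) in test)
even_err = sum(abs(t / 2.0 - d1) for t, (d1, _) in test)
print(nn_err, even_err)
```

Because the splitting pattern is locally consistent in phase space, the nearest-neighbor estimate beats the uninformed even split, which is the essence of the local-approximation step.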
Deterministic prediction of localized corrosion damage
International Nuclear Information System (INIS)
The accumulation of damage due to localized corrosion [pitting, stress corrosion cracking (SCC), corrosion fatigue (CF), crevice corrosion (CC), and erosion-corrosion (EC)] in complex industrial systems, such as power plants, refineries, desalination systems, etc., poses a threat to continued safe and economic operation, primarily because of the sudden, catastrophic nature of the resulting failures. Of particular interest in managing these forms of damage is the development of robust algorithms that can be used to predict the integrated damage as a function of time and as a function of the operating conditions of the system. Because complex systems of the same design rapidly become unique, due to differences in operating histories, and because failures are rare events, there is generally insufficient data on any given system to derive reliable empirical models that capture the impact of all (or even some) of the important independent variables. Accordingly, the models should be, to the greatest extent possible, deterministic with the output being constrained by the natural laws. In this paper, I outline the theory of the initiation of damage, in the form of pitting on aluminum in chloride solution, and then describe the deterministic prediction of the accumulation of damage from SCC in Type 304 SS components in the primary coolant circuits of Boiling Water (Nuclear) Reactors (BWRs). These cases have been selected to illustrate the various phases through which localized corrosion damage occurs.
Chaotic dynamics and control of deterministic ratchets
International Nuclear Information System (INIS)
Deterministic ratchets, in the inertial and also in the overdamped limit, have a very complex dynamics, including chaotic motion. This deterministically induced chaos mimics, to some extent, the role of noise, changing, on the other hand, some of the basic properties of thermal ratchets; for example, inertial ratchets can exhibit multiple reversals in the current direction. The direction depends on the amount of friction and inertia, which makes it especially interesting for technological applications such as biological particle separation. We overview in this work different strategies to control the current of inertial ratchets. The control parameters analysed are the strength and frequency of the periodic external force, the strength of the quenched noise that models a non-perfectly-periodic potential, and the mass of the particles. Control mechanisms are associated with the fractal nature of the basins of attraction of the mean velocity attractors. The control of the overdamped motion of noninteracting particles in a rocking periodic asymmetric potential is also reviewed. The analysis is focused on synchronization of the motion of the particles with the external sinusoidal driving force. Two cases are considered: a perfect lattice without disorder and a lattice with noncorrelated quenched noise. The amplitude of the driving force and the strength of the quenched noise are used as control parameters
Bottleneck Paths and Trees and Deterministic Graphical Games
Chechik, Shiri; Kaplan, Haim; Thorup, Mikkel; Zamir, Or; Zwick, Uri
2016-01-01
Gabow and Tarjan showed that the Bottleneck Path (BP) problem, i.e., finding a path between a given source and a given target in a weighted directed graph whose largest edge weight is minimized, as well as the Bottleneck spanning tree (BST) problem, i.e., finding a directed spanning tree rooted at a given vertex whose largest edge weight is minimized, can both be solved deterministically in O(m * log^*(n)) time, where m is the number of edges and n is the number of vertices in the graph. We p...
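For contrast with the O(m log* n) result, a straightforward baseline for the BP problem (minimizing the largest edge weight on a source-target path) is a Dijkstra-style search that relaxes with `max` instead of `+`. The graph and weights here are illustrative:

```python
import heapq

def bottleneck_path(graph, s, t):
    """Minimize the largest edge weight on an s-to-t path.

    Dijkstra-style search where the label of a node is the smallest
    achievable bottleneck value -- an O(m log n) baseline; the paper's
    O(m log* n) algorithm is considerably more involved.
    """
    best = {s: 0}
    heap = [(0, s)]
    while heap:
        b, u = heapq.heappop(heap)
        if u == t:
            return b
        if b > best.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nb = max(b, w)  # bottleneck of the extended path
            if nb < best.get(v, float("inf")):
                best[v] = nb
                heapq.heappush(heap, (nb, v))
    return None  # t unreachable from s

# Two routes from 'a' to 'd': the upper route's largest edge is 4,
# the lower route's largest edge is 7, so the bottleneck value is 4.
g = {"a": [("b", 4), ("c", 7)], "b": [("d", 3)], "c": [("d", 1)]}
print(bottleneck_path(g, "a", "d"))  # -> 4
```

The only change from shortest-path Dijkstra is the relaxation `max(b, w)`, since path "length" here is the worst edge rather than the sum of edges.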
A Mathematical Programming Approach to a Deterministic Kanban System
Gabriel R. Bitran; Li Chang
1987-01-01
In this paper we present a mathematical programming model for the Kanban system in a deterministic multi-stage capacitated assembly-tree-structure production setting. We discuss solution procedures to the problem and address three special cases of practical interest.
Deterministic treatment of model error in geophysical data assimilation
Carrassi, Alberto
2015-01-01
This chapter describes a novel approach for the treatment of model error in geophysical data assimilation. In this method, model error is treated as a deterministic process fully correlated in time. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in data assimilation procedures, along with an approximation suitable for application to large numerical models typical of environmental science. In this contribution we first derive the equations for the model error dynamics in the general case, and then for the particular situation of parametric error. We show how this deterministic description of the model error can be incorporated in sequential and variational data assimilation procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. The deterministic approach proves to be very competitive with only minor additional computational c...
Minimal invasive surgery for unicameral bone cyst using demineralized bone matrix: a case series
Directory of Open Access Journals (Sweden)
Cho Hwan
2012-07-01
Full Text Available Background Various treatments for unicameral bone cyst have been proposed. Recent concern focuses on the effectiveness of closed methods. This study evaluated the effectiveness of demineralized bone matrix as a graft material after intramedullary decompression for the treatment of unicameral bone cysts. Methods Between October 2008 and June 2010, twenty-five patients with a unicameral bone cyst were treated with intramedullary decompression followed by grafting of demineralized bone matrix. There were 21 male and 4 female patients with a mean age of 11.1 years (range, 3–19 years). The proximal metaphysis of the humerus was affected in 12 patients, the proximal femur in five, the calcaneum in three, the distal femur in two, the tibia in two, and the radius in one. There were 17 active cysts and 8 latent cysts. Radiologic change was evaluated according to a modified Neer classification. Time to healing was defined as the period required to achieve cortical thickening on the anteroposterior and lateral plain radiographs, as well as consolidation of the cyst. The patients were followed up for a mean period of 23.9 months (range, 15–36 months). Results Nineteen of 25 cysts had completely consolidated after a single procedure. The mean time to healing was 6.6 months (range, 3–12 months). Four had incomplete healing radiographically but no clinical symptoms, with enough cortical thickness to prevent fracture. None of these four cysts needed a second intervention until the last follow-up. Two of 25 patients required a second intervention because of cyst recurrence. Both showed radiographic healing of the cyst after a mean of 10 additional months of follow-up. Conclusions A minimally invasive technique including the injection of DBM could serve as an excellent treatment method for unicameral bone cysts.
Deterministic Circular Self Test Path
Institute of Scientific and Technical Information of China (English)
WEN Ke; HU Yu; LI Xiaowei
2007-01-01
Circular self test path (CSTP) is an attractive technique for testing digital integrated circuits (ICs) in the nanometer era, because it can easily provide at-speed test with small test data volume and short test application time. However, CSTP cannot reliably attain high fault coverage because of the difficulty of testing random-pattern-resistant faults. This paper presents a deterministic CSTP (DCSTP) structure that consists of a DCSTP chain and jumping logic, to attain high fault coverage with low area overhead. Experimental results on ISCAS'89 benchmarks show that 100% fault coverage can be obtained with low area overhead and CPU time, especially for large circuits.
A deterministic width function model
Directory of Open Access Journals (Sweden)
C. E. Puente
2003-01-01
Full Text Available Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
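A fractal interpolating function of the kind used by the FM procedure can be built from a few interpolation points and vertical scaling factors: affine maps are chosen so that their joint attractor is the graph of a continuous function through the points. The data below are illustrative, not fitted to the Racoon or Brushy creek networks, and the attractor is sampled here with the chaos game for brevity:

```python
import random

random.seed(7)

# Interpolation data (a crude width-function analogue) and vertical scaling
# factors d; |d_i| < 1 guarantees a unique attractor. Values are illustrative.
pts = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]
d = [0.3, -0.3]

def make_maps(pts, d):
    """Affine maps w_i(x, y) = (a*x + e, c*x + d_i*y + f) whose joint
    attractor is the graph of a fractal interpolating function through pts."""
    (x0, y0), (xn, yn) = pts[0], pts[-1]
    maps = []
    for i in range(1, len(pts)):
        (xa, ya), (xb, yb) = pts[i - 1], pts[i]
        a = (xb - xa) / (xn - x0)
        e = xa - a * x0
        c = (yb - ya - d[i - 1] * (yn - y0)) / (xn - x0)
        f = ya - c * x0 - d[i - 1] * y0
        maps.append(lambda x, y, a=a, e=e, c=c, f=f, di=d[i - 1]:
                    (a * x + e, c * x + di * y + f))
    return maps

maps = make_maps(pts, d)

# Chaos game: iterating randomly chosen maps settles points onto the graph.
x, y = 0.0, 0.0
graph = []
for _ in range(20000):
    x, y = random.choice(maps)(x, y)
    graph.append((x, y))
print(len(graph), min(p[0] for p in graph), max(p[0] for p in graph))
```

By construction, each map sends the endpoints of the full interval onto consecutive interpolation points, which is what forces the attractor to pass through the data while the `d` factors control its texture.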
International Nuclear Information System (INIS)
Objective: To evaluate the benefits, efficacy and safety of local cervical plexus block in the performance of carotid endarterectomy, in the absence of sophisticated cerebral perfusion monitoring. Place and Duration of Study: This study was carried out at Combined Military Hospital (CMH) Lahore, Pakistan from January 2012 to May 2013. Study Design: Quasi-experimental study. Patients and Methods: A total of 45 cases of ASA II and ASA III physical status were operated for carotid endarterectomy under local block of the cervical plexus. After thorough preanaesthetic assessment, the patients' physical condition was optimized before surgery. Premedication was given with midazolam, and patients were sedated during the operation with small doses of propofol. Local anaesthesia (LA) was completed by injecting bupivacaine in the cervical plexus C2, C3 and C4 areas. During the operation, vital signs and adequacy of cerebral perfusion were monitored by keeping the patient awake and making clinical neurological observations. Verbal contact was maintained with the patient. Breathing patterns and motor power were assessed in the contralateral upper and lower limbs. Postoperatively, patients were interviewed and analgesia during the operation was assessed with a visual analogue scale. The surgeon's satisfaction regarding intraoperative analgesia was also noted. Patients who required added sedation or local anaesthetic agent were also noted. The average duration of surgery was two hours and the average hospital stay was five days. Results: Out of 45 patients, 37 patients (82%) had smooth and comfortable anaesthesia and analgesia. In only 1 patient (2.2%) LA had to be converted into general anaesthesia (GA). In 3 cases (7%) LA was supplemented. One patient (2.2%) developed hoarseness and difficulty in breathing and 1 patient (2.2%) developed hemiparesis intra-operatively; while 1 patient (2.2%) developed hypotension in the immediate postoperative period. One patient (2.2%) developed haematoma at infiltration
DEFF Research Database (Denmark)
Bertl, Kristina; Gotfredsen, Klaus; Jensen, Simon S;
2016-01-01
OBJECTIVES: To report two cases of adverse reaction after mucosal hyaluronan (HY) injection around implant-supported crowns, with the aim to augment the missing interdental papilla. MATERIAL AND METHODS: Two patients with single, non-neighbouring, implants in the anterior maxilla, who were treated...... within the frames of a randomized controlled clinical trial testing the effectiveness of HY gel injection to reconstruct missing papilla volume at single implants, presented an adverse reaction. Injection of HY was performed bilaterally using a 3-step technique: (i) creation of a reservoir in the mucosa...... directly above the mucogingival junction, (ii) injection into the attached gingiva/mucosa below the missing papilla, and (iii) injection 2-3 mm apically to the papilla tip. The whole-injection session was repeated once after approximately 4 weeks. RESULTS: Both patients presented with swelling and extreme...
Central limit behavior of deterministic dynamical systems
Tirnakli, Ugur; Beck, Christian; Tsallis, Constantino
2007-04-01
We investigate the probability density of rescaled sums of iterates of deterministic dynamical systems, a problem relevant for many complex physical systems consisting of dependent random variables. A central limit theorem (CLT) is valid only if the dynamical system under consideration is sufficiently mixing. For the fully developed logistic map and a cubic map we analytically calculate the leading-order corrections to the CLT if only a finite number of iterates is added and rescaled, and find excellent agreement with numerical experiments. At the critical point of period doubling accumulation, a CLT is not valid anymore due to strong temporal correlations between the iterates. Nevertheless, we provide numerical evidence that in this case the probability density converges to a q-Gaussian, thus leading to a power-law generalization of the CLT. The above behavior is universal and independent of the order of the maximum of the map considered, i.e., relevant for large classes of critical dynamical systems.
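The mixing case can be checked numerically. For the fully developed logistic map x → 1 − 2x², iterates are pairwise uncorrelated under the symmetric invariant density, so rescaled sums should show mean near 0 and variance near ⟨x²⟩ = 1/2. Sample sizes and iteration counts below are arbitrary choices, and this sketch only probes the Gaussian regime, not the q-Gaussian critical case:

```python
import math
import random

def logistic_sums(n_iter=200, n_samples=5000):
    """Rescaled sums of iterates of the fully developed logistic map
    x -> 1 - 2*x^2 on [-1, 1], starting from independent random initial
    conditions relaxed onto the invariant density."""
    random.seed(42)
    sums = []
    for _ in range(n_samples):
        x = random.uniform(-1.0, 1.0)
        for _ in range(50):             # discard a transient
            x = 1.0 - 2.0 * x * x
        s = 0.0
        for _ in range(n_iter):
            x = 1.0 - 2.0 * x * x
            s += x
        sums.append(s / math.sqrt(n_iter))
    return sums

sums = logistic_sums()
mean = sum(sums) / len(sums)
var = sum((s - mean) ** 2 for s in sums) / len(sums)
print(mean, var)   # near 0 and near 0.5 for this strongly mixing map
```

A histogram of `sums` approximates a Gaussian here; repeating the experiment at the Feigenbaum point (where the paper reports q-Gaussian behavior) requires the period-doubling-accumulation parameter instead and is not attempted in this sketch.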
Ooi, Adrian; Ng, Jonathan; Chui, Christopher; Goh, Terence; Tan, Bien Keem
2016-01-01
Background. Injuries to the elbow have led to consequences varying from significant limitation in function to loss of the entire upper limb. Soft tissue reconstruction with durable and pliable coverage, balanced with the ability to mobilize the joint early to optimize rehabilitation outcomes, is paramount. Methods. Methods of flap reconstruction have evolved from local and pedicled flaps to perforator-based flaps and free tissue transfer. Here we performed a review of 20 patients who have undergone flap reconstruction of the elbow at our institution. Results. 20 consecutive patients were identified and included in this study. Flap types include local (n = 5), regional pedicled (n = 7), and free (n = 8) flaps. The average size of defect was 138 cm2 (range 36–420 cm2). There were no flap failures in our series, and, at follow-up, the average range of movement of elbow flexion was 100°. Discussion. While the pedicled latissimus dorsi flap is the workhorse for elbow soft tissue coverage, advancements in microvascular knowledge and surgery have brought about great benefit, with the use of perforator flaps and free tissue transfer for wound coverage. Conclusion. We present here our case series on elbow reconstruction and an abbreviated algorithm on flap choice, highlighting our decision making process in the selection of a safe flap for soft tissue elbow reconstruction. PMID:27313886
Piecewise deterministic Markov processes: an analytic approach
Alkurdi, Taleb Salameh Odeh
2013-01-01
The subject of this thesis, piecewise deterministic Markov processes, an analytic approach, is on the border between analysis and probability theory. Such processes can either be viewed as random perturbations of deterministic dynamical systems in an impulsive fashion, or as a particular kind of stochastic process in continuous time in which parts of the sample trajectories are deterministic. Accordingly, questions concerning these processes may be approached starting from either side. The a...
Directory of Open Access Journals (Sweden)
Minas D. Leventis
2016-01-01
Full Text Available Ridge preservation measures, which include the filling of extraction sockets with bone substitutes, have been shown to reduce ridge resorption, while methods that do not require primary soft tissue closure minimize patient morbidity and decrease surgical time and cost. In a case series of 10 patients requiring single extraction, in situ hardening beta-tricalcium phosphate (β-TCP) granules coated with poly(lactic-co-glycolic acid) (PLGA) were utilized as a grafting material that does not necessitate primary wound closure. After 4 months, clinical observations revealed excellent soft tissue healing without loss of attached gingiva in all cases. At reentry for implant placement, bone core biopsies were obtained and primary implant stability was measured by final seating torque and resonance frequency analysis. Histological and histomorphometrical analysis revealed pronounced bone regeneration (24.4 ± 7.9% new bone) in parallel to the resorption of the grafting material (12.9 ± 7.7% graft material) while high levels of primary implant stability were recorded. Within the limits of this case series, the results suggest that β-TCP coated with polylactide can support new bone formation at postextraction sockets, while the properties of the material improve the handling and produce a stable and porous bone substitute scaffold in situ, facilitating the application of noninvasive surgical techniques.
Leventis, Minas D.; Fairbairn, Peter; Kakar, Ashish; Leventis, Angelos D.; Margaritis, Vasileios; Lückerath, Walter; Horowitz, Robert A.; Rao, Bappanadu H.; Lindner, Annette; Nagursky, Heiner
2016-01-01
Ridge preservation measures, which include the filling of extraction sockets with bone substitutes, have been shown to reduce ridge resorption, while methods that do not require primary soft tissue closure minimize patient morbidity and decrease surgical time and cost. In a case series of 10 patients requiring single extraction, in situ hardening beta-tricalcium phosphate (β-TCP) granules coated with poly(lactic-co-glycolic acid) (PLGA) were utilized as a grafting material that does not necessitate primary wound closure. After 4 months, clinical observations revealed excellent soft tissue healing without loss of attached gingiva in all cases. At reentry for implant placement, bone core biopsies were obtained and primary implant stability was measured by final seating torque and resonance frequency analysis. Histological and histomorphometrical analysis revealed pronounced bone regeneration (24.4 ± 7.9% new bone) in parallel to the resorption of the grafting material (12.9 ± 7.7% graft material) while high levels of primary implant stability were recorded. Within the limits of this case series, the results suggest that β-TCP coated with polylactide can support new bone formation at postextraction sockets, while the properties of the material improve the handling and produce a stable and porous bone substitute scaffold in situ, facilitating the application of noninvasive surgical techniques. PMID:27190516
Weidenbach, C.
1994-01-01
Minimal resolution restricts the applicability of resolution and factorization to minimal literals. Minimality is an abstract criterion. It is shown that if the minimality criterion satisfies certain properties minimal resolution is sound and complete. Hyper resolution, ordered resolution and lock resolution are known instances of minimal resolution. We also introduce new instances of the general completeness result, correct some mistakes in existing literature and give some general redundanc...
Integrated Deterministic-Probabilistic Safety Assessment Methodologies
Energy Technology Data Exchange (ETDEWEB)
Kudinov, P.; Vorobyev, Y.; Sanchez-Perea, M.; Queral, C.; Jimenez Varas, G.; Rebollo, M. J.; Mena, L.; Gomez-Magin, J.
2014-02-01
IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address respective sources of uncertainties, enabling risk-informed decision making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. failures of the equipment, human actions, stochastic physical phenomena) and deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)
Integrated Deterministic-Probabilistic Safety Assessment Methodologies
International Nuclear Information System (INIS)
IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address respective sources of uncertainties, enabling risk-informed decision making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. failures of the equipment, human actions, stochastic physical phenomena) and deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)
The Deterministic Dendritic Cell Algorithm
Greensmith, Julie
2010-01-01
The Dendritic Cell Algorithm is an immune-inspired algorithm originally based on the function of natural dendritic cells. The original instantiation of the algorithm is highly stochastic. While the performance of the algorithm is good when applied to large real-time datasets, it is difficult to analyse due to the number of random-based elements. In this paper a deterministic version of the algorithm is proposed, implemented and tested using a port scan dataset to provide a controllable system. This version consists of a controllable number of parameters, which are experimented with in this paper. In addition the effects of the use of time windows and of variation in the number of cells are examined, both of which are shown to influence the algorithm. Finally a novel metric for the assessment of the algorithm's output is introduced and proves to be more sensitive than the metric used with the original Dendritic Cell Algorithm.
Survivability of Deterministic Dynamical Systems.
Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen
2016-01-01
The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: Given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states. We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures. PMID:27405955
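The survivability measure described above lends itself to a direct Monte Carlo estimate: sample random initial conditions, integrate the deterministic flow over the transient, and count the trajectories that never leave the region of desirable states. A minimal sketch under illustrative assumptions (the one-dimensional dynamics and region below are stand-ins, not the models from the paper):

```python
import random

def survivability(f, region, n_samples=2000, dt=0.01, steps=500, seed=0):
    """Monte Carlo estimate of survivability: the fraction of random
    initial conditions whose transient stays inside the desirable region."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_samples):
        # draw a random initial condition inside the region of interest
        x = rng.uniform(-1.0, 1.0)
        ok = True
        for _ in range(steps):
            x += dt * f(x)       # explicit Euler step of the deterministic flow
            if not region(x):    # trajectory left the desirable region
                ok = False
                break
        if ok:
            survived += 1
    return survived / n_samples

# Example: the linear system dx/dt = -x relaxes toward 0, so every
# trajectory started in [-1, 1] stays inside the region |x| <= 1.
p = survivability(lambda x: -x, lambda x: abs(x) <= 1.0)
```

An unstable system started in the same region would instead score close to zero, which is the sense in which survivability distinguishes transient behaviour that asymptotic measures miss.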
Zhang, Z. J.; Man, Z. X.
2004-01-01
Several theoretical Deterministic Secure Direct Bidirectional Communication protocols are generalized to improve their capacities by introducing the superdense-coding in the case of high-dimension quantum states.
Directory of Open Access Journals (Sweden)
Savaş Karyağar
2013-04-01
Full Text Available Objective: In this study, our aim was to assess the efficiency of gamma probe guided minimally invasive parathyroidectomy (GP-MIP), conducted without intra-operative quick parathyroid hormone (QPTH) measurement, in cases of solitary parathyroid adenoma (SPA) detected with USG and dual phase 99mTc-MIBI parathyroid scintigraphy (PS) in the preoperative period. Material and Methods: This clinical study was performed in 31 SPA patients (27 female, 4 male; mean age 51±11 years) between February 2006 and January 2009. All patients were operated within 30 days after the detection of the SPA with dual phase 99mTc-MIBI PS and USG. The GP-MIP was done 90-120 min after the iv injection of 740 MBq 99mTc-MIBI. In all cases except one, the GP-MIP was performed under local anesthesia; in the remaining patient, general anesthesia was chosen owing to the large size of the SPA. Results: The operation time was 30-60 min (mean 38.2±7 min). On the first postoperative day there was a more than 50% decrease in PTH levels in all patients, and all but one had normal serum calcium levels. Transient hypocalcemia was detected in one patient. Conclusion: GP-MIP without intra-operative QPTH measurement is a suitable method for the surgical treatment of SPA detected by dual phase 99mTc-MIBI PS and USG.
Piecewise deterministic Markov processes : an analytic approach
Alkurdi, Taleb Salameh Odeh
2013-01-01
The subject of this thesis, piecewise deterministic Markov processes, an analytic approach, is on the border between analysis and probability theory. Such processes can either be viewed as random perturbations of deterministic dynamical systems in an impulsive fashion, or as a particular kind of sto
Control rod worth calculations using deterministic and stochastic methods
Energy Technology Data Exchange (ETDEWEB)
Varvayanni, M. [NCSR ' DEMOKRITOS' , PO Box 60228, 15310 Aghia Paraskevi (Greece); Savva, P., E-mail: melina@ipta.demokritos.g [NCSR ' DEMOKRITOS' , PO Box 60228, 15310 Aghia Paraskevi (Greece); Catsaros, N. [NCSR ' DEMOKRITOS' , PO Box 60228, 15310 Aghia Paraskevi (Greece)
2009-11-15
Knowledge of the efficiency of a control rod to absorb excess reactivity in a nuclear reactor, i.e. knowledge of its reactivity worth, is very important from many points of view. These include the analysis and the assessment of the shutdown margin of new core configurations (upgrade, conversion, refuelling, etc.) as well as several operational needs, such as calibration of the control rods, e.g. when reactivity insertion experiments are planned. The control rod worth can be assessed either experimentally or theoretically, mainly through the utilization of neutronic codes. In the present work two different theoretical approaches, a deterministic and a stochastic one, are used for the estimation of the integral and the differential worth of two control rods utilized in the Greek Research Reactor (GRR-1). For the deterministic approach the neutronics code system SCALE (modules NITAWL/XSDRNPM) and CITATION is used, while the stochastic approach uses the Monte Carlo code TRIPOLI. Both approaches follow the procedure of reactivity insertion steps and their results are tested against measurements conducted in the reactor. The goal of this work is to examine the capability of a deterministic code system to reliably simulate the worth of a control rod, based also on comparisons with the detailed Monte Carlo simulation, while various options are tested with respect to the reliability of the deterministic results.
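The worth bookkeeping behind such studies is simple: reactivity is computed from the effective multiplication factor k_eff in each rod configuration, and the rod worth is the reactivity difference between configurations. A minimal sketch (the k_eff values are illustrative, not results from the GRR-1 study):

```python
def reactivity_pcm(k_eff):
    """Reactivity rho = (k_eff - 1) / k_eff, expressed in pcm (1e-5)."""
    return (k_eff - 1.0) / k_eff * 1e5

# Integral rod worth = reactivity change between rod fully out and fully in
# (illustrative k_eff values):
k_out, k_in = 1.00500, 0.99000
worth = reactivity_pcm(k_out) - reactivity_pcm(k_in)
```

A differential worth curve is obtained the same way, from k_eff computed at successive insertion steps.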
Deterministic transformations of bipartite pure states
International Nuclear Information System (INIS)
Highlights: • A new method for deterministic bipartite entanglement transformation is presented. • Solution in lower dimensions is used to obtain transformation in higher dimensions. • Transformation of states in 3×3 dimensions by a single measurement is presented. • Transformation of states in n×n dimensions by three-outcome measurements is presented. - Abstract: We propose an explicit protocol for the deterministic transformations of bipartite pure states in any dimension using deterministic transformations in lower dimensions. As an example, explicit solutions for the deterministic transformations of 3⊗3 pure states by a single measurement are obtained, and an explicit protocol for the deterministic transformations of n⊗n pure states by three-outcome measurements is presented
Deterministic transformations of bipartite pure states
Energy Technology Data Exchange (ETDEWEB)
Torun, Gokhan, E-mail: torung@itu.edu.tr; Yildiz, Ali, E-mail: yildizali2@itu.edu.tr
2015-01-23
Highlights: • A new method for deterministic bipartite entanglement transformation is presented. • Solution in lower dimensions is used to obtain transformation in higher dimensions. • Transformation of states in 3×3 dimensions by a single measurement is presented. • Transformation of states in n×n dimensions by three-outcome measurements is presented. - Abstract: We propose an explicit protocol for the deterministic transformations of bipartite pure states in any dimension using deterministic transformations in lower dimensions. As an example, explicit solutions for the deterministic transformations of 3⊗3 pure states by a single measurement are obtained, and an explicit protocol for the deterministic transformations of n⊗n pure states by three-outcome measurements is presented.
Constructing stochastic models from deterministic process equations by propensity adjustment
Directory of Open Access Journals (Sweden)
Wu Jialiang
2011-11-01
Full Text Available Abstract Background Gillespie's stochastic simulation algorithm (SSA) for chemical reactions admits three kinds of elementary processes, namely, mass action reactions of 0th, 1st or 2nd order. All other types of reaction processes, for instance those containing non-integer kinetic orders or following other types of kinetic laws, are assumed to be convertible to one of the three elementary kinds, so that SSA can validly be applied. However, the conversion to elementary reactions is often difficult, if not impossible. Within deterministic contexts, a strategy of model reduction is often used. Such a reduction simplifies the actual system of reactions by merging or approximating intermediate steps and omitting reactants such as transient complexes. It would be valuable to adopt a similar reduction strategy in stochastic modelling. Indeed, efforts have been devoted to manipulating the chemical master equation (CME) in order to achieve a proper propensity function for a reduced stochastic system. However, manipulations of CME are almost always complicated, and successes have been limited to relatively simple cases. Results We propose a rather general strategy for converting a deterministic process model into a corresponding stochastic model and characterize the mathematical connections between the two. The deterministic framework is assumed to be a generalized mass action system and the stochastic analogue is in the format of the chemical master equation. The analysis identifies situations: where a direct conversion is valid; where internal noise affecting the system needs to be taken into account; and where the propensity function must be mathematically adjusted. The conversion from deterministic to stochastic models is illustrated with several representative examples, including reversible reactions with feedback controls, Michaelis-Menten enzyme kinetics, a genetic regulatory motif, and stochastic focusing. Conclusions The construction of a stochastic
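For reference, the baseline SSA over elementary mass-action reactions, which the propensity-adjustment strategy above generalizes to reduced models, can be sketched in a few lines (the pure-decay example is illustrative, not one of the paper's case studies):

```python
import math, random

def gillespie(x, reactions, t_end, seed=0):
    """Basic Gillespie SSA. `x` is the state (list of species counts);
    each reaction is a pair (propensity_fn, state_change_vector)."""
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, list(x))]
    while t < t_end:
        props = [a(x) for a, _ in reactions]
        a0 = sum(props)
        if a0 <= 0.0:                        # no reaction can fire: frozen
            break
        t += -math.log(rng.random()) / a0    # exponential waiting time
        # choose which reaction fires, with probability prop. to propensity
        r, acc = rng.random() * a0, 0.0
        for a, change in reactions:
            acc += a(x)
            if r <= acc:
                x = [xi + ci for xi, ci in zip(x, change)]
                break
        traj.append((t, list(x)))
    return traj

# Pure decay A -> 0 at rate c = 1.0 per molecule, starting from 50 molecules:
traj = gillespie([50], [(lambda s: 1.0 * s[0], [-1])], t_end=100.0)
final = traj[-1][1][0]
```

By t = 100 the population has decayed to extinction with overwhelming probability, after which the total propensity is zero and the loop stops.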
Bajc, Iztok; Hecht, Frédéric; Žumer, Slobodan
2016-09-01
This paper presents a 3D mesh adaptivity strategy on unstructured tetrahedral meshes by a posteriori error estimates based on metrics derived from the Hessian of a solution. The study is made on the case of a nonlinear finite element minimization scheme for the Landau-de Gennes free energy functional of nematic liquid crystals. Newton's iteration for tensor fields is employed, with the steepest descent method possibly stepping in. Aspects of driving mesh adaptivity within the nonlinear scheme are considered. The algorithmic performance is found to depend on at least two factors: when to trigger each single mesh adaptation, and the precision of the correlated remeshing. Each factor is represented by a parameter, with its values possibly varying for every new mesh adaptation. We empirically show that the time of the overall algorithm convergence can vary considerably when different sequences of parameters are used, thus posing a question about optimality. The extensive testing and debugging done within this work on the simulation of systems of nematic colloids contributed substantially to upgrading the 3D meshing capabilities of an open-source finite element-oriented programming language, as well as of an external 3D remeshing module.
Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht
2010-01-01
Minimal Surfaces is the first volume of a three volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently
Linear systems control deterministic and stochastic methods
Hendricks, Elbert; Sørensen, Paul Haase
2008-01-01
Linear Systems Control provides a very readable graduate text giving a good foundation for reading more rigorous texts. There are multiple examples, problems and solutions. This unique book successfully combines stochastic and deterministic methods.
Cell sorting by deterministic cell rolling
Choi, Sungyoung; Karp, Jeffrey M.; Karnik, Rohit
2011-01-01
This communication presents the concept of “deterministic cell rolling”, which leverages transient cell-surface molecular interactions that mediate cell rolling to sort cells with high purity and efficiency in a single step.
Regret Bounds for Deterministic Gaussian Process Bandits
De Freitas, Nando; Smola, Alex; Zoghi, Masrour
2012-01-01
This paper analyses the problem of Gaussian process (GP) bandits with deterministic observations. The analysis uses a branch and bound algorithm that is related to the UCB algorithm of (Srinivas et al., 2010). For GPs with Gaussian observation noise, with variance strictly greater than zero, (Srinivas et al., 2010) proved that the regret vanishes at the approximate rate of $O(\\frac{1}{\\sqrt{t}})$, where t is the number of observations. To complement their result, we attack the deterministic c...
A Deterministic and Polynomial Modified Perceptron Algorithm
Directory of Open Access Journals (Sweden)
Olof Barr
2006-01-01
Full Text Available We construct a modified perceptron algorithm that is deterministic, polynomial and also as fast as previously known algorithms. The algorithm runs in time O(mn³ log n log(1/ρ)), where m is the number of examples, n the number of dimensions and ρ is approximately the size of the margin. We also construct a non-deterministic modified perceptron algorithm running in time O(mn² log n log(1/ρ)).
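For contrast with the modified variants above, the classic perceptron update, whose iteration count depends on the margin rather than being polynomial in the input size, can be sketched as follows (toy data, not from the paper):

```python
def perceptron(samples, labels, max_epochs=100):
    """Classic perceptron: finds a separating hyperplane w.x + b > 0 for
    label +1 and < 0 for label -1, if the data are linearly separable."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x, lab in zip(samples, labels):
            if lab * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                # misclassified: nudge the hyperplane toward the example
                w = [wi + lab * xi for wi, xi in zip(w, x)]
                b += lab
                mistakes += 1
        if mistakes == 0:    # converged: all examples correctly classified
            break
    return w, b

# Linearly separable toy data (label = sign of the first coordinate):
X = [(2.0, 1.0), (1.5, -1.0), (-2.0, 0.5), (-1.0, -2.0)]
y = [1, 1, -1, -1]
w, b = perceptron(X, y)
```

On separable data the number of updates is bounded by (R/ρ)², where R bounds the example norms and ρ is the margin, which is the dependence the deterministic polynomial variants remove.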
Deterministic algorithm with agglomerative heuristic for location problems
Kazakovtsev, L.; Stupina, A.
2015-10-01
The authors consider the clustering problem solved with the k-means method and the p-median problem with various distance metrics. The p-median problem, and the k-means problem as its special case, are the most popular models of location theory. They are implemented for solving clustering problems and many practically important logistic problems such as optimal factory or warehouse location, oil or gas wells, optimal drilling for oil offshore, and steam generators in heavy oil fields. The authors propose a new deterministic heuristic algorithm based on ideas of the Information Bottleneck Clustering and genetic algorithms with a greedy heuristic. In this paper, results of running the new algorithm on various data sets are given in comparison with known deterministic and stochastic methods. The new algorithm is shown to be significantly faster than the Information Bottleneck Clustering method while having comparable precision.
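The k-means special case mentioned above becomes fully deterministic once the initialization is fixed; a minimal sketch of Lloyd's iterations with a deterministic seed choice (this is plain k-means, not the paper's agglomerative greedy heuristic):

```python
def kmeans(points, k, iters=50):
    """Lloyd's k-means with a deterministic initialization (first k points),
    so repeated runs on the same data give identical results."""
    centers = [list(p) for p in points[:k]]
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        # update step: move each center to the mean of its cluster
        new_centers = []
        for c, cl in zip(centers, clusters):
            new_centers.append([sum(xs) / len(cl) for xs in zip(*cl)] if cl else c)
        if new_centers == centers:   # converged
            break
        centers = new_centers
    return centers

# Two well-separated 1D groups:
pts = [(0.0,), (1.0,), (0.5,), (10.0,), (11.0,), (10.5,)]
centers = sorted(c[0] for c in kmeans(pts, 2))
```

The result is the local optimum reached from the chosen seeds; smarter deterministic seeding (as in the paper's greedy agglomerative ideas) targets better local optima.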
Comparison of Deterministic and Stochastic Models of the lac Operon Genetic Network
Stamatakis, M.; Mantzaris, N. V.
2009-01-01
The lac operon has been a paradigm for genetic regulation with positive feedback, and several modeling studies have described its dynamics at various levels of detail. However, it has not yet been analyzed how stochasticity can enrich the system's behavior, creating effects that are not observed in the deterministic case. To address this problem we use a comparative approach. We develop a reaction network for the dynamics of the lac operon genetic switch and derive corresponding deterministic...
Linear Finite-Field Deterministic Networks With Many Sources and One Destination
Butt, M. Majid; Caire, Giuseppe; Müller, Ralf R.
2010-01-01
We find the capacity region of linear finite-field deterministic networks with many sources and one destination. Nodes in the network are subject to interference and broadcast constraints, specified by the linear finite-field deterministic model. Each node can inject its own information as well as relay other nodes' information. We show that the capacity region coincides with the cut-set region. Also, for a specific case of correlated sources we provide necessary and sufficient conditions for...
Risk-based and deterministic regulation
International Nuclear Information System (INIS)
Both risk-based and deterministic methods are used for regulating the nuclear industry to protect the public safety and health from undue risk. The deterministic method is one where performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is a need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining the need for using risk-based methods to supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose
Linear embedding of free energy minimization
Moussa, Jonathan E.
2016-01-01
Exact free energy minimization is a convex optimization problem that is usually approximated with stochastic sampling methods. Deterministic approximations have been less successful because many desirable properties have been difficult to attain. Such properties include the preservation of convexity, lower bounds on free energy, and applicability to systems without subsystem structure. We satisfy all of these properties by embedding free energy minimization into a linear program over energy-r...
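A concrete instance of the convexity claim above: for a discrete system, F(p) = Σᵢ pᵢEᵢ + T Σᵢ pᵢ ln pᵢ is convex in p, and its exact minimizer over the probability simplex is the Gibbs distribution with F* = -T ln Z. A quick numerical check (this illustrates the target problem, not the paper's linear-programming embedding):

```python
import math

def gibbs(energies, T):
    """Closed-form minimizer of F(p) = sum_i p_i E_i + T sum_i p_i ln p_i
    over probability distributions: the Boltzmann/Gibbs distribution."""
    w = [math.exp(-e / T) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

def free_energy(p, energies, T):
    """Mean energy plus T times negative entropy."""
    return (sum(pi * ei for pi, ei in zip(p, energies))
            + T * sum(pi * math.log(pi) for pi in p if pi > 0))

E, T = [0.0, 1.0, 2.0], 0.5
p_star = gibbs(E, T)
f_star = free_energy(p_star, E, T)           # equals -T * ln Z
f_unif = free_energy([1 / 3] * 3, E, T)      # any other distribution is worse
```

Stochastic methods sample this minimizer; the deterministic approximations discussed above instead bound or embed the convex problem directly.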
Deterministic finishing of aspheric optical components
Lambropoulos, Teddy; Fess, Ed; DeFisher, Scott
2013-09-01
Manufacturing aspheric optics can present challenges depending on the complexity of their shape. This is especially true during the finishing stage. To tackle this challenge, OptiPro Systems has developed two technologies for deterministic optical polishing: UltraForm Finishing (UFF) and UltraSmooth Finishing (USF). UFF is a deterministic sub-aperture polishing process that polishes spherical, aspheric, and free form surface geometries. In contrast, the USF process is a deterministic mid to large size aperture polishing process that works with a conforming lap. These two technologies have the ability to tackle a wide range of optical shapes by removing sub-surface damage, removing various mid-spatial frequency artifacts that might be left from a grinding process, and correcting the optic's figure error in a controlled fashion. This presentation will describe these technologies, present performance information as to their capabilities, and show how OptiPro is developing these technologies to push the state of the art in manufacturing.
Effect of Uncertainty on Deterministic Runway Scheduling
Gupta, Gautam; Malik, Waqar; Jung, Yoon C.
2012-01-01
Active runway scheduling involves scheduling departures for takeoffs and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-serve (FCFS) scheme. In particular, the sequence from the deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison is done for both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. The modeling of uncertainty is done in two ways: as equal uncertainty in runway availability for all aircraft, and as uncertainty that increases for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-serve in both system performance and predictability.
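The first-come-first-serve baseline used for comparison is simple to state: aircraft use the runway in ready-time order, each delayed until the required separation from the previous movement has elapsed. A minimal sketch with a single separation value (real schedulers use aircraft-pair-dependent separation matrices and additional constraints):

```python
def fcfs_schedule(ready_times, separation):
    """First-come-first-serve runway schedule: aircraft take the runway in
    ready-time order, each delayed until `separation` seconds after the
    previous movement."""
    order = sorted(range(len(ready_times)), key=lambda i: ready_times[i])
    times, prev = {}, None
    for i in order:
        t = ready_times[i]
        if prev is not None:
            t = max(t, times[prev] + separation)   # enforce separation
        times[i] = t
        prev = i
    return [times[i] for i in range(len(ready_times))]

# Three aircraft ready at t = 0, 10, 70 s with a 60 s separation:
sched = fcfs_schedule([0.0, 10.0, 70.0], 60.0)
```

A sequence-optimizing scheduler differs only in the `order` it commits to, which is exactly what the paper freezes before re-adjusting times under uncertainty.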
Exploiting Deterministic TPG for Path Delay Testing
Institute of Scientific and Technical Information of China (English)
李晓维
2000-01-01
Detection of path delay faults requires two-pattern tests. BIST technique provides a low-cost test solution. This paper proposes an approach to designing a cost-effective deterministic test pattern generator (TPG) for path delay testing. Given a set of pre-generated test-pairs with pre-determined fault coverage, a deterministic TPG is synthesized to apply the given test-pair set in a limited test time. To achieve this objective, configurable linear feedback shift register (LFSR) structures are used. Techniques are developed to synthesize such a TPG, which is used to generate an unordered deterministic test-pair set. The resulting TPG is very efficient in terms of hardware size and speed performance. Simulation of academic benchmark circuits has given good results when compared to alternative solutions.
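The configurable LFSR structures mentioned above generate pattern sequences deterministically from a seed; a minimal software model of a Fibonacci LFSR (the 4-bit tap choice here is a textbook maximal-length example, not a configuration from the paper):

```python
def lfsr_patterns(seed, taps, width, count):
    """Fibonacci LFSR as a simple test pattern generator: each step shifts
    left and feeds in the XOR of the tapped bits (bit 0 = LSB)."""
    state, out = seed, []
    for _ in range(count):
        out.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1           # XOR of the tapped bits
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return out

# A maximal-length 4-bit configuration (taps at bits 3 and 2) cycles
# through all 15 nonzero states before repeating:
pats = lfsr_patterns(seed=0b0001, taps=(3, 2), width=4, count=15)
```

Reseeding or reconfiguring the feedback taps, as in the paper's configurable LFSR, changes which deterministic pattern sequence is applied.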
Nine challenges for deterministic epidemic models
Directory of Open Access Journals (Sweden)
Mick Roberts
2015-03-01
Full Text Available Deterministic models have a long history of being applied to the study of infectious disease epidemiology. We highlight and discuss nine challenges in this area. The first two concern the endemic equilibrium and its stability. We indicate the need for models that describe multi-strain infections, infections with time-varying infectivity, and those where superinfection is possible. We then consider the need for advances in spatial epidemic models, and draw attention to the lack of models that explore the relationship between communicable and non-communicable diseases. The final two challenges concern the uses and limitations of deterministic models as approximations to stochastic systems.
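The endemic-equilibrium and final-size questions raised above are usually posed for compartmental models such as SIR; a minimal deterministic sketch with Euler integration (the parameter values are illustrative):

```python
def sir(beta, gamma, s0, i0, days, dt=0.01):
    """Euler integration of the deterministic SIR model:
    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I,
    with S, I, R as fractions of the population."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # new infections this step
        new_rec = gamma * i * dt      # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# R0 = beta/gamma = 3: a large outbreak that eventually burns out,
# leaving a small susceptible fraction given by the final-size relation
s, i, r = sir(beta=0.3, gamma=0.1, s0=0.999, i0=0.001, days=365)
```

Stochastic counterparts of the same model can go extinct early or fluctuate around equilibrium, which is the approximation gap the final challenge in the abstract refers to.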
A new deterministic model for chaotic reversals
Gissinger, Christophe
2011-01-01
In this article, we present a new chaotic system of three coupled ordinary differential equations, limited to quadratic terms. A wide variety of dynamical regimes are reported. For some parameters, chaotic reversals of the amplitudes are produced by crisis-induced intermittency, following a mechanism different from what is generally observed in similar deterministic models. Despite its simplicity, this system therefore generates a rich dynamics, able to model more complex physical systems. In particular, a comparison with reversals of the magnetic field of the Earth shows a surprisingly good agreement, and highlights the relevance of deterministic chaos to describe geomagnetic field dynamics.
Introducing Synchronisation in Deterministic Network Models
DEFF Research Database (Denmark)
Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.;
2006-01-01
The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented, leading to the suggestion of suitable network models. An existing model for flow control is presented and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models.
Stochastic versus deterministic systems of differential equations
Ladde, G S
2003-01-01
This peerless reference/text unfurls a unified and systematic study of the two types of mathematical models of dynamic processes-stochastic and deterministic-as placed in the context of systems of stochastic differential equations. Using the tools of variational comparison, generalized variation of constants, and probability distribution as its methodological backbone, Stochastic Versus Deterministic Systems of Differential Equations addresses questions relating to the need for a stochastic mathematical model and the between-model contrast that arises in the absence of random disturbances/flu
Deterministic doping and the exploration of spin qubits
Energy Technology Data Exchange (ETDEWEB)
Schenkel, T.; Weis, C. D.; Persaud, A. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lo, C. C. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720 (United States); London Centre for Nanotechnology (United Kingdom); Chakarov, I. [Global Foundries, Malta, NY 12020 (United States); Schneider, D. H. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Bokor, J. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720 (United States)
2015-01-09
Deterministic doping by single ion implantation, the precise placement of individual dopant atoms into devices, is a path for the realization of quantum computer test structures where quantum bits (qubits) are based on electron and nuclear spins of donors or color centers. We present a donor - quantum dot type qubit architecture and discuss the use of medium and highly charged ions extracted from an Electron Beam Ion Trap/Source (EBIT/S) for deterministic doping. EBIT/S are attractive for the formation of qubit test structures due to the relatively low emittance of ion beams from an EBIT/S and due to the potential energy associated with the ions' charge state, which can aid single ion impact detection. Following ion implantation, dopant specific diffusion mechanisms during device processing affect the placement accuracy and coherence properties of donor spin qubits. For bismuth, range straggling is minimal but its relatively low solubility in silicon limits thermal budgets for the formation of qubit test structures.
Shock-induced explosive chemistry in a deterministic sample configuration.
Energy Technology Data Exchange (ETDEWEB)
Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph, III (,; ); Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith
2005-10-01
Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.
Deterministic doping and the exploration of spin qubits
International Nuclear Information System (INIS)
Deterministic doping by single ion implantation, the precise placement of individual dopant atoms into devices, is a path for the realization of quantum computer test structures where quantum bits (qubits) are based on electron and nuclear spins of donors or color centers. We present a donor - quantum dot type qubit architecture and discuss the use of medium and highly charged ions extracted from an Electron Beam Ion Trap/Source (EBIT/S) for deterministic doping. EBIT/S are attractive for the formation of qubit test structures due to the relatively low emittance of ion beams from an EBIT/S and due to the potential energy associated with the ions' charge state, which can aid single ion impact detection. Following ion implantation, dopant specific diffusion mechanisms during device processing affect the placement accuracy and coherence properties of donor spin qubits. For bismuth, range straggling is minimal but its relatively low solubility in silicon limits thermal budgets for the formation of qubit test structures
Energy Technology Data Exchange (ETDEWEB)
Hwangbo, Soonho; Lee, In-Beum [POSTECH, Pohang (Korea, Republic of); Han, Jeehoon [University of Wisconsin-Madison, Madison (United States)
2014-10-15
Many networks operate within a large-scale industrial complex. Each network meets its demand through production or transportation of the materials needed by the companies in the network; a network either produces materials directly to satisfy demand or purchases them from outside, owing to demand uncertainty, financial factors, and so on. Utility networks and hydrogen networks in particular are typical and major networks in a large-scale industrial complex. Many studies have focused mainly on minimizing the total cost or optimizing the network structure, but little research has tried to build an integrated model connecting the utility network and the hydrogen network. In this study, a deterministic mixed integer linear programming model is developed for integrating the utility network and the hydrogen network. A steam methane reforming process is needed to combine the two networks: steam vented from the utility network serves as its raw material, and the hydrogen it produces enters the hydrogen network to meet demand there. The proposed model can suggest an optimized configuration and blueprint for the integrated network and calculate the optimal total cost. The capability of the proposed model is tested by applying it to the Yeosu industrial complex in Korea, which contains one of the biggest petrochemical complexes and has served as the data basis of various studies. The case study shows that the integrated network model yields better optima than previous results obtained by studying the utility network and the hydrogen network individually.
International Nuclear Information System (INIS)
Many networks are constructed in a large-scale industrial complex. Each network meets its demands through production or transportation of the materials needed by the companies in the network. A network either directly produces materials to satisfy a company's demand or purchases them from outside, owing to demand uncertainty, financial factors, and so on. Utility networks and hydrogen networks, in particular, are typical and major networks in a large-scale industrial complex. Many studies have focused mainly on minimizing the total cost or optimizing the network structure, but little research has attempted an integrated model that connects the utility network and the hydrogen network. In this study, a deterministic mixed-integer linear programming model is developed for integrating the utility network and the hydrogen network. A Steam-Methane Reforming (SMR) process is necessary for combining the two networks. Hydrogen produced by the SMR process, whose raw material is steam vented from the utility network, enters the hydrogen network and fulfills its needs. The proposed model can suggest an optimized case for the integrated network, an optimized blueprint, and the optimal total cost. The capability of the proposed model is tested by applying it to the Yeosu industrial complex in Korea, which hosts one of the biggest petrochemical complexes and whose data underlie various papers. In a case study, the integrated network model yields better solutions than previous results obtained by studying the utility network and the hydrogen network individually.
Institute of Scientific and Technical Information of China (English)
Mohammadi Mohammad; Hossaini Mohammad Farouq; Mirzapour Bahman; Hajiantilaki Nabiollah
2015-01-01
In order to increase the safety of the working environment and decrease the unwanted costs related to overbreak in tunnel excavation projects, it is necessary to minimize the overbreak percentage. Thus, based on regression analysis and a fuzzy inference system, this paper develops predictive models to estimate overbreak caused by blasting at the Alborz Tunnel. To develop the models, 202 datasets were utilized, out of which 182 were used for constructing the models. To validate and compare the obtained results, the determination coefficient (R2) and root mean square error (RMSE) indexes were chosen. For the fuzzy model, R2 and RMSE are 0.96 and 0.55 respectively, whereas for the regression model they are 0.41 and 1.75 respectively, proving that the fuzzy predictor performs significantly better than the statistical method. Using the developed fuzzy model, the percentage of overbreak was minimized in the Alborz Tunnel.
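The two validation indexes used above can be computed directly. The overbreak values below are hypothetical; only the R2 and RMSE formulas follow standard practice:

```python
import math

def r2_rmse(actual, predicted):
    """Determination coefficient (R^2) and root mean square error (RMSE),
    the two indexes used to compare the fuzzy and regression predictors."""
    n = len(actual)
    mean_a = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r2 = 1.0 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    return r2, rmse

# Illustrative (invented) overbreak percentages: measured vs. predicted.
measured  = [4.0, 6.5, 5.0, 8.0, 7.0]
predicted = [4.2, 6.0, 5.5, 7.6, 7.2]
r2, rmse = r2_rmse(measured, predicted)
```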
DEFF Research Database (Denmark)
Nielsen, Mogens; Rozenberg, Grzegorz; Salomaa, Arto
1974-01-01
The use of nonterminals versus the use of homomorphisms of different kinds in the basic types of deterministic OL-systems is studied. A rather surprising result is that in some cases the use of nonterminals produces a comparatively low generative capacity, whereas in some other cases the use of n...
Chaotic behaviour of deterministic systems
International Nuclear Information System (INIS)
In these Proceedings many dissipative as well as conservative systems are discussed. In some cases one would like to understand chaotic behavior in order to avoid it, e.g. for certain applications of mechanical and electrical engineering, celestial mechanics (satellite orbits), population dynamics, the storage rings of high energy physics, hydrodynamics, plasma physics (fusion), biophysics, etc. In other cases one would like to obtain chaotic behavior, e.g. for certain applications of classical (and quantum) statistical mechanics, hydrodynamics (turbulence), chemical kinetics, etc. Many diverse and notorious problems of Nonlinear Dynamics exist in one or another of those fields. The basic mathematical tools used in the study of chaotic behavior are introduced in the opening lecture. Chaotic behavior in conservative systems is discussed. For the simplest class of dissipative systems, 'Mappings of the Interval', a well developed theory is treated. More complicated dissipative systems with many degrees of freedom can often be reduced thanks to bifurcation theory. The mathematical basis for chaotic behavior in difference and differential equations is treated in detail. Finally, the lectures on full-fledged real turbulence show how many outstanding problems still remain to be explained from first principles. Nevertheless it is exciting to detect progress on these old problems of chaotic behavior and see some agreement with experiment. (Auth.)
Topologically Ordered Graph Clustering via Deterministic Annealing
Rossi, Fabrice; Villa-Vialaneix, Nathalie
2009-01-01
This paper proposes an organized generalization of Newman and Girvan's modularity measure for graph clustering. Optimized via a deterministic annealing scheme, this measure produces topologically ordered graph partitions that lead to faithful and readable graph representations on a two-dimensional SOM-like planar grid.
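Newman and Girvan's modularity, the measure being generalized above, can be computed for a given partition as follows. This is a plain implementation of the standard formula; the graph and clustering are illustrative:

```python
def modularity(edges, community):
    """Newman-Girvan modularity Q for a partition of an undirected graph.
    edges: list of (u, v) pairs; community: dict mapping node -> cluster id.
    Q = sum over clusters c of (e_c / m - (d_c / 2m)^2), where e_c counts
    within-cluster edges, d_c sums degrees in c, and m is the edge count."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    e_in, d_tot = {}, {}
    for u, v in edges:
        if community[u] == community[v]:
            e_in[community[u]] = e_in.get(community[u], 0) + 1
    for node, d in deg.items():
        d_tot[community[node]] = d_tot.get(community[node], 0) + d
    return sum(e_in.get(c, 0) / m - (d_tot.get(c, 0) / (2 * m)) ** 2
               for c in set(community.values()))

# Two triangles joined by a single bridge edge, clustered into the triangles.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
comm = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
q = modularity(edges, comm)
```

Q is near 0 for trivial or random partitions and grows toward 1 for strongly modular structure; a deterministic annealing scheme searches partitions while gradually sharpening this objective.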
Deterministic geologic processes and stochastic modeling
International Nuclear Information System (INIS)
Recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial distribution of measured values and geostatistical measures of spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. These deterministic features have their origin in the complex, yet logical, interplay of a number of deterministic geologic processes, including magmatic evolution; volcanic eruption, transport, and emplacement; post-emplacement cooling and alteration; and late-stage (diagenetic) alteration. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling. It is unlikely that any single representation of physical properties at the site will be suitable for all modeling purposes. Instead, the same underlying physical reality will need to be described many times, each in a manner conducive to assessing specific performance issues.
Deterministic Kalman filtering in a behavioral framework
Fagnani, F; Willems, JC
1997-01-01
The purpose of this paper is to obtain a deterministic version of the Kalman filtering equations. We will use a behavioral description of the plant, specifically, an image representation. The resulting algorithm requires a matrix spectral factorization. We also show that the filter can be implemented...
DETERMINISTIC HOMOGENIZATION OF QUASILINEAR DAMPED HYPERBOLIC EQUATIONS
Institute of Scientific and Technical Information of China (English)
Gabriel Nguetseng; Hubert Nnang; Nils Svanstedt
2011-01-01
Deterministic homogenization is studied for quasilinear monotone hyperbolic problems with a linear damping term. It is shown by the sigma-convergence method that the sequence of solutions to a class of multi-scale highly oscillatory hyperbolic problems converges to the solution to a homogenized quasilinear hyperbolic problem.
Reinforcement learning output feedback NN control using deterministic learning technique.
Xu, Bin; Yang, Chenguang; Shi, Zhongke
2014-03-01
In this brief, a novel adaptive-critic-based neural network (NN) controller is investigated for nonlinear pure-feedback systems. The controller design is based on the transformed predictor form, and the actor-critic NN control architecture includes two NNs: the critic NN is used to approximate the strategic utility function, and the action NN is employed to minimize both the strategic utility function and the tracking error. A deterministic learning technique has been employed to guarantee that the partial persistent excitation condition of internal states is satisfied during tracking control to a periodic reference orbit. The uniform ultimate boundedness of closed-loop signals is shown via Lyapunov stability analysis. Simulation results are presented to demonstrate the effectiveness of the proposed control. PMID:24807456
Spatial continuity measures for probabilistic and deterministic geostatistics
Energy Technology Data Exchange (ETDEWEB)
Isaaks, E.H.; Srivastava, R.M.
1988-05-01
Geostatistics has traditionally used a probabilistic framework, one in which expected values or ensemble averages are of primary importance. The less familiar deterministic framework views geostatistical problems in terms of spatial integrals. This paper outlines the two frameworks and examines the issue of which spatial continuity measure, the covariance C(h) or the variogram σ(h), is appropriate for each framework. Although C(h) and σ(h) were defined originally in terms of spatial integrals, the convenience of probabilistic notation made the expected value definitions more common. These now classical expected value definitions entail a linear relationship between C(h) and σ(h); the spatial integral definitions do not. In a probabilistic framework, where available sample information is extrapolated to domains other than the one which was sampled, the expected value definitions are appropriate; furthermore, within a probabilistic framework, reasons exist for preferring the variogram to the covariance function. In a deterministic framework, where available sample information is interpolated within the same domain, the spatial integral definitions are appropriate and no reasons are known for preferring the variogram. A case study on a Wiener-Levy process demonstrates differences between the two frameworks and shows that, for most estimation problems, the deterministic viewpoint is more appropriate. Several case studies on real data sets reveal that the sample covariance function reflects the character of spatial continuity better than the sample variogram. From both theoretical and practical considerations, clearly, for most geostatistical problems, direct estimation of the covariance is better than the traditional variogram approach.
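The classical estimators discussed above can be written down directly. The sketch below uses the common expected-value style sample formulas (overall-mean covariance and semivariogram); under second-order stationarity the model quantities satisfy the linear relationship γ(h) = C(0) - C(h) alluded to in the abstract, though the sample versions obey it only approximately. The data values are illustrative:

```python
def sample_covariance(z, h):
    """Classical sample covariance C(h) for a regularly spaced 1-D series,
    using the overall mean (the expected-value style estimator)."""
    n = len(z) - h
    m = sum(z) / len(z)
    return sum((z[i] - m) * (z[i + h] - m) for i in range(n)) / n

def sample_variogram(z, h):
    """Classical sample semivariogram: half the mean squared increment
    between values a lag h apart."""
    n = len(z) - h
    return sum((z[i + h] - z[i]) ** 2 for i in range(n)) / (2 * n)

z = [1.0, 2.0, 4.0, 3.0, 5.0, 4.0, 6.0]   # illustrative transect values
c1 = sample_covariance(z, 1)
g1 = sample_variogram(z, 1)
```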
International Nuclear Information System (INIS)
The primary impediment that prevents nuclear proliferation is the lack of access to fissile materials. Thus, a recognized international objective has been to minimize the use of HEU and reduce the number of locations where HEU is present. Yet, nearing the 30-year anniversary of this objective, the number of HEU-fuelled research facilities in operation remains high, HEU is still being used in large quantities, and significant quantities of HEU are still to be found in a large number of unsecured locations worldwide. This paper identifies the most important indicators for measuring progress of the historical and future national and international efforts for research reactor conversion and decommissioning of vulnerable facilities
Chu, Yi-Zen
2013-01-01
We show how, for certain classes of curved spacetimes, one might obtain its retarded or advanced minimally coupled massless scalar Green's function by using the corresponding Green's functions in the higher dimensional Minkowski spacetime where it is embedded. Analogous statements hold for certain classes of curved Riemannian spaces, with positive definite metrics, which may be embedded in higher dimensional Euclidean spaces. The general formula is applied to (d >= 2)-dimensional de Sitter spacetime, and the scalar Green's function is demonstrated to be sourced by a line emanating infinitesimally close to the origin of the ambient (d+1)-dimensional Minkowski spacetime and piercing orthogonally through the de Sitter hyperboloids of all finite sizes. This method does not require solving the de Sitter wave equation directly. Only the zero mode solution to an ordinary differential equation, the "wave equation" perpendicular to the hyperboloid -- followed by a one dimensional integral -- needs to be evaluated. A t...
Drivelos, Spiros A; Danezis, Georgios P; Haroutounian, Serkos A; Georgiou, Constantinos A
2016-12-15
This study examines the trace and rare earth elemental (REE) fingerprint variations of PDO (Protected Designation of Origin) "Fava Santorinis" over three consecutive harvesting years (2011-2013). Classification of samples in harvesting years was studied by performing discriminant analysis (DA), k nearest neighbours (κ-NN), partial least squares (PLS) analysis and probabilistic neural networks (PNN) using rare earth elements and trace metals determined using ICP-MS. DA performed better than κ-NN, producing 100% discrimination using trace elements and 79% using REEs. PLS was found to be superior to PNN, achieving 99% and 90% classification for trace and REEs, respectively, while PNN achieved 96% and 71% classification for trace and REEs, respectively. The information obtained using REEs did not enhance classification, indicating that REEs vary minimally per harvesting year, providing robust geographical origin discrimination. The results show that seasonal patterns can occur in the elemental composition of "Fava Santorinis", probably reflecting seasonality of climate. PMID:27451177
Existence of optimal nonanticipating controls in piecewise deterministic control problems
Seierstad, Atle
2008-01-01
Optimal nonanticipating controls are shown to exist in nonautonomous piecewise deterministic control problems with hard terminal restrictions. The assumptions needed are completely analogous to those needed to obtain optimal controls in deterministic control problems. The proof is based on well-known results on the existence of deterministic optimal controls.
Deterministic and Nondeterministic Behavior of Earthquakes and Hazard Mitigation Strategy
Kanamori, H.
2014-12-01
Earthquakes exhibit both deterministic and nondeterministic behavior. Deterministic behavior is controlled by length and time scales such as the dimension of seismogenic zones and plate-motion speed. Nondeterministic behavior is controlled by the interaction of many elements, such as asperities, in the system. Some subduction zones have strong deterministic elements which allow forecasts of future seismicity. For example, the forecasts of the 2010 Mw=8.8 Maule, Chile, earthquake and the 2012 Mw=7.6, Costa Rica, earthquake are good examples in which useful forecasts were made within a solid scientific framework using GPS. However, even in these cases, because of the nondeterministic elements, uncertainties are difficult to quantify. In some subduction zones, nondeterministic behavior dominates because of complex plate-boundary structures and defies useful forecasts. The 2011 Mw=9.0 Tohoku-Oki earthquake may be an example in which the physical framework was reasonably well understood, but complex interactions of asperities and insufficient knowledge about the subduction-zone structures led to the unexpected tragic consequence. Despite these difficulties, broadband seismology, GPS, and rapid data processing-telemetry technology can contribute to effective hazard mitigation through a scenario-earthquake approach and real-time warning. A scale-independent relation between M0 (seismic moment) and the source duration, t, can be used for the design of average scenario earthquakes. However, outliers caused by the variation of stress drop, radiation efficiency, and aspect ratio of the rupture plane are often the most hazardous and need to be included in scenario earthquakes. Recent developments in real-time technology help seismologists to cope with, and prepare for, devastating tsunamis and earthquakes. Combining a better understanding of earthquake diversity and modern technology is the key to effective and comprehensive hazard mitigation practices.
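The scale-independent relation between seismic moment M0 and source duration t mentioned above implies t proportional to M0^(1/3). A small sketch, using the standard Mw definition; the duration prefactor is an illustrative assumption, not a value from the abstract:

```python
import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude Mw from seismic moment M0 in N*m
    (standard definition: Mw = (2/3) * (log10(M0) - 9.1))."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

def scenario_duration(m0_newton_meters, c=5.0e-6):
    """Average scenario source duration from the scale-independent
    t ~ M0^(1/3) relation. The prefactor c (seconds per (N*m)^(1/3))
    is an invented round number for illustration only."""
    return c * m0_newton_meters ** (1.0 / 3.0)
```

The cube-root scaling means an eightfold increase in moment only doubles the average source duration; outliers in stress drop or rupture geometry break this average behaviour, which is why they need separate scenario treatment.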
Fan, Yong; Zhao, Yanhui; Pang, Lan; Kang, Yingxing; Kang, Boxiong; Liu, Yongyong; Fu, Jie; Xia, Bowei; Wang, Chen; Zhang, Youcheng
2016-01-01
Laparoscopic pancreatic surgery is one of the most sophisticated and advanced applications of laparoscopy in current surgical practice. The adoption of laparoscopic pancreaticoduodenectomy (LPD) has been relatively slow due to the technical challenges. The aim of this study is to review and characterize our successful LPD experiences in patients with distal bile duct carcinoma, periampullary adenocarcinoma, pancreas head cancer, and duodenal cancer and evaluate the clinical outcomes of LPD for its potential in oncologic surgery applications. We retrospectively analyzed the clinical data from 14 patients who underwent LPD from August 2013 to February 2015 in our institute. None of the 14 cases was converted to open surgery; they included 10 cases of laparoscopic digestive tract reconstruction and 4 cases of open digestive tract reconstruction. There were no deaths during the perioperative period and no case of gastric emptying disorder or postoperative bleeding. The other clinical indexes were comparable to or better than open surgery. Based on our experience, LPD could be potentially safe and feasible for the treatment of early pancreas head cancer, distal bile duct carcinoma, periampullary adenocarcinoma, and duodenal cancer. Mastering the LPD procedure requires technical expertise, but it can be accomplished with a short learning curve. PMID:27124014
Deterministic nonlinear systems a short course
Anishchenko, Vadim S; Strelkova, Galina I
2014-01-01
This text is a short yet complete course on the nonlinear dynamics of deterministic systems. Conceived as a modular set of 15 concise lectures, it reflects the many years of teaching experience of the authors. The lectures treat in turn the fundamental aspects of the theory of dynamical systems, aspects of stability and bifurcations, the theory of deterministic chaos and attractor dimensions, as well as the elements of the theory of Poincare recurrences. Particular attention is paid to the analysis of the generation of periodic, quasiperiodic and chaotic self-sustained oscillations and to the issue of synchronization in such systems. This book is aimed at graduate students and non-specialist researchers with a background in physics, applied mathematics and engineering wishing to enter this exciting field of research.
Understanding deterministic diffusion by correlated random walks
International Nuclear Information System (INIS)
Low-dimensional periodic arrays of scatterers with a moving point particle are ideal models for studying deterministic diffusion. For such systems the diffusion coefficient is typically an irregular function under variation of a control parameter. Here we propose a systematic scheme of how to approximate deterministic diffusion coefficients of this kind in terms of correlated random walks. We apply this approach to two simple examples which are a one-dimensional map on the line and the periodic Lorentz gas. Starting from suitable Green-Kubo formulae we evaluate hierarchies of approximations for their parameter-dependent diffusion coefficients. These approximations converge exactly yielding a straightforward interpretation of the structure of these irregular diffusion coefficients in terms of dynamical correlations. (author)
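The Green-Kubo route described above can be illustrated on the simplest correlated random walk, one with exponentially decaying step autocorrelation. The model and its closed form are textbook material, not the paper's maps:

```python
def green_kubo_diffusion(var, rho, kmax):
    """Truncated Green-Kubo sum D = (1/2) * (C(0) + 2 * sum_{k>=1} C(k))
    for a correlated random walk whose step autocorrelation decays as
    C(k) = var * rho**k. This exponential model is a simple stand-in for
    the dynamical correlations of the paper's deterministic maps."""
    return 0.5 * (var + 2.0 * sum(var * rho ** k for k in range(1, kmax + 1)))

# Closed form of the infinite sum: D = (var / 2) * (1 + rho) / (1 - rho).
d_approx = green_kubo_diffusion(1.0, 0.5, 200)
d_exact = 0.5 * (1.0 + 0.5) / (1.0 - 0.5)
```

Truncating the sum at increasing kmax mirrors the hierarchy of correlated random walk approximations in the abstract: each extra correlation term refines the estimate of the parameter-dependent diffusion coefficient.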
Risk from deterministic effects of ionising radiation
International Nuclear Information System (INIS)
This publication provides a review of information for assessing deterministic effects on human health likely to arise from serious overexposure to ionising radiation. It updates information in previous Board publications NRPB-R226 and NRPB-M246. It constitutes a parallel document to Documents of the NRPB, 4, No. 4 (1993), which deals with stochastic effects. These two documents, together with Documents of the NRPB, 6, No. 1 (1995), which deals specifically with stochastic risk at low dose rates, give the current Board view on all health consequences of exposure to ionising radiation. Little new primary information on deterministic effects has become available in recent years. However, advances in techniques for data analysis have been made and are incorporated in the present report. These are presented in a form suitable for use in modelling the consequences to populations of serious radiological incidents. (author)
Microscopy with a Deterministic Single Ion Source
Jacob, Georg; Wolf, Sebastian; Ulm, Stefan; Couturier, Luc; Dawkins, Samuel T; Poschinger, Ulrich G; Schmidt-Kaler, Ferdinand; Singer, Kilian
2015-01-01
We realize a single-particle microscope by using deterministically extracted laser-cooled ⁴⁰Ca⁺ ions from a Paul trap as probe particles for transmission imaging. We demonstrate focusing of the ions with a resolution of 5.8 ± 1.0 nm and a minimum two-sample deviation of the beam position of 1.5 nm in the focal plane. The deterministic source, even when used in combination with an imperfect detector, gives rise to much higher signal-to-noise ratios than conventional Poissonian sources. Gating of the detector signal by the extraction event suppresses dark counts by 6 orders of magnitude. We implement a Bayes experimental design approach to microscopy in order to maximize the gain in spatial information. We demonstrate this method by determining the position of a 1 μm circular hole structure to an accuracy of 2.7 nm using only 579 probe particles.
Formal validation of a deterministic MAC protocol
Godary-Dejean K.; Andreu D.
2013-01-01
This article deals with the formal validation of a medium access protocol. This protocol has been designed to meet the specific requirements of an implantable network-based neuroprosthesis. This article presents the modeling of STIMAP with Time Petri Nets (TPN), and the verification of the deterministic medium access it provides, using timed model checking. In doing so, we show that existing formal methods and tools are not perfectly suitable for the validation of real systems, especially when s...
Deterministic quantum teleportation between distant atomic objects
Krauter, H.; D Salart; Muschik, C. A.; Petersen, J. M.; Shen, Heng; Fernholz, T.; Polzik, E. S.
2013-01-01
Quantum teleportation is a key ingredient of quantum networks and a building block for quantum computation. Teleportation between distant material objects using light as the quantum information carrier has been a particularly exciting goal. Here we demonstrate a new element of the quantum teleportation landscape, the deterministic continuous variable (cv) teleportation between distant material objects. The objects are macroscopic atomic ensembles at room temperature. Entanglement required for...
Deterministic MST Sparsification in the Congested Clique
Korhonen, Janne H.
2016-01-01
We give a simple deterministic constant-round algorithm in the congested clique model for reducing the number of edges in a graph to $n^{1+\\varepsilon}$ while preserving the minimum spanning forest, where $\\varepsilon > 0$ is any constant. This implies that in the congested clique model, it is sufficient to improve MST and other connectivity algorithms on graphs with slightly superlinear number of edges to obtain a general improvement. As a byproduct, we also obtain a simple alternative proof...
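The congested-clique algorithm itself is not reproduced here, but the classical fact that MST-preserving edge reduction builds on can be: with distinct weights, every vertex's minimum-weight incident edge belongs to the minimum spanning forest (one Borůvka step). A small self-contained check against Kruskal's algorithm:

```python
def kruskal_msf(n, edges):
    """Minimum spanning forest via Kruskal; edges are (weight, u, v) triples
    with distinct weights, nodes numbered 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    msf = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            msf.append((w, u, v))
    return msf

def boruvka_round(n, edges):
    """One Borůvka step: every vertex keeps its minimum-weight incident edge.
    With distinct weights, all kept edges belong to the MSF, so discarding
    never-kept duplicates cannot change the spanning forest."""
    best = {}
    for w, u, v in edges:
        for x in (u, v):
            if x not in best or w < best[x][0]:
                best[x] = (w, u, v)
    return set(best.values())

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3), (5, 3, 4), (6, 2, 4)]
kept = boruvka_round(5, edges)
msf = set(kruskal_msf(5, edges))
```

This is only the correctness kernel; the paper's contribution is doing such sparsification in a constant number of congested-clique rounds, which the sequential sketch above does not capture.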
Deterministic definition of the capital risk
Anna Szczypinska; Piotrowski, Edward W.
2008-01-01
In this paper we propose a look at the capital risk problem inspired by the deterministic problem of juggling, known from classical mechanics. We propose capital equivalents of Newton's laws of motion and on this basis determine the most secure form of credit repayment with regard to maximisation of profit. Then we extend Newton's laws to models in linear spaces of arbitrary dimension with the help of matrix rates of return. The matrix rates describe the evolution of multidimensional ...
Deterministically Deterring Timing Attacks in Deterland
Wu, Weiyi; Ford, Bryan
2015-01-01
The massive parallelism and resource sharing embodying today's cloud business model not only exacerbate the security challenge of timing channels, but also undermine the viability of defenses based on resource partitioning. We propose hypervisor-enforced timing mitigation to control timing channels in cloud environments. This approach closes "reference clocks" internal to the cloud by imposing a deterministic view of time on guest code, and uses timing mitigators to pace I/O and rate-limit po...
International Nuclear Information System (INIS)
This paper analyzes how to measure progress in the minimization of HEU-fuelled research reactors with respect to the International Fuel Cycle Evaluation (INFCE) completed in 1978, and the establishment of new objectives towards 2020. All HEU-fuelled research facilities converted, commissioned or decommissioned after 1978, in total more than 310 facilities, are included. More than 130 HEU-fuelled facilities still remain in operation today. The most important measure has been facility shut-down, accounting for 62% of the reduction in U-235 consumption from 1978 to 2007. Presently, only three regions worldwide use significant amounts of HEU: North America, Russia with the Newly Independent States, and Europe. Projected HEU consumption in 2020 will drop to less than 50 kg as the current HEU-fuelled steady-state reactors are shut down or converted. However, if the current lack of concern for HEU in life-time cores is not changed, in particular in Russia, 50-100 such facilities may continue to be in operation in 2020. (author)
Energy Technology Data Exchange (ETDEWEB)
Chu, Yi-Zen [Center for Particle Cosmology, Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)
2014-09-15
Motivated by the desire to understand the causal structure of physical signals produced in curved spacetimes – particularly around black holes – we show how, for certain classes of geometries, one might obtain its retarded or advanced minimally coupled massless scalar Green's function by using the corresponding Green's functions in the higher dimensional Minkowski spacetime where it is embedded. Analogous statements hold for certain classes of curved Riemannian spaces, with positive definite metrics, which may be embedded in higher dimensional Euclidean spaces. The general formula is applied to (d ≥ 2)-dimensional de Sitter spacetime, and the scalar Green's function is demonstrated to be sourced by a line emanating infinitesimally close to the origin of the ambient (d + 1)-dimensional Minkowski spacetime and piercing orthogonally through the de Sitter hyperboloids of all finite sizes. This method does not require solving the de Sitter wave equation directly. Only the zero mode solution to an ordinary differential equation, the “wave equation” perpendicular to the hyperboloid – followed by a one-dimensional integral – needs to be evaluated. A topological obstruction to the general construction is also discussed by utilizing it to derive a generalized Green's function of the Laplacian on the (d ≥ 2)-dimensional sphere.
Derivation Of Probabilistic Damage Definitions From High Fidelity Deterministic Computations
Energy Technology Data Exchange (ETDEWEB)
Leininger, L D
2004-10-26
This paper summarizes a methodology used by the Underground Analysis and Planning System (UGAPS) at Lawrence Livermore National Laboratory (LLNL) for the derivation of probabilistic damage curves for US Strategic Command (USSTRATCOM). UGAPS uses high-fidelity finite element and discrete element codes on massively parallel supercomputers to predict damage to underground structures from military interdiction scenarios. These deterministic calculations can be riddled with uncertainty, especially when intelligence, the basis for this modeling, is uncertain. The technique presented here attempts to account for this uncertainty by bounding the problem with reasonable cases and using those bounding cases as a statistical sample. Probability-of-damage curves that account for uncertainty within the sample are computed and represented, enabling the war planner to make informed decisions. This work is flexible enough to incorporate any desired damage mechanism and can utilize the variety of finite element and discrete element codes within the national laboratory and government contractor community.
Using deterministic codes to accelerate continuous energy Monte-Carlo standards calculations
International Nuclear Information System (INIS)
Deterministic codes are usually used for critical-parameter or one-dimensional geometry calculations. The advantages of deterministic codes are the speed of the calculation and the absence of a standard deviation on the keff results. Nevertheless, deterministic results are affected by several intrinsic uncertainties such as energy condensation or self-shielding. The practice at the CEA expert criticality group (CEA/SERMA/CP2C) is therefore to always check the main results (minimum critical or maximum permissible values and un-moderated values) with a pointwise Monte Carlo calculation. In recent years, in particular cases (pure actinide fissile media, exotic reflectors), large discrepancies have been observed between the keff calculated by the CRISTAL V1 reference route (the continuous-energy Monte Carlo code TRIPOLI-4) and the target keff (by the standard route APOLLO2-Sn). The problem in these cases is how to transpose the keff discrepancies observed between the standard and reference routes to the dimensions (mass, thickness...), or how to reduce the keff discrepancies using optimized options of the deterministic code. One solution for transposing discrepancies is to iterate on dimensions using a pointwise Monte Carlo code to achieve the desired keff eigenvalue. However, the amount of time needed to obtain a good standard deviation and the desired keff eigenvalue within the Monte Carlo calculation uncertainty can quickly increase. The principle of the method presented in this paper is that the discrepancy between the deterministic code and the Monte Carlo code, calculated at the same dimension, varies little with the dimension. Therefore, correcting the keff eigenvalue to which the deterministic code converges by the observed discrepancy leads to a dimension nearer to the true dimension (i.e. the dimension where the Monte Carlo keff calculation is close to the keff eigenvalue). If the keff eigenvalue is outside the Monte Carlo uncertainty, the discrepancy is recalculated and
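The correction step described in the abstract (shift the deterministic code's convergence target by the deterministic/Monte Carlo discrepancy observed at the same dimension) reduces to one line. Function and variable names are mine, and the numbers below are invented for illustration:

```python
def corrected_target(k_target, k_det_at_dim, k_mc_at_dim):
    """Shift the keff value the deterministic code should converge to,
    using the discrepancy between the Monte Carlo and deterministic keff
    computed at the same dimension. The method's key hypothesis is that
    this discrepancy varies little with the dimension, so one correction
    (or a short iteration) lands near the true critical dimension."""
    discrepancy = k_mc_at_dim - k_det_at_dim
    return k_target - discrepancy

# Hypothetical numbers: deterministic route reads 0.948 where the Monte Carlo
# reference reads 0.952, so to make Monte Carlo hit 0.95 the deterministic
# search should aim for 0.946.
new_target = corrected_target(0.95, 0.948, 0.952)
```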
Lin, Chia-Hsiang; Ma, Wing-Kin; Li, Wei-Chiang; Chi, Chong-Yung; Ambikapathi, ArulMurugan
2015-10-01
In blind hyperspectral unmixing (HU), the pure-pixel assumption is well-known to be powerful in enabling simple and effective blind HU solutions. However, the pure-pixel assumption is not always satisfied in an exact sense, especially for scenarios where pixels are heavily mixed. In the no pure-pixel case, a good blind HU approach to consider is the minimum volume enclosing simplex (MVES). Empirical experience has suggested that MVES algorithms can perform well without pure pixels, although it was not totally clear why this is true from a theoretical viewpoint. This paper aims to address the latter issue. We develop an analysis framework wherein the perfect endmember identifiability of MVES is studied under the noiseless case. We prove that MVES is indeed robust against lack of pure pixels, as long as the pixels do not get too heavily mixed and too asymmetrically spread. The theoretical results are verified by numerical simulations.
Farm-level nonparametric analysis of cost-minimization and profit-maximization behavior
Allen M. Featherstone; Moghnieh, Ghassan A.; Goodwin, Barry K.
1995-01-01
This study investigates non-parametrically the optimizing behavior of a sample of 289 Kansas farms under profit-maximization and cost-minimization hypotheses. The study uses both deterministic and stochastic non-parametric tests. The deterministic results do not support strict adherence to either optimization hypothesis. The stochastic tests suggest that all 289 farms fail the profit-maximization hypothesis, whereas 171 farms fail the cost-minimization hypothesis. Allowing for non-regressiv...
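The deterministic non-parametric tests referred to here are in the spirit of Varian's weak axiom of profit maximization (WAPM): each observed netput choice must earn at least as much, at its own prices, as any other observed choice. A minimal sketch with invented data (not the study's sample):

```python
import numpy as np

def wapm_violations(P, Y):
    """Count observations violating the weak axiom of profit maximization:
    p_i . y_i must be >= p_i . y_j for every observed j.
    P, Y: (n_obs x n_goods) prices and netputs (outputs +, inputs -)."""
    profits = P @ Y.T                      # profits[i, j] = p_i . y_j
    own = np.diag(profits)                 # profit of each firm's own choice
    return int(np.sum(np.any(profits > own[:, None] + 1e-12, axis=1)))

# Hypothetical data: firm 1's bundle earns more at firm 0's prices,
# so firm 0's observed choice is inconsistent with profit maximization.
P = np.array([[1.0, 2.0], [2.0, 1.0]])
Y = np.array([[3.0, -1.0], [4.0, -1.0]])
print(wapm_violations(P, Y))   # 1
```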
A deterministic method for transient, three-dimensional neutron transport
International Nuclear Information System (INIS)
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position, energy, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. It can also model columnwise rod movement. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multi-dimensional neutronic systems
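For illustration only (this is not the TDTORT/IQS implementation): in the IQS method the flux is factorized into a slowly varying shape and a fast amplitude, and the amplitude obeys point-kinetics-like equations with delayed neutrons. A one-delayed-group point-kinetics sketch with assumed parameters:

```python
# Point kinetics with one delayed-neutron group, explicit Euler integration.
# All parameter values are assumed for illustration, not taken from the paper.
beta, lam, Lambda = 0.0065, 0.08, 1e-4    # delayed fraction, decay const, gen. time
rho = 0.001                               # step reactivity (< beta: delayed-critical)
n, C = 1.0, beta / (lam * Lambda)         # equilibrium precursor concentration
dt = 1e-5
for _ in range(int(1.0 / dt)):            # 1 s transient
    dn = ((rho - beta) / Lambda * n + lam * C) * dt
    dC = (beta / Lambda * n - lam * C) * dt
    n, C = n + dn, C + dC
print(f"amplitude after 1 s: {n:.3f}")    # prompt jump, then slow delayed rise
```

The prompt jump to roughly beta/(beta - rho) followed by a slow rise is the behavior the amplitude equation captures, allowing the expensive shape (transport) solve to be done on a much coarser time grid.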
International Nuclear Information System (INIS)
This book presents an overview of waste minimization. Covers applications of technology to waste reduction, techniques for implementing programs, incorporation of programs into R and D, strategies for private industry and the public sector, and case studies of programs already in effect
Hybrid method of deterministic and probabilistic approaches for multigroup neutron transport problem
International Nuclear Information System (INIS)
A hybrid method of deterministic and probabilistic methods is proposed to solve Boltzmann transport equation. The new method uses a deterministic method, Method of Characteristics (MOC), for the fast and thermal neutron energy ranges and a probabilistic method, Monte Carlo (MC), for the intermediate resonance energy range. The hybrid method, in case of continuous energy problem, will be able to take advantage of fast MOC calculation and accurate resonance self shielding treatment of MC method. As a proof of principle, this paper presents the hybrid methodology applied to a multigroup form of Boltzmann transport equation and confirms that the hybrid method can produce consistent results with MC and MOC methods. (authors)
Linear embedding of free energy minimization
Moussa, Jonathan E
2016-01-01
Exact free energy minimization is a convex optimization problem that is usually approximated with stochastic sampling methods. Deterministic approximations have been less successful because many desirable properties have been difficult to attain. Such properties include the preservation of convexity, lower bounds on free energy, and applicability to systems without subsystem structure. We satisfy all of these properties by embedding free energy minimization into a linear program over energy-resolved expectation values. Numerical results on small systems are encouraging, but a lack of size consistency necessitates further development for large systems.
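As a toy illustration of the convexity being exploited (parameters assumed, not from the paper): minimizing F(p) = ⟨E⟩ − T·S(p) over the probability simplex has the Boltzmann distribution as its exact solution, which a simple multiplicative-update scheme recovers:

```python
import numpy as np

# Toy convex free-energy minimization F(p) = sum_i p_i E_i + T sum_i p_i ln p_i
# over the simplex; the exact minimizer is p_i ∝ exp(-E_i / T).
rng = np.random.default_rng(0)
E = rng.uniform(0.0, 2.0, size=8)        # assumed energy levels
T = 0.5

p = np.full(8, 1.0 / 8)                  # start from the uniform distribution
for _ in range(2000):
    grad = E + T * (np.log(p) + 1.0)     # dF/dp_i
    p *= np.exp(-0.5 * grad)             # exponentiated-gradient update
    p /= p.sum()                         # project back onto the simplex

boltzmann = np.exp(-E / T)
boltzmann /= boltzmann.sum()
print(np.abs(p - boltzmann).max())       # should be numerically ~0
```

The point of the paper is that such minimization can be recast as a linear program over energy-resolved expectation values; the sketch above only shows the underlying convex problem and its known solution.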
Adachi, Koichi; Yamaguchi, Atsushi; Yuri, Koichi; Matsumoto, Harunobu; Kimura, Naoyuki; Okamura, Homare; Shiraishi, Manabu; Hori, Daijirou; Adachi, Hideo
2016-06-01
Standard full median sternotomy for total aortic arch replacement in patients with tracheostomy carries higher risks of mediastinitis and graft infection. To avoid surgical site infection, it is necessary to keep a sufficient distance between the tracheostomy and the site of the surgical skin incision. We herein report a case of a 74-year-old man with permanent tracheostomy after total laryngectomy, who underwent total aortic arch replacement for an aneurysm. Antero-lateral thoracotomy in the 2nd intercostal space with lower partial sternotomy (ALPS approach) provided a sufficient distance between the tracheostomy and the surgical field. It also provided a good view of the surgical procedure and enabled the standard setup of cardiopulmonary bypass, with ascending aortic cannulation, venous drainage from the right atrium, and left ventricular venting through the upper right pulmonary vein. The operation was completed in 345 minutes and the patient was discharged on the 11th postoperative day without any complications. PMID:27246136
Pisaniello, John D.; Tingey-Holyoak, Joanne; Burritt, Roger L.
2012-01-01
Small dam safety is generally being ignored. The potential for dam failure resulting in catastrophic consequences for downstream communities, property, and the environment warrants exploration of the threats and policy issues associated with the management of small/farm dams. The paper achieves this through a comparative analysis of differing levels of dam safety assurance policy: absent, driven, strong, and model. A strategic review is undertaken to establish international dam safety policy benchmarks and to identify a best practice model. A cost-effective engineering/accounting tool is presented to assist the policy selection process and complement the best practice model. The paper then demonstrates the significance of the small-dam safety problem with a case study of four Australian States: policy-absent South Australia, policy-driven Victoria, policy-strong New South Wales, and policy-model Tasmania. Surveys of farmer behavior practices provide empirical evidence of the importance of policy and its proper implementation. Both individual and cumulative farm dam failure threats are addressed and, with supporting empirical evidence, the need for "appropriate" supervision of small dams is demonstrated. The paper adds to the existing international dam policy literature by identifying an acceptable minimum level of practice in private/farm dam safety assurance policy, as well as updated international best practice policy guidelines, while providing a case study demonstration of how to apply the guidelines and empirical reinforcement of the need for "appropriate" policy. The policy guidelines, cost-effective technology, and comparative lessons presented can assist any jurisdiction to determine and implement appropriate dam safety policy.
Using deterministic methods for research reactor studies
International Nuclear Information System (INIS)
As an alternative to prohibitive Monte Carlo simulations, deterministic methods can be used to simulate research reactors. Using various microscopic cross section libraries currently available in Canada, flux distributions were obtained from DRAGON cell and supercell transport calculations. Then, homogenization/condensation is done to produce few-group nuclear properties, and diffusion calculations were performed using DONJON core models. In this paper, the multigroup modular environment of the code DONJON is presented, and the various steps required in the modelling of SLOWPOKE hexagonal cores are described. Numerical simulations are also compared with experimental data available for the EPM Slowpoke reactor. (author)
Deterministic quantum computation with one photonic qubit
Hor-Meyll, M.; Tasca, D. S.; Walborn, S. P.; Ribeiro, P. H. Souto; Santos, M. M.; Duzzioni, E. I.
2015-07-01
We show that deterministic quantum computing with one qubit (DQC1) can be experimentally implemented with a spatial light modulator, using the polarization and the transverse spatial degrees of freedom of light. The scheme allows the computation of the trace of a high-dimensional matrix, limited only by the resolution of the modulator panel and by technical imperfections. In order to illustrate the method, we compute the normalized trace of unitary matrices and implement the Deutsch-Jozsa algorithm. The largest matrix that can be manipulated with our setup is 1080 × 1920, which can represent a system of approximately 21 qubits.
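The trace estimation at the heart of DQC1 can be checked numerically (a simulation sketch, not the optical implementation): with a maximally mixed register, the ancilla coherence after a controlled-U encodes the normalized trace Tr(U)/2^n (up to sign conventions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
d = 2**n
# Random unitary via QR decomposition of a complex Gaussian matrix
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U, _ = np.linalg.qr(A)

# DQC1 circuit: ancilla in |+>, register maximally mixed, then controlled-U.
I = np.eye(d)
rho = np.kron(0.5 * np.ones((2, 2)), I / d)              # |+><+| (x) I/2^n
CU = np.kron(np.diag([1.0, 0.0]), I) + np.kron(np.diag([0.0, 1.0]), U)
rho = CU @ rho @ CU.conj().T
rho_a = rho.reshape(2, d, 2, d).trace(axis1=1, axis2=3)  # trace out the register

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
est = np.trace(rho_a @ sx) + 1j * np.trace(rho_a @ sy)   # <sx> + i <sy>
print(np.allclose(est, np.trace(U) / d))                 # True
```

Measuring the two ancilla Pauli expectations thus yields both the real and imaginary parts of the normalized trace, which is exactly what the modulator-based experiment estimates optically.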
Experimental Demonstration of Deterministic Entanglement Transformation
Institute of Scientific and Technical Information of China (English)
CHEN Geng; XU Jin-Shi; LI Chuan-Feng; GONG Ming; CHEN Lei; GUO Guang-Can
2009-01-01
According to Nielsen's theorem [Phys. Rev. Lett. 83 (1999) 436] and as a proof of principle, we demonstrate the deterministic transformation from a maximally entangled state to an arbitrary non-maximally entangled pure state with local operations and classical communication in an optical system. The output states are verified with a quantum tomography process. We further test the violation of a Bell-like inequality to demonstrate the quantum nonlocality of the states we generated. Our results may be useful in quantum information processing.
Identifying left-right deterministic linear languages
Calera Rubio, Jorge; Oncina Carratalá, Jose
2004-01-01
Recently an algorithm to identify in the limit, with polynomial time and data, Left Deterministic Linear Languages (Left DLL) and, consequently, Right DLL was proposed. In this paper we show that the class of Left-Right DLL formed by the union of both classes is also identifiable. To do that, we introduce the notion of an n-negative characteristic sample, that is, a sample that forces an inference algorithm to output a hypothesis of size bigger than n when strings from a non-identifiable langu...
Deterministic and probabilistic approach to safety analysis
International Nuclear Information System (INIS)
The examples discussed in this paper show that reliability analysis methods can be applied fairly well in order to interpret deterministic safety criteria in quantitative terms. For a further improved extension of applied reliability analysis, it has turned out that the influence of operational and control systems and of component protection devices should be considered in detail with the aid of reliability analysis methods. Of course, an extension of probabilistic analysis must be accompanied by further development of the methods and a broadening of the data base. (orig.)
Nine challenges for deterministic epidemic models
DEFF Research Database (Denmark)
Roberts, Mick G; Andreasen, Viggo; Lloyd, Alun
2015-01-01
Deterministic models have a long history of being applied to the study of infectious disease epidemiology. We highlight and discuss nine challenges in this area. The first two concern the endemic equilibrium and its stability. We indicate the need for models that describe multi-strain infections..., infections with time-varying infectivity, and those where superinfection is possible. We then consider the need for advances in spatial epidemic models, and draw attention to the lack of models that explore the relationship between communicable and non-communicable diseases. The final two challenges concern...
Hamada, Yuta
2015-01-01
We propose a novel leptogenesis scenario at the reheating era. Our setup is minimal in the sense that, in addition to the standard model Lagrangian, we only consider an inflaton and higher-dimensional operators. The lepton number asymmetry is produced not by the decay of a heavy particle, but by scattering between standard model particles. After the decay of the inflaton, the model is described by the standard model with higher-dimensional operators. Sakharov's three conditions are satisfied in the following way: the violation of lepton number is realized by the dimension-5 operator; the complex phase comes from the dimension-6 four-lepton operator; and the universe is out of equilibrium before reheating is completed. It is found that successful baryogenesis is realized for a wide range of parameters, the inflaton mass and the reheating temperature, depending on the cutoff scale. Since we only rely on the effective Lagrangian, our scenario can be applicable to all mechanisms to generate n...
Piazza, Federico; Schücker, Thomas
2016-04-01
The minimal requirement for cosmography—a non-dynamical description of the universe—is a prescription for calculating null geodesics, and time-like geodesics as a function of their proper time. In this paper, we consider the most general linear connection compatible with homogeneity and isotropy, but not necessarily with a metric. A light-cone structure is assigned by choosing a set of geodesics representing light rays. This defines a "scale factor" and a local notion of distance, as that travelled by light in a given proper time interval. We find that the velocities and relativistic energies of free-falling bodies decrease in time as a consequence of cosmic expansion, but at a rate that can be different than that dictated by the usual metric framework. By extrapolating this behavior to photons' redshift, we find that the latter is in principle independent of the "scale factor". Interestingly, redshift-distance relations and other standard geometric observables are modified in this extended framework, in a way that could be experimentally tested. An extremely tight constraint on the model, however, is represented by the blackbody-ness of the cosmic microwave background. Finally, as a check, we also consider the effects of a non-metric connection in a different set-up, namely, that of a static, spherically symmetric spacetime.
Directory of Open Access Journals (Sweden)
Cohen Anders
2011-09-01
Full Text Available Abstract Introduction The purpose of this study was to describe procedural details of a minimally invasive presacral approach for revision of an L5-S1 Axial Lumbar Interbody Fusion rod. Case presentation A 70-year-old Caucasian man presented to our facility with marked thoracolumbar scoliosis, osteoarthritic changes characterized by high-grade osteophytes, and significant intervertebral disc collapse and calcification. Our patient required crutches during ambulation and reported intractable axial and radicular pain. Multi-level reconstruction of L1-4 was accomplished with extreme lateral interbody fusion, although focal lumbosacral symptoms persisted due to disc space collapse at L5-S1. Lumbosacral interbody distraction and stabilization was achieved four weeks later with the Axial Lumbar Interbody Fusion System (TranS1 Inc., Wilmington, NC, USA) and rod implantation via an axial presacral approach. Despite symptom resolution following this procedure, our patient suffered a fall six weeks postoperatively with direct sacral impaction, resulting in symptom recurrence and loss of L5-S1 distraction. Following seven months of unsuccessful conservative care, a revision of the Axial Lumbar Interbody Fusion rod was performed via the same presacral approach, using a larger-diameter implant. Minimal adhesions were encountered upon presacral re-entry. A precise operative trajectory to the base of the previously implanted rod was achieved using fluoroscopic guidance. Surgical removal of the implant was successful with minimal bone resection required. A larger-diameter Axial Lumbar Interbody Fusion rod was then implanted and joint distraction was re-established. The radicular symptoms resolved following revision surgery and our patient was ambulating without assistance on post-operative day one. No adverse events were reported. Conclusions The Axial Lumbar Interbody Fusion distraction rod may be revised and replaced with a larger diameter rod using
Minimal Poems Written in 1979
Sandra Sirangelo Maggio
2008-01-01
The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I saw a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove efficient means of artistic creativity, in this new literary trend known as Minimalism.
Deterministic Aided STAP for Target Detection in Heterogeneous Situations
Directory of Open Access Journals (Sweden)
J.-F. Degurse
2013-01-01
Full Text Available Classical space-time adaptive processing (STAP) detectors are strongly limited when facing highly heterogeneous environments. Indeed, in this case, representative target-free data are no longer available. Single dataset algorithms, such as the MLED algorithm, have proved their efficiency in overcoming this problem by only working on primary data. These methods are based on the APES algorithm, which removes the useful signal from the covariance matrix. However, a small part of the clutter signal is also removed from the covariance matrix in this operation. Consequently, a degradation of clutter rejection performance is observed. We propose two algorithms that use deterministic-aided STAP to overcome this issue of the single dataset APES method. The results on realistic simulated data and real data show that these methods outperform traditional single dataset methods in detection and in clutter rejection.
Classification and unification of the microscopic deterministic traffic models
Yang, Bo; Monterola, Christopher
2015-10-01
We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.
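A minimal sketch of the optimal-velocity (OV) class discussed above (a Bando-type OV function with assumed parameters, not the paper's master model): identical cars on a ring relax to uniform flow at the optimal velocity of the common headway:

```python
import numpy as np

# Optimal-velocity model on a ring road: dv_n/dt = a * (V(headway_n) - v_n).
# The tanh form of V and all parameter values are assumed for illustration.
def ov(h):
    return np.tanh(h - 2.0) + np.tanh(2.0)   # optimal velocity function

N, L, a, dt = 20, 60.0, 1.5, 0.01            # cars, road length, sensitivity, step
x = np.linspace(0.0, L, N, endpoint=False)   # uniform spacing: headway = 3
v = np.zeros(N)
for _ in range(5000):                        # 50 s of simulated time
    h = (np.roll(x, -1) - x) % L             # headway to the car ahead
    v += a * (ov(h) - v) * dt
    x = (x + v * dt) % L

print(round(float(v.mean()), 3))             # ≈ ov(3) = tanh(1) + tanh(2)
```

With these parameters the uniform flow is linearly stable (a exceeds 2V'(h)); the three-phase behavior discussed in the paper corresponds to regimes where leading orders of the expansion around such ground states vanish.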
Analysis of deterministic cyclic gene regulatory network models with delays
Ahsen, Mehmet Eren; Niculescu, Silviu-Iulian
2015-01-01
This brief examines a deterministic, ODE-based model for gene regulatory networks (GRN) that incorporates nonlinearities and time-delayed feedback. An introductory chapter provides some insights into molecular biology and GRNs. The mathematical tools necessary for studying the GRN model are then reviewed, in particular Hill functions and Schwarzian derivatives. One chapter is devoted to the analysis of GRNs under negative feedback with time delays, and a special case of a homogeneous GRN is considered. Asymptotic stability analysis of GRNs under positive feedback is then considered in a separate chapter, in which conditions leading to bi-stability are derived. Graduate and advanced undergraduate students and researchers in control engineering, applied mathematics, systems biology and synthetic biology will find this brief to be a clear and concise introduction to the modeling and analysis of GRNs.
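The Hill nonlinearity central to such analyses can be illustrated with a toy negative-autoregulation ODE (unit rates assumed, delay omitted; this is not a model from the brief):

```python
# Negative autoregulation with a repressive Hill function: dx/dt = H(x) - x.
# The steady state solves x**3 + x = 1 for Hill coefficient m = 2.
def hill(u, m=2):
    return 1.0 / (1.0 + u**m)   # repressive Hill nonlinearity

x, dt = 0.1, 0.01
for _ in range(20000):          # forward Euler to t = 200
    x += (hill(x) - x) * dt
print(round(x, 4))              # ≈ 0.6823, the root of x**3 + x = 1
```

Without delay this loop settles monotonically to its unique fixed point; the brief's results concern how time delays in such negative-feedback loops can destabilize it, and how positive feedback instead yields bistability.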
Mixed deterministic statistical modelling of regional ozone air pollution
Kalenderski, Stoitchko Dimitrov
2011-03-17
We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production, and a large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism, and explicitly accounts for advection of pollutants using the advection equation. We apply the model to a specific case of regional ozone pollution: the Lower Fraser Valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem, as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.
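The deterministic component rests on the 1-D advection equation dc/dt + u·dc/dx = 0; a first-order upwind step (illustrative grid and wind speed, not the paper's solver) shows the basic discretization:

```python
import numpy as np

# First-order upwind scheme for dc/dt + u * dc/dx = 0 on a periodic grid.
# Grid spacing, wind speed, and the Gaussian pollutant puff are assumed.
nx, u, dx = 200, 1.0, 0.5
dt = 0.4 * dx / u                       # CFL number 0.4 < 1: stable
c = np.exp(-0.5 * ((np.arange(nx) * dx - 20.0) / 3.0) ** 2)   # pollutant puff
mass0 = c.sum()
for _ in range(100):
    c = c - u * dt / dx * (c - np.roll(c, 1))   # upwind difference, periodic
print(np.isclose(c.sum(), mass0))       # the scheme conserves total mass
```

The puff travels downstream at speed u (here 20 grid-length units over the run) while the scheme's numerical diffusion smears it, which is one reason such deterministic transport terms are combined with a statistical error model.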
Deterministic Function Computation with Chemical Reaction Networks
Chen, Ho-Lin; Soloveichik, David
2012-01-01
We study the deterministic computation of functions on tuples of natural numbers by chemical reaction networks (CRNs). CRNs have been shown to be efficiently Turing-universal when allowing for a small probability of error. CRNs that are guaranteed to converge on a correct answer, on the other hand, have been shown to decide only the semilinear predicates. We introduce the notion of function, rather than predicate, computation by representing the output of a function f:N^k --> N^l by a count of some molecular species, i.e., if the CRN starts with n_1,...,n_k molecules of some "input" species X_1,...,X_k, the CRN is guaranteed to converge to having f(n_1,...,n_k) molecules of the "output" species Y_1,...,Y_l. We show that a function f:N^k --> N^l is deterministically computed by a CRN if and only if its graph {(x,y) \\in N^k x N^l | f(x) = y} is a semilinear set. Finally, we show that each semilinear function f can be computed on input x in expected time O(polylog |x|).
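The flavor of deterministic CRN computation can be shown with the semilinear function f(n1, n2) = n1 + n2, computed by the two reactions X1 → Y and X2 → Y; the following toy simulator is illustrative and not from the paper:

```python
import random

def run_crn(counts, reactions):
    """Apply applicable reactions in random order until none can fire."""
    while True:
        applicable = [(ins, outs) for ins, outs in reactions
                      if all(counts.get(s, 0) >= k for s, k in ins.items())]
        if not applicable:
            return counts
        ins, outs = random.choice(applicable)
        for s, k in ins.items():
            counts[s] -= k
        for s, k in outs.items():
            counts[s] = counts.get(s, 0) + k

# The CRN {X1 -> Y, X2 -> Y} converges from (n1, n2) input molecules to
# exactly n1 + n2 output molecules, regardless of the reaction order chosen.
reactions = [({'X1': 1}, {'Y': 1}), ({'X2': 1}, {'Y': 1})]
out = run_crn({'X1': 3, 'X2': 4, 'Y': 0}, reactions)
print(out['Y'])   # 7, on every possible execution
```

Addition has a semilinear graph, so by the paper's characterization it is exactly the kind of function a CRN can compute deterministically; functions with non-semilinear graphs cannot be.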
Focusing a deterministic single-ion beam
International Nuclear Information System (INIS)
We focus down an ion beam consisting of single 40Ca+ ions to a spot size of a few micrometers using an einzel lens. Starting from a segmented linear Paul trap, we have implemented a procedure that allows us to deterministically load a predetermined number of ions by using the potential shaping capabilities of our segmented ion trap. For single-ion loading, an efficiency of 96.7(7)% has been achieved. These ions are then deterministically extracted out of the trap and focused down to a 1σ-spot radius of (4.6±1.3) μm at a distance of 257 mm from the trap center. Compared to previous measurements without ion optics, the einzel lens is focusing down the single-ion beam by a factor of 12. Due to the small beam divergence and narrow velocity distribution of our ion source, chromatic and spherical aberration at the einzel lens is vastly reduced, presenting a promising starting point for focusing single ions on their way to a substrate.
Plausible Suggestion for a Deterministic Wave Function
Schulz, P
2006-01-01
A deterministic axial-vector model for photons is presented which is also suitable for particles. During a rotation around an axis, the deterministic wave function a has the form a = ws r exp(±i wb t), where ws is either the axial or the scalar spin rotation frequency (the latter is proportional to the mass), r is the radius of the orbit (also the amplitude of a vibration arising later from the interaction by fusing of two oppositely circling photons), wb is the orbital angular frequency (proportional to the velocity), and t is time. A "+" before the imaginary i denotes a right-handed rotation and a "-" a left-handed rotation. An interaction happens when particles (including photons) meet through collision and melt together.
Moment equations for a piecewise deterministic PDE
Bressloff, Paul C.; Lawley, Sean D.
2015-03-01
We analyze a piecewise deterministic PDE consisting of the diffusion equation on a finite interval Ω with randomly switching boundary conditions and diffusion coefficient. We proceed by spatially discretizing the diffusion equation using finite differences and constructing the Chapman-Kolmogorov (CK) equation for the resulting finite-dimensional stochastic hybrid system. We show how the CK equation can be used to generate a hierarchy of equations for the r-th moments of the stochastic field, which take the form of r-dimensional parabolic PDEs on Ω^r that couple to lower order moments at the boundaries. We explicitly solve the first and second order moment equations (r = 2). We then describe how the r-th moment of the stochastic PDE can be interpreted in terms of the splitting probability that r non-interacting Brownian particles all exit at the same boundary; although the particles are non-interacting, statistical correlations arise due to the fact that they all move in the same randomly switching environment. Hence the stochastic diffusion equation describes two levels of randomness: Brownian motion at the individual particle level and a randomly switching environment. Finally, in the limit of fast switching, we use a quasi-steady state approximation to reduce the piecewise deterministic PDE to an SPDE with multiplicative Gaussian noise in the bulk and a stochastically-driven boundary.
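A sketch of the setup described above, with assumed discretization and parameters: the heat equation on [0, 1] whose right-hand boundary condition switches at random exponential times between absorbing and reflecting, with the first-moment field estimated by averaging realizations:

```python
import numpy as np

# Piecewise deterministic diffusion: finite-difference heat equation with a
# two-state randomly switching right boundary. All parameters are assumed.
rng = np.random.default_rng(2)
N, D, lam, dt, T = 51, 1.0, 5.0, 1e-4, 0.5
dx = 1.0 / (N - 1)                       # D*dt/dx^2 = 0.25: explicit scheme stable

def realization():
    u = np.zeros(N)
    u[0] = 1.0                           # fixed u = 1 at the left boundary
    state = 0                            # 0: absorbing u(1)=0, 1: reflecting u_x(1)=0
    t_switch = rng.exponential(1.0 / lam)
    for step in range(int(T / dt)):
        if step * dt >= t_switch:        # boundary flips at exponential times
            state, t_switch = 1 - state, t_switch + rng.exponential(1.0 / lam)
        u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        u[-1] = 0.0 if state == 0 else u[-2]
    return u

mean_u = np.mean([realization() for _ in range(20)], axis=0)
print(round(float(mean_u[N // 2]), 3))   # midpoint of the estimated first moment
```

Each realization is deterministic between switching events, which is exactly the piecewise deterministic structure whose moment hierarchy the paper derives analytically.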
Deterministic Safety Analysis for Nuclear Power Plants. Specific Safety Guide
International Nuclear Information System (INIS)
The objective of this Safety Guide is to provide harmonized guidance to designers, operators, regulators and providers of technical support on deterministic safety analysis for nuclear power plants. It provides information on the utilization of the results of such analysis for safety and reliability improvements. The Safety Guide addresses conservative, best estimate and uncertainty evaluation approaches to deterministic safety analysis and is applicable to current and future designs. Contents: 1. Introduction; 2. Grouping of initiating events and associated transients relating to plant states; 3. Deterministic safety analysis and acceptance criteria; 4. Conservative deterministic safety analysis; 5. Best estimate plus uncertainty analysis; 6. Verification and validation of computer codes; 7. Relation of deterministic safety analysis to engineering aspects of safety and probabilistic safety analysis; 8. Application of deterministic safety analysis; 9. Source term evaluation for operational states and accident conditions; References.
Energy Technology Data Exchange (ETDEWEB)
Tanigaki, Nobuhiro, E-mail: tanigaki.nobuhiro@eng.nssmc.com [NIPPON STEEL & SUMIKIN ENGINEERING CO., LTD., (EUROPEAN OFFICE), Am Seestern 8, 40547 Dusseldorf (Germany); Ishida, Yoshihiro [NIPPON STEEL & SUMIKIN ENGINEERING CO., LTD., 46-59, Nakabaru, Tobata-ku, Kitakyushu, Fukuoka 804-8505 (Japan); Osada, Morihiro [NIPPON STEEL & SUMIKIN ENGINEERING CO., LTD., (Head Office), Osaki Center Building 1-5-1, Osaki, Shinagawa-ku, Tokyo 141-8604 (Japan)
2015-03-15
International Nuclear Information System (INIS)
Reliability of a hybrid renewable energy system (HRES) strongly depends on various uncertainties affecting the amount of power produced by the system. In the design of systems subject to uncertainties, both deterministic and nondeterministic design approaches can be adopted. In a deterministic design approach, the designer considers the presence of uncertainties and incorporates them indirectly into the design by applying safety factors. It is assumed that, by employing suitable safety factors and considering worst-case scenarios, reliable systems can be designed. In effect, the multi-objective optimisation problem with the two objectives of reliability and cost is reduced to a single-objective optimisation problem with the objective of cost only. In this paper, the competence of deterministic design methods in size optimisation of reliable standalone wind–PV–battery, wind–PV–diesel and wind–PV–battery–diesel configurations is examined. For each configuration, first, using different values of safety factors, the optimal size of the system components which minimises the system cost is found deterministically. Then, for each case, using a Monte Carlo simulation, the effect of the safety factors on the reliability and the cost is investigated. In performing the reliability analysis, several reliability measures, namely unmet load, blackout durations (total, maximum and average) and mean time between failures, are considered. It is shown that the traditional methods of accounting for uncertainties in deterministic designs, such as designing for an autonomy period and employing safety factors, have either little or unpredictable impact on the actual reliability of the designed wind–PV–battery configuration. In the case of the wind–PV–diesel and wind–PV–battery–diesel configurations it is shown that, while using a high enough margin of safety in sizing the diesel generator leads to reliable systems, the optimum value for this margin of safety leading to a
Directory of Open Access Journals (Sweden)
Szkup Peter L
2012-03-01
Full Text Available Abstract Introduction In the two cases described here, the subclavian artery was inadvertently cannulated during unsuccessful access to the internal jugular vein. The puncture was successfully closed using a closure device based on a collagen plug (Angio-Seal, St Jude Medical, St Paul, MN, USA). This technique is relatively simple and inexpensive. It can provide clinicians, such as intensive care physicians and anesthesiologists, with a safe and straightforward alternative to major surgery and can be a life-saving procedure. Case presentation In the first case, an anesthetist attempted ultrasound-guided access to the right internal jugular vein during the preoperative preparation of a 66-year-old Caucasian man. A 7-French (Fr) triple-lumen catheter was inadvertently placed into his arterial system. In the second case, an emergency physician inadvertently placed a 7-Fr catheter into the subclavian artery of a 77-year-old Caucasian woman whilst attempting access to her right internal jugular vein. Both arterial punctures were successfully closed by means of a percutaneous closure device (Angio-Seal). No complications were observed. Conclusions Inadvertent subclavian arterial puncture can be successfully managed with no adverse clinical sequelae by using a percutaneous vascular closure device. This minimally invasive technique may be an option for patients with non-compressible arterial punctures. This report demonstrates two practical points that may help clinicians in decision-making during daily practice. First, it provides a practical solution to a well-known vascular complication. Second, it emphasizes a role for proper vascular ultrasound training for the non-radiologist.
International Nuclear Information System (INIS)
In future planned accelerator-driven subcritical systems, as well as in some recent related experiments, the neutron source to be used will be a pulsed accelerator. For such cases the application of the Feynman-alpha method for measuring the reactivity is not straightforward. The dependence of the Feynman Y(T) curve (variance-to-mean minus unity) on the measurement time T will show quasi-periodic ripples, corresponding to the periodicity of the source intensity. Correspondingly, the analytical solution will become much more complicated. One can perform such a pulsed Feynman-alpha measurement in two different ways: either by synchronizing the start of each measurement block with the pulses (deterministic pulsing) or by not synchronizing (random pulsing). The variance-to-mean has been derived analytically for both cases and reported briefly in previous publications. However, two different methods were used and the two cases were reported separately. In this paper we give a unified treatment and a comparative analysis of the two cases. It is found that the stochastic pulsing leads to an analytic solution that is much simpler than that for the deterministic case, and the relationship between the pulsed and continuous source is much more straightforward than in the deterministic case. However, the amplitude of the ripples, constituting a deviation of the pulsed Feynman Y curve from the smooth curve corresponding to the traditional constant source case, is much larger for the stochastic pulsing than for the deterministic one. The reasons for this are also analyzed in the paper. The results are in agreement with recent measurements, made by other groups in the European Community-supported project MUSE
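To make the central quantity concrete, the sketch below estimates the Feynman Y(T) curve (variance-to-mean of gate counts minus unity) from a list of detection times. The constant-rate Poisson source, gate width and event count are illustrative assumptions, not parameters from the study; for an uncorrelated source Y(T) should stay near zero, whereas fission-chain correlations (or the pulsed-source ripples discussed above) make it deviate.

```python
import numpy as np

def feynman_y(event_times, gate_width):
    """Feynman Y(T): variance-to-mean of counts in consecutive gates, minus one."""
    edges = np.arange(0.0, event_times[-1], gate_width)  # complete gates only
    counts, _ = np.histogram(event_times, bins=edges)
    m = counts.mean()
    return counts.var() / m - 1.0

# Hypothetical uncorrelated source: unit-rate Poisson arrivals, so the gate
# counts are Poisson-distributed and Y(T) is expected to be close to 0.
rng = np.random.default_rng(0)
events = np.cumsum(rng.exponential(1.0, size=200_000))
y = feynman_y(events, gate_width=10.0)
```

For a correlated (multiplying) source the same estimator applied to real detector data would yield Y(T) > 0, with the pulsed-source ripples superimposed on the smooth curve.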
Primality deterministic and primality probabilistic tests
Directory of Open Access Journals (Sweden)
Alfredo Rizzi
2007-10-01
Full Text Available In this paper the author discusses the importance of prime numbers in mathematics and in cryptography, recalling the seminal research of Euler, Fermat, Legendre, Riemann and other scholars. Many expressions are known that generate prime numbers; among them, the Mersenne primes have interesting properties. There are also many conjectures that still have to be proved or rejected. Deterministic primality tests are algorithms that establish whether a number is prime or not. They are not applicable in many practical situations, for instance in public-key cryptography, because the computing time would be too long. Probabilistic primality tests allow one to test the null hypothesis that the number is prime. The paper comments on the most important statistical tests.
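The probabilistic tests discussed here can be illustrated with the Miller-Rabin algorithm, the standard test of the null hypothesis "n is prime". The round count and the small-prime pre-filter below are our own choices for the sketch: the test never rejects a prime, while each round rejects a composite with probability at least 3/4.

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: never rejects a prime; each round rejects a composite
    with probability at least 3/4 (null hypothesis: n is prime)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):          # quick trial division
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                        # write n - 1 = d * 2**s, d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                     # witness found: n is certainly composite
    return True                              # error probability <= 4**-rounds

# The Mersenne number 2**31 - 1 is prime; 2**31 + 1 is not.
print(is_probable_prime(2**31 - 1), is_probable_prime(2**31 + 1))
```

Note the asymmetry of the conclusion: a "composite" verdict is a proof, while a "prime" verdict only bounds the error probability, which is exactly why these tests fit the hypothesis-testing framing used in the paper.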
Deterministic cavity quantum electrodynamics with trapped ions
International Nuclear Information System (INIS)
We have employed radio-frequency trapping to localize a single 40Ca+ ion in a high-finesse optical cavity. By means of laser Doppler cooling, the position spread of the ion's wavefunction along the cavity axis was reduced to 42 nm, a fraction of the resonance wavelength of ionized calcium (λ = 397 nm). By controlling the position of the ion in the optical field, continuous and completely deterministic coupling of ion and field was realized. The precise three-dimensional location of the ion in the cavity was measured by observing the fluorescent light emitted upon excitation in the cavity field. The single-ion system is ideally suited to implement cavity quantum electrodynamics under cw conditions. To this end we operate the cavity on the D3/2-P1/2 transition of 40Ca+ (λ = 866 nm). Applications include the controlled generation of single-photon pulses with high efficiency and two-ion quantum gates
Deterministic effects of interventional radiology procedures
International Nuclear Information System (INIS)
The purpose of this paper is to describe deterministic radiation injuries reported to the Food and Drug Administration (FDA) that resulted from therapeutic, interventional procedures performed under fluoroscopic guidance, and to investigate the procedure- or equipment-related factors that may have contributed to the injury. Reports submitted to the FDA under both mandatory and voluntary reporting requirements which described radiation-induced skin injuries from fluoroscopy were investigated. Serious skin injuries, including moist desquamation and tissue necrosis, have occurred since 1992. These injuries have resulted from a variety of interventional procedures which have required extended periods of fluoroscopy compared to typical diagnostic procedures. Facilities conducting therapeutic interventional procedures need to be aware of the potential for patient radiation injury and take appropriate steps to limit the potential for injury. (author)
Deterministic polishing from theory to practice
Hooper, Abigail R.; Hoffmann, Nathan N.; Sarkas, Harry W.; Escolas, John; Hobbs, Zachary
2015-10-01
Improving predictability in optical fabrication can go a long way towards increasing profit margins and maintaining a competitive edge in an economic environment where pressure is mounting for optical manufacturers to cut costs. A major source of hidden cost is rework: the share of production that does not meet specification in the first pass through the polishing equipment. Rework adds substantially to a part's processing and labor costs, and creates bottlenecks in production lines and frustration for managers, operators and customers. The polishing process consists of several interacting variables, including glass type, polishing pads, machine type, RPM, downforce, slurry type, Baumé level and even the operators themselves. Adjusting the process to bring every variable under control while operating in a robust space can not only provide a deterministic polishing process which improves profitability, but also produce a higher quality optic.
Targeted activation in deterministic and stochastic systems
Eisenhower, Bryan; Mezić, Igor
2010-02-01
Metastable escape is ubiquitous in many physical systems and is becoming a concern in engineering design as these designs (e.g., swarms of vehicles, coupled building energetics, nanoengineering, etc.) become more inspired by dynamics of biological, molecular and other natural systems. In light of this, we study a chain of coupled bistable oscillators which has two global conformations and we investigate how specialized or targeted disturbance is funneled in an inverse energy cascade and ultimately influences the transition process between the conformations. We derive a multiphase averaged approximation to these dynamics which illustrates the influence of actions in modal coordinates on the coarse behavior of this process. An activation condition that predicts how the disturbance influences the rate of transition is then derived. The prediction tools are derived for deterministic dynamics and we also present analogous behavior in the stochastic setting and show a divergence from Kramers activation behavior under targeted activation conditions.
Mechanics From Newton's Laws to Deterministic Chaos
Scheck, Florian
2010-01-01
This book covers all topics in mechanics from elementary Newtonian mechanics, the principles of canonical mechanics and rigid body mechanics to relativistic mechanics and nonlinear dynamics. It was among the first textbooks to include dynamical systems and deterministic chaos in due detail. As compared to the previous editions the present fifth edition is updated and revised with more explanations, additional examples and sections on Noether's theorem. Symmetries and invariance principles, the basic geometric aspects of mechanics as well as elements of continuum mechanics also play an important role. The book will enable the reader to develop general principles from which equations of motion follow, to understand the importance of canonical mechanics and of symmetries as a basis for quantum mechanics, and to get practice in using general theoretical concepts and tools that are essential for all branches of physics. The book contains more than 120 problems with complete solutions, as well as some practical exa...
Deterministic polarization chaos from a laser diode
Virte, Martin; Thienpont, Hugo; Sciamanna, Marc
2014-01-01
Fifty years after the invention of the laser diode and forty years after the report of the butterfly effect, i.e. the unpredictability of deterministic chaos, it is commonly stated that a laser diode behaves like a damped nonlinear oscillator, hence that no chaos can be generated without additional forcing or parameter modulation. Here we report the first counter-example of a free-running laser diode generating chaos. The underlying physics is a nonlinear coupling between two elliptically polarized modes in a vertical-cavity surface-emitting laser. We identify chaos in experimental time-series and show theoretically the bifurcations leading to single- and double-scroll attractors with characteristics similar to Lorenz chaos. The reported polarization chaos resembles at first sight a noise-driven mode hopping but shows opposite statistical properties. Our findings open up new research areas that combine the high-speed performance of microcavity lasers with controllable and integrated sources of optical chaos.
Deterministic seismic hazard macrozonation of India
Indian Academy of Sciences (India)
Sreevalsa Kolathayar; T G Sitharam; K S Vipin
2012-10-01
Earthquakes are known to have occurred in the Indian subcontinent from ancient times. This paper presents the results of a seismic hazard analysis of India (6°–38°N and 68°–98°E) based on the deterministic approach using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well recognized attenuation relations considering the varied tectonic provinces in the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed in the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments and shear zones which are associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grids of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each of these grid cells by considering all the seismic sources within a radius of 300 to 400 km. Rock-level peak horizontal acceleration (PHA) and spectral accelerations for periods of 0.1 and 1 s have been calculated for all the grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in the hazard definition has been tackled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. The hazard evaluation without the logic-tree approach has also been performed for comparison of the results. Contour maps showing the spatial variation of the hazard values are presented in the paper.
Deterministic-random separation in nonstationary regime
Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.
2016-02-01
In rotating machinery vibration analysis, the synchronous average is perhaps the most widely used technique for extracting periodic components. Periodic components are typically related to gear vibrations, misalignments, unbalances, blade rotations, reciprocating forces, etc. Their separation from other random components is essential in vibration-based diagnosis in order to discriminate useful information from masking noise. However, synchronous averaging theoretically requires the machine to operate under a stationary regime (i.e. the related vibration signals are cyclostationary) and is otherwise jeopardized by the presence of amplitude and phase modulations. A first objective of this paper is to investigate the nature of the nonstationarity induced by the response of a linear time-invariant system subjected to a speed-varying excitation. For this purpose, the concept of a cyclo-non-stationary signal is introduced, which extends the class of cyclostationary signals to speed-varying regimes. Next, a "generalized synchronous average" (GSA) is designed to extract the deterministic part of a cyclo-non-stationary vibration signal, i.e. the analog of the periodic part of a cyclostationary signal. Two estimators of the GSA are proposed. The first returns the synchronous average of the signal at predefined discrete operating speeds. A brief statistical study of it is performed, aiming to provide the user with confidence intervals that reflect the "quality" of the estimator according to the SNR and the estimated speed. The second estimator returns a smoothed version of the former by enforcing continuity over the speed axis. It helps to reconstruct the deterministic component by tracking a specific trajectory dictated by the speed profile (assumed to be known a priori). The proposed method is validated first on synthetic signals and then on actual industrial signals. The usefulness of the approach is demonstrated on envelope-based diagnosis of bearings in variable
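For the classical stationary-regime case that the paper generalizes, the synchronous average is simply the mean of the signal over complete revolutions. The sketch below assumes a constant, known shaft speed and illustrative signal parameters (a 4-events-per-revolution tone buried in unit-variance noise); it is not the paper's GSA estimator.

```python
import numpy as np

def synchronous_average(signal, samples_per_rev):
    """Average over complete revolutions: keeps the periodic (deterministic)
    part and attenuates the random part by ~1/sqrt(number of revolutions)."""
    n_rev = len(signal) // samples_per_rev
    cycles = signal[:n_rev * samples_per_rev].reshape(n_rev, samples_per_rev)
    return cycles.mean(axis=0)

rng = np.random.default_rng(1)
spr, revs = 64, 500                              # samples per revolution, revolutions
t = np.arange(spr * revs)
periodic = np.sin(2 * np.pi * 4 * t / spr)       # e.g. a gear-mesh-like component
x = periodic + rng.normal(0.0, 1.0, t.size)      # SNR well below 1
avg = synchronous_average(x, spr)
```

With 500 revolutions the noise is attenuated by a factor of about sqrt(500) ≈ 22, so `avg` closely reproduces one period of the deterministic component; under varying speed this resampling-by-revolution step is exactly what breaks down, motivating the GSA.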
Simple deterministically constructed cycle reservoirs with regular jumps.
Rodan, Ali; Tiňo, Peter
2012-07-01
A new class of state-space models, reservoir models, with a fixed state transition structure (the "reservoir") and an adaptable readout from the state space, has recently emerged as a way for time series processing and modeling. Echo state network (ESN) is one of the simplest, yet powerful, reservoir models. ESN models are generally constructed in a randomized manner. In our previous study (Rodan & Tiňo, 2011), we showed that a very simple, cyclic, deterministically generated reservoir can yield performance competitive with standard ESN. In this contribution, we extend our previous study in three aspects. First, we introduce a novel simple deterministic reservoir model, cycle reservoir with jumps (CRJ), with highly constrained weight values, that has superior performance to standard ESN on a variety of temporal tasks of different origin and characteristics. Second, we elaborate on the possible link between reservoir characterizations, such as eigenvalue distribution of the reservoir matrix or pseudo-Lyapunov exponent of the input-driven reservoir dynamics, and the model performance. It has been suggested that a uniform coverage of the unit disk by such eigenvalues can lead to superior model performance. We show that despite highly constrained eigenvalue distribution, CRJ consistently outperforms ESN (which has much more uniform eigenvalue coverage of the unit disk). Also, unlike in the case of ESN, pseudo-Lyapunov exponents of the selected optimal CRJ models are consistently negative. Third, we present a new framework for determining the short-term memory capacity of linear reservoir models to a high degree of precision. Using the framework, we study the effect of shortcut connections in the CRJ reservoir topology on its memory capacity. PMID:22428595
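The CRJ reservoir topology can be sketched as a weight matrix with just two distinct nonzero values. The reservoir size, the weight values, and the handling of jumps that wrap around the cycle below are our illustrative assumptions (we take the size divisible by the jump length), not the paper's exact specification.

```python
import numpy as np

def crj_reservoir(n, r_c, r_j, ell):
    """Cycle Reservoir with Jumps: a unidirectional cycle of weight r_c plus
    bidirectional jumps of weight r_j every ell units around the cycle.
    Assumes n is a multiple of ell so the jumps tile the cycle evenly."""
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = r_c          # cycle edge i -> i+1
    for i in range(0, n, ell):
        j = (i + ell) % n
        W[j, i] = r_j                    # jump i <-> i+ell (bidirectional)
        W[i, j] = r_j
    return W

W = crj_reservoir(n=100, r_c=0.7, r_j=0.4, ell=10)
spectral_radius = max(abs(np.linalg.eigvals(W)))
```

Unlike a randomized ESN reservoir, every entry here is fixed by three scalars, which is what makes the eigenvalue distribution so highly constrained; the spectral radius (and hence the echo-state property) is controlled directly through `r_c` and `r_j`.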
C. BORGES, Rodrigo
2012-01-01
The duality between signal and noise has been widely observed in the scientific context of the 20th century. Noise is retrospectively associated with nuisance and annoyance, and was even subjectively defined as a non-signal. This definition, however, takes noise away from its original meaning, turns it into signal, and keeps the duality alive through time. This work treats the subject as a matter of perception, more specifically, as a matter of two different listening experiences for deterministic and non ...
Atomic routing in a deterministic queuing model
Directory of Open Access Journals (Sweden)
T.L. Werth
2014-03-01
We also consider the makespan objective (arrival time of the last user) and show that optimal solutions and Nash equilibria in these games, where every user selfishly tries to minimize her travel time, can be found efficiently.
MIMO capacity for deterministic channel models: sublinear growth
DEFF Research Database (Denmark)
Bentosela, Francois; Cornean, Horia; Marchetti, Nicola
2013-01-01
This is the second paper by the authors in a series concerned with the development of a deterministic model for the transfer matrix of a MIMO system. In our previous paper, we started from the Maxwell equations and described the generic structure of such a deterministic transfer matrix. In the...
Chaos in discrete maps, deterministic scattering, and nondifferentiable functions
International Nuclear Information System (INIS)
Arguments in favor of the nondifferentiability with respect to initial data of some functions associated with deterministic discrete-time dynamical systems are presented. A correspondence between a discrete-time dynamical system and a deterministic scattering model is found and used to interpret nondifferentiability conditions. A connection with random walks is also found
Recognition of deterministic ETOL languages in logarithmic space
DEFF Research Database (Denmark)
Jones, Neil D.; Skyum, Sven
1977-01-01
It is shown that if G is a deterministic ETOL system, there is a nondeterministic log space algorithm to determine membership in L(G). Consequently, every deterministic ETOL language is recognizable in polynomial time. As a corollary, all context-free languages of finite index, and all Indian par...
FP/FIFO scheduling: coexistence of deterministic and probabilistic QoS guarantees
Directory of Open Access Journals (Sweden)
Pascale Minet
2007-01-01
Full Text Available In this paper, we focus on applications having quantitative QoS (Quality of Service) requirements on their end-to-end response time (or jitter). We propose a solution allowing the coexistence of two types of quantitative QoS guarantees, deterministic and probabilistic, while providing a high resource utilization. Our solution combines the advantages of the deterministic approach and the probabilistic one. The deterministic approach is based on a worst-case analysis. The probabilistic approach uses a mathematical model to obtain the probability that the response time exceeds a given value. We assume that flows are scheduled according to non-preemptive FP/FIFO. The packet with the highest fixed priority is scheduled first. If two packets share the same priority, the packet that arrived first is scheduled first. We make no particular assumption concerning the flow priorities or the nature of the QoS guarantee requested by each flow. An admission control scheme derived from these results is then proposed, allowing each flow to receive a quantitative QoS guarantee adapted to its QoS requirements. An example illustrates the merits of the coexistence of deterministic and probabilistic QoS guarantees.
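The scheduling discipline assumed above can be sketched with a minimal single-server, event-driven simulation. The packet tuples and the convention that a lower number means higher priority are our illustrative assumptions; the example also shows the non-preemptive effect, where a low-priority packet already in service delays later high-priority arrivals.

```python
import heapq

def fp_fifo(packets):
    """Non-preemptive FP/FIFO on a single server.
    packets: list of (arrival, priority, service_time); a lower priority value
    wins, ties are served in arrival (FIFO) order. Returns completion times
    in the input order of the packets."""
    order = sorted(range(len(packets)), key=lambda i: packets[i][0])
    finish = [0.0] * len(packets)
    ready, t, k = [], 0.0, 0
    while k < len(order) or ready:
        if not ready:                        # server idle: jump to next arrival
            t = max(t, packets[order[k]][0])
        while k < len(order) and packets[order[k]][0] <= t:
            i = order[k]
            heapq.heappush(ready, (packets[i][1], packets[i][0], i))
            k += 1
        _, _, i = heapq.heappop(ready)       # (priority, arrival, id) ordering
        t += packets[i][2]                   # runs to completion (non-preemptive)
        finish[i] = t
    return finish

# The low-priority packet (priority 1) arrives first and is not preempted by
# the two high-priority packets (priority 0), which are then served FIFO.
done = fp_fifo([(0.0, 1, 2.0), (0.5, 0, 1.0), (1.0, 0, 1.0)])
```

In a worst-case (deterministic) analysis this non-preemption term, one maximal lower-priority service time, is exactly the blocking factor that must be added to the response-time bound of each flow.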
Energy Technology Data Exchange (ETDEWEB)
Astakhov, Sergey V., E-mail: s.v.astakhov@gmail.com [Saratov State University, 410012 Saratov (Russian Federation); Anishchenko, Vadim S., E-mail: wadim@info.sgu.ru [Saratov State University, 410012 Saratov (Russian Federation)
2012-11-01
The relation between Lyapunov exponents, the Kolmogorov–Sinai entropy (KS-entropy) and the Afraimovich–Pesin dimension (AP-dimension) has been numerically analyzed in one- and two-dimensional chaotic maps. In our simulations we show that without noise the AP-dimension corresponds to the KS-entropy. In the presence of noise, the AP-dimension corresponds to the relative metric entropy. Since in a deterministic case the relative metric entropy corresponds to the KS-entropy the obtained results enable us to conclude that for considered chaotic maps of different dimension the AP-dimension corresponds to the relative metric entropy in both deterministic and stochastic cases.
A deterministic algorithm for fitting a step function to a weighted point-set
Fournier, Hervé
2013-02-01
Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric search technique. As a direct application, our result yields the first O(n log n)-time algorithm for computing a k-center of a set of n weighted points on the real line. © 2012 Elsevier B.V.
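The O(n log n) result relies on parametric search, but the structure of the problem can be illustrated with a simpler quadratic sketch: a linear-time greedy feasibility test combined with a binary search over the pairwise candidate distances. The names and the candidate-set construction below are our own, not the paper's algorithm.

```python
from itertools import combinations

def feasible(pts, eps, k):
    """Greedy scan: can some k-step function stay within weighted vertical
    distance eps of every point?  pts is sorted by x; each point is (x, y, w)."""
    steps, lo, hi = 1, float("-inf"), float("inf")
    for _, y, w in pts:
        lo, hi = max(lo, y - eps / w), min(hi, y + eps / w)
        if lo > hi:                     # interval empty: start a new step here
            steps += 1
            lo, hi = y - eps / w, y + eps / w
    return steps <= k

def min_max_weighted_dist(pts, k):
    """Smallest achievable maximum weighted vertical distance (O(n^2) sketch)."""
    pts = sorted(pts)
    # Within one step the optimal level is pinned by a critical pair of points,
    # so the answer is one of these pairwise values (or 0 when k >= n).
    cand = sorted({wi * wj * abs(yi - yj) / (wi + wj)
                   for (_, yi, wi), (_, yj, wj) in combinations(pts, 2)} | {0.0})
    lo, hi = 0, len(cand) - 1
    while lo < hi:                      # binary search: feasibility is monotone
        mid = (lo + hi) // 2
        if feasible(pts, cand[mid], k):
            hi = mid
        else:
            lo = mid + 1
    return cand[lo]

pts = [(0, 0.0, 1.0), (1, 4.0, 1.0), (2, 10.0, 1.0), (3, 10.5, 1.0)]
best = min_max_weighted_dist(pts, k=2)   # optimal steps cover {0,4} and {10,10.5}
```

Parametric search replaces the explicit O(n^2) candidate enumeration with an implicit search driven by the same feasibility test, which is where the O(n log n) bound comes from.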
International Nuclear Information System (INIS)
In this work, 22 cases of deterministic radiation effects are presented in patients who underwent catheterization procedures guided by X-ray fluoroscopy. Evaluation of the results suggests that most of the patients received potential entrance skin doses over 2 Gy, and some of them may have received doses over 12 Gy. At these doses, radiation-induced erythema, ulceration and necrosis are all possible complications if the same entrance skin surface is exposed for the duration of the procedure.
Melkikh, Alexei V.
2004-01-01
The possibility of a complicated internal structure of an elementary particle was analyzed. In this case a particle may represent a quantum computer with many degrees of freedom. It was shown that the probability of new species formation by means of random mutations is negligibly small. A deterministic model of evolution is considered instead. According to this model, DNA nucleotides can change their state under the control of the internal degrees of freedom of elementary particles.
M. Barbolini; Keylock, C. J.
2002-01-01
The purpose of the present paper is to propose a new method for avalanche hazard mapping using a combination of statistical and deterministic modelling tools. The methodology is based on frequency-weighted impact pressure, and uses an avalanche dynamics model embedded within a statistical framework. The outlined procedure provides a useful way for avalanche experts to produce hazard maps for the typical case of avalanche sites where histor...
Convertito, V.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione OV, Napoli, Italia; Emolo, A.; Dipartimento di Scienze Fisiche Universita` degli Studi “Federico II” di Napoli; Zollo, A.; Dipartimento di Scienze Fisiche Universita` degli Studi “Federico II” di Napoli
2006-01-01
Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach by using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters in a characteristi...
Deterministically Driven Avalanche Models of Solar Flares
Strugarek, Antoine; Joseph, Richard; Pirot, Dorian
2014-01-01
We develop and discuss the properties of a new class of lattice-based avalanche models of solar flares. These models are readily amenable to a relatively unambiguous physical interpretation in terms of slow twisting of a coronal loop. They share similarities with other avalanche models, such as the classical stick--slip self-organized critical model of earthquakes, in that they are driven globally by a fully deterministic energy loading process. The model design leads to a systematic deficit of small scale avalanches. In some portions of model space, mid-size and large avalanching behavior is scale-free, being characterized by event size distributions that have the form of power-laws with index values, which, in some parameter regimes, compare favorably to those inferred from solar EUV and X-ray flare data. For models using conservative or near-conservative redistribution rules, a population of large, quasiperiodic avalanches can also appear. Although without direct counterparts in the observational global st...
Simple Deterministically Constructed Recurrent Neural Networks
Rodan, Ali; Tiňo, Peter
A large number of models for time series processing, forecasting or modeling follows a state-space formulation. Models in the specific class of state-space approaches, referred to as Reservoir Computing, fix their state-transition function. The state space with the associated state transition structure forms a reservoir, which is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be potentially exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model building stages, with both the researchers and practitioners having to rely on a series of trials and errors. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performances comparable to those of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proved theoretical limit.
A Deterministic Approach to Earthquake Prediction
Directory of Open Access Journals (Sweden)
Vittorio Sgrigna
2012-01-01
Full Text Available The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aimed at seeing the earthquake phenomenon in perspective within the framework of a unified theory able to explain the causes of its genesis, and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, up to now, what is lacking is the demonstration of a causal relationship (with explained physical processes) obtained by looking for a correlation between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a strong new theoretical scientific effort is necessary to try to understand the physics of the earthquake.
Deterministic Random Walks on Regular Trees
Cooper, Joshua; Friedrich, Tobias; Spencer, Joel; 10.1002/rsa.20314
2010-01-01
Jim Propp's rotor router model is a deterministic analogue of a random walk on a graph. Instead of distributing chips randomly, each vertex serves its neighbors in a fixed order. Cooper and Spencer (Comb. Probab. Comput. (2006)) show a remarkable similarity between both models. If an (almost) arbitrary population of chips is placed on the vertices of a grid $\Z^d$ and does a simultaneous walk in the Propp model, then at all times and on each vertex, the number of chips on this vertex deviates from the expected number the random walk would have gotten there by at most a constant. This constant is independent of the starting configuration and the order in which each vertex serves its neighbors. This result raises the question of whether all graphs have this property. With quite some effort, we are now able to answer this question negatively. For the graph being an infinite $k$-ary tree ($k \ge 3$), we show that for any deviation $D$ there is an initial configuration of chips such that after running the Propp model for a ...
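The Propp model's vertex rule (serve neighbors in a fixed cyclic order) is simple enough to sketch directly. The example below runs a few parallel steps on the integer line; the graph and chip counts are illustrative, not from the paper.

```python
from collections import defaultdict

def propp_step(chips, rotors, neighbors):
    """One parallel step of the rotor-router (Propp) model:
    each vertex routes its chips to neighbors in a fixed cyclic order."""
    new_chips = defaultdict(int)
    for v, count in chips.items():
        for _ in range(count):
            nbrs = neighbors(v)
            new_chips[nbrs[rotors[v] % len(nbrs)]] += 1
            rotors[v] += 1           # advance this vertex's rotor
    return new_chips

# Chips on the integer line; each vertex serves left, then right, alternately.
neighbors = lambda v: (v - 1, v + 1)
chips, rotors = defaultdict(int, {0: 4}), defaultdict(int)
for _ in range(3):
    chips = propp_step(chips, rotors, neighbors)
print(dict(chips))  # {-3: 1, -1: 1, 1: 2}
```

Compare with the simple random walk: after 3 steps the expected counts for 4 chips are 0.5, 1.5, 1.5, 0.5 at positions -3, -1, 1, 3, so every rotor-router count here deviates from the expectation by at most 1, in line with the constant-deviation result on grids.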
Traffic chaotic dynamics modeling and analysis of deterministic network
Wu, Weiqiang; Huang, Ning; Wu, Zhitao
2016-07-01
Network traffic is an important and direct factor acting on network reliability and performance. To understand the behaviors of network traffic, chaotic dynamics models were proposed and have helped greatly in analyzing nondeterministic networks. Previous research held that chaotic dynamics behavior was caused by random factors, and that deterministic networks would not exhibit chaotic dynamics behavior due to the lack of such random factors. In this paper, we first adopted chaos theory to analyze traffic data collected from a typical deterministic network testbed, an avionics full duplex switched Ethernet (AFDX) testbed, and found that chaotic dynamics behavior also exists in deterministic networks. Then, in order to explore the chaos-generating mechanism, we applied mean field theory to construct a traffic dynamics equation (TDE) for deterministic network traffic modeling without any random network factors. Through studying the derived TDE, we propose that chaotic dynamics is one of the natural properties of network traffic, and that it can also be viewed as the effect of the TDE control parameters. A network simulation was performed, and the results verified that network congestion results in chaotic dynamics for a deterministic network, consistent with the expectation of the TDE. Our research will be helpful for analyzing the complicated dynamics behavior of traffic in deterministic networks and will contribute to network reliability design and analysis.
Minimal Poems Written in 1979
Directory of Open Access Journals (Sweden)
Sandra Sirangelo Maggio
2008-04-01
Full Text Available The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I saw a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove efficient means of artistic creativity, in this new literary trend known as Minimalism.
Secret Writing on Dirty Paper: A Deterministic View
El-Halabi, Mustafa; Georghiades, Costas
2011-01-01
Recently there has been a lot of success in using the deterministic approach to provide approximate characterizations of capacity for Gaussian networks. In this paper, we take a deterministic view and revisit the problem of the wiretap channel with side information. A precise characterization of the secrecy capacity is obtained for a linear deterministic model, which naturally suggests a coding scheme that we show achieves the secrecy capacity of the Gaussian model (dubbed "secret writing on dirty paper") to within $(1/2)\log 3$ bits.
Equivalence relations between deterministic and quantum mechanical systems
International Nuclear Information System (INIS)
Several quantum mechanical models are shown to be equivalent to certain deterministic systems because a basis can be found in terms of which the wave function does not spread. This suggests that the apparently indeterministic behavior typical of a quantum mechanical world can be the result of locally deterministic laws of physics. We show how certain deterministic systems allow the construction of a Hilbert space and a Hamiltonian so that at long distance scales they may appear to behave as quantum field theories, including interactions but as yet no mass term. These observations are suggested to be useful for building theories at the Planck scale.
Deterministic versus stochastic trends: Detection and challenges
Fatichi, S.; Barbosa, S. M.; Caporali, E.; Silva, M. E.
2009-09-01
The detection of a trend in a time series and the evaluation of its magnitude and statistical significance are important tasks in geophysical research. This importance is amplified in climate change contexts, since trends are often used to characterize long-term climate variability and to quantify the magnitude and statistical significance of changes in climate time series, at both global and local scales. Recent studies have demonstrated that the stochastic behavior of a time series can change the statistical significance of a trend, especially if the time series exhibits long-range dependence. The present study examines the trends in time series of daily average temperature recorded at 26 stations in the Tuscany region (Italy). In this study a new framework for trend detection is proposed. First, two parametric statistical tests, the Phillips-Perron test and the Kwiatkowski-Phillips-Schmidt-Shin test, are applied in order to test for trend-stationary and difference-stationary behavior in the temperature time series. Then long-range dependence is assessed using different approaches, including wavelet analysis, heuristic methods, and the fitting of fractionally integrated autoregressive moving average models. The trend detection results are further compared with the results obtained using nonparametric trend detection methods: the Mann-Kendall, Cox-Stuart, and Spearman's ρ tests. This study confirms an increase in uncertainty when pronounced stochastic behaviors are present in the data. Nevertheless, for approximately one third of the analyzed records, the stochastic behavior itself cannot explain the long-term features of the time series, and a deterministic positive trend is the most likely explanation.
Understanding Vertical Jump Potentiation: A Deterministic Model.
Suchomel, Timothy J; Lamont, Hugh S; Moir, Gavin L
2016-06-01
This review article discusses previous postactivation potentiation (PAP) literature and provides a deterministic model for vertical jump (i.e., squat jump, countermovement jump, and drop/depth jump) potentiation. A number of factors must be considered when designing an effective strength-power potentiation complex (SPPC) focused on vertical jump potentiation. Sport scientists and practitioners must consider the characteristics of the subject being tested and the design of the SPPC itself. Subject characteristics that must be considered include the individual's relative strength, sex, muscle characteristics, neuromuscular characteristics, current fatigue state, and training background. Aspects of the SPPC that must be considered include the potentiating exercise, the level and rate of muscle activation, the volume load completed, the ballistic or non-ballistic nature of the potentiating exercise, and the rest interval(s) used following the potentiating exercise. Sport scientists and practitioners should seek SPPCs that are practical regarding the equipment needed and the rest interval required for a potentiated performance. If practitioners would like to incorporate PAP as a training tool, they must take athletes' training time restrictions into account, as a number of previous SPPCs have been shown to require long rest periods before potentiation can be realized. Thus, practitioners should seek SPPCs that can be effectively implemented in training and that do not require excessive rest intervals that take away from valuable training time. Practitioners may decrease the time needed to realize potentiation by improving their subjects' relative strength. PMID:26712510
Longevity, Growth and Intergenerational Equity - The Deterministic Case
DEFF Research Database (Denmark)
Andersen, Torben M.; Gestsson, Marias Halldór
Challenges raised by ageing (increasing longevity) have prompted policy debates featuring policy proposals justified by reference to some notion of intergenerational equity. However, very different policies ranging from pre-savings to indexation of retirement ages have been justified in this way. We develop an overlapping generations model in continuous time which encompasses different generations with different mortality rates and thus longevity. Allowing for both trend increases in longevity and productivity, we address the issue of intergenerational equity under a utilitarian criterion when future generations are better off in terms of both material and non-material well-being. Increases in productivity and longevity are shown to have very different implications for intergenerational distribution.
Longevity, Growth and Intergenerational Equity: The Deterministic Case
DEFF Research Database (Denmark)
Andersen, Torben M.; Gestsson, Marias Halldór
2016-01-01
Challenges raised by aging (increasing longevity) have prompted policy debates featuring policy proposals justified by reference to some notion of intergenerational equity. However, very different policies ranging from presavings to indexation of retirement ages have been justified in this way. We...
Thumati, Prafulla; Reddy, K. Raghavendra
2013-01-01
Tooth wear and discoloration is a normal process in the lifetime of an individual. Severe wear and discoloration can result in cosmetic concern and loss of vertical dimension. These problems can best be treated by providing a fixed prosthesis. This case demonstrates management using the concept of Minimally Invasive Cosmetic Dentistry (MICD) with Ceramopolymer as the restorative material. Computer-Guided Occlusal Analysis (CGOA) was used to establish uniform occlusal force distribution. Case ...
A Method to Separate Stochastic and Deterministic Information from Electrocardiograms
Gutíerrez, R M
2004-01-01
In this work we present a new idea for a method to separate the stochastic and deterministic information contained in an electrocardiogram (ECG), which may provide new sources of information for diagnostic purposes. We assume that the ECG carries information corresponding to many different processes related to cardiac activity, as well as contamination from different sources related to the measurement procedure and the nature of the observed system itself. The method starts with the application of an improved archetypal analysis to separate the aforementioned stochastic and deterministic information. From the stochastic point of view we analyze Renyi entropies, and with respect to the deterministic perspective we calculate the autocorrelation function and the corresponding correlation time. We show that healthy and pathologic information may be stochastic and/or deterministic, can be identified by different measures, and can be located in different parts of the ECG.
Pseudo-random number generator based on asymptotic deterministic randomness
International Nuclear Information System (INIS)
A novel approach to generating a pseudorandom bit sequence from an asymptotic deterministic randomness system is proposed in this Letter. We study the multi-value correspondence characteristic of the asymptotic deterministic randomness constructed from a piecewise linear map and a noninvertible nonlinear transform, and then give the discretized systems in the finite digitized state space. The statistical characteristics of the asymptotic deterministic randomness, such as the stationary probability density function and random-like behavior, are investigated numerically. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos-based PRBGs and increase their resistance against entropy attacks and symbolic dynamics attacks.
Non deterministic finite automata for power systems fault diagnostics
Directory of Open Access Journals (Sweden)
LINDEN, R.
2009-06-01
Full Text Available This paper introduces an application based on non-deterministic finite automata for power system fault diagnosis. Automata for the simpler faults are presented, and the proposed system is compared with an established expert system.
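The core mechanism of a non-deterministic automaton, tracking every state reachable at once, can be sketched in a few lines. The fault/reset alphabet below is a hypothetical toy, not the paper's diagnostic model.

```python
def nfa_accepts(delta, start, finals, word):
    """Subset simulation of a non-deterministic finite automaton:
    track the set of states reachable after each input symbol."""
    states = {start}
    for sym in word:
        states = {q for s in states for q in delta.get((s, sym), set())}
    return bool(states & finals)

# Toy diagnostic automaton (hypothetical): event 'f' may or may not
# move the system into a fault state; 'r' resets any branch to normal.
delta = {
    ("ok", "f"): {"ok", "fault"},     # the non-deterministic transition
    ("ok", "r"): {"ok"},
    ("fault", "f"): {"fault"},
    ("fault", "r"): {"ok"},
}
print(nfa_accepts(delta, "ok", {"fault"}, "ff"))   # True: a fault branch exists
print(nfa_accepts(delta, "ok", {"fault"}, "ffr"))  # False: 'r' clears every branch
```

Non-determinism lets one event sequence cover several fault hypotheses simultaneously, which is the appeal of NFAs for diagnosis.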
Rock fracture characterization with GPR by means of deterministic deconvolution
Arosio, Diego
2016-03-01
In this work I address GPR characterization of rock fracture parameters, namely thickness and filling material. Rock fractures can generally be considered thin beds, i.e., two interfaces whose separation is smaller than the resolution limit dictated by the Rayleigh criterion. Analysis of the amplitude of the thin-bed response in the time domain might permit estimation of fracture features for arbitrarily thin beds, but it is difficult to achieve and can be applied only in favorable cases (i.e., when all factors affecting amplitude are identified and corrected for). Here I explore the possibility of estimating fracture thickness and filling in the frequency domain by means of GPR. After introducing some theoretical aspects of the thin-bed response, I simulate GPR data on sandstone blocks with air- and water-filled fractures of known thickness. On the basis of some simplifying assumptions, I propose a 4-step procedure in which deterministic deconvolution is used to retrieve the magnitude and phase of the thin-bed response in the selected frequency band. After the deconvolved curves are obtained, fracture thickness and filling are estimated by means of a fitting process, which presents higher sensitivity to fracture thickness. Results are encouraging and suggest that GPR could be a fast and effective tool to determine fracture parameters in a non-destructive manner. Further GPR experiments in the lab are needed to test the proposed processing sequence and to validate the results obtained so far.
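Deterministic deconvolution with a known source wavelet is typically a stabilized spectral division in the frequency domain. The sketch below illustrates the idea on a synthetic trace with a Ricker-like wavelet; the water-level value and test geometry are assumptions, not the paper's processing parameters.

```python
import numpy as np

def deterministic_deconvolution(trace, wavelet, water_level=0.01):
    """Frequency-domain spectral division with water-level regularization:
    divide the trace spectrum by the known wavelet spectrum, clamping
    frequencies where the wavelet has little energy."""
    n = len(trace)
    T = np.fft.rfft(trace, n)
    W = np.fft.rfft(wavelet, n)
    power = np.abs(W) ** 2
    floor = water_level * power.max()          # stabilize small |W|
    R = T * np.conj(W) / np.maximum(power, floor)
    return np.fft.irfft(R, n)

# Synthetic check: a Ricker-like wavelet convolved with two spikes
# should deconvolve back to (approximately) the spikes.
t = np.arange(-32, 33) / 16.0
wavelet = (1 - 2 * (np.pi * t) ** 2) * np.exp(-(np.pi * t) ** 2)
reflectivity = np.zeros(256)
reflectivity[[60, 70]] = [1.0, -0.6]
trace = np.convolve(reflectivity, wavelet, mode="same")
w = np.roll(np.pad(wavelet, (0, 256 - wavelet.size)), -32)  # zero-phase wavelet
recovered = deterministic_deconvolution(trace, w)
print(int(np.argmax(recovered)), int(np.argmin(recovered)))  # spikes at 60, 70
```

The water level trades resolution for stability: too small and noise at weak wavelet frequencies blows up, too large and the thin-bed response is over-smoothed.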
Deterministic separation of suspended particles in a reconfigurable obstacle array
Du, Siqi; Drazer, German
2015-11-01
We use a macromodel of a flow-driven deterministic lateral displacement microfluidic system to investigate conditions leading to size separation of suspended particles. This model system can be easily reconfigured to establish an arbitrary forcing angle, i.e., the orientation between the average flow field and the square array of cylindrical posts that constitutes the stationary phase. We also consider posts of different diameters, while maintaining a constant gap between them, to investigate the effect of obstacle size on particle separation. In all cases, we observe the presence of a locked mode at small forcing angles, in which particles move along a principal direction in the lattice. A locked-to-zigzag mode transition takes place when the orientation of the driving force reaches a critical angle. We show that the transition occurs at increasing angles for larger particles, thus enabling particle separation. Moreover, we observe a linear relation between the critical angle and the size of the particles, which allows us to estimate the size resolution of these systems. The presence of such a linear relation would guide the selection of the forcing angle in microfluidic systems in which the direction of the flow field with respect to the array of obstacles is fixed. Finally, we present a simple model based on the presence of irreversible interactions between the suspended particles and the obstacles, which describes the observed dependence of the migration angle on the orientation of the average flow.
Nucleation theory beyond the deterministic limit. II. The growth stage.
Dubrovskii, V G; Nazarenko, M V
2010-03-21
This work addresses the theory of nucleation and condensation based on the continuous Fokker-Planck-type kinetic equation for the distribution of supercritical embryos over sizes beyond the deterministic limit. The second part of the work treats the growth stage and the beginning of Ostwald ripening. We first study in detail the fluctuation-induced spreading of the size spectrum at the growth stage. It is shown that the spectrum should generally be obtained by the convolution of the initial distribution with a Gaussian-like Green function with spreading dispersion. The increase in dispersion depends, however, on the growth index m, as well as on the space dimension and the mode of material influx. In particular, we find that the spreading effect on two-dimensional islands growing at a constant material influx is huge at m=1 but almost absent at m=2. Analytical and numerical solutions for the mean size, the dispersion, and the size spectrum are presented for different cases. Finally, the general condition for the stage of Ostwald ripening in an open system with material influx is discussed. PMID:20331306
An alternative approach to measure similarity between two deterministic transient signals
Shin, Kihong
2016-06-01
In many practical engineering applications, it is often required to measure the similarity of two signals to gain insight into the condition of a system. For example, an application that monitors machinery can regularly measure the vibration signal and compare it to a healthy reference signal in order to monitor whether any fault symptom is developing. Also, in modal analysis, a frequency response function (FRF) from a finite element model (FEM) is often compared with an FRF from experimental modal analysis. Many different similarity measures are applicable in such cases; correlation-based measures are perhaps the most frequently used, such as the correlation coefficient in the time domain and the frequency response assurance criterion (FRAC) in the frequency domain. Although correlation-based similarity measures may be particularly useful for random signals because they are based on probability and statistics, we frequently deal with signals that are largely deterministic and transient. Thus, it may be useful to develop another similarity measure that properly takes the characteristics of deterministic transient signals into account. In this paper, an alternative approach to measuring the similarity between two deterministic transient signals is proposed. This newly proposed similarity measure is based on a fictitious-system frequency response function, and it consists of a magnitude similarity and a shape similarity. Finally, a few examples are presented to demonstrate the use of the proposed similarity measure.
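The FRAC mentioned above, the classical frequency-domain baseline that the paper's new measure is contrasted with, can be computed in a few lines. The single-degree-of-freedom FRF used here is only an illustrative test signal, not data from the paper.

```python
import numpy as np

def frac(H1, H2):
    """Frequency response assurance criterion between two FRFs:
    1 means identical shape, values near 0 mean no correlation."""
    num = abs(np.vdot(H1, H2)) ** 2          # vdot conjugates the first arg
    return float(num / (np.vdot(H1, H1).real * np.vdot(H2, H2).real))

f = np.linspace(1, 100, 400)
# Illustrative single-degree-of-freedom FRF (natural frequency 100, 5% damping).
H_ref = 1.0 / (100**2 - f**2 + 1j * 2 * 0.05 * 100 * f)
H_scaled = 3.0 * H_ref                        # same shape, different magnitude
print(frac(H_ref, H_scaled))  # ~1.0: FRAC is scale-invariant
```

Note that FRAC ignores overall magnitude entirely, which is precisely why a separate magnitude similarity, as in the proposed measure, can be useful for deterministic transient signals.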
A Review of Deterministic Optimization Methods in Engineering and Management
Ming-Hua Lin; Jung-Fa Tsai; Chian-Son Yu
2012-01-01
With the increasing reliance on modeling optimization problems in practical applications, a number of theoretical and algorithmic contributions to optimization have been proposed. The approaches developed for treating optimization problems can be classified as deterministic or heuristic. This paper aims to introduce recent advances in deterministic methods for solving signomial programming problems and mixed-integer nonlinear programming problems. A number of important applications in engi...
Deterministic Consistency: A Programming Model for Shared Memory Parallelism
Aviram, Amittai; Ford, Bryan
2009-01-01
The difficulty of developing reliable parallel software is generating interest in deterministic environments, where a given program and input can yield only one possible result. Languages or type systems can enforce determinism in new code, and runtime systems can impose synthetic schedules on legacy parallel code. To parallelize existing serial code, however, we would like a programming model that is naturally deterministic without language restrictions or artificial scheduling. We propose "...
Deterministic operations research models and methods in linear optimization
Rader, David J
2013-01-01
Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations resear
Deterministic Feature Selection for $k$-means Clustering
Boutsidis, Christos; Magdon-Ismail, Malik
2011-01-01
We study feature selection for $k$-means clustering. Although the literature contains many methods with good empirical performance, algorithms with provable theoretical behavior have only recently been developed. Unfortunately, these algorithms are randomized and fail with, say, a constant probability. We address this issue by presenting a deterministic feature selection algorithm for $k$-means with theoretical guarantees. At the heart of our algorithm lies a deterministic method for decomposit...
HyDRa: control of parameters for deterministic polishing.
Ruiz, E; Salas, L; Sohn, E; Luna, E; Herrera, J; Quiros, F
2013-08-26
Deterministic hydrodynamic polishing with HyDRa requires precise control of polishing parameters such as propelling air pressure, slurry density, slurry flux, and tool height. We describe the HyDRa polishing system and show how precise, deterministic polishing can be achieved through the control of these parameters. The polishing results for an 84 cm hyperbolic mirror are presented to illustrate how the stability of these parameters is important for obtaining high-quality surfaces. PMID:24105579
Universal quantification for deterministic chaos in dynamical systems
Selvam, A. Mary
2000-01-01
A cell dynamical system model for deterministic chaos enables precise quantification of round-off error growth, i.e., deterministic chaos in digital computer realizations of mathematical models of continuum dynamical systems. The model predicts the following: (a) The phase space trajectory (strange attractor), when resolved as a function of the computer accuracy, has intrinsic logarithmic spiral curvature with the quasiperiodic Penrose tiling pattern for the internal structure. (b) The unive...
Secure communication scheme based on asymptotic model of deterministic randomness
International Nuclear Information System (INIS)
In this Letter, we introduce a new cryptosystem by integrating the asymptotic model of deterministic randomness with the one-way coupled map lattice (OCML) system. The key space, the encryption efficiency, and the security under various attacks are investigated. With the properties of deterministic randomness and spatiotemporal dynamics, the new scheme can improve the security to the order of computational precision, even when the lattice size is three only. Meanwhile, all the lattices can be fully utilized to increase the encryption speed
Seismic hazard in Romania associated to Vrancea subcrustal source Deterministic evaluation
Radulian, M; Moldoveanu, C L; Panza, G F; Vaccari, F
2002-01-01
Our study presents an application of the deterministic approach to the particular case of Vrancea intermediate-depth earthquakes, to show how efficient numerical synthesis is in predicting realistic ground motion and how some striking peculiarities of the observed intensity maps are properly reproduced. The deterministic approach proposed by Costa et al. (1993) is particularly useful for computing seismic hazard in Romania, where the most destructive effects are caused by the intermediate-depth earthquakes generated in the Vrancea region. Vrancea is unique among the seismic sources of the world because of its striking peculiarities: the extreme concentration of seismicity, with a remarkable invariance of the foci distribution; the unusually high rate of strong shocks (an average frequency of 3 events with magnitude greater than 7 per century) inside an exceptionally narrow focal volume; the predominance of a reverse faulting mechanism, with the T-axis almost vertical and the P-axis almost horizontal; and the mo...
Energy Technology Data Exchange (ETDEWEB)
Seubert, A.; Langenbuch, S.; Velkov, K.; Zwermann, W. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Garching (Germany). Forschungsinstitute
2007-07-01
An overview is given of the recent progress at GRS concerning deterministic transport and Monte Carlo methods with thermal-hydraulic feedback. The development of the time-dependent 3D discrete ordinates transport code TORT-TD is described which has also been coupled with ATHLET. TORT-TD/ATHLET allows 3D pin-by-pin coupled analyses of transients using few energy groups and anisotropic scattering. As a step towards Monte Carlo steady-state calculations with nuclear point data and thermal-hydraulic feedback, MCNP has been prepared to incorporate thermal-hydraulic parameters. Results obtained for selected test cases demonstrate the applicability of deterministic and Monte Carlo neutron transport models coupled with thermo-fluiddynamics. (orig.)
Esophagectomy - minimally invasive
Minimally invasive esophagectomy; Robotic esophagectomy; Removal of the esophagus - minimally invasive; Achalasia - esophagectomy; Barrett esophagus - esophagectomy; Esophageal cancer - esophagectomy - laparoscopic; Cancer of the ...
Deterministic chaos in the pitting phenomena of passivable alloys
International Nuclear Information System (INIS)
It was shown that electrochemical noise recorded under stable pitting conditions exhibits deterministic (even chaotic) features. The occurrence of deterministic behaviors depends on the material/solution severity. Thus, electrolyte composition ([Cl-]/[NO3-] ratio, pH), passive film thickness, or alloy composition can change the deterministic features. A single pit is sufficient to observe deterministic behaviors. The electrochemical noise signals are non-stationary, which is a hint of a change in pit behavior (propagation speed or mean) over time. Modifications of electrolyte composition reveal transitions between random and deterministic behaviors. Spontaneous transitions between deterministic behaviors with different features (bifurcations) are also evidenced. Such bifurcations illuminate various routes to chaos. The routes to chaos and the features of the chaotic signals suggest models (both continuous and discontinuous models are proposed) of the electrochemical mechanisms inside a pit that describe the experimental behaviors and the effects of the various parameters quite well. The analysis of the chaotic behaviors of a pit leads to a better understanding of propagation mechanisms and provides tools for pit monitoring. (author)
On the Minimization of XML-Schemas and Tree Automata for Unranked Trees
Martens, Wim; Niehren, Joachim
2007-01-01
Automata for unranked trees form a foundation for XML schemas, querying, and pattern languages. We study the problem of efficiently minimizing such automata. First, we study unranked tree automata that are standard in database theory, assuming bottom-up determinism and that horizontal recursion is represented by deterministic finite automata. We show that minimal automata in that class are not unique and that minimization is NP-complete. Second, we study more recent automata classes that do al...
Minimality of Symplectic Fiber Sums along Spheres
Dorfmeister, Josef G
2010-01-01
In this note we complete the discussion of minimality of symplectic fiber sums. We find that for fiber sums along spheres the minimality of the sum is determined by the cases discussed by M. Usher and one additional case: if the sum is the result of the rational blow-down of a symplectic (-4)-sphere in X, then it is non-minimal if X contains a certain configuration of exceptional spheres in relation to this (-4)-sphere.
Deterministic and heuristic models of forecasting spare parts demand
Directory of Open Access Journals (Sweden)
Ivan S. Milojević
2012-04-01
Full Text Available Knowing the demand for spare parts is the basis of successful spare parts inventory management. Inventory management has two aspects. The first is operational management: acting according to certain models and making decisions in specific situations which could not have been foreseen or are not encompassed by the models. The second aspect is optimization of the model parameters by means of inventory management. Supply item demand (asset demand) is the expression of customers' needs in units in the desired time, and it is one of the most important parameters in inventory management. The basic task of the supply system is demand fulfillment. In practice, demand is expressed through requisitions or requests. Given the conditions in which inventory management is considered, demand can be: deterministic or stochastic; stationary or nonstationary; continuous or discrete; satisfied or unsatisfied. The application of a maintenance concept is determined by the technological level of development of the assets being maintained. For example, it is hard to imagine that the concept of self-maintenance can be applied to assets developed and put into use 50 or 60 years ago. Even less complex concepts cannot be applied to those vehicles that only have indicators of engine temperature - those that react only when the engine is overheated. This means that the maintenance concepts that can be applied are traditional preventive maintenance and corrective maintenance. In order to be applied to a real system, modeling and simulation methods require a completely regulated system, and that is not the case with this spare parts supply system. Therefore, this method, which also enables model development, cannot be applied. Deterministic forecasting models are almost exclusively related to the concept of preventive maintenance. Maintenance procedures are planned in advance, in accordance with exploitation and time resources. Since the timing
Deterministic effects of the ionizing radiation
International Nuclear Information System (INIS)
Full text: The deterministic effect is the somatic damage that appears when the radiation dose exceeds a minimum value, the 'threshold dose'. Above this threshold dose, the frequency and seriousness of the damage increase with the dose given. Sixteen percent of patients younger than 15 years of age with a diagnosis of cancer have the possibility of a cure. The consequences of cancer treatment in children are very serious, as they are physically and emotionally developing. The seriousness of the delayed effects of radiation therapy depends on three factors: a) the treatment (dose of radiation, schedule of treatment, time of treatment, beam energy, treatment volume, distribution of the dose, simultaneous chemotherapy, etc.); b) the patient (state of development, patient predisposition, inherent sensitivity of tissue, the presence of other alterations, etc.); c) the tumor (degree of extension or infiltration, mechanical effects, etc.). The effect of radiation on normal tissue is related to cellular activity and the maturity of the irradiated tissue. Children have a mosaic of tissues in different stages of maturity at different moments in time. On the other hand, each tissue has a different pattern of development, so that sequelae are different in different irradiated tissues of the same patient. We should keep in mind that all tissues are affected to some degree. Bone tissue evidences damage through growth delay and degree of calcification. Damage is small at 10 Gy; between 10 and 20 Gy growth arrest is partial, whereas at doses larger than 20 Gy growth arrest is complete. The central nervous system is the most affected because radiation injuries produce demyelination, with or without focal or diffuse areas of necrosis in the white matter, causing character alterations, lower IQ and functional level, neurocognitive impairment, etc. The skin is also affected, showing different degrees of erythema such as ulceration and necrosis, different degrees of
Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel
2013-06-01
Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency. PMID:23193246
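The key primitive behind any such exact method is the nondominated filter applied to a pool of candidates, for example the union of the k best solutions to each single-objective problem. A minimal sketch in Python, assuming minimization over tuple-valued objectives (this illustrates only the filtering step, not the paper's ripple-spreading algorithm):

```python
def pareto_front(solutions):
    """Nondominated filter (minimization): s is dominated if some other
    t satisfies t <= s componentwise with t != s."""
    front = []
    for s in solutions:
        dominated = any(
            t != s and all(ti <= si for ti, si in zip(t, s))
            for t in solutions
        )
        if not dominated and s not in front:
            front.append(s)
    return front

# Toy bi-objective candidates, e.g. the pooled k-best solutions of the
# two single-objective problems (values invented for illustration)
candidates = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 3), (9, 1), (5, 5)]
print(pareto_front(candidates))  # → [(1, 9), (2, 7), (4, 4), (6, 3), (9, 1)]
```

The quadratic pairwise scan is fine for small pools; the theoretical conditions in the paper concern when such a pool is guaranteed to contain the whole front.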
Minimal Exit Trajectories with Optimum Correctional Manoeuvres
Directory of Open Access Journals (Sweden)
T. N. Srivastava
1980-10-01
Full Text Available Minimal exit trajectories with optimum correctional manoeuvres for a rocket between two coplanar, non-coaxial elliptic orbits in an inverse-square gravitational field have been investigated. The case of trajectories with no correctional manoeuvres has been analysed. Finally, minimal exit trajectories through specified orbital terminals are discussed, and the problem of ref. (2) is derived as a particular case.
The Cover Time of Deterministic Random Walks for General Transition Probabilities
Shiraga, Takeharu
2016-01-01
The deterministic random walk is a deterministic process analogous to a random walk. While there are some results on the cover time of the rotor-router model, which is a deterministic random walk corresponding to a simple random walk, nothing is known about the cover time of deterministic random walks emulating general transition probabilities. This paper is concerned with the SRT-router model with multiple tokens, which is a deterministic process coping with general transition probabilities ...
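The rotor-router model mentioned above is easy to sketch: each node cycles deterministically through its neighbor list instead of sampling a neighbor at random. A toy simulation of the single-token cover time on a 6-cycle (the SRT-router model with general transition probabilities studied in the paper is more involved):

```python
def rotor_router_cover_time(adj, start=0):
    """Cover time of a single-token rotor-router walk: each node cycles
    deterministically through its neighbor list instead of sampling.
    Assumption: all rotors start at index 0 of the neighbor list."""
    rotor = {v: 0 for v in adj}
    visited = {start}
    v, steps = start, 0
    while len(visited) < len(adj):
        nxt = adj[v][rotor[v]]                   # rotor's current target
        rotor[v] = (rotor[v] + 1) % len(adj[v])  # advance the rotor
        v = nxt
        visited.add(v)
        steps += 1
    return steps

# 6-cycle: neighbors of node i are (i-1) mod 6 and (i+1) mod 6
n = 6
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(rotor_router_cover_time(adj))  # → 5
```

On the cycle the walk simply sweeps around, visiting all nodes in n-1 steps; on general graphs the rotor state makes the trajectory less obvious, which is what cover-time analyses quantify.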
Directory of Open Access Journals (Sweden)
Wouter P. Kluijfhout
2015-01-01
Conclusion: 18F-Fluorocholine PET–CT is a promising new imaging modality for localizing parathyroid adenomas and enabling minimally invasive parathyroidectomy when conventional imaging fails to do so. Clinicians should consider its use as a second-line modality for optimal patient care.
Minimal surfaces in Riemannian manifolds
International Nuclear Information System (INIS)
A multiple solution to the Plateau problem in a Riemannian manifold is established. In S^n, the existence of two solutions to this problem is obtained. The Morse-Tompkins-Shiffman theorem is extended to the case when the ambient space admits no minimal sphere. (author). 20 refs
Deterministic approach for a C level microzonation of Bucharest
Moldoveanu, C. L.; Cioflan, C.; Radulian, M.; Apostol, B.; Panza, G. F.
2003-04-01
Bucharest experienced heavy destruction and a large number of victims during the extreme 1940 (Mw=7.7) and 1977 (Mw=7.4) Vrancea intermediate-depth earthquakes (located in the 60-200 km depth interval, at the sharp bend of the Southeast Carpathians). The statistics indicate a recurrence interval of 25 years for Mw>6.0 and 50 years for Mw>7.0 in the region. The assessment and mitigation of earthquake risk are therefore particularly important in the Romanian capital, since more than 2 million inhabitants, major investments and vital life systems are concentrated there. Seismic ground motion evaluation implies the analysis of a large database of records provided by dense arrays of seismographs, which allows generally valid ground parameters to be defined for use in seismic microzonation estimations. The strong motion recordings in the Bucharest area are rather few and available only since 1977. Therefore the ability to compute realistic synthetic seismograms represents, in our opinion, the key element for performing "present day" microzonation studies. Using a complex hybrid waveform modeling method that accounts for the source, the wave propagation path and the local site geology, we investigate which features of the seismic process are predictable in the case of dominant Vrancea shocks, and how they reproduce the strong local effects observed in Bucharest during the 1977 event. Our deterministic results show that the synthetic local hazard distribution supplies a realistic estimation of the seismic input. This method allows the different specific information collected until now to be merged into reasonably well constrained scenarios for a realistic level C microzonation of the Bucharest area, to be used to mitigate the effects of future strong events originating in the Vrancea region.
Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes
DEFF Research Database (Denmark)
Starke, Jens; Reichert, Christian; Eiswirth, Markus;
2007-01-01
Three levels of modeling, microscopic, mesoscopic and macroscopic, are discussed for the CO oxidation on low-index platinum single crystal surfaces. The introduced models on the microscopic and mesoscopic level are stochastic while the model on the macroscopic level is deterministic. It can be derived rigorously for low-pressure conditions from the microscopic model, which is characterized as a moderately interacting many-particle system, in the limit as the particle number tends to infinity. Also the mesoscopic model is given by a many-particle system. However, the particles move on a lattice, such that in contrast to the microscopic model the spatial resolution is reduced. The derivation of deterministic limit equations is in correspondence with the successful description of experiments under low-pressure conditions by deterministic reaction-diffusion equations while for intermediate pressures phenomena...
Minimally Invasive Lumbar Discectomy
Full Text Available ... minimally invasive approach in terms of, you know, effectiveness of treating lumbar herniations? Well, the minimally ... think it's important to stress here that the effectiveness of this procedure is about the same as ...
A model of deterministic detector with dynamical decoherence
Lee, Jae Weon; Dmitri V. Averin; Benenti, Giuliano; Shepelyansky, Dima L.
2005-01-01
We discuss a deterministic model of detector coupled to a two-level system (a qubit). The detector is a quasi-classical object whose dynamics is described by the kicked rotator Hamiltonian. We show that in the regime of quantum chaos the detector acts as a chaotic bath and induces decoherence of the qubit. We discuss the dephasing and relaxation rates and demonstrate that several features of Ohmic baths can be reproduced by our fully deterministic model. Moreover, we show that, for strong eno...
Deterministic and efficient quantum cryptography based on Bell's theorem
International Nuclear Information System (INIS)
We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows a higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under the current technology
Deterministic Quantum Key Distribution Using Gaussian-Modulated Squeezed States
He, G; Zhu, J; He, Guangqiang; Zeng, Guihua; Zhu, Jun
2006-01-01
A continuous-variable ping-pong scheme, which is utilized to deterministically generate a private key, is proposed. The proposed scheme is implemented physically using Gaussian-modulated squeezed states. The deterministic way, i.e., no basis reconciliation between the two parties, leads to twice the efficiency of standard quantum key distribution schemes. In particular, the separate control mode is not needed in the proposed scheme, so it is simpler and more practical than previous ping-pong schemes. The attacker may be detected easily through the fidelity of the transmitted signal, and may not succeed with the beam-splitter attack strategy.
Odén, Hanna
2010-01-01
The aim of this thesis is to give suggestions on what measures to take to improve the VOC emission situation in refineries in Tianjin, China, through existing technologies in refineries in Sweden. This has been done by identifying the main places of leakage in oil refineries in Sweden, identifying what VOC compounds are emitted from the plants and the amounts emitted, mapping out different measures taken by oil refineries in Sweden to minimize VOC emissions, evaluating the different measures ...
Frimpong, G. K.; Kottoh, I. D.; Ofosu, D. O.; Larbi, D.
2015-05-01
The effect of ionizing radiation on the microbiological quality of minimally processed carrot and lettuce was studied. The aim was to investigate the effect of irradiation as a sanitizing agent on the bacteriological quality of some raw-eaten salad vegetables obtained from retailers in Accra, Ghana. Minimally processed carrot and lettuce were analysed for total viable count, total coliform count and pathogenic organisms. The samples collected were treated and analysed over a 15-day period. The total viable count for carrot ranged from 1.49 to 14.01 log10 cfu/10 g while that of lettuce was 0.70 to 8.57 log10 cfu/10 g. It was also observed that the total coliform count was 1.46-7.53 log10 cfu/10 g for carrot and 0.14-7.35 log10 cfu/10 g for lettuce. The predominant pathogenic organisms identified were Bacillus cereus, Cronobacter sakazakii, Staphylococcus aureus, and Klebsiella spp. It was concluded that 2 kGy was the most effective medium-dose treatment for minimally processed carrot and lettuce.
International Nuclear Information System (INIS)
The effect of ionizing radiation on the microbiological quality of minimally processed carrot and lettuce was studied. The aim was to investigate the effect of irradiation as a sanitizing agent on the bacteriological quality of some raw-eaten salad vegetables obtained from retailers in Accra, Ghana. Minimally processed carrot and lettuce were analysed for total viable count, total coliform count and pathogenic organisms. The samples collected were treated and analysed over a 15-day period. The total viable count for carrot ranged from 1.49 to 14.01 log10 cfu/10 g while that of lettuce was 0.70 to 8.57 log10 cfu/10 g. It was also observed that the total coliform count was 1.46–7.53 log10 cfu/10 g for carrot and 0.14–7.35 log10 cfu/10 g for lettuce. The predominant pathogenic organisms identified were Bacillus cereus, Cronobacter sakazakii, Staphylococcus aureus, and Klebsiella spp. It was concluded that 2 kGy was the most effective medium-dose treatment for minimally processed carrot and lettuce. - Highlights: • The microbial load on the cut vegetables was beyond the acceptable level for consumption. • The microbial contamination of carrot was found to be higher than that of lettuce. • 2 kGy was most appropriate for treating cut vegetables for microbial safety
Cohen, Michael F.; Buehler, Chris; Gortler, Steven; Mcmillan, Leonard
2002-01-01
Determining shape from stereo has often been posed as a global minimization problem. Once formulated, the minimization problems are then solved with a variety of algorithmic approaches. These approaches include techniques such as dynamic programming, min-cut and alpha-expansion. In this paper we show how an algorithmic technique that constructs a discrete spatial minimal-cost surface can be brought to bear on stereo global minimization problems. This problem can then be reduced to a single min...
Avicenna's Deterministic Theory of Action and its Implication for a Theory of Justice
Directory of Open Access Journals (Sweden)
LaHood, Gabriel
2003-01-01
Full Text Available In this essay two issues are critically addressed: the foundation of Avicenna's ethical determinism and its implication for a theory of justice. The first issue concerns the analysis of Avicenna's deterministic theory, notwithstanding his rare but ambiguous appeals to free will through terms such as "will", "voluntary" and "choice". In such a theory, where everything is governed by the laws of pre-established harmony, the ethical evil done by man is viewed in the same way physical evil is: as contingent, minimal, determined by God, and having its proper function within world order and harmony. As to Avicenna's justification of punishment, one must recognize that Avicenna did not address the issue in its socio-juridical context. Rather, he addressed it from a religious point of view, but the implication for a theory of social justice seems obvious: because of universal determinism, including man's actions, all threats and promises (as well as punishment by human civil courts) have a deterrent function. Objections are raised against this deterministic philosophy to show that it is founded on a misleading argument of order and harmony. Further objections are raised to show that Avicenna's conception of justice, based on determinism, is inhumane and unsatisfactory.
Delimata, Paweł
2010-01-01
We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. Rules of the first kind, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). Contrary to this, any rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used to improve the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.
Minimal distances between SCFTs
Buican, Matthew
2014-01-01
We study lower bounds on the minimal distance in theory space between four-dimensional superconformal field theories (SCFTs) connected via broad classes of renormalization group (RG) flows preserving various amounts of supersymmetry (SUSY). For N=1 RG flows, the ultraviolet (UV) and infrared (IR) endpoints of the flow can be parametrically close. On the other hand, for RG flows emanating from a maximally supersymmetric SCFT, the distance to the IR theory cannot be arbitrarily small regardless of the amount of (non-trivial) SUSY preserved along the flow. The case of RG flows from N=2 UV SCFTs is more subtle. We argue that for RG flows preserving the full N=2 SUSY, there are various obstructions to finding examples with parametrically close UV and IR endpoints. Under reasonable assumptions, these obstructions include: unitarity, known bounds on the c central charge derived from associativity of the operator product expansion, and the central charge bounds of Hofman and Maldacena. On the other hand, for RG flows that break N=2→N=1, it is possible to find IR fixed points that are parametrically close to the UV ones. In this case, we argue that if the UV SCFT possesses a single stress tensor, then such RG flows excite of order all the degrees of freedom of the UV theory. Furthermore, if the UV theory has some flavor symmetry, we argue that the UV central charges should not be too large relative to certain parameters in the theory.
Minimal distances between SCFTs
Energy Technology Data Exchange (ETDEWEB)
Buican, Matthew [Department of Physics and Astronomy, Rutgers University,Piscataway, NJ 08854 (United States)
2014-01-28
We study lower bounds on the minimal distance in theory space between four-dimensional superconformal field theories (SCFTs) connected via broad classes of renormalization group (RG) flows preserving various amounts of supersymmetry (SUSY). For N=1 RG flows, the ultraviolet (UV) and infrared (IR) endpoints of the flow can be parametrically close. On the other hand, for RG flows emanating from a maximally supersymmetric SCFT, the distance to the IR theory cannot be arbitrarily small regardless of the amount of (non-trivial) SUSY preserved along the flow. The case of RG flows from N=2 UV SCFTs is more subtle. We argue that for RG flows preserving the full N=2 SUSY, there are various obstructions to finding examples with parametrically close UV and IR endpoints. Under reasonable assumptions, these obstructions include: unitarity, known bounds on the c central charge derived from associativity of the operator product expansion, and the central charge bounds of Hofman and Maldacena. On the other hand, for RG flows that break N=2→N=1, it is possible to find IR fixed points that are parametrically close to the UV ones. In this case, we argue that if the UV SCFT possesses a single stress tensor, then such RG flows excite of order all the degrees of freedom of the UV theory. Furthermore, if the UV theory has some flavor symmetry, we argue that the UV central charges should not be too large relative to certain parameters in the theory.
Deterministic mathematical morphology for CAD/CAM
Sarabia Pérez, Rubén; Jimeno Morenilla, Antonio; Molina Carmona, Rafael
2014-01-01
Purpose – The purpose of this paper is to present a new geometric model based on the mathematical morphology paradigm, specialized to provide determinism to the classic morphological operations. The determinism is needed to model dynamic processes that require an order of application, as is the case for designing and manufacturing objects in CAD/CAM environments. Design/methodology/approach – The basic trajectory-based operation is the basis of the proposed morphological specialization. This ...
An Efficient and Flexible Deterministic Framework for Multithreaded Programs
Institute of Scientific and Technical Information of China (English)
卢凯; 周旭; 王小平; 陈沉
2015-01-01
Determinism is very useful to multithreaded programs in debugging, testing, etc. Many deterministic approaches have been proposed, such as deterministic multithreading (DMT) and deterministic replay. However, these systems are either inefficient or target a single purpose, which is not flexible. In this paper, we propose an efficient and flexible deterministic framework for multithreaded programs. Our framework implements determinism in two steps: relaxed determinism and strong determinism. Relaxed determinism solves data races efficiently by using a proper weak memory consistency model. After that, we implement strong determinism by solving lock contentions deterministically. Since we can apply different approaches to these two steps independently, our framework provides a spectrum of deterministic choices, including a nondeterministic system (fast), a weakly deterministic system (fast and conditionally deterministic), a DMT system, and a deterministic replay system. Our evaluation shows that the DMT configuration of this framework can even outperform a state-of-the-art DMT system.
Deterministic Method for Obtaining Nominal and Uncertainty Models of CD Drives
DEFF Research Database (Denmark)
Vidal, Enrique Sanchez; Stoustrup, Jakob; Andersen, Palle; Pedersen, Tom Søndergaard; Mikkelsen, H.F.
In this paper a deterministic method for obtaining the nominal and uncertainty models of the focus loop in a CD player is presented, based on parameter identification and measurements in the focus loops of 12 actual CD drives that differ by having worst-case behaviors with respect to various properties. The method provides a systematic way to derive a nominal average model as well as a structured multiplicative input uncertainty model, and it is demonstrated how to apply mu-theory to design a controller, based on the obtained models, that meets certain robust performance criteria.
Yang, Bo; Monterola, Christopher
2015-01-01
We show that all existing deterministic microscopic traffic models with identical drivers (including both two-phase and three-phase models) can be understood as special cases of a master model by expansion around well-defined ground states. This allows two traffic models to be compared in a well-defined way. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model (IDM) is...
Genetic algorithm-based wide-band deterministic maximum likelihood direction finding algorithm
Institute of Scientific and Technical Information of China (English)
[No author listed]
2005-01-01
Wide-band direction finding is one of the hot and difficult tasks in array signal processing. This paper generalizes the narrow-band deterministic maximum likelihood direction finding algorithm to the wide-band case, thereby constructing an objective function, and then utilizes a genetic algorithm for nonlinear global optimization. The direction of arrival is estimated without preprocessing of the array data, so the algorithm eliminates the effect of pre-estimates on the final estimation. The algorithm is applied to a uniform linear array, and extensive simulation results prove its efficacy. In the simulations, we also obtain the relation between the estimation error and the parameters of the genetic algorithm.
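The genetic-algorithm optimization loop described above can be illustrated schematically. The sketch below minimizes a toy multimodal function standing in for the wide-band DML objective (which requires array data to evaluate); the operators and parameter values are illustrative assumptions, not the paper's settings:

```python
import math
import random

def genetic_minimize(f, bounds, pop=30, gens=60, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, elitist survival. All parameters are
    illustrative assumptions."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        def pick():  # binary tournament selection
            a, b = rng.sample(xs, 2)
            return a if f(a) < f(b) else b
        children = []
        for _ in range(pop):
            p, q = pick(), pick()
            c = p + rng.random() * (q - p)        # blend crossover
            c += rng.gauss(0, 0.1 * (hi - lo))    # Gaussian mutation
            children.append(min(max(c, lo), hi))  # clip to bounds
        xs = sorted(xs + children, key=f)[:pop]   # keep the best half
    return xs[0]

# Toy multimodal stand-in for the wide-band DML objective function
def objective(x):
    return (x - 0.5) ** 2 + 0.1 * math.sin(20 * x)

best = genetic_minimize(objective, (-2.0, 2.0))
```

In the paper the decision variables would be the candidate directions of arrival and the fitness the deterministic maximum likelihood criterion evaluated on the array covariance.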
Szymanowski, Mariusz; Kryza, Maciej
2015-11-01
Our study examines the role of auxiliary variables in the spatial modelling and mapping of climatological elements, using air temperature in Poland as an example. Multivariable algorithms are the most frequently applied for the spatialization of air temperature, and many studies have shown their results to be better than those obtained by various one-dimensional techniques. In most previous studies, two main assumptions underlay multidimensional spatial interpolation of air temperature: first, that all variables significantly correlated with air temperature should be incorporated into the model; and second, that the more spatial variation of air temperature is deterministically explained, the better the quality of the spatial interpolation. The main goal of the paper was to examine both of these assumptions. The analysis was performed using data from 250 meteorological stations and 69 air temperature cases aggregated at different levels, from daily means to the 10-year annual mean; two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. A further purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their extensions to regression-kriging form, MLRK and GWRK respectively, were examined. Stepwise regression was used to select variables for the individual models, and cross-validation was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered. Usually, including more than two or three of the most significantly
Regularity of Minimal Surfaces
Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht
2010-01-01
"Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t
Brazier John E; Walters Stephen J
2003-01-01
Abstract Background The SF-6D is a new single summary preference-based measure of health derived from the SF-36. Empirical work is required to determine the smallest change in SF-6D scores that can be regarded as important and meaningful for health professionals, patients and other stakeholders. Objectives To use anchor-based methods to determine the minimally important difference (MID) for the SF-6D for various datasets. Methods All responders to the original SF-36 questionnaire can ...
Deterministic teleportation using single-photon entanglement as a resource
DEFF Research Database (Denmark)
Björk, Gunnar; Laghaout, Amine; Andersen, Ulrik L.
2012-01-01
We outline a proof that teleportation with a single particle is, in principle, just as reliable as with two particles. We thereby hope to dispel the skepticism surrounding single-photon entanglement as a valid resource in quantum information. A deterministic Bell-state analyzer is proposed which...
Scheme for deterministic Bell-state-measurement-free quantum teleportation
Yang, Ming; Cao, Zhuo-Liang
2004-01-01
A deterministic teleportation scheme for unknown atomic states is proposed in cavity QED. The Bell state measurement is not needed in the teleportation process, and the success probability can reach 1.0. In addition, the current scheme is insensitive to the cavity decay and thermal field.
Comparison of deterministic and Monte Carlo methods in shielding design
International Nuclear Information System (INIS)
In shielding calculations, deterministic methods have some advantages and some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions, and can therefore lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing comparison between the capabilities of both Monte Carlo and deterministic methods in day-by-day shielding calculations, using sensitivity analysis of significant parameters such as energy and geometrical conditions. (authors)
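The point-kernel formula at the heart of such deterministic codes is flux = B · S · e^(−μx) / (4πr²). The sketch below uses a constant buildup factor and invented numbers purely for illustration; real codes such as MicroShield use energy-dependent tabulated buildup fits, which is exactly the extrapolation the abstract flags as a source of error:

```python
import math

def point_source_flux(S, mu, x, r, buildup=1.0):
    """Flux from an isotropic point source of strength S (particles/s)
    behind a slab of thickness x (cm) with attenuation coefficient mu
    (1/cm), at distance r (cm): phi = B * S * exp(-mu*x) / (4*pi*r^2).
    A constant buildup factor B is a deliberate simplification."""
    return buildup * S * math.exp(-mu * x) / (4 * math.pi * r ** 2)

# Illustrative numbers (assumed, not from the paper): 1e6 photons/s,
# mu = 0.06 per cm, 5 cm slab, detector 100 cm from the source
narrow_beam = point_source_flux(1e6, 0.06, 5.0, 100.0)
with_buildup = point_source_flux(1e6, 0.06, 5.0, 100.0, buildup=1.8)
print(round(with_buildup / narrow_beam, 6))  # buildup scales flux linearly: 1.8
```

Because the buildup factor multiplies the whole result, any error in its low-energy extrapolation propagates directly into the dose estimate, which motivates the Monte Carlo cross-check.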
A Deterministic Annealing Approach to Clustering AIRS Data
Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander
2012-01-01
We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method called the deterministic annealing technique.
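Deterministic annealing replaces hard cluster assignments with Gibbs probabilities at a temperature that is gradually lowered, so the clustering is independent of initialization order. A minimal 1-D sketch (illustrative parameters and toy data, not the AIRS processing pipeline):

```python
import math
import random

def deterministic_annealing(points, k=2, t0=5.0, cooling=0.8, steps=40):
    """1-D sketch of deterministic annealing clustering: soft
    assignments p(j|x) ~ exp(-(x - c_j)^2 / T), probability-weighted
    centroid updates, geometric cooling of T."""
    rng = random.Random(0)
    # small offsets break the symmetry so the centers can split
    centers = [rng.choice(points) + 1e-3 * j for j in range(k)]
    T = t0
    for _ in range(steps):
        # E-step: Gibbs assignment probabilities at temperature T
        probs = []
        for x in points:
            w = [math.exp(-((x - c) ** 2) / T) for c in centers]
            z = sum(w)
            probs.append([wi / z for wi in w])
        # M-step: probability-weighted centroid update
        for j in range(k):
            wsum = sum(p[j] for p in probs)
            centers[j] = sum(p[j] * x for p, x in zip(probs, points)) / wsum
        T *= cooling  # anneal toward hard assignments
    return sorted(centers)

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
centers = deterministic_annealing(data)  # converges near [1.0, 5.0]
```

At high temperature all centers coincide at the global mean; as T drops below critical values the centers split, recovering cluster structure that a single mean and standard deviation would hide.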
Using a satisfiability solver to identify deterministic finite state automata
Heule, M.J.H.; Verwer, S.
2009-01-01
We present an exact algorithm for the identification of deterministic finite automata (DFA) which is based on satisfiability (SAT) solvers. Despite the size of the low-level SAT representation, our approach seems to be competitive with alternative techniques. Our contributions are threefold: First, we p
Line and lattice networks under deterministic interference models
Goseling, Jasper; Gastpar, Michael; Weber, Jos H.
2011-01-01
Capacity bounds are compared for four different deterministic models of wireless networks, representing four different ways of handling broadcast and superposition in the physical layer. In particular, the transport capacity under a multiple unicast traffic pattern is studied for a 1-D network of re
Deterministic event-based simulation of quantum phenomena
De Raedt, K; De Raedt, H; Michielsen, K
2005-01-01
We propose and analyse simple deterministic algorithms that can be used to construct machines that have primitive learning capabilities. We demonstrate that locally connected networks of these machines can be used to perform blind classification on an event-by-event basis, without storing the inform
Simulation of quantum computation : A deterministic event-based approach
Michielsen, K; De Raedt, K; De Raedt, H
2005-01-01
We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and
Deterministic and Stochastic Study of Wind Farm Harmonic Currents
DEFF Research Database (Denmark)
Sainz, Luis; Mesas, Juan Jose; Teodorescu, Remus; Rodriguez, Pedro
2010-01-01
Wind farm harmonic emissions are a well-known power quality problem, but little data based on actual wind farm measurements are available in literature. In this paper, harmonic emissions of an 18 MW wind farm are investigated using extensive measurements, and the deterministic and stochastic char...
Applicability of deterministic propagation models for mobile operators
Mantel, O.C.; Oostveen, J.C.; Popova, M.P.
2007-01-01
Deterministic propagation models based on ray tracing or ray launching are widely studied in the scientific literature, because of their high accuracy. Also many commercial propagation modelling tools include ray-based models. In spite of this, they are hardly used in commercial operations by cellul
Deterministic Versus Stochastic Interpretation of Continuously Monitored Sewer Systems
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Niels Jacob
1994-01-01
An analysis has been made of the uncertainty of input parameters to deterministic models for sewer systems. The analysis reveals a very significant uncertainty, which can be decreased, but not eliminated and has to be considered for engineering application. Stochastic models have a potential for ...
Deterministic and Monte Carlo transport models with thermal-hydraulic feedback
Energy Technology Data Exchange (ETDEWEB)
Seubert, A.; Langenbuch, S.; Velkov, K.; Zwermann, W. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Garching (Germany)
2008-07-01
This paper gives an overview of recent developments concerning deterministic transport and Monte Carlo methods with thermal-hydraulic feedback. The time-dependent 3D discrete ordinates transport code TORT-TD allows pin-by-pin analyses of transients using few energy groups and anisotropic scattering by solving the time-dependent transport equation using the unconditionally stable implicit method. To account for thermal-hydraulic feedback, TORT-TD has been coupled with the system code ATHLET. Applications to, e.g., a control rod ejection in a 2 x 2 PWR fuel assembly arrangement demonstrate the applicability of the coupled code TORT-TD/ATHLET for test cases. For Monte Carlo steady-state calculations with nuclear point data and thermal-hydraulic feedback, MCNP has been prepared to incorporate thermal-hydraulic parameters. The uncontrolled steady state of the 2 x 2 PWR fuel assembly arrangement, for which the thermal-hydraulic parameter distribution has been obtained from a preceding coupled TORT-TD/ATHLET analysis, has been chosen as a test case. The result demonstrates the applicability of MCNP to problems with spatial distributions of thermal-fluid-dynamic parameters. The comparison with MCNP results confirms that the accuracy of deterministic transport calculations with pin-wise homogenised few-group cross sections is comparable to Monte Carlo simulations. The presented cases are considered as a pre-stage of performing calculations of larger configurations, like a quarter core, which is in preparation. (orig.)
Deterministic and Monte Carlo transport models with thermal-hydraulic feedback
International Nuclear Information System (INIS)
This paper gives an overview of recent developments concerning deterministic transport and Monte Carlo methods with thermal-hydraulic feedback. The time-dependent 3D discrete ordinates transport code TORT-TD allows pin-by-pin analyses of transients using few energy groups and anisotropic scattering by solving the time-dependent transport equation with the unconditionally stable implicit method. To account for thermal-hydraulic feedback, TORT-TD has been coupled with the system code ATHLET. Applications to, e.g., a control rod ejection in a 2 x 2 PWR fuel assembly arrangement demonstrate the applicability of the coupled code TORT-TD/ATHLET for test cases. For Monte Carlo steady-state calculations with nuclear point data and thermal-hydraulic feedback, MCNP has been prepared to incorporate thermal-hydraulic parameters. The uncontrolled steady state of the 2 x 2 PWR fuel assembly arrangement, for which the thermal-hydraulic parameter distribution has been obtained from a preceding coupled TORT-TD/ATHLET analysis, was chosen as the test case. The result demonstrates the applicability of MCNP to problems with spatial distributions of thermal-fluid-dynamic parameters. The comparison with MCNP results confirms that the accuracy of deterministic transport calculations with pin-wise homogenised few-group cross sections is comparable to that of Monte Carlo simulations. The presented cases are considered a pre-stage to calculations of larger configurations, such as a quarter core, which are in preparation. (orig.)
The integrated model for solving the single-period deterministic inventory routing problem
Rahim, Mohd Kamarul Irwan Abdul; Abidin, Rahimi; Iteng, Rosman; Lamsali, Hendrik
2016-08-01
This paper discusses the problem of efficiently managing inventory and routing in a two-level supply chain system. Vendor Managed Inventory (VMI) is a policy that integrates decisions between a supplier and its customers. We assume that the demand at each customer is stationary and that the warehouse implements VMI. The objective of this paper is to minimize the inventory and transportation costs of the customers in a two-level supply chain. The problem is to determine the delivery quantities, delivery times and routes to the customers in the single-period deterministic inventory routing problem (SP-DIRP) system. As a result, a linear mixed-integer program is developed for solving the SP-DIRP problem.
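For a tiny instance, the routing part of the SP-DIRP can be brute-forced, which makes the decision variables concrete. This is a toy stand-in for the paper's mixed-integer program, with hypothetical coordinates; a real solver would be needed for any practical instance size.

```python
import itertools
import math

def best_route(depot, customers):
    """Brute-force the cheapest delivery route (depot -> all customers ->
    depot) for a tiny single-period instance. Toy illustration only:
    enumeration is factorial in the number of customers."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best_cost, best_order = float("inf"), None
    for order in itertools.permutations(range(len(customers))):
        stops = [depot] + [customers[i] for i in order] + [depot]
        cost = sum(dist(a, b) for a, b in zip(stops, stops[1:]))
        if cost < best_cost:
            best_cost, best_order = cost, order
    return best_order, best_cost

# Hypothetical 3-customer instance on a 4x3 rectangle.
order, cost = best_route((0, 0), [(0, 3), (4, 3), (4, 0)])
```

In the full SP-DIRP, delivery quantities and times would be additional decision variables coupled to the routing through inventory constraints; here they are fixed for clarity.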
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
Advances in high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demand automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is oriented towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
Hino, Mayo; Yamaguchi, Ken; Abiko, Kaoru; Yoshioka, Yumiko; Hamanishi, Junzo; Kondoh, Eiji; Koshiyama, Masafumi; Baba, Tsukasa; Matsumura, Noriomi; Minamiguchi, Sachiko; Kido, Aki; Konishi, Ikuo
2016-01-01
Our group previously documented the first, very rare case of primary gastric-type mucinous adenocarcinoma of the uterine corpus. Although this type of endometrial cancer appears to be similar to the gastric-type adenocarcinoma of the uterine cervix, its main symptoms, appearance on magnetic resonance imaging (MRI) and prognosis have not been fully elucidated due to its rarity. We herein describe an additional case of gastric-type mucinous adenocarcinoma of the endometrium and review the relev...
Minimal Distances Between SCFTs
Buican, Matthew
2013-01-01
We study lower bounds on the minimal distance in theory space between four-dimensional superconformal field theories (SCFTs) connected via broad classes of renormalization group (RG) flows preserving various amounts of supersymmetry (SUSY). For N=1 RG flows, the ultraviolet (UV) and infrared (IR) endpoints of the flow can be parametrically close. On the other hand, for RG flows emanating from a maximally supersymmetric SCFT, the distance to the IR theory cannot be arbitrarily small regardless of the amount of (non-trivial) SUSY preserved along the flow. The case of RG flows from N=2 UV SCFTs is more subtle. We argue that for RG flows preserving the full N=2 SUSY, there are various obstructions to finding examples with parametrically close UV and IR endpoints. Under reasonable assumptions, these obstructions include: unitarity, known bounds on the c central charge derived from associativity of the operator product expansion, and the central charge bounds of Hofman and Maldacena. On the other hand, for RG flows...
Esophagectomy - minimally invasive
... Robotic esophagectomy; Removal of the esophagus - minimally invasive; Achalasia - esophagectomy; Barrett esophagus - esophagectomy; Esophageal cancer - esophagectomy - laparoscopic; Cancer of the esophagus -esophagectomy - ...
Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes
International Nuclear Information System (INIS)
The Canadian SCWR has the potential to achieve the goals that generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06, together with different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file, have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option for this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give results most consistent with those of SERPENT. (authors)
Energy Technology Data Exchange (ETDEWEB)
Elkhoraibi, T., E-mail: telkhora@bechtel.com; Hashemi, A.; Ostadan, F.
2014-04-01
Soil-structure interaction (SSI) is a major step in the seismic design of massive and stiff structures typical of nuclear facilities and civil infrastructures such as tunnels, underground stations, dams and lock head structures. Currently, most SSI analyses are performed deterministically, incorporating a limited range of variation in soil and structural properties and without consideration of ground motion incoherency effects. This often leads to overestimation of the seismic response, particularly the In-Structure Response Spectra (ISRS), with significant impositions of design and equipment qualification costs, especially in the case of high-frequency-sensitive equipment at stiff soil or rock sites. The reluctance to adopt a more comprehensive probabilistic approach is mainly due to the fact that the computational cost of performing probabilistic SSI analysis, even without incoherency function considerations, has been prohibitive. As such, bounding deterministic approaches have been preferred by the industry and accepted by the regulatory agencies. However, given the recently available and growing computing capabilities, the need for a probabilistic approach to SSI analysis is becoming clear with the advances in performance-based engineering and the use of fragility analysis in decision making, whether by owners or regulatory agencies. This paper demonstrates the use of both probabilistic and deterministic SSI analysis techniques to identify important engineering demand parameters in the structure. A typical nuclear industry structure is used as an example for this study. The system is analyzed for two different site conditions: rock and deep soil. Both deterministic and probabilistic SSI analysis approaches are performed, using the program SASSI, with and without ground motion incoherency considerations. In both approaches, the analysis begins at the hard rock level using the low-frequency and high-frequency hard rock
International Nuclear Information System (INIS)
Soil-structure interaction (SSI) is a major step in the seismic design of massive and stiff structures typical of nuclear facilities and civil infrastructures such as tunnels, underground stations, dams and lock head structures. Currently, most SSI analyses are performed deterministically, incorporating a limited range of variation in soil and structural properties and without consideration of ground motion incoherency effects. This often leads to overestimation of the seismic response, particularly the In-Structure Response Spectra (ISRS), with significant impositions of design and equipment qualification costs, especially in the case of high-frequency-sensitive equipment at stiff soil or rock sites. The reluctance to adopt a more comprehensive probabilistic approach is mainly due to the fact that the computational cost of performing probabilistic SSI analysis, even without incoherency function considerations, has been prohibitive. As such, bounding deterministic approaches have been preferred by the industry and accepted by the regulatory agencies. However, given the recently available and growing computing capabilities, the need for a probabilistic approach to SSI analysis is becoming clear with the advances in performance-based engineering and the use of fragility analysis in decision making, whether by owners or regulatory agencies. This paper demonstrates the use of both probabilistic and deterministic SSI analysis techniques to identify important engineering demand parameters in the structure. A typical nuclear industry structure is used as an example for this study. The system is analyzed for two different site conditions: rock and deep soil. Both deterministic and probabilistic SSI analysis approaches are performed, using the program SASSI, with and without ground motion incoherency considerations. In both approaches, the analysis begins at the hard rock level using the low-frequency and high-frequency hard rock
DEFF Research Database (Denmark)
2010-01-01
Disclosed herein are techniques, systems, and methods relating to minimizing mutual coupling between a first antenna and a second antenna.
Salsamendi, Jason; Pereira, Keith; Kang, Kyungmin; Fan, Ji
2015-09-01
Nonalcoholic fatty liver disease (NAFLD) represents a spectrum of disorders from simple steatosis to inflammation leading to fibrosis, cirrhosis, and even hepatocellular carcinoma. With the progressive epidemics of obesity and diabetes, major risk factors in the development and pathogenesis of NAFLD, the prevalence of NAFLD and its associated complications including liver failure and hepatocellular carcinoma is expected to increase by 2030 with an enormous health and economic impact. We present a patient who developed Hepatocellular carcinoma (HCC) from nonalcoholic steatohepatitis (NASH) cirrhosis. Due to morbid obesity, she was not an optimal transplant candidate and was not initially listed. After attempts for lifestyle modifications failed to lead to weight reduction, a transarterial embolization of the left gastric artery was performed. This is the sixth such procedure in humans in literature. Subsequently she had a meaningful drop in BMI from 42 to 36 over the following 6 months ultimately leading to her being listed for transplant. During this time, the left hepatic HCC was treated with chemoembolization without evidence of recurrence. In this article, we wish to highlight the use of minimally invasive percutaneous endovascular therapies such as transarterial chemoembolization (TACE) in the comprehensive management of the NAFLD spectrum and percutaneous transarterial embolization of the left gastric artery (LGA), a novel method, for the management of obesity. PMID:26629307
MOx benchmark calculations by deterministic and Monte Carlo codes
International Nuclear Information System (INIS)
Highlights: ► MOx-based depletion calculation. ► Methodology to create continuous-energy pseudo cross sections for a lump of minor fission products. ► Mass inventory comparison between deterministic and Monte Carlo codes. ► Higher deviations were found for several isotopes. - Abstract: A depletion calculation benchmark devoted to MOx fuel is an ongoing objective of the OECD/NEA WPRS, following the study of depletion calculations concerning UOx fuels. The objective of the proposed benchmark is to compare existing depletion calculations obtained with various codes and data libraries applied to fuel and back-end cycle configurations. In the present work, the deterministic code NEWT/ORIGEN-S of the SCALE6 code package and the Monte Carlo based code MONTEBURNS2.0 were used to calculate the masses of inventory isotopes. The methodology for applying MONTEBURNS2.0 to this benchmark is also presented. The results from both codes were then compared.
Deterministic error correction for nonlocal spatial-polarization hyperentanglement
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-02-01
Hyperentanglement is an effective quantum source for quantum communication networks owing to its high capacity, its low loss rate, and its unusual ability to teleport a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others for long-distance quantum communication.
Deterministic chaos at the ocean surface: applications and interpretations
Directory of Open Access Journals (Sweden)
A. J. Palmer
1998-01-01
Full Text Available Ocean surface, grazing-angle radar backscatter data from two separate experiments, one of which provided coincident time series of measured surface winds, were found to exhibit signatures of deterministic chaos. Evidence is presented that the lowest dimensional underlying dynamical system responsible for the radar backscatter chaos is that which governs the surface wind turbulence. Block-averaging time was found to be an important parameter for determining the degree of determinism in the data as measured by the correlation dimension, and by the performance of an artificial neural network in retrieving wind and stress from the radar returns, and in radar detection of an ocean internal wave. The correlation dimensions are lowered and the performance of the deterministic retrieval and detection algorithms are improved by averaging out the higher dimensional surface wave variability in the radar returns.
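The correlation dimension used above as a determinism measure can be estimated from a scalar time series with the standard Grassberger-Procaccia procedure (delay embedding, correlation sum, log-log slope). This is a minimal sketch of that generic estimator, not the authors' radar processing chain:

```python
import numpy as np

def correlation_dimension(x, emb_dim=4, lag=1, n_radii=10):
    """Grassberger-Procaccia estimate of the correlation dimension
    of a scalar time series (illustrative sketch)."""
    # Delay embedding of the time series into emb_dim dimensions.
    n = len(x) - (emb_dim - 1) * lag
    emb = np.column_stack([x[i * lag:i * lag + n] for i in range(emb_dim)])
    # Pairwise distances between embedded points (upper triangle only).
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    dist = d[np.triu_indices(n, k=1)]
    # Radii spanning the small-distance scaling region.
    radii = np.logspace(np.log10(np.percentile(dist, 5)),
                        np.log10(np.percentile(dist, 50)), n_radii)
    # Correlation sum C(r); the log-log slope estimates the dimension.
    c = np.array([(dist < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope

# Sanity check: a noiseless sine traces a 1-D closed curve in the
# embedding space, so the estimate should be near 1.
t = np.linspace(0, 20 * np.pi, 800)
dim = correlation_dimension(np.sin(t))
```

Block-averaging the series before embedding, as in the paper, would simply be applied to `x` ahead of this estimator.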
On the secure obfuscation of deterministic finite automata.
Energy Technology Data Exchange (ETDEWEB)
Anderson, William Erik
2008-06-01
In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [5, 10, 12, 17] and revise them to the current obfuscation setting of Barak et al. [2]. Under this model, we introduce an efficient oracle that retains some 'small' secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong 'virtual black box' property originally proposed in [2] while incorporating the stronger condition of dependent auxiliary inputs in [15]. Additionally, we show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.
Deterministic generation of multiparticle entanglement by quantum Zeno dynamics
Barontini, Giovanni; Haas, Florian; Estève, Jérôme; Reichel, Jakob
2016-01-01
Multiparticle entangled quantum states, a key resource in quantum-enhanced metrology and computing, are usually generated by coherent operations exclusively. However, unusual forms of quantum dynamics can be obtained when environment coupling is used as part of the state generation. In this work, we used quantum Zeno dynamics (QZD), based on nondestructive measurement with an optical microcavity, to deterministically generate different multiparticle entangled states in an ensemble of 36 qubit atoms in less than 5 microseconds. We characterized the resulting states by performing quantum tomography, yielding a time-resolved account of the entanglement generation. In addition, we studied the dependence of quantum states on measurement strength and quantified the depth of entanglement. Our results show that QZD is a versatile tool for fast and deterministic entanglement generation in quantum engineering applications.
Deterministic combination of numerical and physical coastal wave models
DEFF Research Database (Denmark)
Zhang, H.W.; Schäffer, Hemming Andreas; Jakobsen, K.P.
2007-01-01
A deterministic combination of numerical and physical models for coastal waves is developed. In the combined model, a Boussinesq model MIKE 21 BW is applied for the numerical wave computations. A piston-type 2D or 3D wavemaker and the associated control system with active wave absorption provides the interface between the numerical and physical models. The link between numerical and physical models is given by an ad hoc unified wave generation theory which is devised in the study. This wave generation theory accounts for linear dispersion and shallow water non-linearity. Local wave phenomena (evanescent modes) near the wavemaker are taken into account. With this approach, the data transfer between the two models is on a deterministic level, with detailed wave information transmitted along the wavemaker.
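The linear-dispersion ingredient of such a wave generation theory reduces to solving the textbook dispersion relation omega^2 = g*k*tanh(k*h) for the wavenumber k. The unified generation theory itself is not reproduced here; this is only a Newton-iteration sketch of that standard sub-step:

```python
import math

def wavenumber(omega, h, g=9.81, tol=1e-12):
    """Solve the linear dispersion relation omega^2 = g*k*tanh(k*h)
    for the wavenumber k by Newton iteration."""
    k = omega * omega / g  # deep-water initial guess
    for _ in range(100):
        f = g * k * math.tanh(k * h) - omega * omega
        df = g * math.tanh(k * h) + g * k * h / math.cosh(k * h) ** 2
        k_new = k - f / df
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

# Example: a 6 s wave in 10 m water depth (intermediate depth).
omega = 2 * math.pi / 6.0
k = wavenumber(omega, 10.0)
wavelength = 2 * math.pi / k
```

The deep-water guess converges in a few iterations because g*k*tanh(k*h) is monotone in k.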
Deterministic remote two-qubit state preparation in dissipative environments
Li, Jin-Fang; Liu, Jin-Ming; Feng, Xun-Li; Oh, C. H.
2016-05-01
We propose a new scheme for efficient remote preparation of an arbitrary two-qubit state, introducing two auxiliary qubits and using two Einstein-Podolsky-Rosen (EPR) states as the quantum channel in a non-recursive way. At variance with all existing schemes, our scheme accomplishes deterministic remote state preparation (RSP) with only one sender and the simplest entangled resource (say, EPR pairs). We construct the corresponding quantum logic circuit using a unitary matrix decomposition procedure and analytically obtain the average fidelity of the deterministic RSP process for dissipative environments. Our studies show that, while the average fidelity gradually decreases to a stable value without any revival in the Markovian regime, it decreases to the same stable value with a dampened revival amplitude in the non-Markovian regime. We also find that the average fidelity's approximate maximal value can be preserved for a long time if the non-Markovian and the detuning conditions are satisfied simultaneously.
Deterministic error correction for nonlocal spatial-polarization hyperentanglement.
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-01-01
Hyperentanglement is an effective quantum source for quantum communication networks owing to its high capacity, its low loss rate, and its unusual ability to teleport a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others for long-distance quantum communication. PMID:26861681
The road to deterministic matrices with the restricted isometry property
Bandeira, Afonso S; Mixon, Dustin G; Wong, Percy
2012-01-01
The restricted isometry property (RIP) is a well-known matrix condition that provides state-of-the-art reconstruction guarantees for compressed sensing. While random matrices are known to satisfy this property with high probability, deterministic constructions have found less success. In this paper, we consider various techniques for demonstrating RIP deterministically, some popular and some novel, and we evaluate their performance. In evaluating some techniques, we apply random matrix theory and inadvertently find a simple alternative proof that certain random matrices are RIP. Later, we propose a particular class of matrices as candidates for being RIP, namely, equiangular tight frames (ETFs). Using the known correspondence between real ETFs and strongly regular graphs, we investigate certain combinatorial implications of a real ETF being RIP. Specifically, we give probabilistic intuition for a new bound on the clique number of Paley graphs of prime order, and we conjecture that the corresponding ETFs are R...
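The restricted isometry constant itself can be brute-forced for tiny matrices, which makes the definition concrete: delta_k is the smallest delta with (1-delta)||x||^2 <= ||Ax||^2 <= (1+delta)||x||^2 for all k-sparse x. This sketch is not a method from the paper; the support enumeration is exponential in general.

```python
import itertools
import numpy as np

def rip_constant(A, k):
    """Brute-force the restricted isometry constant delta_k of A by
    checking the extreme eigenvalues of every k-column Gram matrix
    (feasible only for tiny matrices)."""
    n = A.shape[1]
    delta = 0.0
    for support in itertools.combinations(range(n), k):
        sub = A[:, support]
        eig = np.linalg.eigvalsh(sub.T @ sub)
        delta = max(delta, abs(eig[0] - 1.0), abs(eig[-1] - 1.0))
    return delta

# Example: column-normalized Gaussian matrices satisfy RIP with high
# probability; the abstract's point is that deterministic constructions
# matching this are hard to find.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8)) / np.sqrt(20)
d2 = rip_constant(A, 2)
```

An orthonormal matrix gives delta_k = 0 exactly, which is a useful sanity check on the implementation.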
Numerical method for impulse control of Piecewise Deterministic Markov Processes
de Saporta, Benoîte
2010-01-01
This paper presents a numerical method to calculate the value function for a general discounted impulse control problem for piecewise deterministic Markov processes. Our approach is based on a quantization technique for the underlying Markov chain defined by the post jump location and inter-arrival time. Convergence results are obtained and more importantly we are able to give a convergence rate of the algorithm. The paper is illustrated by a numerical example.
Deterministic generation of remote entanglement with active quantum feedback
Martin, Leigh; Motzoi, Felix; Li, Hanhan; Sarovar, Mohan; Whaley, Birgitta
2015-01-01
We consider the task of deterministically entangling two remote qubits using joint measurement and feedback, but no directly entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average sense locally optimal (ASLO) feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols which can d...
Deterministic event-based simulation of quantum interference
De Raedt, K.; De Raedt, H.; Michielsen, K.
2004-01-01
We propose and analyse simple deterministic algorithms that can be used to construct machines that have primitive learning capabilities. We demonstrate that locally connected networks of these machines can be used to perform blind classification on an event-by-event basis, without storing the information of the individual events. We also demonstrate that properly designed networks of these machines exhibit behavior that is usually only attributed to quantum systems. We present networks that s...
Deterministic event-based simulation of quantum phenomena
De Raedt, K.; De Raedt, H.; Michielsen, K.
2005-01-01
We propose and analyse simple deterministic algorithms that can be used to construct machines that have primitive learning capabilities. We demonstrate that locally connected networks of these machines can be used to perform blind classification on an event-by-event basis, without storing the information of the individual events. We also demonstrate that properly designed networks of these machines exhibit behavior that is usually only attributed to quantum systems. We present networks that s...
Deterministic linear optics quantum computation utilizing linked photon circuits
Yoran, N; Yoran, Nadav; Reznik, Benni
2003-01-01
We suggest an efficient scheme for quantum computation with linear optical elements utilizing "linked" photon states. The linked states are designed according to the particular quantum circuit one wishes to process. Once a linked-state has been successfully prepared, the computation is pursued deterministically by a sequence of teleportation steps. The present scheme enables a significant reduction of the average number of elementary gates per logical gate to about 20-30 CZ_{9/16} gates.
Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V.; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan; Merkulov, Vladimir I.; Doktycz, Mitchel J.; Lowndes, Douglas H.; Simpson, Michael L.
2012-03-27
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.
Receding Horizon Temporal Logic Control for Finite Deterministic Systems
Ding, Xuchu; Lazar, Mircea; Belta, Calin
2012-01-01
This paper considers receding horizon control of finite deterministic systems, which must satisfy a high level, rich specification expressed as a linear temporal logic formula. Under the assumption that time-varying rewards are associated with states of the system and they can be observed in real-time, the control objective is to maximize the collected reward while satisfying the high level task specification. In order to properly react to the changing rewards, a controller synthesis framewor...
Deterministic Dynamic Programming in Discrete Time: A Monotone Convergence Principle
Takashi Kamihigashi; Masayuki Yao
2015-01-01
We consider infinite-horizon deterministic dynamic programming problems in discrete time. We show that the value function is always a fixed point of a modified version of the Bellman operator. We also show that value iteration monotonically converges to the value function if the initial function is dominated by the value function, is mapped upward by the modified Bellman operator, and satisfies a transversality-like condition. These results require no assumption except for the general framewo...
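The monotone convergence of value iteration from an initial function dominated by the value function can be illustrated on a toy finite deterministic problem. The rewards and transitions below are hypothetical, chosen only to exercise the mechanism:

```python
import numpy as np

def bellman(v, reward, beta, transitions):
    """One step of the Bellman operator for a finite deterministic
    dynamic program: (Tv)(s) = max_a [ r(s,a) + beta * v(next(s,a)) ]."""
    out = np.empty(len(v))
    for s in range(len(v)):
        out[s] = max(reward[s][a] + beta * v[t]
                     for a, t in enumerate(transitions[s]))
    return out

# Hypothetical 3-state, 2-action deterministic problem.
transitions = [[0, 1], [1, 2], [2, 0]]       # next state per action
reward = [[1.0, 0.0], [0.5, 2.0], [0.0, 1.0]]
beta = 0.9

# With non-negative rewards, v = 0 is dominated by the value function
# and mapped upward by T, so the iterates increase monotonically.
v = np.zeros(3)
for _ in range(500):
    v_next = bellman(v, reward, beta, transitions)
    assert np.all(v_next >= v - 1e-12)       # monotone improvement
    v = v_next
```

After convergence, v is (numerically) a fixed point of the Bellman operator, matching the fixed-point characterization in the abstract.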
Scaling Mobility Patterns and Collective Movements: Deterministic Walks in Lattices
Han, Xiao-Pu; Zhou, Tao; Wang, Bing-Hong
2010-01-01
Scaling mobility patterns have been widely observed for animals. In this paper, we propose a deterministic walk model to understand the scaling mobility patterns, where walkers take the least-action walks on a lattice landscape and prey. Scaling laws in the displacement distribution emerge when the amount of prey resource approaches the critical point. Around the critical point, our model generates ordered collective movements of walkers with a quasi-periodic synchronization of walkers' direc...
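A greedy nearest-prey walk captures the flavor of a least-action deterministic walk on a lattice. This toy is an assumption for illustration, not the paper's exact model (which ties scaling laws to the amount of prey near a critical point):

```python
def deterministic_walk(start, prey):
    """Greedy deterministic walk on a 2-D lattice: at each step the
    walker moves one cell toward the nearest remaining prey site
    (Manhattan distance) and consumes it on arrival. Toy sketch only."""
    x, y = start
    path = [(x, y)]
    remaining = set(prey)
    while remaining:
        tx, ty = min(remaining, key=lambda p: abs(p[0] - x) + abs(p[1] - y))
        while (x, y) != (tx, ty):
            if x != tx:
                x += 1 if tx > x else -1
            else:
                y += 1 if ty > y else -1
            path.append((x, y))
        remaining.discard((tx, ty))
    return path

# Hypothetical prey layout; the walk is fully determined by it.
path = deterministic_walk((0, 0), [(2, 0), (5, 3), (1, -4)])
```

In such models, the distribution of distances between successive prey captures is what develops scaling behavior as the prey density is tuned.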
Adaptive correction of deterministic models to produce probabilistic forecasts
Smith, P. J.; K. J. Beven; A. H. Weerts; D. Leedal
2012-01-01
This paper considers the correction of deterministic forecasts given by a flood forecasting model. A stochastic correction based on the evolution of an adaptive, multiplicative gain is presented. A number of models for the evolution of the gain are considered and the quality of the resulting probabilistic forecasts is assessed. The techniques presented offer a computationally efficient method for providing probabilistic forecasts based on existing flood forecasting system output.
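The adaptive multiplicative gain idea can be sketched with a simple exponential-forgetting update. The paper's gain-evolution models are more elaborate (and yield full predictive distributions), so treat this as an illustration of the mechanism only:

```python
def adaptive_gain_correction(forecasts, observations, forgetting=0.9):
    """Correct a deterministic forecast with an adaptive multiplicative
    gain: g is a recursively updated estimate of obs/forecast, and the
    corrected forecast at each step is g * forecast."""
    g = 1.0
    corrected = []
    for f, y in zip(forecasts, observations):
        corrected.append(g * f)              # issue corrected forecast first
        if f != 0:
            # Exponential-forgetting update toward the latest ratio.
            g = forgetting * g + (1 - forgetting) * (y / f)
    return corrected

# Example: a model biased low by a factor 0.8; the gain learns ~1.25.
fc = [10.0] * 20
obs = [12.5] * 20
out = adaptive_gain_correction(fc, obs)
```

A probabilistic forecast would additionally track the variance of the gain, turning each corrected value into a predictive interval.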
Notes on Deterministic Programming of Quantum Observables and Channels
Heinosaari, Teiko; Tukiainen, Mikko
2014-01-01
We study the limitations of deterministic programmability of quantum circuits, e.g., quantum computer. More precisely, we analyse the programming of quantum observables and channels via quantum multimeters. We show that the programming vectors for any two different sharp observables are necessarily orthogonal, whenever post-processing is not allowed. This result then directly implies that also any two different unitary channels require orthogonal programming vectors. This approach generalizes...
Uniform Deterministic Discrete Method for Three Dimensional Systems
Institute of Scientific and Technical Information of China (English)
Anonymous
1997-01-01
For radiative direct exchange areas in three-dimensional systems, the Uniform Deterministic Discrete Method (UDDM) was adopted. The spherical-surface dividing method for a sending area element and the regular icosahedron for a sending volume element can handle the direct exchange area computation for any kind of zone pair. Numerical examples of direct exchange areas in three-dimensional systems with nonhomogeneous attenuation coefficients indicated that the UDDM can give very high numerical accuracy.
Location deterministic biosensing from quantum-dot-nanowire assemblies
Liu, Chao; Kim, Kwanoh; Fan, D. L.
2014-01-01
Semiconductor quantum dots (QDs) with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled Gold (Au) nanowires. The manipulation mechanisms are quantitatively ...
On the Power of Deterministic Mechanisms for Facility Location Games
Fotakis, Dimitris; Tzamos, Christos
2012-01-01
We consider K-Facility Location games, where n strategic agents report their locations in a metric space, and a mechanism maps them to K facilities. Our main result is an elegant characterization of deterministic strategyproof mechanisms with a bounded approximation ratio for 2-Facility Location on the line. In particular, we show that for instances with n \\geq 5 agents, any such mechanism either admits a unique dictator, or always places the facilities at the leftmost and the rightmost locat...
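The leftmost-rightmost placement mentioned in the abstract is the classic deterministic strategyproof mechanism for 2-Facility Location on the line. A minimal sketch with a hypothetical instance (the characterization result itself is the paper's contribution, not this code):

```python
def two_extremes_mechanism(locations):
    """Place the two facilities at the leftmost and rightmost reported
    locations. No agent can benefit by misreporting: moving a report
    can only push a facility away from the agent's true location."""
    return min(locations), max(locations)

def social_cost(facilities, locations):
    """Sum over agents of the distance to the nearest facility."""
    return sum(min(abs(x - f) for f in facilities) for x in locations)

# Hypothetical instance with five agents on the line.
locs = [0.0, 1.0, 4.0, 9.0, 10.0]
fac = two_extremes_mechanism(locs)
cost = social_cost(fac, locs)
```

The price of this strategyproofness is a social cost that can be far from optimal: here the middle agent pays the whole cost, which is exactly the kind of gap the paper's bounds quantify.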
Deterministic and stochastic study of wind farm harmonic currents
Sainz Sapera, Luis; Mesas García, Juan José; Teodorescu, Remus; Rodríguez Cortés, Pedro
2010-01-01
Wind farm harmonic emissions are a well-known power quality problem, but little data based on actual wind farm measurements are available in the literature. In this paper, harmonic emissions of an 18 MW wind farm are investigated using extensive measurements, and the deterministic and stochastic characterization of wind farm harmonic currents is analyzed. Specific issues addressed in the paper include the harmonic variation with the wind farm operating point and the random char...
Multidirectional sorting modes in deterministic lateral displacement devices
DEFF Research Database (Denmark)
Long, B.R.; Heller, Martin; Beech, J.P.;
2008-01-01
Deterministic lateral displacement (DLD) devices separate micrometer-scale particles in solution based on their size using a laminar microfluidic flow in an array of obstacles. We investigate array geometries with rational row-shift fractions in DLD devices by use of a simple model including both advection and diffusion. Our model predicts multidirectional sorting modes that could be experimentally tested in high-throughput DLD devices containing obstacles that are much smaller than the separation between obstacles.
Deterministic-Probabilistic Approach for Determining the Elasticity Moduli of Steels
Directory of Open Access Journals (Sweden)
Popov Alexander
2015-03-01
Full Text Available The known deterministic relationships for estimating the elastic characteristics of materials do not adequately account for the significant variability of these parameters in solids. Therefore, a probabilistic approach to determining the moduli of elasticity, treated as random values, is given, which increases the accuracy of the obtained results. By ultrasonic testing, a non-destructive evaluation of the structure and properties of the investigated steels has been made.
Spatiotemporal calibration and resolution refinement of output from deterministic models.
Gilani, Owais; McKay, Lisa A; Gregoire, Timothy G; Guan, Yongtao; Leaderer, Brian P; Holford, Theodore R
2016-06-30
Spatiotemporal calibration of output from deterministic models is an increasingly popular tool to more accurately and efficiently estimate the true distribution of spatial and temporal processes. Current calibration techniques have focused on a single source of data on observed measurements of the process of interest that are both temporally and spatially dense. Additionally, these methods often calibrate deterministic models available in grid-cell format with pixel sizes small enough that the centroid of the pixel closely approximates the measurement for other points within the pixel. We develop a modeling strategy that allows us to simultaneously incorporate information from two sources of data on observed measurements of the process (that differ in their spatial and temporal resolutions) to calibrate estimates from a deterministic model available on a regular grid. This method not only improves estimates of the pollutant at the grid centroids but also refines the spatial resolution of the grid data. The modeling strategy is illustrated by calibrating and spatially refining daily estimates of ambient nitrogen dioxide concentration over Connecticut for 1994 from the Community Multiscale Air Quality model (temporally dense grid-cell estimates on a large pixel size) using observations from an epidemiologic study (spatially dense and temporally sparse) and Environmental Protection Agency monitoring stations (temporally dense and spatially sparse). Copyright © 2016 John Wiley & Sons, Ltd. PMID:26790617
A Semi-Deterministic Channel Model for VANETs Simulations
Directory of Open Access Journals (Sweden)
Jonathan Ledy
2012-01-01
Full Text Available Today's advanced simulators facilitate thorough studies on Vehicular Ad hoc NETworks (VANETs). However, the choice of the physical layer model in such simulators is a crucial issue that impacts the results. A solution to this challenge might be found with a hybrid model. In this paper, we propose a semi-deterministic channel propagation model for VANETs called UM-CRT. It is based on CRT (Communication Ray Tracer) and SCME-UM (Spatial Channel Model Extended - Urban Micro), which are, respectively, a deterministic channel simulator and a statistical channel model. It uses a process which adjusts the statistical model using relevant parameters obtained from the deterministic simulator. To evaluate realistic VANET transmissions, we have integrated our hybrid model in fully compliant 802.11p and 802.11n physical layers. This framework is then used with the NS-2 network simulator. Our simulation results show that UM-CRT is well adapted for VANET simulations in urban areas, as it gives a good approximation of realistic channel propagation mechanisms while significantly improving simulation time.
Applicability of deterministic methods in seismic site effects modeling
International Nuclear Information System (INIS)
The up-to-date information related to the local geological structure in the Bucharest urban area has been integrated in complex analyses of seismic ground motion simulation using deterministic procedures. The data recorded for the Vrancea intermediate-depth large earthquakes are supplemented with synthetic computations all over the city area. The hybrid method with a double-couple seismic source approximation and relatively simple regional and local structure models allows a satisfactory reproduction of the strong motion records in the frequency domain (0.05-1) Hz. The new geological information and a deterministic analytical method, which combines the modal summation technique, applied to model the seismic wave propagation between the seismic source and the studied sites, with the mode coupling approach used to model the seismic wave propagation through the local sedimentary structure of the target site, allow the modelling to be extended to higher frequencies of earthquake engineering interest. The results of these studies (synthetic time histories of the ground motion parameters, absolute and relative response spectra, etc.) for the last 3 Vrancea strong events (August 31, 1986, Mw = 7.1; May 30, 1990, Mw = 6.9; and October 27, 2004, Mw = 6.0) can complete the strong motion database used for microzonation purposes. Implications and integration of the deterministic results into urban planning and disaster management strategies are also discussed. (authors)
Hybrid Deterministic-Monte Carlo Methods for Neutral Particle Transport
International Nuclear Information System (INIS)
In the history of transport analysis methodology for nuclear systems, there have been two fundamentally different methods, i.e., deterministic and Monte Carlo (MC) methods. Even though these two methods have coexisted for the past 60 years and are complementary to each other, they have never been coded in the same computer codes. Recently, however, researchers have started to consider combining these two methods in a single computer code to exploit the strengths of the two algorithms and avoid their weaknesses. Although advanced modern deterministic techniques such as the method of characteristics (MOC) can solve a multigroup transport equation very accurately, there are still uncertainties in the MOC solutions due to the inaccuracy of the multigroup cross section data caused by approximations in the process of multigroup cross section generation, i.e., equivalence theory, interference effects, etc. Conversely, the MC method can handle the resonance shielding effect accurately when sufficiently many neutron histories are used, but it takes a long calculation time. There has also been research on combining a multigroup transport solver and a continuous-energy transport solver in a computer code system depending on the energy range. This paper proposes a hybrid deterministic-MC method in which a multigroup MOC method is used for the high and low energy ranges and a continuous-energy MC method is used for the intermediate resonance energy range for efficient and accurate transport analysis
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
International Nuclear Information System (INIS)
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors
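The fission matrix idea behind this kind of acceleration can be sketched as a power iteration: F[i][j] estimates the expected fission neutrons born in region i per source neutron starting in region j, and the converged fission source is the dominant eigenvector of F. A minimal pure-Python sketch, with an invented 2-region matrix (this is an illustration of the general technique, not the thesis's implementation):

```python
def fission_source(F, tol=1e-12, max_iter=1000):
    """Power iteration on a fission matrix F, where F[i][j] is the
    expected number of fission neutrons born in region i per source
    neutron starting in region j. Returns the normalized source
    distribution and the multiplication factor k_eff (the dominant
    eigenvalue of F)."""
    n = len(F)
    s = [1.0 / n] * n
    k = 1.0
    for _ in range(max_iter):
        s_new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(s_new)                    # eigenvalue estimate
        s_new = [x / k for x in s_new]    # renormalize the source
        if max(abs(a - b) for a, b in zip(s_new, s)) < tol:
            return s_new, k
        s = s_new
    return s, k

# Illustrative 2-region matrix (the numbers are invented for the example)
source, k_eff = fission_source([[1.2, 0.3], [0.3, 0.8]])
```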
Demographic noise can reverse the direction of deterministic selection.
Constable, George W A; Rogers, Tim; McKane, Alan J; Tarnita, Corina E
2016-08-01
Deterministic evolutionary theory robustly predicts that populations displaying altruistic behaviors will be driven to extinction by mutant cheats that absorb common benefits but do not themselves contribute. Here we show that when demographic stochasticity is accounted for, selection can in fact act in the reverse direction to that predicted deterministically, instead favoring cooperative behaviors that appreciably increase the carrying capacity of the population. Populations that exist in larger numbers experience a selective advantage by being more stochastically robust to invasions than smaller populations, and this advantage can persist even in the presence of reproductive costs. We investigate this general effect in the specific context of public goods production and find conditions for stochastic selection reversal leading to the success of public good producers. This insight, developed here analytically, is missed by the deterministic analysis as well as by standard game theoretic models that enforce a fixed population size. The effect is found to be amplified by space; in this scenario we find that selection reversal occurs within biologically reasonable parameter regimes for microbial populations. Beyond the public good problem, we formulate a general mathematical framework for models that may exhibit stochastic selection reversal. In this context, we describe a stochastic analog to [Formula: see text] theory, by which small populations can evolve to higher densities in the absence of disturbance. PMID:27450085
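A minimal way to see demographic noise at work is an exact (Gillespie) stochastic simulation of a logistic birth-death process. The model and rate choices below are our own illustrative construction, not the paper's framework:

```python
import random

def gillespie_logistic(K, b=2.0, d=1.0, n_events=2000, seed=1):
    """Exact stochastic simulation (Gillespie algorithm) of a logistic
    birth-death process: birth rate b*n, death rate n*(d + (b - d)*n/K),
    so the deterministic fixed point is n = K. Returns the time-averaged
    population size. Illustrative toy model only."""
    rng = random.Random(seed)
    n = K
    t, acc = 0.0, 0.0
    for _ in range(n_events):
        birth = b * n
        death = n * (d + (b - d) * n / K)
        total = birth + death
        if total == 0:
            break  # population extinct; no further events
        dt = rng.expovariate(total)   # waiting time to the next event
        acc += n * dt
        t += dt
        n += 1 if rng.random() < birth / total else -1
    return acc / t
```

The trajectory fluctuates around the deterministic carrying capacity; it is exactly these finite-size fluctuations that the paper shows can reverse the direction of selection.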
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
Convergence studies of deterministic methods for LWR explicit reflector methodology
International Nuclear Information System (INIS)
The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are a priori produced with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified to potentially constitute one of the main sources of errors for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is to first recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)
Gama-Rodrigues Joaquim J.; Silva José Hyppolito da; Aisaka Adilson A.; Jureidini Ricardo; Falci Júnior Renato; Maluf Filho Fauze; Chong A. Kim; Tsai André Wan Wen; Bresciani Cláudio
2000-01-01
The Peutz-Jeghers syndrome is a hereditary disease that requires frequent endoscopic and surgical intervention, leading to secondary complications such as short bowel syndrome. CASE REPORT: This paper reports on a 15-year-old male patient with a family history of the disease, who underwent surgery for treatment of an intestinal occlusion due to a small intestine intussusception. DISCUSSION: An intra-operative fiberscopic procedure was included for the detection and treatment of numerous polyp...
Optimal Dividend Payments for the Piecewise-Deterministic Poisson Risk Model
Feng, Runhuan; Zhu, Chao
2011-01-01
This paper deals with the optimal dividend payment problem in the general setup of a piecewise-deterministic compound Poisson risk model. The objective of the insurance business under consideration is to maximize the expected discounted dividend payout up to the time of ruin. Both restricted and unrestricted payment schemes are considered. In the case of the restricted payment scheme, the value function is shown to be a classical solution of the corresponding Hamilton-Jacobi-Bellman equation, which, in turn, leads to an optimal restricted dividend payment policy. When the claims are exponentially distributed, the value function and an optimal dividend payment policy of the threshold type are determined in closed form under certain conditions. The case of the unrestricted payment scheme gives rise to a singular stochastic control problem. By solving the associated integro-differential quasi-variational inequality, the value function and an optimal barrier strategy are determined explicitly in exponential claim size distri...
Algorithms for Deterministic Call Admission Control of Pre-stored VBR Video Streams
Directory of Open Access Journals (Sweden)
Christos Tryfonas
2009-08-01
Full Text Available We examine the problem of accepting a new request for a pre-stored VBR video stream that has been smoothed using any of the smoothing algorithms found in the literature. The output of these algorithms is a piecewise constant-rate schedule for a Variable Bit-Rate (VBR) stream. The schedule guarantees that the decoder buffer does not overflow or underflow. The problem addressed in this paper is the determination of the minimal time displacement of each newly requested VBR stream so that it can be accommodated by the network and/or the video server without overbooking the committed traffic. We prove that this call-admission control problem for multiple requested VBR streams is NP-complete and inapproximable within a constant factor, by reduction from the VERTEX COLORING problem. We also present a deterministic morphology-sensitive algorithm that calculates the minimal time displacement of a VBR stream request. The complexity of the proposed algorithm, along with the experimental results we provide, indicates that the proposed algorithm is suitable for real-time determination of the time displacement parameter during the call admission phase.
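In a discretized setting, the admission test reduces to finding the smallest shift of the new schedule that keeps the aggregate rate under the link capacity. The brute-force sketch below (the slot-based model and names are ours) only conveys the quantity being computed; the paper's morphology-sensitive algorithm computes it far more efficiently:

```python
def min_displacement(agg, new, cap):
    """Smallest start delay d (in time slots) such that the new
    piecewise-constant schedule `new` fits under capacity `cap` on top
    of the committed aggregate schedule `agg`. Brute-force sketch of
    the admission-control quantity; illustrative only."""
    for d in range(len(agg) - len(new) + 1):
        if all(agg[d + t] + new[t] <= cap for t in range(len(new))):
            return d
    return None  # cannot be admitted within the horizon
```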
International Nuclear Information System (INIS)
The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. We extend in this paper the classical moments method to the case of bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability
Systematic and Deterministic Graph-Minor Embedding of Cartesian Products of Complete Graphs
Zaribafiyan, Arman; Marchand, Dominic J. J.; Changiz Rezaei, Seyed Saeed
The limited connectivity of current and next-generation quantum annealers motivates the need for efficient graph-minor embedding methods. The overhead of the widely used heuristic techniques is quickly proving to be a significant bottleneck for real-world applications. To alleviate this obstacle, we propose a systematic deterministic embedding method that exploits the structures of both the input graph of the specific combinatorial optimization problem and the quantum annealer. We focus on the specific case of the Cartesian product of two complete graphs, a regular structure that occurs in many problems. We first divide the problem by embedding one of the factors of the Cartesian product in a repeatable unit. The resulting simplified problem consists of placing copies of this unit and connecting them together appropriately. Aside from the obvious speed and efficiency advantages of a systematic deterministic approach, the embeddings produced can be easily scaled for larger processors and show desirable properties with respect to the number of qubits used and the chain length distribution.
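For reference, the regular structure targeted by the embedding method can be generated directly: in the Cartesian product of two complete graphs, two vertex pairs are adjacent iff they agree in exactly one coordinate. A small sketch (our own construction, not the paper's embedding code):

```python
from itertools import combinations

def cartesian_product_complete(n, m):
    """Edge set of the Cartesian product of K_n and K_m: vertices are
    pairs (i, j); two vertices are adjacent iff they agree in one
    coordinate and form an edge of the complete graph in the other."""
    edges = set()
    for i in range(n):                              # copies of K_m
        for j1, j2 in combinations(range(m), 2):
            edges.add(((i, j1), (i, j2)))
    for j in range(m):                              # copies of K_n
        for i1, i2 in combinations(range(n), 2):
            edges.add(((i1, j), (i2, j)))
    return edges
```

The resulting graph is regular of degree (n - 1) + (m - 1), which is the regularity the systematic embedding exploits.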
International Nuclear Information System (INIS)
An integrated predictive model is being developed to account for the effects of localized environmental conditions in crevices on the initiation and propagation of pits. A deterministic calculation is used to estimate the accumulation of hydrogen ions (pH suppression) in the crevice solution due to the hydrolysis of dissolved metals. Pit initiation and growth within the crevice is then dealt with by either a probabilistic model, or an equivalent deterministic model. Ultimately, the role of intergranular corrosion will have to be considered. While the strategy presented here is very promising, the integrated model is not yet ready for precise quantitative predictions. Empirical expressions for the rate of penetration based upon experimental crevice corrosion data can be used in the interim period, until the integrated model can be refined. Bounding calculations based upon such empirical expressions can provide important insight into worst-case scenarios
Energy Technology Data Exchange (ETDEWEB)
Farmer, J.C.
1997-10-01
An integrated predictive model is being developed to account for the effects of localized environmental conditions in crevices on the initiation and propagation of pits. A deterministic calculation is used to estimate the accumulation of hydrogen ions (pH suppression) in the crevice solution due to the hydrolysis of dissolved metals. Pit initiation and growth within the crevice is then dealt with by either a probabilistic model, or an equivalent deterministic model. Ultimately, the role of intergranular corrosion will have to be considered. While the strategy presented here is very promising, the integrated model is not yet ready for precise quantitative predictions. Empirical expressions for the rate of penetration based upon experimental crevice corrosion data can be used in the interim period, until the integrated model can be refined. Bounding calculations based upon such empirical expressions can provide important insight into worst-case scenarios.
International Nuclear Information System (INIS)
This Letter proposes an algorithm to detect an unknown deterministic signal hidden in additive white Gaussian noise. The detector is based on recurrence analysis. It compares the distribution of the similarity matrix coefficients of the measured signal with an analytic expression of the distribution expected in the noise-only case. This comparison is achieved using divergence measures. Performance analysis based on the receiver operating characteristics shows that the proposed detector outperforms the energy detector, giving a probability of detection 10% to 50% higher, and has a similar performance to that of a sub-optimal filter detector. - Highlights: • We model the distribution of the similarity matrix coefficients of a Gaussian noise. • We use divergence measures for goodness-of-fit test between a model and measured data. • We distinguish deterministic signal and Gaussian noise with similarity matrix analysis. • Similarity matrix analysis outperforms energy detector
Energy Technology Data Exchange (ETDEWEB)
Le Bot, O., E-mail: lebotol@gmail.com [Univ. Grenoble Alpes, GIPSA-Lab, F-38000 Grenoble (France); CNRS, GIPSA-Lab, F-38000 Grenoble (France); Mars, J.I. [Univ. Grenoble Alpes, GIPSA-Lab, F-38000 Grenoble (France); CNRS, GIPSA-Lab, F-38000 Grenoble (France); Gervaise, C. [Univ. Grenoble Alpes, GIPSA-Lab, F-38000 Grenoble (France); CNRS, GIPSA-Lab, F-38000 Grenoble (France); Chaire CHORUS, Foundation of Grenoble Institute of Technology, 46 Avenue Félix Viallet, 38031 Grenoble Cedex 1 (France)
2015-10-23
This Letter proposes an algorithm to detect an unknown deterministic signal hidden in additive white Gaussian noise. The detector is based on recurrence analysis. It compares the distribution of the similarity matrix coefficients of the measured signal with an analytic expression of the distribution expected in the noise-only case. This comparison is achieved using divergence measures. Performance analysis based on the receiver operating characteristics shows that the proposed detector outperforms the energy detector, giving a probability of detection 10% to 50% higher, and has a similar performance to that of a sub-optimal filter detector. - Highlights: • We model the distribution of the similarity matrix coefficients of a Gaussian noise. • We use divergence measures for goodness-of-fit test between a model and measured data. • We distinguish deterministic signal and Gaussian noise with similarity matrix analysis. • Similarity matrix analysis outperforms energy detector.
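The detection principle can be sketched in a few lines: compute a statistic of the binary similarity (recurrence) matrix of the samples and compare it with its analytic noise-only value. The sketch below uses the scalar recurrence rate as a stand-in for the full coefficient distribution, so it is a deliberate simplification of the Letter's divergence-based detector:

```python
import math
import random

def recurrence_rate(x, eps):
    """Mean of the binary similarity matrix: the fraction of sample
    pairs (i, j), i < j, whose values lie within eps of each other."""
    n = len(x)
    close = sum(1 for i in range(n) for j in range(i + 1, n)
                if abs(x[i] - x[j]) <= eps)
    return close / (n * (n - 1) / 2)

def expected_rate_noise(eps, sigma):
    """Analytic recurrence rate under the noise-only hypothesis: for
    white Gaussian noise, x_i - x_j is N(0, 2*sigma**2), hence
    P(|x_i - x_j| <= eps) = erf(eps / (2 * sigma))."""
    return math.erf(eps / (2 * sigma))

# A detector flags a deterministic component when the empirical
# statistic diverges from this noise-only prediction.
random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(400)]
gap = abs(recurrence_rate(noise, 1.0) - expected_rate_noise(1.0, 1.0))
```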
Minimally Invasive Lumbar Discectomy
Full Text Available ... possible incision to minimize the injury to the tissues, particularly the muscles, the skin, and the ligaments, ... easier, and it limits the damage to the tissues around. So it’s a much safer procedure for ...
Minimally Invasive Lumbar Discectomy
Full Text Available ... neurosurgeon going with that, or should patients be seeking out versed in the minimally invasive procedures? The ... right. 16 Okay. And the studies, do they support that, do they show that quicker recovery times, ...
Ruled Laguerre minimal surfaces
Skopenkov, Mikhail
2011-10-30
A Laguerre minimal surface is an immersed surface in ℝ³ that is an extremal of the functional ∫(H²/K − 1)dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces R(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to the graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil. © 2011 Springer-Verlag.
Minimally Invasive Lumbar Discectomy
Full Text Available ... because obviously the surgeon is sort of the person that everybody focuses on. But minimally invasive surgery ... on a regular basis, you should sort of act like a baseball player in spring training. You ...
Hubbard, Guy
2002-01-01
Provides background information on the art movement called "Minimalism" discussing why it started and its characteristics. Includes learning activities and information on the artist, Donald Judd. Includes a reproduction of one of his art works and discusses its content. (CMK)
Minimally Invasive Lumbar Discectomy
Full Text Available ... with the smallest possible incision to minimize the injury to the tissues, particularly the muscles, the skin, ... the world. It’s one of the most common injuries and one of the most common causes of ...
Minimally Invasive Lumbar Discectomy
Full Text Available ... minimize the injury to the tissues, particularly the muscles, the skin, and the ligaments, to get to ... But rather using their backs regularly so the muscles heal with normal movement. Now the traditional discectomy ...
Energy Technology Data Exchange (ETDEWEB)
Peyton, B.W.
1999-07-01
When minimum orderings proved too difficult to deal with, Rose, Tarjan, and Lueker instead studied minimal orderings and how to compute them (Algorithmic aspects of vertex elimination on graphs, SIAM J. Comput., 5:266-283, 1976). This paper introduces an algorithm that is capable of computing much better minimal orderings much more efficiently than the algorithm in Rose et al. The new insight is a way to use certain structures and concepts from modern sparse Cholesky solvers to re-express one of the basic results in Rose et al. The new algorithm begins with any initial ordering and then refines it until a minimal ordering is obtained. It is simple to obtain high-quality low-cost minimal orderings by using fill-reducing heuristic orderings as initial orderings for the algorithm. We examine several such initial orderings in some detail.
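The quantity a minimal ordering concerns is the fill produced by vertex elimination: eliminating a vertex makes its remaining neighbours pairwise adjacent, and the new edges are fill. A small sketch of that bookkeeping (not the paper's refinement algorithm) clarifies what refining an ordering optimizes:

```python
def fill_in(adj, order):
    """Count the fill edges produced by eliminating the vertices of the
    graph `adj` (dict: vertex -> set of neighbours) in the given order.
    A minimal ordering is one whose fill set is not a proper superset
    of the fill set of any other ordering."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}  # working copy
    fill = 0
    for v in order:
        nbrs = [u for u in g[v] if u in g]  # neighbours not yet eliminated
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in g[a]:           # new (fill) edge
                    g[a].add(b)
                    g[b].add(a)
                    fill += 1
        del g[v]
    return fill
```

On a path graph, eliminating the vertices from one end produces zero fill, while eliminating interior vertices first does not, which is the kind of difference a refinement pass removes.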
Minimally Invasive Lumbar Discectomy
Full Text Available ... because obviously the surgeon is sort of the person that everybody focuses on. But minimally invasive surgery ... can do to safeguard. Probably the biggest risk factor, although we see plenty of these problems in ...
Minimally Invasive Lumbar Discectomy
Full Text Available ... approach. The word that we put in the operative record is minimal. We’re talking about maybe ... we’re looking at 20-times magnification. The operative area, the field that they’re working is ...
Hedlin Hayden, Malin
2003-01-01
The dissertation involves a threefold investigation of sculpture. Firstly, the interpretations are focused on particular artworks by three British sculptors: Antony Gormley (b. 1950), Anish Kapoor (b. 1954), and Rachel Whiteread (b. 1963), respectively. The notion of applied minimalism is tentatively applied to their sculptures. A primary argument is that these works are idiomatically, thematically, and theoretically founded on the heritage of American Minimalism from the 1960s. The sculpture...
Sunstein, Cass Robert
2004-01-01
When national security conflicts with individual liberty, reviewing courts might adopt one of three general orientations: National Security Maximalism, Liberty Maximalism, and minimalism. National Security Maximalism calls for a great deal of deference to the President, above all because of his authority as Commander-in-Chief of the Armed Forces. Liberty Maximalism asks courts to assume the same liberty-protecting posture in times of war as in times of peace. Minimalism asks courts to follow ...
Hazardous waste minimization tracking system
International Nuclear Information System (INIS)
Under RCRA sections 3002(b) and 3005(h), hazardous waste generators and owners/operators of treatment, storage, and disposal facilities (TSDFs) are required to certify that they have a program in place to reduce the volume or quantity and toxicity of hazardous waste to the degree determined to be economically practicable. In many cases there are environmental, as well as economic, benefits for agencies that pursue pollution prevention options. Several state governments have already enacted waste minimization legislation (e.g., the Massachusetts Toxic Use Reduction Act of 1989, and the Oregon Toxic Use Reduction Act and Hazardous Waste Reduction Act, July 2, 1989). About twenty-six other states have established legislation that will mandate some type of waste minimization program and/or facility planning. The need to address the HAZMIN (Hazardous Waste Minimization) Program at government agencies and private industries has prompted us to identify the importance of managing the HAZMIN Program and tracking various aspects of the program, as well as the progress made in this area. "WASTE" is a tracking system which can be used and modified to maintain the information related to a Hazardous Waste Minimization Program in a manageable fashion. This program maintains, modifies, and retrieves information related to hazardous waste minimization and recycling, and provides automated report generating capabilities. It has a built-in menu, which can be printed either in part or in full. There are instructions on preparing the Annual Waste Report and the Annual Recycling Report. The program is very user friendly. It is available on 3.5 inch or 5 1/4 inch floppy disks. A computer with 640K memory is required.
International Nuclear Information System (INIS)
A hybrid approach, combining deterministic and Monte Carlo (MC) calculations, is proposed to compute the distribution of dose deposited during stereotactic synchrotron radiation therapy treatment. The proposed approach divides the computation into two parts: (i) the dose deposited by primary radiation (coming directly from the incident x-ray beam) is calculated in a deterministic way using ray casting techniques and energy-absorption coefficient tables and (ii) the dose deposited by secondary radiation (Rayleigh and Compton scattering, fluorescence) is computed using a hybrid algorithm combining MC and deterministic calculations. In the MC part, a small number of particle histories are simulated. Every time a scattering or fluorescence event takes place, a splitting mechanism is applied, so that multiple secondary photons are generated with a reduced weight. The secondary events are further processed in a deterministic way, using ray casting techniques. The whole simulation, carried out within the framework of the Monte Carlo code Geant4, is shown to converge towards the same results as the full MC simulation. The speed of convergence is found to depend notably on the splitting multiplicity, which can easily be optimized. To assess the performance of the proposed algorithm, we compare it to state-of-the-art MC simulations, accelerated by the track length estimator technique (TLE), considering a clinically realistic test case. It is found that the hybrid approach is significantly faster than the MC/TLE method, with a speed-up of about a factor of 25 at constant precision in this test case. Therefore, this method appears to be suitable for treatment planning applications.
Energy Technology Data Exchange (ETDEWEB)
Smekens, F; Freud, N; Letang, J M; Babot, D [CNDRI (Nondestructive Testing using Ionizing Radiations) Laboratory, INSA-Lyon, 69621 Villeurbanne Cedex (France); Adam, J-F; Elleaume, H; Esteve, F [INSERM U-836, Equipe 6 ' Rayonnement Synchrotron et Recherche Medicale' , Institut des Neurosciences de Grenoble (France); Ferrero, C; Bravin, A [European Synchrotron Radiation Facility, Grenoble (France)], E-mail: francois.smekens@insa-lyon.fr
2009-08-07
A hybrid approach, combining deterministic and Monte Carlo (MC) calculations, is proposed to compute the distribution of dose deposited during stereotactic synchrotron radiation therapy treatment. The proposed approach divides the computation into two parts: (i) the dose deposited by primary radiation (coming directly from the incident x-ray beam) is calculated in a deterministic way using ray casting techniques and energy-absorption coefficient tables and (ii) the dose deposited by secondary radiation (Rayleigh and Compton scattering, fluorescence) is computed using a hybrid algorithm combining MC and deterministic calculations. In the MC part, a small number of particle histories are simulated. Every time a scattering or fluorescence event takes place, a splitting mechanism is applied, so that multiple secondary photons are generated with a reduced weight. The secondary events are further processed in a deterministic way, using ray casting techniques. The whole simulation, carried out within the framework of the Monte Carlo code Geant4, is shown to converge towards the same results as the full MC simulation. The speed of convergence is found to depend notably on the splitting multiplicity, which can easily be optimized. To assess the performance of the proposed algorithm, we compare it to state-of-the-art MC simulations, accelerated by the track length estimator technique (TLE), considering a clinically realistic test case. It is found that the hybrid approach is significantly faster than the MC/TLE method, with a speed-up of about a factor of 25 at constant precision in this test case. Therefore, this method appears to be suitable for treatment planning applications.
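The weight-splitting step described in the abstract can be sketched as follows. This is a toy illustration, not the authors' Geant4 implementation: the scattering probability and the uniform per-secondary energy deposit are invented stand-ins, chosen only to show that splitting leaves the tally unbiased while reducing the variance of the scatter contribution.

```python
import random

def estimate_scatter_dose(n_histories, multiplicity, p_scatter=0.3, seed=0):
    """Toy splitting estimator: when a scattering event occurs, spawn
    `multiplicity` secondaries with weight 1/multiplicity instead of one
    full-weight secondary.  The tally stays unbiased for any multiplicity,
    while the variance of the scatter contribution drops."""
    rng = random.Random(seed)
    tally = 0.0
    for _ in range(n_histories):
        if rng.random() < p_scatter:        # a scattering event occurs
            weight = 1.0 / multiplicity     # reduced weight per secondary
            for _ in range(multiplicity):
                # stand-in for the deterministic processing of a secondary:
                # a random energy deposit uniform on [0, 1)
                tally += weight * rng.random()
    return tally / n_histories

# Both estimators target the same expected dose, p_scatter * 0.5 = 0.15;
# the higher multiplicity gives a tighter estimate per history.
lo = estimate_scatter_dose(20000, multiplicity=1)
hi = estimate_scatter_dose(20000, multiplicity=8)
```
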
Power Minimization techniques for Networked Data Centers
International Nuclear Information System (INIS)
Our objective is to develop a mathematical model to optimize energy consumption at multiple levels in networked data centers, and to develop abstract algorithms that optimize not only individual servers but also coordinate the energy consumption of clusters of servers within a data center and across geographically distributed data centers, minimizing the overall energy cost and brown-energy consumption of an enterprise. In this project, we have formulated a variety of optimization models, some stochastic, others deterministic, and have obtained a variety of qualitative results on the structural properties, robustness, and scalability of the optimal policies. We have also systematically derived from these models decentralized algorithms to optimize energy efficiency, and analyzed their optimality and stability properties. Finally, we have conducted preliminary numerical simulations to illustrate the behavior of these algorithms. We draw the following conclusions. First, there is a substantial opportunity to minimize both the amount and the cost of electricity consumption in a network of data centers, by exploiting the fact that traffic load, electricity cost, and availability of renewable generation fluctuate over time and across geographical locations. Judiciously matching these stochastic processes can optimize the tradeoff between brown energy consumption, electricity cost, and response time. Second, given the stochastic nature of these three processes, real-time dynamic feedback should form the core of any optimization strategy. The key is to develop decentralized algorithms that can be implemented at different parts of the network as simple, local algorithms that coordinate through asynchronous message passing.
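A minimal sketch of the load-matching idea: serve demand from renewable generation first, then from the cheapest remaining grid electricity. The greedy dispatch and the data-center tuples are illustrative assumptions, not the project's actual optimization models or decentralized algorithms.

```python
def assign_load(total_load, centers):
    """centers: list of (name, capacity, brown_price_per_unit, renewable_units).
    Serve renewable capacity first (assumed zero marginal cost), then fill
    the cheapest brown capacity.  Returns (allocation, brown_cost)."""
    alloc = {name: 0.0 for name, *_ in centers}
    remaining = total_load
    # 1) free renewable energy at each center
    for name, cap, price, green in centers:
        take = min(remaining, min(cap, green))
        alloc[name] += take
        remaining -= take
    # 2) cheapest brown capacity next
    cost = 0.0
    for name, cap, price, green in sorted(centers, key=lambda c: c[2]):
        take = min(remaining, cap - alloc[name])
        alloc[name] += take
        cost += take * price
        remaining -= take
    if remaining > 1e-9:
        raise ValueError("demand exceeds total capacity")
    return alloc, cost
```

For example, with one center holding 4 units of renewable capacity and another offering cheaper brown power, the renewable units are consumed first and the rest goes to the cheaper center.
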
Rathod, Ashok K; Dhake, Rakesh P; Pawaskar, Aditya
2016-01-01
Fractures of the proximal tibia comprise a wide spectrum of injuries with different fracture configurations. The combination of a tibial plateau fracture with diaphyseal extension is a rare injury, with sparse literature available on its treatment. Various treatment modalities can be adopted with the aim of achieving a well-aligned, congruous, stable joint, which allows early motion and function. We report a case of a 40-year-old male who sustained a Schatzker type VI fracture of the left tibial plateau with diaphyseal extension. On further investigation, the patient was diagnosed with diabetes mellitus with grossly deranged blood sugar levels. The depressed tibial condyle was manipulated to lift its articular surface using a K-wire as a joystick and stabilized with an additional K-wire. Distal tibial skeletal traction was maintained for three weeks, followed by an above-knee cast. At eight months of follow-up, X-rays revealed a well-consolidated fracture site, and the patient had attained a reasonably good range of motion with only terminal restriction of squatting. A tibial plateau fracture with diaphyseal extension in a patient with uncontrolled diabetes mellitus is certainly a challenging entity. After an extended search of the literature, we could not find any reports highlighting a similar method of treatment for complex tibial plateau injuries in a patient with uncontrolled diabetes mellitus. PMID:27335711
International Nuclear Information System (INIS)
Purpose: Accurate radiotherapy dose calculation algorithms are essential to any successful radiotherapy program, considering the high level of dose conformity and modulation in many of today’s treatment plans. As technology continues to progress, such as is the case with novel MRI-guided radiotherapy systems, the necessity for dose calculation algorithms to accurately predict delivered dose in increasingly challenging scenarios is vital. To this end, a novel deterministic solution has been developed to the first order linear Boltzmann transport equation which accurately calculates x-ray based radiotherapy doses in the presence of magnetic fields. Methods: The deterministic formalism discussed here with the inclusion of magnetic fields is outlined mathematically using a discrete ordinates angular discretization in an attempt to leverage existing deterministic codes. It is compared against the EGSnrc Monte Carlo code, utilizing the emf-macros addition which calculates the effects of electromagnetic fields. This comparison is performed in an inhomogeneous phantom that was designed to present a challenging calculation for deterministic calculations in 0, 0.6, and 3 T magnetic fields oriented parallel and perpendicular to the radiation beam. The accuracy of the formalism discussed here against Monte Carlo was evaluated with a gamma comparison using a standard 2%/2 mm and a more stringent 1%/1 mm criterion for a standard reference 10 × 10 cm2 field as well as a smaller 2 × 2 cm2 field. Results: Greater than 99.8% (94.8%) of all points analyzed passed a 2%/2 mm (1%/1 mm) gamma criterion for all magnetic field strengths and orientations investigated. All dosimetric changes resulting from the inclusion of magnetic fields were accurately calculated using the deterministic formalism. However, despite the algorithm’s high degree of accuracy, it is noticed that this formalism was not unconditionally stable using a discrete ordinate angular discretization. Conclusions: The
Waste minimization assessment procedure
International Nuclear Information System (INIS)
Perry Nuclear Power Plant began developing a waste minimization plan early in 1991. In March of 1991 the plan was documented, following a format similar to that described in the EPA Waste Minimization Opportunity Assessment Manual. Initial implementation involved obtaining management's commitment to support a waste minimization effort. The primary assessment goal was to identify all hazardous waste streams and to evaluate those streams for minimization opportunities. As implementation of the plan proceeded, non-hazardous waste streams routinely generated in large volumes were also evaluated for minimization opportunities. The next step included collection of process and facility data which would be useful in helping the facility accomplish its assessment goals. This paper describes the resources that were used, and which were most valuable, in identifying both the hazardous and non-hazardous waste streams that existed on site. For each material identified as a waste stream, additional information regarding the material's use, manufacturer, EPA hazardous waste number, and DOT hazard class was also gathered. Waste streams were then evaluated for potential source reduction, recycling, re-use, re-sale, or burning for heat recovery, with disposal treated as the last viable alternative.
Pipe fracture evaluations for leak-rate detection: Deterministic models
International Nuclear Information System (INIS)
Regulatory Guide 1.45, Reactor Coolant Pressure Boundary Leakage Detection Systems, was published by the Nuclear Regulatory Commission (NRC) in May 1973, and its update is being considered. Updating this procedure can involve accounting for current leak-detection instrumentation capabilities, experience with the accuracy of leak-detection systems in the past, and current analysis methods to assess the significance of the detectable leakage relative to the structural integrity of the plant. In this study, a three-phase effort was undertaken to conduct circumferentially cracked pipe fracture evaluations for applications to leak-rate detection requirements. Results from these probabilistic analyses can be used as a technical basis for future changes to the leak-rate detection criterion. In this paper, a state-of-the-art review was conducted to evaluate the adequacy of current deterministic models for thermal-hydraulic analysis for estimation of leak rates, crack-opening area analysis for determination of crack geometry, and elastic-plastic fracture mechanics for prediction of the maximum load-carrying capacity of circumferentially cracked piping systems (Phase 1). The results predicted by the above deterministic models were compared with experimental data obtained from past NRC research programs. Based on the comparisons, it was concluded that the models considered in this study provide reasonably accurate estimates of leak rates, crack-opening area, and maximum load-carrying capacity of circumferentially cracked pipes. These validated deterministic models will be used for subsequent development of novel probabilistic models to evaluate the structural reliability of degraded piping systems (Phase 2). Using these models, stochastic pipe fracture evaluations will be conducted for applications to leak-rate detection for piping in boiling water reactor and pressurized water reactor plants (Phase 3)
Oscillation and chaos in a deterministic traffic network
International Nuclear Information System (INIS)
Traffic dynamics of regular networks are of importance in theory and practice. In this paper, we study such a problem with a regular lattice structure. We specify the network structure and traffic protocols so that all random features are removed. When a node is attacked and then removed, the traffic redistributes, causing complicated dynamical results. With different system redundancy, we observe rich dynamics, ranging from a stable state to periodic and chaotic oscillations. Since this is a completely deterministic system, we can conclude that the nonlinear dynamics is purely due to the intrinsic nonlinearity of the traffic.
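The abstract does not specify the redistribution protocol; the following toy map (equal-share shedding of excess load on a ring, an assumed rule, not the paper's lattice protocol) at least shows the kind of deterministic load dynamics being iterated, with total load conserved at every step.

```python
def redistribute(loads, capacity, adj):
    """Synchronous deterministic update: each node whose load exceeds the
    capacity sheds the excess in equal shares to its neighbours."""
    excess = {v: max(load - capacity, 0.0) for v, load in loads.items()}
    return {v: loads[v] - excess[v] + sum(excess[u] / len(adj[u]) for u in adj[v])
            for v in loads}

def trajectory(loads, capacity, adj, steps):
    """Iterate the map from an initial load distribution; depending on the
    parameters the orbit may settle, oscillate, or wander irregularly."""
    states = [dict(loads)]
    for _ in range(steps):
        states.append(redistribute(states[-1], capacity, adj))
    return states
```

On a 4-node ring with a single overloaded node, the excess spreads to the neighbours in one step and the system settles; total load is conserved throughout.
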
Deterministic superresolution with coherent states at the shot noise limit
DEFF Research Database (Denmark)
Distante, Emanuele; Jezek, Miroslav; Andersen, Ulrik L.
2013-01-01
...detection approaches. Here we show that superresolving phase measurements at the shot noise limit can be achieved without resorting to nonclassical optical states or to low-efficiency detection processes. Using robust coherent states of light, high-efficiency homodyne detection, and a deterministic binarization processing technique, we show a narrowing of the interference fringes that scales with 1/√N, where N is the mean number of photons of the coherent state. Experimentally we demonstrate a 12-fold narrowing at the shot noise limit.
The deterministic optical alignment of the HERMES spectrograph
Gers, Luke; Staszak, Nicholas
2014-07-01
The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four-channel, VPH-grating spectrograph fed by two 400-fiber slit assemblies, whose construction and commissioning have now been completed at the Anglo-Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles by which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.
Deterministic versus stochastic aspects of superexponential population growth models
Grosjean, Nicolas; Huillet, Thierry
2016-08-01
Deterministic population growth models with power-law rates can exhibit a large variety of growth behaviors, ranging from algebraic, exponential to hyperexponential (finite time explosion). In this setup, selfsimilarity considerations play a key role, together with two time substitutions. Two stochastic versions of such models are investigated, showing a much richer variety of behaviors. One is the Lamperti construction of selfsimilar positive stochastic processes based on the exponentiation of spectrally positive processes, followed by an appropriate time change. The other one is based on stable continuous-state branching processes, given by another Lamperti time substitution applied to stable spectrally positive processes.
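For the deterministic part, the power-law growth model dN/dt = a N^p with p > 1 already exhibits the finite-time explosion mentioned above. A short sketch of the closed-form solution and its blow-up time follows; this is standard calculus for that model, not code from the paper.

```python
def blowup_time(n0, a, p):
    """Finite-time explosion for dN/dt = a*N**p with p > 1:
    t* = 1 / (a * (p-1) * n0**(p-1))."""
    if p <= 1:
        raise ValueError("no finite-time explosion for p <= 1")
    return 1.0 / (a * (p - 1) * n0 ** (p - 1))

def solution(t, n0, a, p):
    """Closed-form solution of dN/dt = a*N**p for p > 1, valid for
    t < blowup_time(n0, a, p); N(t) diverges as t approaches t*."""
    return n0 / (1.0 - a * (p - 1) * n0 ** (p - 1) * t) ** (1.0 / (p - 1))
```

With p = 2, a = 1, N(0) = 1 the solution is N(t) = 1/(1 - t), which explodes at t* = 1; for p = 1 the same equation gives ordinary exponential growth, and for p < 1 only algebraic growth.
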
HPC challenges for deterministic neutronics simulations using APOLLO3 code
International Nuclear Information System (INIS)
The aim of this paper is to present some major HPC challenges for deterministic neutronics simulations and how these challenges are addressed in the APOLLO3 code. Different levels of HPC and parallel programming paradigms are illustrated on different kinds of applications within the framework of the APOLLO3 code. Results are given for fuel load management using a genetic algorithm, domain decomposition for transport solvers, and GPU acceleration of the Boltzmann equation solution, scaling from a few cores to massively parallel computations using more than 10,000 cores. (author)
Deterministic Single-Phonon Source Triggered by a Single Photon
Söllner, Immo; Midolo, Leonardo; Lodahl, Peter
2016-06-01
We propose a scheme that enables the deterministic generation of single phonons at gigahertz frequencies triggered by single photons in the near infrared. This process is mediated by a quantum dot embedded on chip in an optomechanical circuit, which allows for the simultaneous control of the relevant photonic and phononic frequencies. We devise new optomechanical circuit elements that constitute the necessary building blocks for the proposed scheme and are readily implementable within the current state-of-the-art of nanofabrication. This will open new avenues for implementing quantum functionalities based on phonons as an on-chip quantum bus.
Analysis of the Deterministic (s, S) Inventory Problem
Iyer, Ananth V.; Linus E. Schrage
1992-01-01
The traditional or textbook approach for finding an (s, S) inventory policy is to take a demand distribution as given and then find a reorder point s and order up to point S that are optimal for this demand distribution. In reality, the demand distribution may have been obtained by fitting it to some historical demand stream. In contrast, the deterministic (s, S) inventory problem is to directly determine the (s, S) pair that would have been optimal for the original demand stream, bypassing t...
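The deterministic (s, S) idea, evaluating candidate pairs directly on the historical demand stream, can be sketched as below. The cost structure (fixed ordering cost, linear holding and backorder costs) is an assumed textbook form, and the brute-force search stands in for whatever algorithm the authors actually propose.

```python
def policy_cost(demands, s, S, K=10.0, h=1.0, p=5.0):
    """Total cost of an (s, S) policy replayed on a historical demand
    stream: order up to S whenever inventory <= s (fixed cost K), with
    holding cost h per unit held and penalty p per unit backordered."""
    inv, cost = S, 0.0
    for d in demands:
        if inv <= s:               # review: reorder up to S
            cost += K
            inv = S
        inv -= d                   # demand for this period is realized
        cost += h * max(inv, 0) + p * max(-inv, 0)
    return cost

def best_policy(demands, max_S=20):
    """Brute-force the (s, S) pair that is optimal for this exact stream,
    bypassing any fitted demand distribution."""
    return min(((s, S) for S in range(1, max_S + 1) for s in range(S)),
               key=lambda pair: policy_cost(demands, *pair))
```

The contrast with the textbook approach is that `best_policy` never fits a distribution: it optimizes against the demand stream itself.
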
Deterministic and Stochastic Models of Dynamics of Chemical Systems
Czech Academy of Sciences Publication Activity Database
Vejchodský, Tomáš; Erban, R.
Praha : Institute of Mathematics Academy of Sciences of the Czech Republic, 2008 - (Chleboun, J.; Přikryl, P.; Segeth, K.; Vejchodský, T.), s. 220-225 ISBN 978-80-85823-55-4. [Programs and Algorithms of Numerical Mathematics /14./. Dolní Maxov (CZ), 01.06.2008-06.06.2008] R&D Projects: GA AV ČR IAA100760702; GA ČR(CZ) GA102/07/0496 Institutional research plan: CEZ:AV0Z10190503 Keywords : deterministic model * stochastic model * Fokker-Planck equation Subject RIV: BA - General Mathematics
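The deterministic-versus-stochastic contrast flagged in the keywords above can be illustrated on the simplest chemical system, first-order decay (A -> 0), where the rate-equation ODE has a closed form and the Gillespie stochastic simulation algorithm reproduces the same mean behavior. This is a generic example, not taken from the cited paper.

```python
import math
import random

def ode_decay(x0, k, t):
    """Deterministic rate-equation model: dx/dt = -k*x, so x(t) = x0*exp(-k*t)."""
    return x0 * math.exp(-k * t)

def ssa_decay(x0, k, t_end, rng):
    """Gillespie SSA for the single reaction A -> 0 (propensity k*x):
    exponentially distributed waiting times between decay events."""
    x, t = x0, 0.0
    while x > 0:
        t += rng.expovariate(k * x)   # time to the next reaction
        if t > t_end:
            break
        x -= 1
    return x

rng = random.Random(1)
ssa_mean = sum(ssa_decay(100, 0.5, 2.0, rng) for _ in range(2000)) / 2000
det = ode_decay(100, 0.5, 2.0)
```

For this linear system the SSA mean matches the ODE exactly in expectation; for nonlinear kinetics the two models can genuinely diverge, which is the regime where Fokker-Planck-type descriptions become relevant.
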
Deterministic Single-Phonon Source Triggered by a Single Photon
Söllner, Immo; Lodahl, Peter
2016-01-01
We propose a scheme that enables the deterministic generation of single phonons at GHz frequencies triggered by single photons in the near infrared. This process is mediated by a quantum dot embedded on-chip in an opto-mechanical circuit, which allows for the simultaneous control of the relevant photonic and phononic frequencies. We devise new opto-mechanical circuit elements that constitute the necessary building blocks for the proposed scheme and are readily implementable within the current state-of-the-art of nano-fabrication. This will open new avenues for implementing quantum functionalities based on phonons as an on-chip quantum bus.
Deterministic approximation for the cover time of trees
Feige, Uriel
2009-01-01
We present a deterministic algorithm that given a tree T with n vertices, a starting vertex v and a slackness parameter epsilon > 0, estimates within an additive error of epsilon the cover and return time, namely, the expected time it takes a simple random walk that starts at v to visit all vertices of T and return to v. The running time of our algorithm is polynomial in n/epsilon, and hence remains polynomial in n also for epsilon = 1/n^{O(1)}. We also show how the algorithm can be extended to estimate the expected cover (without return) time on trees.
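For intuition about the quantity being estimated, a Monte Carlo baseline (the randomized estimator that the paper's deterministic algorithm is designed to replace) can be written in a few lines; the tree is given as an adjacency map.

```python
import random

def cover_return_sample(adj, start, rng):
    """One simple-random-walk sample of the cover-and-return time: steps
    until every vertex has been visited and the walk is back at start."""
    visited = {start}
    v, steps = start, 0
    while not (len(visited) == len(adj) and v == start):
        v = rng.choice(adj[v])   # uniform step to a neighbour
        steps += 1
        visited.add(v)
    return steps

def estimate(adj, start, samples=20000, seed=0):
    """Monte Carlo estimate of the expected cover-and-return time."""
    rng = random.Random(seed)
    return sum(cover_return_sample(adj, start, rng) for _ in range(samples)) / samples
```

On a single edge the cover-and-return time is exactly 2; on the path 0-1-2 started from the middle vertex, a short first-step analysis gives an expected value of 6, which the estimator reproduces.
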
Deterministic controlled remote state preparation using partially entangled quantum channel
Chen, Na; Quan, Dong Xiao; Yang, Hong; Pei, Chang Xing
2016-04-01
In this paper, we propose a novel scheme for deterministic controlled remote state preparation (CRSP) of arbitrary two-qubit states. A suitably chosen partially entangled state is used as the quantum channel. With proper projective measurements carried out by the sender and the controller, the receiver can reconstruct the target state by means of an appropriate unitary operation. Unit success probability can be achieved for arbitrary two-qubit states. Unlike some previous CRSP schemes utilizing partially entangled channels, no auxiliary qubit is required in our scheme. We also show that the success probability is independent of the parameters of the partially entangled quantum channel.
Deterministic multimode photonic device for quantum-information processing
DEFF Research Database (Denmark)
Nielsen, Anne Ersbak Bang; Mølmer, Klaus
2010-01-01
We propose the implementation of a light source that can deterministically generate a rich variety of multimode quantum states. The desired states are encoded in the collective population of different ground hyperfine states of an atomic ensemble and converted to multimode photonic states by excitation to optically excited levels followed by cooperative spontaneous emission. Among our examples of applications, we demonstrate how two-photon-entangled states can be prepared and implemented in a protocol for a reference-frame-free quantum key distribution and how one-dimensional as well as higher...
Steering Multiple Reverse Current into Unidirectional Current in Deterministic Ratchets
Institute of Scientific and Technical Information of China (English)
韦笃取; 罗晓曙; 覃英华
2011-01-01
Recent investigations have shown that, as the amplitude of the external force is varied, deterministic ratchets exhibit multiple current reversals, which are undesirable in certain circumstances. To steer the multiple reverse currents into a unidirectional current, an adaptive control law is presented, inspired by the relation between multiple current reversals and the chaos-periodic/quasiperiodic transition of the transport velocity. The designed controller can stabilize the transport velocity of the ratchets to a steady state and suppress any chaos-periodic/quasiperiodic transition; that is, stable transport in ratchets is achieved, leaving the current sign unchanged.
Methods and models in mathematical biology deterministic and stochastic approaches
Müller, Johannes
2015-01-01
This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.
Deterministic secure quantum communication over a collective-noise channel
Institute of Scientific and Technical Information of China (English)
GU Bin; PEI ShiXin; SONG Biao; ZHONG Kun
2009-01-01
We present two deterministic secure quantum communication schemes over a collective-noise channel. One is secure against collective-rotation noise and the other against collective-dephasing noise. The two parties of the quantum communication can exploit the correlation of their subsystems to check for eavesdropping efficiently. Although the sender must prepare a sequence of three-photon entangled states to accomplish secure communication against collective noise, the two parties need only single-photon measurements, rather than Bell-state measurements, which makes our schemes convenient in practical applications.
Deterministic and efficient quantum cryptography based on Bell's theorem
International Nuclear Information System (INIS)
Full text: We propose a novel double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish a key bit with the help of classical communications. Eavesdropping can be detected by checking the violation of local realism for the detected events. We also show that our protocol allows a robust implementation under current technology. (author)
Opinion Formation on a Deterministic Pseudo-fractal Network
Gonzalez, M C; Sousa, A. O.; Herrmann, H. J.
2003-01-01
The Sznajd model of sociophysics, in which only a group of people sharing the same opinion can convince their neighbors, is applied to a scale-free random network modeled by a deterministic graph. We also study a model for elections based on the Sznajd model, and the exponent obtained for the distribution of votes during the transient agrees with those obtained for real elections in Brazil and India. Our results are compared to those obtained using a Barabasi-Albert scale-free network.
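The Sznajd update rule referenced above is simple enough to state in code. This sketch uses the one-dimensional ring version with the single ferromagnetic rule (a pair of agreeing neighbors converts its two outer neighbors); the ring topology is an assumption for illustration, since the paper works on a pseudo-fractal network.

```python
import random

def sznajd_update(spins, i):
    """Apply the Sznajd rule at the pair (i, i+1) on a ring: if the pair
    agrees, it imposes its opinion on the two outer neighbours; if it
    disagrees, nothing happens."""
    n = len(spins)
    j = (i + 1) % n
    if spins[i] == spins[j]:
        spins[(i - 1) % n] = spins[i]
        spins[(j + 1) % n] = spins[i]
    return spins

def simulate(n=20, sweeps=2000, seed=3):
    """Random sequential updates from a half-and-half initial condition;
    on a ring this dynamics typically ends in full consensus."""
    rng = random.Random(seed)
    spins = [1] * (n // 2) + [-1] * (n - n // 2)
    for _ in range(sweeps * n):
        sznajd_update(spins, rng.randrange(n))
    return spins
```

A single update already shows the mechanism: an agreeing pair converts both outer neighbors, while a disagreeing pair leaves the configuration untouched.
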
International Nuclear Information System (INIS)
We propose minimal gaugino mediation as the simplest known solution to the supersymmetric flavor and CP problems. The framework predicts a very minimal structure for the soft parameters at ultrahigh energies: gaugino masses are unified and non-vanishing whereas all other soft supersymmetry breaking parameters vanish. We show that this boundary condition naturally arises from a small extra dimension and present a complete model which includes a new extra-dimensional solution to the μ problem. We briefly discuss the predicted superpartner spectrum as a function of the two parameters of the model. The commonly ignored renormalization group evolution above the GUT scale is crucial to the viability of minimal gaugino mediation but does not introduce new model dependence
International Nuclear Information System (INIS)
The authors propose Minimal Gaugino Mediation as the simplest known solution to the supersymmetric flavor and CP problems. The framework predicts a very minimal structure for the soft parameters at ultra-high energies: gaugino masses are unified and non-vanishing whereas all other soft supersymmetry breaking parameters vanish. The authors show that this boundary condition naturally arises from a small extra dimension and present a complete model which includes a new extra-dimensional solution to the mu problem. The authors briefly discuss the predicted superpartner spectrum as a function of the two parameters of the model. The commonly ignored renormalization group evolution above the GUT scale is crucial to the viability of Minimal Gaugino Mediation but does not introduce new model dependence
Sadek, Mohammad
2010-01-01
In this paper we consider genus one equations of degree $n$, namely a (generalised) binary quartic when $n=2$, a ternary cubic when $n=3$, and a pair of quaternary quadrics when $n=4$. A new definition for the minimality of genus one equations of degree $n$ over local fields is introduced. The advantage of this definition is that it does not depend on invariant theory of genus one curves. We prove that this definition coincides with the classical definition of minimality for all $n\\le4$. As a...
Minimalism and Speakers’ Intuitions
Directory of Open Access Journals (Sweden)
Matías Gariazzo
2011-08-01
Full Text Available Minimalism proposes a semantics that does not account for speakers’ intuitions about the truth conditions of a range of sentences or utterances. Thus, a challenge for this view is to offer an explanation of how its assignment of semantic contents to these sentences is grounded in their use. Such an account was mainly offered by Soames, but also suggested by Cappelen and Lepore. The article criticizes this explanation by presenting four kinds of counterexamples to it, and arrives at the conclusion that minimalism has not successfully answered the above-mentioned challenge.
Seismic hazard in Romania associated to Vrancea subcrustal source: Deterministic evaluation
International Nuclear Information System (INIS)
Our study presents an application of the deterministic approach to the particular case of Vrancea intermediate-depth earthquakes to show how efficient the numerical synthesis is in predicting realistic ground motion, and how some striking peculiarities of the observed intensity maps are properly reproduced. The deterministic approach proposed by Costa et al. (1993) is particularly useful to compute seismic hazard in Romania, where the most destructive effects are caused by the intermediate-depth earthquakes generated in the Vrancea region. Vrancea is unique among the seismic sources of the World because of its striking peculiarities: the extreme concentration of seismicity with a remarkable invariance of the foci distribution, the unusually high rate of strong shocks (an average frequency of 3 events with magnitude greater than 7 per century) inside an exceptionally narrow focal volume, the predominance of a reverse faulting mechanism with the T-axis almost vertical and the P-axis almost horizontal and the more efficient high-frequency radiation, especially in the case of large earthquakes, in comparison with shallow earthquakes of similar size. The seismic hazard is computed in terms of peak ground motion values characterizing the complete synthetic seismograms generated by the modal summation technique on a grid covering the Romanian territory. Two representative scenario earthquakes are considered in the computation, corresponding to the largest instrumentally recorded earthquakes, one located in the upper part of the slab (Mw = 7.4; h = 90 km), the other located in the lower part of the slab (Mw = 7.7; h = 150 km). The seismic hazard distribution, expressed in terms of Design Ground Acceleration values, is very sensitive to magnitude, focal depth and focal mechanism. For a variation of 0.3 magnitude units the hazard level generally increases by a factor of two. The increase of the focal depth leads to stronger radiation at large epicentral distance (100 - 200
New developments in virtual X-ray imaging: Fast simulation using a deterministic approach
International Nuclear Information System (INIS)
The simulation code named VXI ('Virtual X-ray Imaging') has been considerably improved since the issue of its first version, two years ago. Emphasis was particularly laid on devising faster (dedicated and optimized) ray tracing algorithms. At present, the computation time needed to simulate a transmitted image (i.e. formed by uncollided photons), with a polychromatic beam, complex sample geometry and high detector resolution, is typically in the range 0.01-10 s, with a simple personal computer (1 GHz microprocessor). This feature enables real-time simulation of radioscopy and tomography setups. We have recently tackled the simulation of Rayleigh and Compton scattering using the same deterministic ray tracing approach. The performance of our code in the case of transmission images as well as preliminary results of the simulation of first-order scattering are presented in this paper
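The transmitted (uncollided) image computation rests on the Beer-Lambert law along each ray. A minimal sketch follows; the single-material slab phantom and per-pixel path lengths are invented stand-ins for VXI's actual ray tracer and material tables.

```python
import math

def transmitted_intensity(i0, segments):
    """Beer-Lambert law for uncollided photons along one ray:
    I = I0 * exp(-sum(mu_i * l_i)), where segments is a list of
    (mu, length) pairs for the materials the ray crosses."""
    return i0 * math.exp(-sum(mu * length for mu, length in segments))

def image_row(i0, slab_mu, thicknesses):
    """One detector row behind a single-material slab of varying thickness,
    a toy stand-in for the ray tracer's per-pixel path-length computation."""
    return [transmitted_intensity(i0, [(slab_mu, t)]) for t in thicknesses]
```

A polychromatic beam would repeat the same sum per energy bin with energy-dependent attenuation coefficients; the scattered contributions mentioned at the end of the abstract need a separate first-order scattering pass.
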
Deterministic Partial Differential Equation Model for Dose Calculation in Electron Radiotherapy
Duclous, Roland; Frank, Martin
2009-01-01
Treatment with high-energy ionizing radiation is one of the main methods of modern cancer therapy in clinical use. During the last decades, two main approaches to dose calculation were used: Monte Carlo simulations and semi-empirical models based on Fermi-Eyges theory. A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. Starting from these, we derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free-streaming and the isotropic regimes. We solve it numerically by a newly developed HLLC scheme based on [BerCharDub] that exactly preserves key properties of the analytical solution on the discrete level. Several numerical results for test cases from the medical physics literature are presented.
Method and application for internal flooding deterministic safety assessment of nuclear power plant
International Nuclear Information System (INIS)
In order to ensure the safety functions of nuclear power plants in case of internal flooding, it is necessary to consider protection against internal flooding in nuclear power plant design and evaluate the consequences to verify that the target of internal flooding protection can be achieved. Based on a study of internal flooding protection purposes, requirements and protection measures, the assumptions, methods and steps of a deterministic internal flooding safety assessment were established and presented. The method was applied and verified for a boric acid transportation room and a reactor cavity and spent fuel pool cooling and treatment (PTR) pump room of a 1000 MWe nuclear power plant. The analysis results show that the boric acid transportation room does not need any additional protection, but the PTR pump room needs drain measures in order to maintain the nuclear safety functions. (authors)
Deterministic numerical methods for eigenvalue problems in neutron linear transport theory
International Nuclear Information System (INIS)
Conventional deterministic numerical methods applied to eigenvalue problems in neutron transport theory in the discrete ordinates (SN) formulation are described. In order to architect an efficient time-dependent simulator for thermal reactor cores, for the cases where diffusion theory fails to give good results, the accuracy of a spectral nodal method applied to a simplified time-dependent transport model is investigated. Moreover, in order to generate the initial conditions for the one-dimensional kinetics problems considered in the CINUNI section of the simulator, a numerical method for SN eigenvalue problems with no spatial truncation errors is introduced. The convergence of the outer iterations has been accelerated with the Tchebycheff technique, and the option of albedo boundary conditions has been implemented. These albedo boundary conditions exactly substitute for the top and bottom reflector regions. (author)
Pest persistence and eradication conditions in a deterministic model for sterile insect release.
Gordillo, Luis F
2015-01-01
The release of sterile insects is an environmentally friendly pest control method used in integrated pest management programmes. Difference or differential equations based on Knipling's model often provide satisfactory qualitative descriptions of pest populations subject to sterile release at relatively high densities with large mating encounter rates, but fail otherwise. In this paper, I derive and explore numerically deterministic population models that include sterile release together with scarce mating encounters in the particular case of species with long lifespans and multiple matings. The differential equations account separately for the effects of mating failure due to sterile male release and the frequency of mating encounters. When the insects' spatial spread is incorporated through diffusion terms, computations reveal the possibility of steady pest persistence in finite-size patches. In the presence of density-dependent regulation, it is observed that sterile release might contribute to induce sudden suppression of the pest population. PMID:25105593
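The qualitative mechanism can be illustrated with a simplified Knipling-type model (not the paper's exact equations): the probability that a female mates with a fertile male is N/(N+S), where S is the sterile population, and this factor multiplies the birth term. All parameter values below are illustrative assumptions.

```python
def simulate_pest(N0, S, r=0.5, mu=0.1, K=1000.0, dt=0.01, steps=20000):
    """Forward-Euler integration of a simplified Knipling-type model:

        dN/dt = r*N*(N/(N+S))*(1 - N/K) - mu*N

    N/(N+S) is the chance a female's mate is fertile; sterile males S
    are assumed maintained at a constant level by continuous release.
    """
    N = N0
    for _ in range(steps):
        fertile = N / (N + S) if N + S > 0 else 0.0
        dN = r * N * fertile * (1.0 - N / K) - mu * N
        N = max(N + dt * dN, 0.0)   # population cannot go negative
    return N

print(simulate_pest(N0=500.0, S=0.0))     # no release: pest persists
print(simulate_pest(N0=500.0, S=5000.0))  # heavy release: pest collapses
```

With S = 0 the model is logistic with mortality and settles near K(1 - mu/r); with a large constant sterile population, the fertile-mating factor drives the birth rate below mortality and the pest goes extinct, mirroring the eradication conditions studied in the paper.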
Evaluating consistency of deterministic streamline tractography in non-linearly warped DTI data
Adluru, Nagesh; Tromp, Do P M; Davidson, Richard J; Zhang, Hui; Alexander, Andrew L
2016-01-01
Tractography is typically performed for each subject using the diffusion tensor imaging (DTI) data in its native subject space rather than in some space common to the entire study cohort. Although tractography can be performed on a population average in a normalized space, the latter is considered less favorable at the individual subject level because it requires spatial transformations of DTI data that involve non-linear warping and reorientation of the tensors. Although commonly used reorientation strategies such as finite strain and preservation of principal directions are expected to result in adequate accuracy for voxel-based analyses of DTI measures such as fractional anisotropy (FA) and mean diffusivity (MD), the reorientations are not always exact except in the case of rigid transformations. Small imperfections in reorientation at the individual voxel level accumulate and could potentially affect the tractography results adversely. This study aims to evaluate and compare deterministic white matter fiber t...
Minimal Projections with respect to Numerical Radius
Aksoy, Asuman G.; Lewicki, Grzegorz
2014-01-01
In this paper we survey some results on minimality of projections with respect to numerical radius. We note that in the cases $L^p$, $p=1,2,\\infty$, there is no difference between the minimality of projections measured either with respect to operator norm or with respect to numerical radius. However, we give an example of a projection from $l^p_3$ onto a two-dimensional subspace which is minimal with respect to norm, but not with respect to numerical radius for $p\
Electrocardiogram (ECG) pattern modeling and recognition via deterministic learning
Institute of Scientific and Technical Information of China (English)
Xunde DONG; Cong WANG; Junmin HU; Shanxing OU
2014-01-01
A method for electrocardiogram (ECG) pattern modeling and recognition via deterministic learning theory is presented in this paper. Instead of recognizing ECG signals beat-to-beat, each ECG signal, which contains a number of heartbeats, is recognized as a whole. The method is based entirely on the temporal features (i.e., the dynamics) of ECG patterns, which contain complete information about the ECG patterns. A dynamical model capable of generating synthetic ECG signals is employed to demonstrate the method. Based on the dynamical model, the method proceeds in two phases: the identification (training) phase and the recognition (test) phase. In the identification phase, the dynamics of ECG patterns is accurately modeled and expressed as constant RBF neural weights through deterministic learning. In the recognition phase, these modeling results are used for ECG pattern recognition. The main feature of the proposed method is that the accurately modeled dynamics of ECG patterns is used directly for recognition. Experimental studies using the Physikalisch-Technische Bundesanstalt (PTB) database are included to demonstrate the effectiveness of the approach.
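The core idea in the identification phase, expressing learned dynamics as constant RBF weights, can be sketched in miniature. Below, a one-dimensional "dynamics" (sin, standing in for the unknown ECG dynamics) is sampled along a trajectory and approximated by a Gaussian RBF network whose weights are fitted by regularized least squares; this is a generic RBF-fitting sketch, not the paper's deterministic learning law, and all widths and centers are illustrative assumptions.

```python
import math

def rbf(x, c, width=0.5):
    """Gaussian radial basis function centered at c."""
    return math.exp(-((x - c) / width) ** 2)

def fit_rbf_weights(xs, ys, centers, width=0.5, reg=1e-8):
    """Solve the regularized normal equations (Phi^T Phi + reg*I) w = Phi^T y."""
    n, m = len(xs), len(centers)
    Phi = [[rbf(x, c, width) for c in centers] for x in xs]
    A = [[sum(Phi[k][i] * Phi[k][j] for k in range(n)) + (reg if i == j else 0.0)
          for j in range(m)] for i in range(m)]
    b = [sum(Phi[k][i] * ys[k] for k in range(n)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for i in range(m):
        p = max(range(i, m), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, m):
            f = A[r][i] / A[i][i]
            for c in range(i, m):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    w = [0.0] * m
    for i in range(m - 1, -1, -1):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, m))) / A[i][i]
    return w

# "Training": learn the dynamics y = sin(x) along a sampled trajectory
xs = [i * 0.1 for i in range(63)]        # samples covering [0, 2*pi]
ys = [math.sin(x) for x in xs]
centers = [i * 0.5 for i in range(14)]   # RBF centers on the same interval
w = fit_rbf_weights(xs, ys, centers)

# "Recognition": the constant weights reproduce the learned dynamics
approx = sum(wi * rbf(1.0, c) for wi, c in zip(w, centers))
print(approx, math.sin(1.0))
```

Once fitted, the weight vector w is a constant representation of the trajectory's dynamics; in the paper's scheme, recognition compares a test signal against such stored weight banks.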
Deterministic chaos in the X-Ray sources
Grzedzielski, M; Janiuk, A
2015-01-01
Hardly any of the observed black hole accretion disks in X-Ray binaries and active galaxies shows constant flux. When the local stochastic variations of the disk occur at specific regions where a resonant behaviour takes place, there appear the Quasi-Periodic Oscillations (QPOs). If the global structure of the flow and its non-linear hydrodynamics affects the fluctuations, the variability is chaotic in the sense of deterministic chaos. Our aim is to solve the problem of the stochastic versus deterministic nature of black hole binary variability. We use both observational and analytic methods. We use recurrence analysis and study the occurrence of long diagonal lines in the recurrence plot of observed data series and compare it to surrogate series. We analyze here the data of two X-Ray binaries - XTE J1550-564, and GX 339-4 observed by Rossi X-ray Timing Explorer. In these sources, the non-linear variability is expected because of the global conditions (such as the mean accretion rate) leadin...
Deterministic Chaos in the X-ray Sources
Grzedzielski, M.; Sukova, P.; Janiuk, A.
2015-12-01
Hardly any of the observed black hole accretion disks in X-ray binaries and active galaxies shows constant flux. When the local stochastic variations of the disk occur at specific regions where a resonant behaviour takes place, there appear the quasi-periodic oscillations (QPOs). If the global structure of the flow and its non-linear hydrodynamics affects the fluctuations, the variability is chaotic in the sense of deterministic chaos. Our aim is to solve the problem of the stochastic versus deterministic nature of black hole binary variability. We use both observational and analytic methods. We use recurrence analysis and study the occurrence of long diagonal lines in the recurrence plot of observed data series and compare it to surrogate series. We analyze here the data of two X-ray binaries - XTE J1550-564 and GX 339-4 observed by Rossi X-ray Timing Explorer. In these sources, the non-linear variability is expected because of the global conditions (such as the mean accretion rate) leading to the possible instability of an accretion disk. The thermal-viscous instability and fluctuations around the fixed-point solution occur at high accretion rates, when the radiation pressure gives the dominant contribution to the stress tensor.
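The diagnostic used in both of the above studies, long diagonal lines in a recurrence plot as a signature of determinism, can be sketched directly: a diagonal of length L means the trajectory revisits a neighbourhood and evolves in parallel for L steps, which noise does not do. The sketch below is a minimal one-dimensional version (no embedding), with the threshold chosen for illustration.

```python
import math, random

def longest_diagonal(series, eps):
    """Length of the longest off-main-diagonal line of recurrences
    |x_i - x_j| < eps -- the determinism signature in a recurrence plot."""
    n = len(series)
    best = 0
    for offset in range(1, n):          # each diagonal has j = i + offset
        run = 0
        for i in range(n - offset):
            if abs(series[i] - series[i + offset]) < eps:
                run += 1
                best = max(best, run)
            else:
                run = 0
    return best

random.seed(1)
periodic = [math.sin(0.3 * i) for i in range(200)]      # deterministic signal
noise = [random.uniform(-1, 1) for _ in range(200)]     # surrogate-like noise
print(longest_diagonal(periodic, 0.1))  # long diagonals
print(longest_diagonal(noise, 0.1))     # only short, incidental diagonals
```

In the papers, the same comparison is made between observed light curves and surrogate series; significantly longer diagonals in the data than in the surrogates argue for deterministic chaos.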
Integrability of a deterministic cellular automaton driven by stochastic boundaries
Prosen, Tomaž; Mejía-Monasterio, Carlos
2016-05-01
We propose an interacting many-body space–time-discrete Markov chain model, which is composed of an integrable deterministic and reversible cellular automaton (rule 54 of Bobenko et al 1993 Commun. Math. Phys. 158 127) on a finite one-dimensional lattice (Z_2)^{×n}, and local stochastic Markov chains at the two lattice boundaries which provide chemical baths for absorbing or emitting the solitons. Ergodicity and mixing of this many-body Markov chain is proven for generic values of bath parameters, implying the existence of a unique nonequilibrium steady state. The latter is constructed exactly and explicitly in terms of a particularly simple form of matrix product ansatz which is termed a patch ansatz. This gives rise to an explicit computation of observables and k-point correlations in the steady state as well as the construction of a nontrivial set of local conservation laws. The feasibility of an exact solution for the full spectrum and eigenvectors (decay modes) of the Markov matrix is suggested as well. We conjecture that our ideas can pave the road towards a theory of integrability of boundary driven classical deterministic lattice systems.
A local deterministic model of quantum spin measurement
Palmer, T N
1995-01-01
The conventional view, that Einstein was wrong to believe that quantum physics is local and deterministic, is challenged. A parametrised model, Q, for the state vector evolution of spin 1/2 particles during measurement is developed. Q draws on recent work on so-called riddled basins in dynamical systems theory, and is local, deterministic, nonlinear and time asymmetric. Moreover the evolution of the state vector to one of two chaotic attractors (taken to represent observed spin states) is effectively uncomputable. Motivation for this model arises from Penrose's speculations about the nature and role of quantum gravity. Although the evolution of Q's state vector is uncomputable, the probability that the system will evolve to one of the two attractors is computable. These probabilities correspond quantitatively to the statistics of spin 1/2 particles. In an ensemble sense the evolution of the state vector towards an attractor can be described by a diffusive random walk. Bell's theorem and a version of the Bell-...
Safe shutdown earthquake loading: deterministic and probabilistic evaluations
International Nuclear Information System (INIS)
Some probabilistic approaches which determine the Safe Shutdown Earthquake (SSE), based upon its expected recurrence period at the site, have been proposed recently. Mainly, two obstacles arise in the use of probabilistic procedures for SSE determination. Often, statistically insufficient data are available on past earthquakes, and assumptions for the following critical input data required for probabilistic SSE estimation have not yet been studied thoroughly: 1. Earthquake Recurrence Rate versus Intensity relationships. 2. Maximum Intensity Earthquakes to be considered in the analysis. 3. Choice of recurrence period for the basis of SSE. 4. Interpretation of seismic activity distribution in the near vicinity of the site. In addition, probabilistic as well as deterministic results depend upon the following factors: 5. Earthquake Ground Motion Attenuation Characteristics. 6. Definition of Seismotectonic Provinces (areas of equipotential seismicity) and major mapped faults. This paper comprehensively studies the implications of various assumptions regarding the above inputs to the SSE determination and documents rational procedures to account for these assumptions. Refining the probabilistic procedures so that they can be rationally and realistically used in SSE determination, this presentation provides an understanding of similarities and differences in results obtained from deterministic and probabilistic procedures. Possible pitfalls in both methods are discussed and an outline for future research is suggested. (Auth.)
Deterministic sensitivity analysis for the numerical simulation of contaminants transport
International Nuclear Information System (INIS)
The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
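The deterministic sensitivity method described above rests on the singular value decomposition of the model's derivative: large singular values of the Jacobian identify the input directions to which the outputs are most sensitive. The toy sketch below (not the ANDRA flow/transport model) builds a finite-difference Jacobian for an illustrative two-input, two-output model and extracts its singular values in closed form for the 2x2 case; the model function and parameter names are assumptions for illustration.

```python
import math

def jacobian(f, p, h=1e-6):
    """Forward finite-difference Jacobian of f: R^n -> R^m at point p."""
    f0 = f(p)
    m, n = len(f0), len(p)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        q = list(p)
        q[j] += h
        fj = f(q)
        for i in range(m):
            J[i][j] = (fj[i] - f0[i]) / h
    return J

def singular_values_2x2(J):
    """Singular values of a 2x2 matrix via eigenvalues of J^T J."""
    a = J[0][0] ** 2 + J[1][0] ** 2
    b = J[0][0] * J[0][1] + J[1][0] * J[1][1]
    c = J[0][1] ** 2 + J[1][1] ** 2
    mean = (a + c) / 2.0
    disc = math.sqrt(((a - c) / 2.0) ** 2 + b ** 2)
    return math.sqrt(mean + disc), math.sqrt(max(mean - disc, 0.0))

# Toy stand-in for "fluxes through two outlets" vs two input parameters
# (log-permeability k, dispersivity d) -- purely illustrative.
def model(p):
    k, d = p
    return [math.exp(k) * (1.0 + 0.1 * d), 0.5 * math.exp(k) / (1.0 + d)]

J = jacobian(model, [0.0, 1.0])
s1, s2 = singular_values_2x2(J)
print(s1, s2)   # a large s1/s2 ratio means one input direction dominates
```

In the thesis this local information is obtained more cheaply via automatic differentiation (direct or adjoint mode) rather than finite differences, but the SVD interpretation is the same.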
Quantum secure direct communication and deterministic secure quantum communication
Institute of Scientific and Technical Information of China (English)
LONG Gui-lu; DENG Fu-guo; WANG Chuan; LI Xi-han; WEN Kai; WANG Wan-ying
2007-01-01
In this review article, we review the recent development of quantum secure direct communication (QSDC) and deterministic secure quantum communication (DSQC), which are both used to transmit secret messages, including the criteria for QSDC, some interesting QSDC protocols, the DSQC protocols and QSDC networks, etc. The difference between these two branches of quantum communication is that DSQC requires the two parties to exchange at least one bit of classical information for reading out the message in each qubit, while QSDC does not. They are attractive because they are deterministic; in particular, the QSDC protocol is fully quantum mechanical. With sophisticated quantum technology in the future, QSDC may become more and more popular. For ensuring the safety of QSDC with single photons and quantum information sharing of a single qubit in a noisy channel, a quantum privacy amplification protocol has been proposed. It involves very simple CHC operations and reduces the information leakage to a negligibly small level. Moreover, with one-party quantum error correction, a relation has been established between classical linear codes and quantum one-party codes, making it convenient to transfer many good classical error correction codes to the quantum world. The one-party quantum error correction codes are especially designed for quantum dense coding and related QSDC protocols based on dense coding.
Deterministic approach to microscopic three-phase traffic theory
Kerner, B S; Kerner, Boris S.; Klenov, Sergey L.
2005-01-01
A deterministic approach to three-phase traffic theory is presented. Two different deterministic microscopic traffic flow models are introduced. In an acceleration time delay model (ATD-model), different time delays in driver acceleration associated with driver behavior in various local driving situations are explicitly incorporated into the model. Vehicle acceleration depends on the local traffic situation, i.e., whether a driver is within the free flow, synchronized flow, or wide moving jam traffic phase. In a speed adaptation model (SA-model), driver time delays are simulated as a model effect: rather than driver acceleration, vehicle speed adaptation occurs with different time delays depending on which of the three traffic phases the vehicle is in. It is found that the ATD- and SA-models show spatiotemporal congested traffic patterns that are consistent with empirical results. It is shown that, in accordance with empirical results, in the ATD- and SA-models the onset of congestion in free flow at a...
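The speed adaptation mechanism can be illustrated with a generic relaxation sketch (this is not the SA-model itself, which distinguishes the three traffic phases): a follower's speed relaxes toward an optimal speed determined by the gap to the leader, with a time constant tau playing the role of the adaptation delay. The optimal-speed function and all constants are illustrative assumptions.

```python
def follow(leader_speed, v0=0.0, gap0=30.0, tau=1.5, dt=0.1, steps=600):
    """One vehicle relaxing toward an optimal speed V(gap) with time
    constant tau (generic speed-adaptation sketch):

        dv/dt = (V(gap) - v) / tau,    dgap/dt = leader_speed - v

    V(gap) rises linearly with the spare gap beyond a safety distance,
    capped at a maximum speed.
    """
    v_max, d_safe = 30.0, 5.0
    v, gap = v0, gap0
    for _ in range(steps):
        v_opt = min(v_max, max(0.0, gap - d_safe))
        v += dt * (v_opt - v) / tau
        gap += dt * (leader_speed - v)
    return v

print(follow(leader_speed=20.0))  # follower settles near the leader's speed
```

In the SA-model the effective tau differs between the free flow, synchronized flow and wide moving jam phases, which is what produces the phase-dependent time delays the abstract describes.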
Strongly Deterministic Population Dynamics in Closed Microbial Communities
Frentz, Zak; Kuehn, Seppe; Leibler, Stanislas
2015-10-01
Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES) as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of three species. The observation of strongly deterministic dynamics, together with stable structure of correlations in response to external perturbations, points towards a possibility of simple macroscopic laws governing microbial systems despite numerous stochastic events present on microscopic levels.
Bayesian analysis of deterministic and stochastic prisoner's dilemma games
Directory of Open Access Journals (Sweden)
Howard Kunreuther
2009-08-01
Full Text Available This paper compares the behavior of individuals playing a classic two-person deterministic prisoner's dilemma (PD game with choice data obtained from repeated interdependent security prisoner's dilemma games with varying probabilities of loss and the ability to learn (or not learn about the actions of one's counterpart, an area of recent interest in experimental economics. This novel data set, from a series of controlled laboratory experiments, is analyzed using Bayesian hierarchical methods, the first application of such methods in this research domain. We find that individuals are much more likely to be cooperative when payoffs are deterministic than when the outcomes are probabilistic. A key factor explaining this difference is that subjects in a stochastic PD game respond not just to what their counterparts did but also to whether or not they suffered a loss. These findings are interpreted in the context of behavioral theories of commitment, altruism and reciprocity. The work provides a linkage between Bayesian statistics, experimental economics, and consumer psychology.
Developments based on stochastic and determinist methods for studying complex nuclear systems
International Nuclear Information System (INIS)
In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
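The principle behind the speed-up, biasing the Monte Carlo sampling toward important regions and compensating with statistical weights, can be shown on a one-dimensional toy problem (this is a generic importance-sampling illustration, not the TRIPOLI-4/ERANOS coupling): estimating a deep-penetration probability P(X > L) for an exponentially distributed path length. The stretched sampling rate q is an illustrative assumption.

```python
import math, random

def analog(L, n, rng):
    """Analog estimate of the 'transmission' probability P(X > L), X ~ Exp(1).
    Returns (mean, per-sample variance)."""
    scores = [1.0 if rng.expovariate(1.0) > L else 0.0 for _ in range(n)]
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    return mean, var

def biased(L, n, rng, q=0.2):
    """Stretched-path sampling from Exp(q), q < 1, with statistical weight
    w(x) = p(x)/q(x) = exp(-x) / (q * exp(-q*x)). Deep penetrations are
    sampled far more often; the weight keeps the estimator unbiased."""
    scores = []
    for _ in range(n):
        x = rng.expovariate(q)
        w = math.exp(-x) / (q * math.exp(-q * x))
        scores.append(w if x > L else 0.0)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    return mean, var

rng = random.Random(7)
L, n = 10.0, 20000          # true answer is exp(-10), about 4.5e-5
print(analog(L, n, rng))    # few or no hits: enormous relative variance
print(biased(L, n, rng))    # close to exp(-10) with far smaller variance
```

In the thesis, the role of the analytic weight is played by importance maps computed deterministically with ERANOS, which tell the Monte Carlo code where "important" regions lie before any particle is sampled.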
Deterministic and stochastic transport theories for the analysis of complex nuclear systems
International Nuclear Information System (INIS)
In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
Doses from aquatic pathways in CSA-N288.1: deterministic and stochastic predictions compared
International Nuclear Information System (INIS)
The conservatism and uncertainty in the Canadian Standards Association (CSA) model for calculating derived release limits (DRLs) for aquatic emissions of radionuclides from nuclear facilities was investigated. The model was run deterministically using the recommended default values for its parameters, and its predictions were compared with the distributed doses obtained by running the model stochastically. Probability density functions (PDFs) for the model parameters for the stochastic runs were constructed using data reported in the literature and results from experimental work done by AECL. The default values recommended for the CSA model for some parameters were found to be lower than the central values of the PDFs in about half of the cases. Doses (ingestion, groundshine and immersion) calculated as the median of 400 stochastic runs were higher than the deterministic doses predicted using the CSA default values of the parameters for more than half (85 out of the 163) of the cases. Thus, the CSA model is not conservative for calculating DRLs for aquatic radionuclide emissions, as it was intended to be. The output of the stochastic runs was used to determine the uncertainty in the CSA model predictions. The uncertainty in the total dose was high, with the 95% confidence interval exceeding an order of magnitude for all radionuclides. A sensitivity study revealed that total ingestion doses to adults predicted by the CSA model are sensitive primarily to water intake rates, bioaccumulation factors for fish and marine biota, dietary intakes of fish and marine biota, the fraction of consumed food arising from contaminated sources, the irrigation rate, occupancy factors and the sediment solid/liquid distribution coefficient. To improve DRL models, further research into aquatic exposure pathways should concentrate on reducing the uncertainty in these parameters. The PDFs given here can be used by other modellers to test and improve their models and to ensure that DRLs
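The deterministic-versus-stochastic comparison described above can be reproduced in miniature: run a multiplicative dose chain once with "default" parameter values, then many times with parameters drawn from lognormal PDFs whose medians sit above the defaults. All parameter values, PDFs and units below are illustrative assumptions, not CSA-N288.1 data.

```python
import math, random, statistics

def ingestion_dose(intake, bioacc, conc, dcf):
    """Toy multiplicative ingestion-dose chain: water concentration times
    a bioaccumulation factor, intake rate and dose conversion factor
    (illustrative units throughout)."""
    return intake * bioacc * conc * dcf

# Deterministic run with assumed "default" parameter values
default_dose = ingestion_dose(20.0, 400.0, 1e-3, 1e-8)

# 400 stochastic runs: lognormal PDFs whose medians exceed the defaults,
# mimicking the situation the study reports for the CSA model
rng = random.Random(42)
doses = []
for _ in range(400):
    intake = rng.lognormvariate(math.log(25.0), 0.4)   # median 25 > default 20
    bioacc = rng.lognormvariate(math.log(500.0), 0.4)  # median 500 > default 400
    doses.append(ingestion_dose(intake, bioacc, 1e-3, 1e-8))

print(default_dose, statistics.median(doses))  # the median exceeds the default
```

When default values sit below the central values of the parameter PDFs, the median stochastic dose exceeds the deterministic one, which is exactly the non-conservatism the study found in 85 of 163 cases.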
Doses from aquatic pathways in CSA-N288.1: deterministic and stochastic predictions compared
Energy Technology Data Exchange (ETDEWEB)
Chouhan, S.L.; Davis, P
2002-04-01
The conservatism and uncertainty in the Canadian Standards Association (CSA) model for calculating derived release limits (DRLs) for aquatic emissions of radionuclides from nuclear facilities was investigated. The model was run deterministically using the recommended default values for its parameters, and its predictions were compared with the distributed doses obtained by running the model stochastically. Probability density functions (PDFs) for the model parameters for the stochastic runs were constructed using data reported in the literature and results from experimental work done by AECL. The default values recommended for the CSA model for some parameters were found to be lower than the central values of the PDFs in about half of the cases. Doses (ingestion, groundshine and immersion) calculated as the median of 400 stochastic runs were higher than the deterministic doses predicted using the CSA default values of the parameters for more than half (85 out of the 163) of the cases. Thus, the CSA model is not conservative for calculating DRLs for aquatic radionuclide emissions, as it was intended to be. The output of the stochastic runs was used to determine the uncertainty in the CSA model predictions. The uncertainty in the total dose was high, with the 95% confidence interval exceeding an order of magnitude for all radionuclides. A sensitivity study revealed that total ingestion doses to adults predicted by the CSA model are sensitive primarily to water intake rates, bioaccumulation factors for fish and marine biota, dietary intakes of fish and marine biota, the fraction of consumed food arising from contaminated sources, the irrigation rate, occupancy factors and the sediment solid/liquid distribution coefficient. To improve DRL models, further research into aquatic exposure pathways should concentrate on reducing the uncertainty in these parameters. The PDFs given here can be used by other modellers to test and improve their models and to ensure that DRLs
Minimally Invasive Lumbar Discectomy
Full Text Available ... called a “minimally invasive microscopic lumbar discectomy.” Now this is a patient who is a 46-year-old ... L-5, S-1. So that’s why she’s having this procedure. The man who is doing the procedure ...
Minimally Invasive Lumbar Discectomy
Full Text Available ... part of the sciatic nerve. You know one good important thing to talk about is the concept of “I ... in the minimally invasive procedures? The most important thing is to have a good trusting relationship between your surgeon and yourself, and ...
Hellinger, Michael D.; Al Haddad, Abdullah
2008-01-01
Traditionally, stoma creation and end stoma reversal have been performed via a laparotomy incision. However, in many situations, stoma construction may be safely performed in a minimally invasive manner. This may include a trephine, laparoscopic, or combined approach. Furthermore, Hartmann's colostomy reversal, a procedure traditionally associated with substantial morbidity, may also be performed laparoscopically. The authors briefly review patient selection, preparation, and indications, and...
DEFF Research Database (Denmark)
David, Alexandre; Håkansson, John; G. Larsen, Kim; Pettersson, Paul
In this paper we present an algorithm to compute DBM subtractions with a guaranteed minimal number of splits and disjoint DBMs to avoid any redundancy. The subtraction is one of the few operations that result in a non-convex zone and thus requires splitting. It is of prime importance to reduce...
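As background to the subtraction algorithm (the minimal-split subtraction itself is more involved), the DBM data structure and its canonical form can be sketched: a difference bound matrix stores upper bounds on clock differences, is tightened by a shortest-path (Floyd-Warshall) closure, and is empty exactly when that closure produces a negative diagonal entry. This is the standard zone encoding with clock 0 as the constant reference, not the paper's subtraction procedure.

```python
def canonical(D):
    """Tighten a DBM with Floyd-Warshall closure.

    D[i][j] is an upper bound on x_i - x_j; index 0 is the reference
    clock fixed at 0, and float('inf') encodes 'no constraint'.
    """
    n = len(D)
    D = [row[:] for row in D]           # do not mutate the caller's matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

def is_empty(D):
    """A canonical DBM is empty iff some diagonal entry is negative."""
    return any(D[i][i] < 0 for i in range(len(D)))

# Zone over one clock x with constraints 2 <= x <= 5, encoded as
# 0 - x <= -2 and x - 0 <= 5:
Z = canonical([[0, -2],
               [5,  0]])
print(Z, is_empty(Z))
```

Subtracting one such zone from another generally yields a non-convex set, which is why the result must be split into several disjoint DBMs; the paper's contribution is doing so with a guaranteed minimal number of splits.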
DEFF Research Database (Denmark)
Frandsen, Mads Toudal
2007-01-01
I report on our construction and analysis of the effective low energy Lagrangian for the Minimal Walking Technicolor (MWT) model. The parameters of the effective Lagrangian are constrained by imposing modified Weinberg sum rules and by imposing a value for the S parameter estimated from the...
Minimally Invasive Lumbar Discectomy
Full Text Available ... Miami’s Baptist Hospital. You’re going to be a seeing a procedure called a “minimally invasive microscopic lumbar discectomy.” Now this is a patient who a 46-year-old woman who ...
Logarithmic Superconformal Minimal Models
Pearce, Paul A; Tartaglia, Elena
2013-01-01
The higher fusion level logarithmic minimal models LM(P,P';n) have recently been constructed as the diagonal GKO cosets (A_1^{(1)})_k oplus (A_1^{(1)})_n / (A_1^{(1)})_{k+n} where n>0 is an integer fusion level and k=nP/(P'-P)-2 is a fractional level. For n=1, these are the logarithmic minimal models LM(P,P'). For n>1, we argue that these critical theories are realized on the lattice by n x n fusion of the n=1 models. For n=2, we call them logarithmic superconformal minimal models LSM(p,p') where P=|2p-p'|, P'=p' and p,p' are coprime, and they share the central charges of the rational superconformal minimal models SM(P,P'). Their mathematical description entails the fused planar Temperley-Lieb algebra which is a spin-1 BMW tangle algebra with loop fugacity beta_2=x^2+1+x^{-2} and twist omega=x^4 where x=e^{i(p'-p)pi/p'}. Examples are superconformal dense polymers LSM(2,3) with c=-5/2, beta_2=0 and superconformal percolation LSM(3,4) with c=0, beta_2=1. We calculate the free energies analytically. By numerical...
Energy Technology Data Exchange (ETDEWEB)
Stollenwerk, Stefan
2013-04-01
Within this work, two different novel modelling strategies for deterministic stresses are presented. They include unsteady effects in steady mixing-plane simulations and constitute an improvement of the conventional steady approach. Whilst the deterministic flux model is based on preliminary unsteady simulations, the transport model for deterministic stresses constitutes a stand-alone approach. Initially, important unsteady effects are illustrated, followed by an analysis of their impact on the time-averaged solution. After the derivation of the deterministic stress models, they are applied to a real transonic compressor. Here, the deterministic model results are compared to those obtained by established RANS and URANS approaches as well as to experimental results. The deterministic stress models show strongly improved results compared to the conventional mixing-plane approach.
Deterministic Safety Analysis for Nuclear Power Plants. Specific Safety Guide (Russian Edition)
International Nuclear Information System (INIS)
The objective of this Safety Guide is to provide harmonized guidance to designers, operators, regulators and providers of technical support on deterministic safety analysis for nuclear power plants. It provides information on the utilization of the results of such analysis for safety and reliability improvements. The Safety Guide addresses conservative, best estimate and uncertainty evaluation approaches to deterministic safety analysis and is applicable to current and future designs. Contents: 1. Introduction; 2. Grouping of initiating events and associated transients relating to plant states; 3. Deterministic safety analysis and acceptance criteria; 4. Conservative deterministic safety analysis; 5. Best estimate plus uncertainty analysis; 6. Verification and validation of computer codes; 7. Relation of deterministic safety analysis to engineering aspects of safety and probabilistic safety analysis; 8. Application of deterministic safety analysis; 9. Source term evaluation for operational states and accident conditions; References
Anti-deterministic behavior of discrete systems that are less predictable than noise
Urbanowicz, Krzysztof; Kantz, Holger; Holyst, Janusz A.
2004-01-01
We present a new type of deterministic dynamical behaviour that is less predictable than white noise. We call it anti-deterministic (AD) because time series corresponding to the dynamics of such systems do not generate deterministic lines in Recurrence Plots for small thresholds. We show that although the dynamics is chaotic in the sense of exponential divergence of nearby initial conditions and although some properties of AD data are similar to white noise, the AD dynamics is in fact less pr...
Minimal Modification to Tri-bimaximal Mixing
He, Xiao-Gang
2011-01-01
We explore some ways of minimally modifying the neutrino mixing matrix from tri-bimaximal, characterized by introducing at most one mixing angle and a CP violating phase, thus extending our earlier work. One minimal modification, motivated to some extent by group theoretic considerations, is a simple case with the elements $V_{\alpha 2}$ of the second column in the mixing matrix equal to $1/\sqrt{3}$. Modifications by keeping one of the columns or one of the rows unchanged from tri-bimaximal mixing all belong to the class of minimal modification. Some of the cases have interesting experimentally testable consequences. In particular, the T2K collaboration has recently reported indications of a non-zero $\theta_{13}$. For the cases we consider, if we impose the T2K result as stated, the CP violating phase angle $\delta$ is sharply constrained.
Deterministic and probabilistic methods to assess the safety level of operating nuclear power plants
International Nuclear Information System (INIS)
The German safety concept for nuclear power plants gives priority to the deterministic approach, i.e. deterministic analysis and good engineering judgement are primary tools of design evaluation. Probabilistic safety assessment is seen as a supplementary tool to the deterministic approach which provides quantitative information on the occurrence of incidents and thus can be used to check deterministic design assumptions, to evaluate desired plant and system modifications, to optimize backfitting measures and to quantify existing safety margins of operating nuclear power plants, e.g. in the frame of periodic safety reviews. (author)
STRONGLY MINIMAL g*-CONTINUOUS MAPS AND STRONGLY MINIMAL g**-CONTINUOUS MAPS IN MINIMAL SPACES
Subha, E.; Pushpalatha, A.
2010-01-01
In this paper, we introduce and study the concepts of a new class of maps, namely strongly minimal continuous maps, strongly minimal g*-continuous maps and strongly minimal g**-continuous maps, which includes the class of continuous maps.
Cai, Yi
2016-01-01
Incorporating neutrino mass generation and a dark matter candidate in a unified model has always been intriguing. We present the minimal model to realize the dual-task procedure based on the one-loop ultraviolet completion of the Weinberg operator, in the framework of minimal dark matter and radiative neutrino mass generation. In addition to the Standard Model particles, the model consists of a real scalar quintuplet, a pair of vector-like quadruplet fermions and a fermionic quintuplet. The neutral component of the fermionic quintuplet serves as a good dark matter candidate which can be tested by the future direct and indirect detection experiments. The constraints from flavor physics and electroweak-scale naturalness are also discussed.
Minimalism and speakers’ intuitions
Matías Gariazzo
2012-01-01
Minimalism proposes a semantics that does not account for speakers’ intuitions about the truth conditions of a range of sentences or utterances. Thus, a challenge for this view is to offer an explanation of how its assignment of semantic contents to these sentences is grounded in their use. Such an account was mainly offered by Soames, but also suggested by Cappelen and Lepore. The article criticizes this explanation by presenting four kinds of counterexamples to it, and arrives at the conclu...
Minimally Invasive Thoracic Surgery
McFadden, P. Michael
2000-01-01
To reduce the risk, trauma, and expense of intrathoracic surgical treatments, minimally invasive procedures performed with the assistance of fiberoptic video technology have been developed for thoracic and bronchial surgeries. The surgical treatment of nearly every intrathoracic condition can benefit from a video-assisted approach performed through a few small incisions. Video-assisted thoracoscopic and rigid-bronchoscopic surgery have improved the results of thoracic procedures by decreasing...
DEFF Research Database (Denmark)
Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco
2011-01-01
We investigate models in which the inflaton emerges as a composite field of a four dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity, and that the underlying dynamics is preferred to be near conformal. We discover that the compositeness scale of inflation is of the order of the grand unified energy scale.
Improving Connectionist Energy Minimization
Pinkas, G.; Dechter, R.
1995-01-01
Symmetric networks designed for energy minimization such as Boltzmann machines and Hopfield nets are frequently investigated for use in optimization, constraint satisfaction and approximation of NP-hard problems. Nevertheless, finding a global solution (i.e., a global minimum for the energy function) is not guaranteed, and even a local solution may take an exponential number of steps. We propose an improvement to the standard local activation function used for such networks. The improved algori...
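The greedy asynchronous scheme these networks rely on can be sketched as follows. This is the standard Hopfield descent that the abstract says gets stuck in local minima, not the paper's improved activation function; the weight matrix and update order are illustrative assumptions.

```python
import numpy as np

def energy(W, b, s):
    # Hopfield energy E(s) = -1/2 s^T W s - b^T s
    return -0.5 * s @ W @ s - b @ s

def hopfield_minimize(W, b, s, max_sweeps=100):
    """Asynchronous descent on the Hopfield energy.

    W: symmetric weights with zero diagonal; b: biases; s: initial +/-1 state.
    Each unit flips only when that lowers the energy, so the routine reaches
    a local (not necessarily global) minimum, exactly the limitation the
    paper's improved activation function targets."""
    s = s.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(s)):
            new_si = 1 if W[i] @ s + b[i] >= 0 else -1
            if new_si != s[i]:
                s[i] = new_si
                changed = True
        if not changed:       # a full sweep with no flips: local minimum reached
            break
    return s
```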
International Nuclear Information System (INIS)
Many studies on animal and human movement patterns report the existence of scaling laws and power-law distributions. Whereas a number of random walk models have been proposed to explain observations, in many situations individuals actually rely on mental maps to explore strongly heterogeneous environments. In this work, we study a model of a deterministic walker, visiting sites randomly distributed on the plane and with varying weight or attractiveness. At each step, the walker minimizes a function that depends on the distance to the next unvisited target (cost) and on the weight of that target (gain). If the target weight distribution is a power law, p(k) ∼ k^{-β}, in some range of the exponent β, the foraging medium induces movements that are similar to Lévy flights and are characterized by non-trivial exponents. We explore variations of the choice rule in order to test the robustness of the model and argue that the addition of noise has a limited impact on the dynamics in strongly disordered media.
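A minimal sketch of such a deterministic walker, assuming the cost/gain rule is the ratio distance/weight (the abstract states only that cost grows with distance and gain with target weight, so the exact functional form here is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def deterministic_forage(positions, weights, start, n_steps):
    """Deterministic walker on randomly placed weighted sites.

    At each step the walker moves to the unvisited site minimizing
    distance/weight; this specific ratio is an assumed form of the
    cost/gain trade-off described in the model."""
    pos = np.asarray(start, dtype=float)
    unvisited = list(range(len(positions)))
    path = []
    for _ in range(min(n_steps, len(positions))):
        d = np.linalg.norm(positions[unvisited] - pos, axis=1)
        score = d / weights[unvisited]          # cost / gain
        j = int(np.argmin(score))
        nxt = unvisited.pop(j)
        path.append(nxt)
        pos = positions[nxt]
    return path
```

With power-law weights drawn by inverse-transform sampling, long jumps toward distant high-weight targets appear, mimicking the Lévy-like behaviour described above.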
Quantum dissonance and deterministic quantum computation with a single qubit
Ali, Mazhar
2014-11-01
Mixed state quantum computation can perform certain tasks which are believed to be efficiently intractable on a classical computer. For a specific model of mixed state quantum computation, namely, deterministic quantum computation with a single qubit (DQC1), recent investigations suggest that quantum correlations other than entanglement might be responsible for the power of DQC1 model. However, strictly speaking, the role of entanglement in this model of computation was not entirely clear. We provide conclusive evidence that there are instances where quantum entanglement is not present in any part of this model, nevertheless we have advantage over classical computation. This establishes the fact that quantum dissonance (a kind of quantum correlations) present in fully separable (FS) states provide power to DQC1 model.
Deterministic simulation of thermal neutron radiography and tomography
Pal Chowdhury, Rajarshi; Liu, Xin
2016-05-01
In recent years, thermal neutron radiography and tomography have gained much attention as one of the nondestructive testing methods. However, the application of thermal neutron radiography and tomography is hindered by their technical complexity, radiation shielding, and time-consuming data collection processes. Monte Carlo simulations have been developed in the past to improve the neutron imaging facility's ability. In this paper, a new deterministic simulation approach has been proposed and demonstrated to simulate neutron radiographs numerically using a ray tracing algorithm. This approach has made the simulation of neutron radiographs much faster than by previously used stochastic methods (i.e., Monte Carlo methods). The major problem with neutron radiography and tomography simulation is finding a suitable scatter model. In this paper, an analytic scatter model has been proposed that is validated by a Monte Carlo simulation.
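The uncollided-flux core of such a ray-tracing simulation can be sketched in a few lines; this simplified parallel-beam version ignores scattered neutrons, the component the paper supplements with its analytic scatter model, and the geometry is an assumption for illustration.

```python
import numpy as np

def radiograph(sigma_t, voxel_cm, i0=1.0):
    """Uncollided-flux radiograph of a 2D attenuation map by ray tracing.

    sigma_t: 2D array of macroscopic total cross sections (1/cm); each
    parallel ray travels along axis 0, so its line integral is a column
    sum times the voxel thickness."""
    optical_depth = sigma_t.sum(axis=0) * voxel_cm   # integral of sigma_t ds per ray
    return i0 * np.exp(-optical_depth)               # Beer-Lambert attenuation
```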
Continuous Tempering Molecular Dynamics: A Deterministic Approach to Simulated Tempering.
Lenner, Nicolas; Mathias, Gerald
2016-02-01
Continuous tempering molecular dynamics (CTMD) generalizes simulated tempering (ST) to a continuous temperature space. As opposed to ST, the CTMD equations of motion are fully deterministic and feature a conserved quantity that can be used to validate the simulation. Three variants of CTMD are discussed and compared by means of a simple test system. The implementation features of the most stable and simplest variant, CTMD-[Formula: see text], in the program package Iphigenie are described. Two applications, alanine dipeptide (Ac-Ala-NHMe) in explicit water and octa-alanine (Ac-(Ala)8-NHMe) simulated in a dielectric continuum, demonstrate the functionality of CTMD-[Formula: see text]. Furthermore, they serve to evaluate its sampling efficiency. Here, CTMD-[Formula: see text] outperforms ST by 35% and replica exchange even by 75%. PMID:26760910
Receding Horizon Temporal Logic Control for Finite Deterministic Systems
Ding, Xuchu; Belta, Calin
2012-01-01
This paper considers receding horizon control of finite deterministic systems, which must satisfy a high level, rich specification expressed as a linear temporal logic formula. Under the assumption that time-varying rewards are associated with states of the system and they can be observed in real-time, the control objective is to maximize the collected reward while satisfying the high level task specification. In order to properly react to the changing rewards, a controller synthesis framework inspired by model predictive control is proposed, where the rewards are locally optimized at each time-step over a finite horizon, and the immediate optimal control is applied. By enforcing appropriate constraints, the infinite trajectory produced by the controller is guaranteed to satisfy the desired temporal logic formula. Simulation results demonstrate the effectiveness of the approach.
Scaling mobility patterns and collective movements: Deterministic walks in lattices
Han, Xiao-Pu; Zhou, Tao; Wang, Bing-Hong
2011-05-01
Scaling mobility patterns have been widely observed for animals. In this paper, we propose a deterministic walk model to understand the scaling mobility patterns, where walkers take the least-action walks on a lattice landscape and prey. Scaling laws in the displacement distribution emerge when the amount of prey resource approaches the critical point. Around the critical point, our model generates ordered collective movements of walkers with a quasiperiodic synchronization of walkers’ directions. These results indicate that the coevolution of walkers’ least-action behavior and the landscape could be a potential origin of not only the individual scaling mobility patterns but also the flocks of animals. Our findings provide a bridge to connect the individual scaling mobility patterns and the ordered collective movements.
Deterministic list codes for state-constrained arbitrarily varying channels
Sarwate, Anand D
2007-01-01
The capacity for the discrete memoryless arbitrarily varying channel (AVC) with cost constraints on the jammer is studied using deterministic list codes under both the maximal and average probability of error criteria. For a cost function $l(\cdot)$ on the state set and constraint $\Lambda$ on the jammer, the achievable rates are upper bounded by the random coding capacity $C_r(\Lambda)$. For maximal error, the rate $R = C_r(\Lambda) - \epsilon$ is achievable using list codes with list size $O(\epsilon^{-1})$. For average error, an integer $\ell_{sym}(\Lambda)$, called the \textit{symmetrizability}, is defined. It is shown that any rate below $C_r(\Lambda)$ is achievable under average error using list codes of list size $L > \ell_{sym}(\Lambda)$. An example is given for a class of discrete additive AVCs.
Linearly Bounded Liars, Adaptive Covering Codes, and Deterministic Random Walks
Cooper, Joshua N
2009-01-01
We analyze a deterministic form of the random walk on the integer line called the {\em liar machine}, similar to the rotor-router model, finding asymptotically tight pointwise and interval discrepancy bounds versus random walk. This provides an improvement in the best-known winning strategies in the binary symmetric pathological liar game with a linear fraction of responses allowed to be lies. Equivalently, this proves the existence of adaptive binary block covering codes with block length $n$, covering radius $\leq fn$ for $f\in(0,1/2)$, and cardinality $O(\sqrt{\log \log n}/(1-2f))$ times the sphere bound $2^n/\binom{n}{\leq \lfloor fn\rfloor}$.
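The rotor-router idea the abstract compares against can be sketched as follows; this is a plain rotor-router walk on the integer line, not the liar machine itself (which is a related but distinct chip-routing process), and the initial rotor direction is an assumption.

```python
def rotor_positions(n_chips, n_steps):
    """Rotor-router walk on the integer line, a deterministic analogue of
    the unbiased random walk.

    Each site keeps a rotor alternating between +1 (right) and -1 (left);
    a chip always moves in the rotor's current direction, then the rotor
    flips. Rotors persist across chips, which spreads successive chips out
    like random-walk samples with small discrepancy."""
    rotors = {}                      # site -> next direction (+1 or -1)
    finals = []
    for _ in range(n_chips):
        x = 0
        for _ in range(n_steps):
            d = rotors.get(x, 1)     # assumed initial rotor direction: right
            rotors[x] = -d           # flip the rotor for the next visit
            x += d
        finals.append(x)
    return finals
```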
Scaling Mobility Patterns and Collective Movements: Deterministic Walks in Lattices
Han, Xiao-Pu; Wang, Bing-Hong
2010-01-01
Scaling mobility patterns have been widely observed for animals. In this paper, we propose a deterministic walk model to understand the scaling mobility patterns, where walkers take the least-action walks on a lattice landscape and prey. Scaling laws in the displacement distribution emerge when the amount of prey resource approaches the critical point. Around the critical point, our model generates ordered collective movements of walkers with a quasi-periodic synchronization of walkers' directions. These results indicate that the co-evolution of walkers' least-action behavior and the landscape could be a potential origin of not only the individual scaling mobility patterns, but also the flocks of animals. Our findings provide a bridge to connect the individual scaling mobility patterns and the ordered collective movements.
Sensitivity analysis in a Lassa fever deterministic mathematical model
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to the disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
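Sensitivity analyses of this kind usually compute the normalized forward sensitivity index of the basic reproduction number with respect to each parameter. A sketch, using an illustrative R0 expression rather than the paper's five-compartment Lassa model:

```python
def sensitivity_index(r0_func, params, name, h=1e-6):
    """Normalized forward sensitivity index S_p = (p / R0) * dR0/dp,
    estimated with a central finite difference. S_p = 1 means a 1%
    increase in p yields a 1% increase in R0."""
    p = params[name]
    up = dict(params, **{name: p * (1 + h)})
    dn = dict(params, **{name: p * (1 - h)})
    dr0 = (r0_func(up) - r0_func(dn)) / (2 * p * h)
    return p * dr0 / r0_func(params)

def r0(q):
    # Illustrative basic reproduction number, NOT the paper's model:
    # R0 = beta * Lambda / (mu * (mu + gamma))
    return q["beta"] * q["Lambda"] / (q["mu"] * (q["mu"] + q["gamma"]))
```

Ranking the absolute indices over all parameters then identifies the most influential ones, as the abstract does for immigration, recovery and contact rates.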
Molecular dynamics with deterministic and stochastic numerical methods
Leimkuhler, Ben
2015-01-01
This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications. Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...
Location deterministic biosensing from quantum-dot-nanowire assemblies.
Liu, Chao; Kim, Kwanoh; Fan, D L
2014-08-25
Semiconductor quantum dots (QDs) with high fluorescent brightness, stability, and tunable sizes have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoresis (DEP) and alternating current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes to QDs on the tips of nanowires before detection, offering much enhanced efficiency and sensitivity, in addition to the position-predictable rationality. This research could result in advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices. PMID:25316926
Location deterministic biosensing from quantum-dot-nanowire assemblies
Energy Technology Data Exchange (ETDEWEB)
Liu, Chao [Materials Science and Engineering Program, Texas Materials Institute, University of Texas at Austin, Austin, Texas 78712 (United States); Kim, Kwanoh [Department of Mechanical Engineering, University of Texas at Austin, Austin, Texas 78712 (United States); Fan, D. L., E-mail: dfan@austin.utexas.edu [Materials Science and Engineering Program, Texas Materials Institute, University of Texas at Austin, Austin, Texas 78712 (United States); Department of Mechanical Engineering, University of Texas at Austin, Austin, Texas 78712 (United States)
2014-08-25
Semiconductor quantum dots (QDs) with high fluorescent brightness, stability, and tunable sizes have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoresis (DEP) and alternating current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes to QDs on the tips of nanowires before detection, offering much enhanced efficiency and sensitivity, in addition to the position-predictable rationality. This research could result in advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices.
Location deterministic biosensing from quantum-dot-nanowire assemblies
International Nuclear Information System (INIS)
Semiconductor quantum dots (QDs) with high fluorescent brightness, stability, and tunable sizes have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoresis (DEP) and alternating current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes to QDs on the tips of nanowires before detection, offering much enhanced efficiency and sensitivity, in addition to the position-predictable rationality. This research could result in advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices.
International Nuclear Information System (INIS)
Highlights: • IDPSA contributes to robust risk-informed decision making in nuclear safety. • IDPSA considers time-dependent interactions among component failures and system processes. • IDPSA also considers time-dependent interactions among control and operator actions. • Computational efficiency is achieved by advanced Monte Carlo and meta-modelling simulations. • IDPSA output is efficiently post-processed by clustering and data mining. Abstract: Integrated deterministic and probabilistic safety assessment (IDPSA) is conceived as a way to analyze the evolution of accident scenarios in complex dynamic systems, like nuclear, aerospace and process ones, accounting for the mutual interactions between the failure and recovery of system components, the evolving physical processes, the control and operator actions, and the software and firmware. In spite of the potential offered by IDPSA, several challenges need to be effectively addressed for its development and practical deployment. In this paper, we give an overview of these challenges and discuss the related implications in terms of research perspectives.
Quantization of discrete deterministic theories by Hilbert space extension
International Nuclear Information System (INIS)
Quantization of a theory usually implies that it is being replaced by a physically different system. In this paper it is pointed out that if a deterministic theory is completely discrete, such as a classical gauge theory on a lattice with a discrete gauge group, then there is an essentially trivial procedure to quantize it. The equations for the evolution of the physical variables are kept unchanged, but are reformulated in terms of the evolution of vectors in a Hilbert space. This transition turns the system into a conventional quantum theory, which may have more symmetries than can be seen in the original classical theory. This is illustrated with a cellular automaton, of which only the quantum version is time-reversal symmetric. Another automaton shows self-duality only after Hilbert space extension. We discuss the importance of such observations for physics. The procedure can also be used to construct a completely finite and soluble quantum gravity model in 2+1 dimensions. (orig.)
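The "trivial" quantization step described above can be made concrete for a small finite automaton: a deterministic, invertible update rule is a permutation of the basis states, and a permutation matrix is unitary. A sketch under the assumption of a simple cyclic-shift rule:

```python
import numpy as np

def automaton_unitary(rule, states):
    """Lift a deterministic, invertible update rule on a finite state set
    to a unitary operator: one basis vector per state, and |s> -> |rule(s)>.
    The resulting matrix is a permutation matrix, hence unitary, while
    basis states still evolve exactly as the classical automaton does."""
    index = {s: i for i, s in enumerate(states)}
    n = len(states)
    u = np.zeros((n, n))
    for s in states:
        u[index[rule(s)], index[s]] = 1.0   # column s maps to row rule(s)
    return u
```

Superpositions of basis states then evolve under the same operator, which is where symmetries invisible in the classical theory can appear.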
Deterministic Squeezed States with Collective Measurements and Feedback
Cox, Kevin C.; Greve, Graham P.; Weiner, Joshua M.; Thompson, James K.
2016-03-01
We demonstrate the creation of entangled, spin-squeezed states using a collective, or joint, measurement and real-time feedback. The pseudospin state of an ensemble of N = 5×10^4 laser-cooled ^87Rb atoms is deterministically driven to a specified population state with angular resolution that is a factor of 5.5(8) [7.4(6) dB] in variance below the standard quantum limit for unentangled atoms, comparable to the best enhancements using only unitary evolution. Without feedback, conditioning on the outcome of the joint premeasurement, we directly observe up to 59(8) times [17.7(6) dB] improvement in quantum phase variance relative to the standard quantum limit for N = 4×10^5 atoms. This is one of the largest reported entanglement enhancements to date in any system.
Deterministic Squeezed States with Joint Measurements and Feedback
Cox, Kevin C; Weiner, Joshua M; Thompson, James K
2015-01-01
We demonstrate the creation of entangled or spin-squeezed states using a joint measurement and real-time feedback. The pseudo-spin state of an ensemble of $N = 5\times 10^4$ laser-cooled $^{87}$Rb atoms is deterministically driven to a specified population state with angular resolution that is a factor of 5.5(8) (7.4(6) dB) in variance below the standard quantum limit for unentangled atoms, comparable to the best enhancements using only unitary evolution. Without feedback, conditioning on the outcome of the joint pre-measurement, we directly observe up to 59(8) times (17.7(6) dB) improvement in quantum phase variance relative to the standard quantum limit for $N = 4\times 10^5$ atoms. This is the largest reported entanglement enhancement to date in any system.
The Dynamics of Deterministic Chaos in Numerical Weather Prediction Models
Selvam, A M
2003-01-01
Atmospheric weather systems are coherent structures consisting of discrete cloud cells forming patterns of rows/streets, mesoscale clusters and spiral bands which maintain their identity for the duration of their appreciable life times in the turbulent shear flow of the planetary Atmospheric Boundary Layer. The existence of coherent structures (seemingly systematic motion) in turbulent flows has been well established during the last 20 years of research in turbulence. Numerical weather prediction models based on the inherently non-linear Navier-Stokes equations do not give realistic forecasts because of the following inherent limitations: (1) the non-linear governing equations for atmospheric flows do not have exact analytic solutions and being sensitive to initial conditions give chaotic solutions characteristic of deterministic chaos (2) the governing equations do not incorporate the dynamical interactions and co-existence of the complete spectrum of turbulent fluctuations which form an integral part of the...
Confidential Deterministic Quantum Communication Using Three Quantum States
Directory of Open Access Journals (Sweden)
Piotr ZAWADZKI
2011-11-01
Full Text Available A secure deterministic quantum communication protocol is described. The protocol is based on the transmission of quantum states from unbiased bases and exploits no entanglement. It is composed of two main components: a quasi-secure quantum communication layer supported by a suitable classical message preprocessing layer. Contrary to many other proposals, it does not require large quantum registers. A security level comparable to classic block ciphers is achieved by specially designed, purely classical message pre- and post-processing. However, unlike classic communication, no key agreement is required. The protocol is also designed in such a way that noise in the quantum channel works to the advantage of legitimate users, improving the security level of the communication.
Deterministic Squeezed States with Collective Measurements and Feedback.
Cox, Kevin C; Greve, Graham P; Weiner, Joshua M; Thompson, James K
2016-03-01
We demonstrate the creation of entangled, spin-squeezed states using a collective, or joint, measurement and real-time feedback. The pseudospin state of an ensemble of N = 5×10^4 laser-cooled ^87Rb atoms is deterministically driven to a specified population state with angular resolution that is a factor of 5.5(8) [7.4(6) dB] in variance below the standard quantum limit for unentangled atoms, comparable to the best enhancements using only unitary evolution. Without feedback, conditioning on the outcome of the joint premeasurement, we directly observe up to 59(8) times [17.7(6) dB] improvement in quantum phase variance relative to the standard quantum limit for N = 4×10^5 atoms. This is one of the largest reported entanglement enhancements to date in any system. PMID:26991175
Equilibrium, fluctuation relations and transport for irreversible deterministic dynamics
Colangeli, Matteo
2011-01-01
In a recent paper [M. Colangeli \textit{et al.}, J. Stat. Mech. P04021 (2011)] it was argued that the Fluctuation Relation for the phase space contraction rate $\Lambda$ could suitably be extended to non-reversible dissipative systems. We strengthen here those arguments, providing analytical and numerical evidence based on the properties of a simple irreversible nonequilibrium baker model. We also consider the problem of response, showing that the transport coefficients are not affected by the irreversibility of the microscopic dynamics. In addition, we prove that a form of \textit{detailed balance}, hence of equilibrium, holds in the space of relevant variables, despite the irreversibility of the phase space dynamics. This corroborates the idea that the same stochastic description, which arises from a projection onto a subspace of relevant coordinates, is compatible with quite different underlying deterministic dynamics. In other words, the details of the microscopic dynamics are largely irrelevant, for ...
Plasmonic nanogap arrays for deterministic sensor performance by EUV lithography
International Nuclear Information System (INIS)
Full text: Plasmonic nanogap arrays have been fabricated by EUV lithography to explore their use as sensing substrates for surface enhanced Raman scattering (SERS). SERS is a well-known effect offering the unique molecular signature of Raman spectroscopy with strongly enhanced signal strengths, allowing detection down to nanomolar concentrations. Because the signal enhancement often originates from randomly distributed hotspots, SERS substrates to date lack reproducibility over large areas. Our fabrication process for nanogap arrays is found to lead to superior reproducibility, with standard deviations well below 5%. This results from the high density of well-controlled nanogaps, which gives SERS a deterministic origin. Optimization procedures of the fabrication will be presented, and we will discuss the obtained correlation between experiments and simulation, enabling an accurate prediction of the sensor performance. (author)
Nucleation theory beyond the deterministic limit. I. The nucleation stage.
Dubrovskii, V G; Nazarenko, M V
2010-03-21
This work addresses the theory of nucleation and condensation based on the continuous Fokker-Planck type kinetic equation for the distribution of supercritical embryos over sizes beyond the deterministic limit, i.e., keeping the second derivative with respect to size. The first part of the work treats the nucleation stage. It is shown that the size spectrum should generally be obtained by the convolution of the initial distribution with a Gaussian-like Green function with spreading dispersion. It is then demonstrated that the fluctuation-induced effects can be safely neglected at the nucleation stage, where the spectrum broadening due to the nonlinear boundary condition is much larger than the fluctuational one. The crossover between the known triangular and double exponential distributions under different conditions of material influx into the system is demonstrated. Some examples of size distributions at the nucleation stage in different regimes of material influx are also presented. PMID:20331305
Deterministic calculations of radiation doses from brachytherapy seeds
International Nuclear Information System (INIS)
Brachytherapy is used for treating certain types of cancer by inserting radioactive sources into tumours. CDTN/CNEN is developing brachytherapy seeds to be used mainly in prostate cancer treatment. Dose calculations play a very significant role in the characterization of the developed seeds. The current state of the art in computational dosimetry relies on Monte Carlo methods using, for instance, MCNP codes. However, deterministic calculations have some advantages, such as short computing times. This paper presents software developed to calculate doses in a two-dimensional space surrounding the seed, using a deterministic algorithm. The analysed seeds consist of capsules similar to the commercially available IMC6711 (OncoSeed). The exposure rates and absorbed doses are computed using the Sievert integral and the Meisberger third-order polynomial, respectively. The software also allows isodose visualization in the surface plane. The user can choose between four radionuclides (192Ir, 198Au, 137Cs and 60Co) and has to enter as input data: the exposure rate constant; the source activity; the active length of the source; the number of segments into which the source will be divided; the total source length; the source diameter; and the actual and effective source thickness. The computed results were benchmarked against results from the literature, and the developed software will be used to support the characterization of the source being developed at CDTN. The software was implemented using Borland Delphi in a Windows environment and is an alternative to Monte Carlo based codes. (author)
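The two computational ingredients named above, the Sievert integral evaluated by dividing the source into segments and the Meisberger third-order polynomial, can be sketched as follows. Function names, arguments and geometry conventions are illustrative, not those of the CDTN software, and real Meisberger coefficients must be looked up per nuclide:

```python
import math

def sievert_exposure_rate(gamma, activity, length, n_seg, mu, t_eff, x, y):
    """Exposure rate at (x, y) from a filtered line source lying on the x-axis,
    approximating the Sievert integral by summing over n_seg point segments.
    Valid for points off the source axis (y != 0); the filtration path grows
    as sec(theta) with obliquity."""
    rate = 0.0
    seg_activity = activity / n_seg
    for i in range(n_seg):
        xs = -length / 2.0 + (i + 0.5) * length / n_seg   # segment centre
        r2 = (x - xs) ** 2 + y ** 2
        sec_theta = math.sqrt(r2) / abs(y)                # oblique wall path factor
        rate += gamma * seg_activity * math.exp(-mu * t_eff * sec_theta) / r2
    return rate

def meisberger_ratio(d, coeffs):
    """Meisberger third-order polynomial: ratio of exposure in water to
    exposure in air at distance d (cm); coeffs = (A, B, C, D) per nuclide."""
    a, b, c, e = coeffs
    return a + b * d + c * d ** 2 + e * d ** 3
```

The absorbed dose in water follows by multiplying the in-air exposure rate by the Meisberger ratio (and the usual conversion factors), which is the structure the abstract describes.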
Energy Technology Data Exchange (ETDEWEB)
Monache, L D; Grell, G A; McKeen, S; Wilczak, J; Pagowski, M O; Peckham, S; Stull, R; McHenry, J; McQueen, J
2006-03-20
Kalman filtering (KF) is used to postprocess numerical-model output to estimate systematic errors in surface ozone forecasts. It is implemented with a recursive algorithm that updates its estimate of future ozone-concentration bias by using past forecasts and observations. KF performance is tested for three types of ozone forecasts: deterministic, ensemble-averaged, and probabilistic forecasts. Eight photochemical models were run for 56 days during summer 2004 over the northeastern USA and southern Canada as part of the International Consortium for Atmospheric Research on Transport and Transformation New England Air Quality (AQ) Study. The raw and KF-corrected predictions are compared with ozone measurements from the Aerometric Information Retrieval Now data set, which includes roughly 360 surface stations. The completeness of the data set allowed a thorough sensitivity test of key KF parameters. It is found that the KF improves forecasts of ozone-concentration magnitude and the ability to predict rare events, both for deterministic and ensemble-averaged forecasts. It also improves the ability to predict the daily maximum ozone concentration, and reduces the time lag between the forecast and observed maxima. For this case study, KF considerably improves the predictive skill of probabilistic forecasts of ozone concentration greater than thresholds of 10 to 50 ppbv, but it degrades it for thresholds of 70 to 90 ppbv. Moreover, KF considerably reduces probabilistic forecast bias. The significance of KF postprocessing and ensemble-averaging is that they are both effective for real-time AQ forecasting: KF reduces systematic errors, whereas ensemble-averaging reduces random errors. When combined they produce the best overall forecast.
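The recursive bias estimator described above can be sketched as a scalar Kalman filter with a random-walk bias model; the noise-variance ratio is the key tuning parameter a sensitivity test would vary. This is a hedged sketch, not the study's exact configuration:

```python
class KalmanBias:
    """Recursive Kalman-filter estimate of systematic forecast bias.
    The bias is modelled as a random walk; the observation is the
    forecast-minus-observation residual (its variance normalised to 1)."""
    def __init__(self, ratio=0.01):
        self.bias = 0.0     # current bias estimate
        self.p = 1.0        # error variance of the bias estimate
        self.ratio = ratio  # sigma_eta^2 / sigma_eps^2, the tuning parameter
    def update(self, forecast, observation):
        self.p += self.ratio                # predict: bias follows a random walk
        gain = self.p / (self.p + 1.0)      # Kalman gain
        self.bias += gain * ((forecast - observation) - self.bias)
        self.p *= (1.0 - gain)
    def correct(self, forecast):
        """Bias-corrected forecast for the next period."""
        return forecast - self.bias
```

Fed a stream of past (forecast, observation) pairs, the filter converges on the systematic error and subtracts it from the next raw forecast, which is exactly the postprocessing role it plays here.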
Review of the Monte Carlo and deterministic codes in radiation protection and dosimetry
International Nuclear Information System (INIS)
A drawback of the Monte Carlo technique is that solutions are given at specific locations only, are statistically fluctuating, and are arrived at with considerable computer effort. Sooner rather than later, however, one would expect that powerful variance-reduction techniques and ever-faster processor machines would balance these disadvantages out. This is especially true considering the rapid advances in computer technology and parallel computers, which can achieve a 300-fold faster convergence. In many fields and cases the user would, however, benefit greatly by considering, when possible, alternative methods to the Monte Carlo technique, such as deterministic methods, at least as a way of validation. It can be shown, in fact, that for less complex problems a deterministic approach can have many advantages. In its earlier manifestations, Monte Carlo simulation was primarily performed by experts who were intimately involved in the development of the computer code. Increasingly, however, codes are being supplied as relatively user-friendly packages for widespread use, which allows them to be used by those with less specialist knowledge. This enables them to be used as 'black boxes', which in turn provides scope for costly errors, especially in the choice of cross-section data and acceleration techniques. The Monte Carlo method as employed with modern computers goes back several decades, and nowadays science and software libraries would be virtually empty if one excluded work that is either directly or indirectly related to this technique. This is specifically true in the fields of 'computational dosimetry', 'radiation protection' and radiation transport in general. Hundreds of codes have been written and applied with various degrees of success. Some of these have become trademarks, generally well supported and adopted by thousands of users.
Other codes, which should be encouraged, are the so-called in-house codes, which still serve their developers and their groups well in their intended
Matching allele dynamics and coevolution in a minimal predator-prey replicator model
Energy Technology Data Exchange (ETDEWEB)
Sardanyes, Josep [Complex Systems Lab (ICREA-UPF), Barcelona Biomedical Research Park (PRBB-GRIB), Dr. Aiguader 88, 08003 Barcelona (Spain)], E-mail: josep.sardanes@upf.edu; Sole, Ricard V. [Complex Systems Lab (ICREA-UPF), Barcelona Biomedical Research Park (PRBB-GRIB), Dr. Aiguader 88, 08003 Barcelona (Spain); Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501 (United States)
2008-01-21
A minimal Lotka-Volterra type predator-prey model describing coevolutionary traits among entities with a strength of interaction influenced by a pair of haploid diallelic loci is studied with a deterministic time continuous model. We show a Hopf bifurcation governing the transition from evolutionary stasis to periodic Red Queen dynamics. If predator genotypes differ in their predation efficiency the more efficient genotype asymptotically achieves lower stationary concentrations.
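The deterministic time-continuous setting can be illustrated with the underlying Lotka-Volterra skeleton (the paper's full replicator model additionally couples the interaction strength to a pair of diallelic loci). The flow has a conserved quantity, which gives a handy check on the integration:

```python
import math

def lv_step(x, y, a, b, c, d, dt):
    """One RK4 step of the classic Lotka-Volterra predator-prey system
    dx/dt = a*x - b*x*y,  dy/dt = -c*y + d*x*y  (prey x, predator y)."""
    def f(u, v):
        return a * u - b * u * v, -c * v + d * u * v
    k1 = f(x, y)
    k2 = f(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
    k3 = f(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
    k4 = f(x + dt * k3[0], y + dt * k3[1])
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

def lv_invariant(x, y, a, b, c, d):
    """Conserved quantity of the deterministic flow; its constancy along a
    trajectory verifies the integrator."""
    return d * x - c * math.log(x) + b * y - a * math.log(y)
```

In the genotype-resolved model the predation coefficient b would differ per predator genotype; scanning it is how the stationary-concentration ordering stated in the abstract would be reproduced.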
Matching allele dynamics and coevolution in a minimal predator-prey replicator model
International Nuclear Information System (INIS)
A minimal Lotka-Volterra type predator-prey model describing coevolutionary traits among entities with a strength of interaction influenced by a pair of haploid diallelic loci is studied with a deterministic time continuous model. We show a Hopf bifurcation governing the transition from evolutionary stasis to periodic Red Queen dynamics. If predator genotypes differ in their predation efficiency the more efficient genotype asymptotically achieves lower stationary concentrations
Matching allele dynamics and coevolution in a minimal predator-prey replicator model
Sardanyés, Josep; Solé, Ricard V.
2008-01-01
A minimal Lotka-Volterra type predator-prey model describing coevolutionary traits among entities with a strength of interaction influenced by a pair of haploid diallelic loci is studied with a deterministic time continuous model. We show a Hopf bifurcation governing the transition from evolutionary stasis to periodic Red Queen dynamics. If predator genotypes differ in their predation efficiency the more efficient genotype asymptotically achieves lower stationary concentrations.
International Nuclear Information System (INIS)
This work presents a computational approach for the simultaneous minimization of the total cost and environmental impact of thermodynamic cycles. Our method combines process simulation, multi-objective optimization and life cycle assessment (LCA) within a unified framework that identifies in a systematic manner optimal design and operating conditions according to several economic and LCA impacts. Our approach takes advantage of the complementary strengths of process simulation (in which mass and energy balances and thermodynamic calculations are implemented in an easy manner) and rigorous deterministic optimization tools. We demonstrate the capabilities of this strategy by means of two case studies in which we address the design of a 10 MW Rankine cycle modeled in Aspen Hysys, and a 90 kW ammonia-water absorption cooling cycle implemented in Aspen Plus. Numerical results show that it is possible to achieve environmental and cost savings using our rigorous approach. - Highlights: ► Novel framework for the optimal design of thermodynamic cycles. ► Combined use of simulation and optimization tools. ► Optimal design and operating conditions according to several economic and LCA impacts. ► Design of a 10MW Rankine cycle in Aspen Hysys, and a 90kW absorption cycle in Aspen Plus.
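The multi-objective step, trading total cost against LCA impact, ultimately reduces to extracting non-dominated designs. A minimal sketch of that extraction (the actual framework uses rigorous deterministic optimization over the Aspen models, not enumeration):

```python
def pareto_front(designs):
    """Non-dominated subset of a list of (cost, impact) pairs, both minimised.
    A design is dominated if another is no worse in both objectives and
    strictly better in at least one."""
    front = []
    for i, (ci, ei) in enumerate(designs):
        dominated = any(
            cj <= ci and ej <= ei and (cj < ci or ej < ei)
            for j, (cj, ej) in enumerate(designs) if j != i)
        if not dominated:
            front.append((ci, ei))
    return front
```

Each point on the resulting front is a candidate set of design and operating conditions; the decision-maker then picks the preferred cost-impact trade-off.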
Minimally invasive mediastinal surgery.
Melfi, Franca M A; Fanucchi, Olivia; Mussi, Alfredo
2016-01-01
In the past, mediastinal surgery was associated with the necessity of a maximum exposure, which was accomplished through various approaches. In the early 1990s, many surgical fields, including thoracic surgery, observed the development of minimally invasive techniques. These included video-assisted thoracic surgery (VATS), which confers clear advantages over an open approach, such as less trauma, short hospital stay, improved cosmetic results and preservation of lung function. However, VATS is associated with several disadvantages. For this reason, it is not routinely performed for resection of mediastinal mass lesions, especially those located in the anterior mediastinum, a tiny and remote space that contains vital structures at risk of injury. Robotic systems can overcome the limits of VATS, offering three-dimensional (3D) vision and wristed instruments, and are being increasingly used. With regard to thymectomy for myasthenia gravis (MG), unilateral and bilateral VATS approaches have demonstrated good long-term neurologic results with low complication rates. Nevertheless, some authors still advocate the necessity of maximum exposure, especially when considering the distribution of normal and ectopic thymic tissue. In recent studies, the robotic approach has been shown to provide similar neurological outcomes when compared to transsternal and VATS approaches, and is associated with a low morbidity. Importantly, through a unilateral robotic technique, it is possible to dissect and remove at least the same amount of mediastinal fat tissue. Preliminary results on early-stage thymomatous disease indicated that minimally invasive approaches are safe and feasible, with a low rate of pleural recurrence, underlining the necessity of a "no-touch" technique. However, especially for thymomatous disease characterized by an indolent nature, further studies with long follow-up periods are necessary in order to assess oncologic and neurologic results through minimally invasive
Minimal noncanonical cosmologies
International Nuclear Information System (INIS)
We demonstrate how much it is possible to deviate from the standard cosmological paradigm of inflation-assisted ΛCDM, keeping within current observational constraints, and without adding to or modifying any theoretical assumptions. We show that within a minimal framework there are many new possibilities, some of them wildly different from the standard picture. We present three illustrative examples of new models, described phenomenologically by a noncanonical scalar field coupled to radiation and matter. These models have interesting implications for inflation, quintessence, reheating, electroweak baryogenesis, and the relic densities of WIMPs and other exotics
Using EFDD as a Robust Technique for Deterministic Excitation in Operational Modal Analysis
DEFF Research Database (Denmark)
Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune
2007-01-01
experiments were carried out on a plate structure excited respectively by a pure stochastic signal and by the same stochastic signal superimposed with a deterministic signal. Good agreement was found in terms of natural frequencies, damping ratios and mode shapes. Even the influence of a deterministic signal...
Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System
Maiti, Alakes; Samanta, G. P.
2005-01-01
This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…
Maity, Debaprasad
2016-01-01
In this paper we propose two simple minimal Higgs inflation scenarios through a simple modification of the Higgs potential, as opposed to the usual non-minimal Higgs-gravity coupling prescription. The modification is done in such a way that it creates a flat plateau for a huge range of field values at the inflationary energy scale $\mu \simeq (\lambda)^{1/4} \alpha$. Assuming a perturbative Higgs quartic coupling, $\lambda \simeq {\cal O}(1)$, for both models the inflation energy scale turns out to be $\mu \simeq (10^{14}, 10^{15})$ GeV, and the predictions of all the cosmologically relevant quantities, $(n_s,r,dn_s^k)$, fit extremely well with observations made by PLANCK. Considering the observed central value of the scalar spectral index, $n_s= 0.968$, our two models predict e-folding numbers $N = (52,47)$. Within a wide range of viable parameter space, we find that the predicted tensor-to-scalar ratio $r (\leq 10^{-5})$ is far below the current experimental sensitivity to be observed in the near future. The ...
On Time with Minimal Expected Cost!
DEFF Research Database (Denmark)
David, Alexandre; Jensen, Peter Gjøl; Larsen, Kim Guldstrand;
2014-01-01
) timed game essentially defines an infinite-state Markov (reward) decision process. In this setting the objective is classically to find a strategy that will minimize the expected reachability cost, but with no guarantees on worst-case behaviour. In this paper, we provide efficient methods for computing...
Minimal projections with respect to various norms
Aksoy, Asuman Guven; Lewicki, Grzegorz
2010-01-01
We will show that a theorem of Rudin \cite{wr1}, \cite{wr}, permits us to determine minimal projections not only with respect to the operator norm but also with respect to quasi-norms in operator ideals and to the numerical radius in many concrete cases.
Holographic dark energy from minimal supergravity
Landim, Ricardo C. G.
2015-01-01
We embed models of holographic dark energy coupled to dark matter in minimal supergravity plus matter, with one chiral superfield. We analyze two cases. The first one has the Hubble radius as the infrared cutoff and the interaction between the two fluids is proportional to the energy density of the dark energy. The second case has the future event horizon as infrared cutoff while the interaction is proportional to the energy density of both components of the dark sector.
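For reference, the standard holographic prescription behind both cases reads as follows (conventional notation; the supergravity embedding itself is beyond this sketch):

```latex
\rho_{\Lambda} = 3c^{2}M_{p}^{2}L^{-2}, \qquad
L = H^{-1}\ \text{(case 1)} \quad\text{or}\quad
L = R_{h} \equiv a(t)\int_{t}^{\infty}\frac{dt'}{a(t')}\ \text{(case 2)}, \\
\dot{\rho}_{m} + 3H\rho_{m} = Q, \qquad
\dot{\rho}_{\Lambda} + 3H(1+w_{\Lambda})\rho_{\Lambda} = -Q,
\qquad Q \propto H\rho_{\Lambda}\ \ \text{or}\ \ H(\rho_{m}+\rho_{\Lambda}).
```

Here $c$ is the dimensionless holographic parameter and $Q$ the dark-sector interaction, proportional to the dark-energy density in the first case and to the total dark-sector density in the second, matching the two cases analyzed in the abstract.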
Linear Superposition of Minimal Surfaces: Generalized Helicoids and Minimal Cones
Hoppe, Jens
2016-01-01
Observing a linear superposition principle, a family of new minimal hypersurfaces in Euclidean space is found, as well as that linear combinations of generalized helicoids induce new algebraic minimal cones of arbitrarily high degree.
Aspects of cell calculations in deterministic reactor core analysis
Energy Technology Data Exchange (ETDEWEB)
Varvayanni, M. [NCSR ' DEMOKRITOS' , PoB 60228, 15310 Aghia Paraskevi (Greece); Savva, P., E-mail: savvapan@ipta.demokritos.gr [NCSR ' DEMOKRITOS' , PoB 60228, 15310 Aghia Paraskevi (Greece); Catsaros, N. [NCSR ' DEMOKRITOS' , PoB 60228, 15310 Aghia Paraskevi (Greece)
2011-02-15
{Tau}he capability of achieving optimum utilization of the deterministic neutronic codes is very important, since, although elaborate tools, they are still widely used for nuclear reactor core analyses, due to specific advantages that they present compared to Monte Carlo codes. The user of a deterministic neutronic code system has to make some significant physical assumptions if correct results are to be obtained. A decisive first step at which such assumptions are required is the one-dimensional cell calculations, which provide the neutronic properties of the homogenized core cells and collapse the cross sections into user-defined energy groups. One of the most crucial determinations required at the above stage and significantly influencing the subsequent three-dimensional calculations of reactivity, concerns the transverse leakages, associated to each one-dimensional, user-defined core cell. For the appropriate definition of the transverse leakages several parameters concerning the core configuration must be taken into account. Moreover, the suitability of the assumptions made for the transverse cell leakages, depends on earlier user decisions, such as those made for the core partition into homogeneous cells. In the present work, the sensitivity of the calculated core reactivity to the determined leakages of the individual cells constituting the core, is studied. Moreover, appropriate assumptions concerning the transverse leakages in the one-dimensional cell calculations are searched out. The study is performed examining also the influence of the core size and the reflector existence, while the effect of the decisions made for the core partition into homogenous cells is investigated. In addition, the effect of broadened moderator channels formed within the core (e.g. by removing fuel plates to create space for control rod hosting) is also examined. Since the study required a large number of conceptual core configurations, experimental data could not be available
Learn with SAT to Minimize Büchi Automata
Directory of Open Access Journals (Sweden)
Stephan Barth
2012-10-01
Full Text Available We describe a minimization procedure for nondeterministic Büchi automata (NBA. For an automaton A another automaton A_min with the minimal number of states is learned with the help of a SAT-solver. This is done by successively computing automata A' that approximate A in the sense that they accept a given finite set of positive examples and reject a given finite set of negative examples. In the course of the procedure these example sets are successively increased. Thus, our method can be seen as an instance of a generic learning algorithm based on a "minimally adequate teacher'' in the sense of Angluin. We use a SAT solver to find an NBA for given sets of positive and negative examples. We use complementation via construction of deterministic parity automata to check candidates computed in this manner for equivalence with A. Failure of equivalence yields new positive or negative examples. Our method proved successful on complete samplings of small automata and on quite a few larger automata. We successfully ran the minimization on over ten thousand automata with mostly up to ten states, including the complements of all possible automata with two states and alphabet size three, and discuss results and runtimes; single examples had over 100 states.
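The learner loop can be made concrete in miniature. In the sketch below, brute-force search stands in for the SAT solver and finite words stand in for ω-words (a DFA analogue; the real procedure additionally needs the parity-automata complementation to test equivalence with A). All names are illustrative:

```python
from itertools import product

def dfa_accepts(delta, accepting, word):
    """Run a complete DFA (start state 0) on `word`."""
    state = 0
    for sym in word:
        state = delta[(state, sym)]
    return state in accepting

def minimal_consistent_dfa(pos, neg, alphabet, max_states=4):
    """Smallest DFA (by state count) accepting every word in `pos` and
    rejecting every word in `neg`; exhaustive search plays the SAT solver's
    role of producing a candidate consistent with the current example sets."""
    for n in range(1, max_states + 1):
        states = range(n)
        for trans in product(states, repeat=n * len(alphabet)):
            delta = {(q, a): trans[q * len(alphabet) + i]
                     for q in states for i, a in enumerate(alphabet)}
            for acc_bits in product([False, True], repeat=n):
                acc = {q for q in states if acc_bits[q]}
                if all(dfa_accepts(delta, acc, w) for w in pos) and \
                   not any(dfa_accepts(delta, acc, w) for w in neg):
                    return n, delta, acc
    return None
```

In the full procedure, a failed equivalence check against A would contribute a new positive or negative example and the search would be rerun, exactly the teacher-learner loop described above.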
Katzourakis, Nikos; Pryer, Tristan
2014-01-01
We consider the problem of optimally controlling a system of either ODEs or SDEs with respect to a vector-valued cost functional. Optimisation of the cost is considered with respect to a partial ordering generated by a given proper cone $K$. Since in the vector case minima may not exist, we define vectorial value functions as (Pareto) minimals of the ordering. The main results of this paper are that we demonstrate existence of value maps of the system as minimals of the ordering contained in t...
International Nuclear Information System (INIS)
The IAEA Safety Guide 50-SG-S10A recommends that design basis floods be estimated by deterministic techniques using probable maximum precipitation and a rainfall-runoff model to evaluate the corresponding flood. The Guide indicates that stochastic techniques are also acceptable, in which case floods of very low probability have to be estimated. The paper compares the results of applying the two techniques in two river basins at a number of locations and concludes that the uncertainty of the results of both techniques is of the same order of magnitude. However, the use of the unit hydrograph as the rainfall-runoff model may lead in some cases to nonconservative estimates. A distributed non-linear rainfall-runoff model leads to estimates of probable maximum flood flows which are very close to values of flows having a 10^6 to 10^7 year return interval estimated using a conservative and relatively simple stochastic technique. Recommendations on the practical application of Safety Guide 50-SG-S10A are made and the extension of the stochastic technique to ungauged sites and other design parameters is discussed
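The unit-hydrograph step of the deterministic chain is a plain discrete convolution, and its linearity is exactly the property that can make the estimates non-conservative for extreme storms. A minimal sketch:

```python
def flood_hydrograph(rain_excess, unit_hydrograph):
    """Direct-runoff hydrograph by discrete convolution of effective rainfall
    (depth per interval) with the unit-hydrograph ordinates (flow per unit
    depth). The model is linear: doubling the rainfall doubles the flow."""
    n = len(rain_excess) + len(unit_hydrograph) - 1
    q = [0.0] * n
    for i, p in enumerate(rain_excess):
        for j, u in enumerate(unit_hydrograph):
            q[i + j] += p * u
    return q
```

A distributed non-linear rainfall-runoff model drops exactly this superposition assumption, which is why the paper finds it closer to the stochastic low-probability estimates.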
International Nuclear Information System (INIS)
Highlights: • Calculation of effective delayed neutron fraction in circulating-fuel reactors. • Extension of the Monte Carlo SERPENT-2 code for delayed neutron precursor tracking. • Forward and adjoint multi-group diffusion eigenvalue problems in OpenFOAM. • Analytical approach for βeff calculation in simple geometries and flow conditions. • Good agreement among the three proposed approaches in the MSFR test-case. - Abstract: This paper deals with the calculation of the effective delayed neutron fraction (βeff) in circulating-fuel nuclear reactors. The Molten Salt Fast Reactor is adopted as test case for the comparison of the analytical, deterministic and Monte Carlo methods presented. The Monte Carlo code SERPENT-2 has been extended to allow for delayed neutron precursors drift, according to the fuel velocity field. The forward and adjoint eigenvalue multi-group diffusion problems are implemented and solved adopting the multi-physics tool-kit OpenFOAM, by taking into account the convective and turbulent diffusive terms in the precursors balance. These two approaches show good agreement in the whole range of the MSFR operating conditions. An analytical formula for the circulating-to-static conditions βeff correction factor is also derived under simple hypotheses, which explicitly takes into account the spatial dependence of the neutron importance. Its accuracy is assessed against Monte Carlo and deterministic results. The effects of in-core recirculation vortex and turbulent diffusion are finally analysed and discussed
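The analytical correction-factor idea can be illustrated in its crudest form: a single precursor family, flat neutron importance, and plug flow with fixed in-core and external-loop transit times (the paper's formula additionally weights by the adjoint flux, i.e. the spatial dependence of the neutron importance):

```python
import math

def incore_decay_fraction(lam, tau_core, tau_loop):
    """Fraction of delayed-neutron precursors (decay constant `lam`) that
    eventually decay inside the core when the fuel circulates with in-core
    residence time `tau_core` and external-loop time `tau_loop`.
    Births are uniform along the in-core path; flat importance assumed."""
    if tau_loop == 0.0:
        return 1.0          # static fuel: every precursor decays in-core
    a = math.exp(-lam * tau_core)
    # q: probability that a precursor at the core inlet decays in-core
    q = (1.0 - a) / (1.0 - a * math.exp(-lam * tau_loop))
    # average over a uniform birth position along the in-core path
    integral = (tau_core - (1.0 - a) / lam
                + ((1.0 - a) / lam) * math.exp(-lam * tau_loop) * q)
    return integral / tau_core
```

Under these assumptions the circulating-to-static correction is roughly beta_eff(circulating) ≈ f × beta_eff(static), with f the fraction above; the limits f → 1 for static fuel or fast decay and f → tau_core/(tau_core + tau_loop) for slow decay are the expected sanity checks.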
Deterministic transport of particles in a micro-pump
Beltrame, Philippe; Hänggi, Peter
2012-01-01
We study the drift of suspended micro-particles in a viscous liquid pumped back and forth through a periodic lattice of pores (drift ratchet). In order to explain the particle drift observed in such an experiment, we present a one-dimensional deterministic model of Stokes drag. We show that the stability of particle oscillations is related to their amplitude. Under appropriate conditions, particles may drift, and two mechanisms of transport are pointed out. The first is due to a spatio-temporal synchronization between the fluid and particle motions; as a result, the velocity is locked to the ratio of the spatial period to the temporal period. The direction of the transport may switch by tuning the parameters. Notably, its emergence is related to a lattice of 2-periodic orbits but not necessarily to chaotic dynamics. The second mechanism is due to an intermittent bifurcation and leads to a slow transport composed of long-time oscillations followed by a relatively short transport to the next pore. ...
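A minimal sketch of such a one-dimensional overdamped model, with an illustrative pore profile rather than the experimental geometry; scanning the forcing amplitude and modulation depth over this map is how the two transport mechanisms would be located:

```python
import math

def simulate_particle(u0, eps, periods, dt=1e-3):
    """Overdamped particle advected by an oscillating flow whose local speed
    is modulated by a periodic pore profile (period 1, modulation depth eps):
        dx/dt = u0 * sin(2*pi*t) * (1 + eps * cos(2*pi*x))
    Explicit-Euler integration; returns the final position, whose value
    after many periods reveals any net drift."""
    x, t = 0.0, 0.0
    steps = int(round(periods / dt))
    for _ in range(steps):
        x += dt * u0 * math.sin(2 * math.pi * t) * (1.0 + eps * math.cos(2 * math.pi * x))
        t += dt
    return x
```

With eps = 0 the motion is purely oscillatory and no drift can occur; a nonzero modulation breaks the symmetry of the trajectory, and velocity locking would show up as a final position proportional to the number of periods.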
A deterministic seismic hazard map of India and adjacent areas
International Nuclear Information System (INIS)
A seismic hazard map of the territory of India and adjacent areas has been prepared using a deterministic approach based on the computation of synthetic seismograms complete with all main phases. The input data set consists of structural models, seismogenic zones, focal mechanisms and an earthquake catalogue. The synthetic seismograms have been generated by the modal summation technique. The seismic hazard, expressed in terms of maximum displacement (DMAX), maximum velocity (VMAX), and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid of 0.2 deg. x 0.2 deg. over the studied territory. The estimated values of the peak ground acceleration are compared with the observed data available for the Himalayan region and found to be in good agreement. Many parts of the Himalayan region have DGA values exceeding 0.6 g. The epicentral areas of the great Assam earthquakes of 1897 and 1950 represent the maximum hazard, with DGA values reaching 1.2-1.3 g. (author)
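Extracting DMAX and VMAX from a synthetic seismogram is straightforward; DGA additionally requires anchoring the signal's response spectrum to a design code, which is omitted from this sketch:

```python
import math

def peak_ground_values(displacement, dt):
    """DMAX and VMAX from a sampled displacement seismogram; the velocity is
    obtained by central finite differences over the interior samples."""
    dmax = max(abs(u) for u in displacement)
    vmax = max(abs(displacement[i + 1] - displacement[i - 1]) / (2.0 * dt)
               for i in range(1, len(displacement) - 1))
    return dmax, vmax
```

Applied at every node of the 0.2 deg. x 0.2 deg. grid, these peak values are what gets mapped.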
Comparison between Monte Carlo method and deterministic method
International Nuclear Information System (INIS)
A fast critical assembly consists of a lattice of plates of sodium, plutonium or uranium, resulting in a high inhomogeneity. The inhomogeneity in the lattice should be evaluated carefully to determine the bias factor accurately. Deterministic procedures are generally used for the lattice calculation. To reduce the required calculation time, various one-dimensional lattice models have been developed previously to replace multi-dimensional models. In the present study, calculations are made for a two-dimensional model and results are compared with those obtained with one-dimensional models in terms of the average microscopic cross section of a lattice and diffusion coefficient. Inhomogeneity in a lattice affects the effective cross section and distribution of neutrons in the lattice. The background cross section determined by the method proposed by Tone is used here to calculate the effective cross section, and the neutron distribution is determined by the collision probability method. Several other methods have been proposed to calculate the effective cross section. The present study also applies the continuous energy Monte Carlo method to the calculation. A code based on this method is employed to evaluate several one-dimensional models. (Nogami, K.)
Particle separation using virtual deterministic lateral displacement (vDLD).
Collins, David J; Alan, Tuncay; Neild, Adrian
2014-05-01
We present a method for sensitive and tunable particle sorting that we term virtual deterministic lateral displacement (vDLD). The vDLD system is composed of a set of interdigital transducers (IDTs) within a microfluidic chamber that produce a force field at an angle to the flow direction. Particles above a critical diameter, a function of the force induced by viscous drag and the force field, are displaced laterally along the minimum force potential lines, while smaller particles continue in the direction of the fluid flow without substantial perturbations. We demonstrate the effective separation of particles in a continuous-flow system with size sensitivity comparable to or better than other previously reported microfluidic separation techniques. Separation of 5.0 μm from 6.6 μm, 6.6 μm from 7.0 μm and 300 nm from 500 nm particles is achieved using the same device architecture. With the high sensitivity and flexibility vDLD affords, we expect it to find application in a wide variety of microfluidic platforms. PMID:24638896
Entrepreneurs, chance, and the deterministic concentration of wealth.
Fargione, Joseph E; Lehman, Clarence; Polasky, Stephen
2011-01-01
In many economies, wealth is strikingly concentrated. Entrepreneurs--individuals with ownership in for-profit enterprises--comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to the inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels. PMID:21814540
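The core mechanism, chance returns plus deterministic compounding, fits in a few lines. Parameters and the lognormal return model below are illustrative, not the paper's calibration:

```python
import math
import random

def top_share(n=1000, years=200, mu=0.05, sigma=0.3, seed=7):
    """Identical entrepreneurs start with equal wealth; each year every one
    draws an independent lognormal return. Compounding of varying returns
    alone concentrates wealth. Returns the share held by the top 1%."""
    rng = random.Random(seed)
    wealth = [1.0] * n
    for _ in range(years):
        wealth = [w * math.exp(rng.gauss(mu - 0.5 * sigma ** 2, sigma))
                  for w in wealth]
    wealth.sort(reverse=True)
    return sum(wealth[: n // 100]) / sum(wealth)
```

No skill differences are needed: the variance of log-wealth grows linearly in time, so the top share keeps rising, which is the "deterministic concentration" the title refers to.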
Deterministic lateral displacement for particle separation: a review.
McGrath, J; Jimenez, M; Bridle, H
2014-11-01
Deterministic lateral displacement (DLD), a hydrodynamic, microfluidic technology, was first reported by Huang et al. in 2004 to separate particles on the basis of size in continuous flow with a resolution down to 10 nm. For 10 years, DLD has been extensively studied, employed and modified by researchers in terms of theory, design, microfabrication and application to develop newer, faster and more efficient tools for separation of millimetre, micrometre and even sub-micrometre sized particles. To extend the range of potential applications, the specific arrangement of geometric features in DLD has also been adapted and/or coupled with external forces (e.g. acoustic, electric, gravitational) to separate particles on the basis of properties other than size, such as the shape, deformability and dielectric properties of particles. Furthermore, investigations into DLD performance where inertial and non-Newtonian effects are present have been conducted. However, the evolution and application of DLD have not yet been reviewed. In this paper, we collate many interesting publications to provide a comprehensive review of the development and diversity of this technology but also provide scope for future direction and detail the fundamentals for those wishing to design such devices for the first time. PMID:25212386
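For first-pass DLD design, the critical diameter is usually estimated with the empirical parameterisation attributed to Davis; treat the constants as fitted to simulations, not fundamental:

```python
def dld_critical_diameter(gap, row_shift_fraction):
    """Widely quoted empirical design rule for DLD arrays (Davis):
        D_c = 1.4 * g * eps**0.48
    with g the post gap and eps = 1/N the row-shift fraction. Particles
    larger than D_c are laterally displaced; smaller ones zigzag with the
    flow. Units of D_c follow those of `gap`."""
    return 1.4 * gap * row_shift_fraction ** 0.48
```

The couplings with external forces surveyed in the review effectively shift this critical size, which is what makes DLD tunable beyond pure geometry.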
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications. PMID:21599256
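A stripped-down deterministic analogue conveys property (ii): with the ripple parameters fixed, the same input always yields the same topology. (The full model evolves ripples in time; the linear energy attenuation here is purely illustrative.)

```python
import math

def ripple_network(points, e0, decay, threshold):
    """Deterministic toy ripple model: each node's ripple starts with energy
    e0 and attenuates linearly with distance; a link forms wherever the
    arriving energy still exceeds `threshold`. Identical inputs always
    yield the identical edge set."""
    edges = set()
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dist = math.hypot(points[i][0] - points[j][0],
                              points[i][1] - points[j][1])
            if e0 - decay * dist > threshold:
                edges.add((i, j))
    return edges
```

Randomness enters only through the initial parameters (node positions, energies), exactly the split between deterministic dynamics and stochastic initialization the abstract describes.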
Agent-Based Deterministic Modeling of the Bone Marrow Homeostasis.
Kurhekar, Manish; Deshpande, Umesh
2016-01-01
Modeling of stem cells not only describes but also predicts how a stem cell's environment can control its fate. The first stem cell populations discovered were hematopoietic stem cells (HSCs). In this paper, we present a deterministic model of bone marrow (which hosts HSCs) that is consistent with several of the qualitative biological observations. This model incorporates stem cell death (apoptosis) after a certain number of cell divisions and also demonstrates that a single HSC can potentially populate the entire bone marrow. It further demonstrates that a sufficient number of differentiated cells (RBCs, WBCs, etc.) is produced. We prove that our model of bone marrow is biologically consistent and that it overcomes the biological feasibility limitations of previously reported models. The major contribution of our model is the flexibility it allows in choosing model parameters, which permits several different simulations to be carried out in silico without affecting the homeostatic properties of the model. We have also performed an agent-based simulation of the proposed bone marrow model and include the parameter details and the results obtained from the simulation. The program of the agent-based simulation of the proposed model is made available on a publicly accessible website. PMID:27340402
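The two ingredients named above, apoptosis after a fixed number of divisions and the production of differentiated cells, can be sketched as a toy agent loop. The parameters and rules here are hypothetical stand-ins, not the paper's calibrated model:

```python
import random

def simulate_marrow(steps: int = 50, max_divisions: int = 10,
                    p_self_renew: float = 0.5, seed: int = 1):
    """Toy agent-based sketch (assumed rules, not the authors' model).
    Each live HSC carries a division count; a cell that has reached
    max_divisions undergoes apoptosis, otherwise it divides into two
    daughters, each self-renewing with probability p_self_renew or
    leaving the pool as a differentiated cell otherwise."""
    rng = random.Random(seed)
    hscs = [0]              # division counts; start from a single HSC
    differentiated = 0
    for _ in range(steps):
        nxt = []
        for d in hscs:
            if d >= max_divisions:
                continue    # apoptosis: cell removed from the pool
            for _ in range(2):      # two daughter cells
                if rng.random() < p_self_renew:
                    nxt.append(d + 1)
                else:
                    differentiated += 1
        hscs = nxt
    return len(hscs), differentiated
```

With p_self_renew near 0.5 the expected number of self-renewing daughters per division is about one, which is the kind of balance a homeostatic model needs; the seed makes each run reproducible.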
Hybrid deterministic - stochastic model for forecasting of monthly river flows
Svetlíková, D.; Szolgay, J.; Kohnová, S.; Komorníková, M.; Szökeová, D.
2009-04-01
Flows of the Váh River and its tributaries in the Tatry alpine mountain region in Slovakia are predominantly fed by snowmelt during the spring period and by convective precipitation in the summer. Their regime properties therefore exhibit clear seasonal patterns. Moreover, the left- and right-side tributaries of the Váh River rise under different physiographic conditions in the High and Low Tatry Mountains. This provides intuitive justification for applying nonlinear two-regime models to the modelling and forecasting of the monthly time series of these rivers. In the poster, the forecasting performance of several linear and nonlinear time series models is compared with respect to their capability of forecasting monthly flows into the Liptovská Mara reservoir. ARMA and SETAR regime-switching models were identified for each tributary, and the forecasts of the tributary flows were composed through a simple water balance model into a forecast of the overall reservoir inflow. The combined hybrid (deterministic-stochastic) forecast, which preserves both the specific regime of the tributaries and the water balance in the catchments, was compared against different forecasts set up for the overall reservoir inflow.
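A SETAR model of the kind identified above switches its autoregressive coefficients depending on which regime the last observation falls into. A minimal one-step forecast sketch, with illustrative coefficients rather than values fitted to the Váh River data:

```python
def setar_forecast(y_prev: float, threshold: float,
                   low=(0.2, 0.8), high=(1.5, 0.4)) -> float:
    """One-step SETAR(2; 1, 1) forecast sketch: an AR(1) in each regime.
    `low` and `high` are (intercept, slope) pairs for the below- and
    above-threshold regimes; the coefficients here are hypothetical."""
    c, phi = low if y_prev <= threshold else high
    return c + phi * y_prev
```

In a hybrid scheme like the one described, such per-tributary forecasts would then be summed through the water balance model to give the reservoir-inflow forecast.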
A Modified Deterministic Model for Reverse Supply Chain in Manufacturing
Directory of Open Access Journals (Sweden)
R. N. Mahapatra
2013-01-01
Full Text Available Technology is becoming pervasive across all facets of our lives today. Technology innovation, leading to the development of new products and the enhancement of features in existing products, is happening at a faster pace than ever. It is becoming difficult for customers to keep up with the deluge of new technology. This trend has resulted in a gross increase in the use of new materials and decreased customers' interest in relatively older products. This paper deals with a novel model in which the stationary demand is fulfilled by remanufactured products along with newly manufactured products. The current model is based on the assumption that the returned items from the customers can be remanufactured at a fixed rate. The remanufactured products are assumed to be as good as the new ones in terms of features, quality, and worth. A methodology is used for the simultaneous calculation of the optimum level for the newly manufactured items and the optimum level of the remanufactured products. The model is formulated depending on the relationship between different parameters. An interpretive-modelling-based approach has been employed to model the reverse logistics variables typically found in supply chains (SCs). For simplicity of calculation, a deterministic approach is implemented for the proposed model.
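The simultaneous optimum levels for new and remanufactured items described above can be illustrated with a simple EOQ-style split. This is a hedged sketch under assumed cost structure, not the paper's exact formulation: a fixed fraction of demand is met by remanufacturing, and each stream receives its own economic lot size:

```python
import math

def optimal_lots(demand: float, setup_new: float, setup_rem: float,
                 hold_new: float, hold_rem: float, return_rate: float):
    """EOQ-style sketch (assumed model): a fraction `return_rate` of the
    stationary demand is served by remanufactured items, the remainder
    by new manufacture; each stream gets the classical lot size
    Q = sqrt(2 * setup_cost * rate / holding_cost)."""
    d_rem = return_rate * demand
    d_new = (1.0 - return_rate) * demand
    q_rem = math.sqrt(2.0 * setup_rem * d_rem / hold_rem)
    q_new = math.sqrt(2.0 * setup_new * d_new / hold_new)
    return q_new, q_rem

# Hypothetical numbers: 1000 units/yr demand, 40% return rate
q_new, q_rem = optimal_lots(1000, 100, 100, 2.0, 2.0, 0.4)
```

Under these illustrative numbers the remanufacturing lot works out to 200 units and the manufacturing lot to roughly 245, showing how the two optimum levels are computed side by side from the same demand stream.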