OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across its member countries, the Organisation for Economic Co-operation and Development (OECD) has developed an MRL Calculator.
Gutierrez-Jurado, H. A.; Guan, H.; Wang, J.; Wang, H.; Bras, R. L.; Simmons, C. T.
2015-12-01
Quantification of evapotranspiration (ET) and its partitioning over regions of heterogeneous topography and canopy poses a challenge to traditional approaches. In this study, we report the results of a novel field experiment design guided by the Maximum Entropy Production model of ET (MEP-ET), formulated for estimating evaporation and transpiration from homogeneous soil and canopy. A catchment with complex terrain and patchy vegetation in South Australia was instrumented to measure temperature, humidity and net radiation at soil and canopy surfaces. The performance of the MEP-ET model in quantifying transpiration and soil evaporation was evaluated under wet and dry conditions against independently and directly measured transpiration from sapflow and soil evaporation from the Bowen Ratio Energy Balance (BREB). MEP-ET transpiration shows remarkable agreement with that obtained through sapflow measurements during wet conditions, but consistently overestimates the flux during dry periods. However, an additional term introduced into the original MEP-ET model to account for higher stomatal regulation during dry spells, based on differences between leaf and air vapor pressure deficits and temperatures, significantly improves the model performance. In contrast, MEP-ET soil evaporation is in good agreement with that from BREB regardless of moisture conditions. The experimental design allows plot-scale quantification of evaporation and tree-scale quantification of transpiration. This study confirms for the first time that MEP-ET, originally developed for homogeneous open bare soil and closed canopy, can be used to model ET over heterogeneous land surfaces. Furthermore, we show that with the addition of an empirical function simulating the plants' ability to regulate transpiration, based on the same measurements of temperature and humidity, the method can produce reliable estimates of ET during both wet and dry conditions without compromising its parsimony.
Janská, Veronika; Jiménez-Alfaro, Borja; Chytrý, Milan; Divíšek, Jan; Anenkhonov, Oleg; Korolyuk, Andrey; Lashchinskyi, Nikolai; Culek, Martin
2017-03-01
We modelled the European distribution of vegetation types at the Last Glacial Maximum (LGM) using present-day data from Siberia, a region hypothesized to be a modern analogue of European glacial climate. Distribution models were calibrated with current climate using 6274 vegetation-plot records surveyed in Siberia. Out of 22 initially used vegetation types, good or moderately good models in terms of statistical validation and expert-based evaluation were computed for 18 types, which were then projected to European climate at the LGM. The resulting distributions were generally consistent with reconstructions based on pollen records and dynamic vegetation models. Spatial predictions were most reliable for steppe, forest-steppe, taiga, tundra, fens and bogs in eastern and central Europe, which had LGM climate more similar to present-day Siberia. The models for western and southern Europe, regions with a lower degree of climatic analogy, were only reliable for mires and steppe vegetation, respectively. Modelling LGM vegetation types for the wetter and warmer regions of Europe would therefore require gathering calibration data from outside Siberia. Our approach adds value to the reconstruction of vegetation at the LGM, which is limited by scarcity of pollen and macrofossil data, suggesting where specific habitats could have occurred. Despite the uncertainties of climatic extrapolations and the difficulty of validating the projections for vegetation types, the integration of palaeodistribution modelling with other approaches has a great potential for improving our understanding of biodiversity patterns during the LGM.
Liang Xu
2015-01-01
Aiming to reduce the number of crashes caused by speeding at night on road sections with a crosswalk, a study was conducted on the maximum speed limit and safe average luminance at night. In order to investigate the potential relationship between drivers' recognition characteristics and driving speed under different road lighting conditions, data on remaining driving time (the period from the moment a crossing pedestrian is recognized to the moment the vehicle, travelling at a uniform speed, arrives at the crosswalk) were recorded. The results of the data analysis show that it is more difficult for drivers to recognize a crossing pedestrian at night when the pedestrian is alone, stationary and wearing dark clothes. The remaining driving time decreases as driving speed increases and road luminance decreases. With the collected data, several multivariate nonlinear regression models were established to capture the relationship among remaining driving time at night, driving speed, and average luminance. The modeling results were then used to derive a reasonable speed limit and safe average luminance from physical equations. Case studies are introduced at the end of the paper.
Lorenz, Ralph D
2010-05-12
The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the day-night temperature contrast observed on the extrasolar planet HD 189733b.
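To make the two-box construction concrete, the sketch below sets up an equator/pole energy balance with a Budyko-type linearized outgoing longwave radiation and selects the meridional heat transport that maximizes entropy production. All constants (solar fluxes, OLR coefficients) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative two-box MEP climate model: box 1 = equator, box 2 = pole.
# Each box absorbs solar flux S_i and emits a Budyko-type linearized OLR
# A + B*T (T in deg C); a horizontal transport F (W/m^2) moves heat from
# box 1 to box 2, and F is chosen to maximize entropy production.
A, B = 203.3, 2.09      # linearized OLR coefficients (assumed)
S1, S2 = 300.0, 160.0   # absorbed solar fluxes, equator / pole (assumed)

def temperatures(F):
    # Steady state: S1 - F = A + B*T1 and S2 + F = A + B*T2
    T1 = (S1 - F - A) / B + 273.15
    T2 = (S2 + F - A) / B + 273.15
    return T1, T2  # Kelvin

def entropy_production(F):
    T1, T2 = temperatures(F)
    return F * (1.0 / T2 - 1.0 / T1)  # W m^-2 K^-1

F_grid = np.linspace(0.0, (S1 - S2) / 2.0, 10001)
F_mep = F_grid[np.argmax(entropy_production(F_grid))]
T1, T2 = temperatures(F_mep)
print(f"MEP transport {F_mep:.1f} W/m^2, T_eq {T1:.1f} K, T_pole {T2:.1f} K")
```

Entropy production vanishes both at zero transport and at the transport that equalizes the two boxes, so the MEP state sits at an interior maximum between these extremes.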
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
§ 25.1505 Maximum operating limit speed. The maximum operating limit speed (VMO/MMO, airspeed or Mach number, whichever is critical at a particular altitude) is a speed that may not...
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of the k-limited maximum base was specialized into two particular cases, in which the subset D is an independent set and a circuit of the matroid, respectively. It was proved that under these circumstances the collections of k-limited bases satisfy the base axioms. A new matroid is thereby determined, and the problem of the k-limited maximum base is transformed into the problem of the maximum base of this new matroid. For each of the two special cases, an algorithm, in essence a greedy algorithm on the underlying matroid, is presented. Both algorithms are proved to be correct and more efficient, in terms of algorithmic complexity, than the algorithm presented by Ma Zhongfan.
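Since the abstract reduces the k-limited maximum base problem to finding a maximum base of a derived matroid, the generic matroid greedy algorithm is the computational core. A minimal sketch assuming only an independence oracle (this is the textbook algorithm, not the paper's specialized variants):

```python
def greedy_maximum_base(elements, weight, is_independent):
    """Maximum-weight base of a matroid via the textbook greedy algorithm.

    `is_independent` is an independence oracle for the matroid; for
    matroids, greedy on weights is provably optimal. This is a generic
    sketch, not the paper's specialized algorithms for the two cases of D.
    """
    base = []
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(base + [e]):
            base.append(e)
    return base

# Example: the uniform matroid U(2, 4), whose independent sets have size <= 2.
elements = ["a", "b", "c", "d"]
weight = {"a": 3, "b": 5, "c": 1, "d": 4}.get
print(greedy_maximum_base(elements, weight, lambda s: len(s) <= 2))  # ['b', 'd']
```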
Dodrill, Michael J.; Yackulic, Charles B.; Kennedy, Theodore A.; Haye, John W
2016-01-01
The cold and clear water conditions present below many large dams create ideal conditions for the development of economically important salmonid fisheries. Many of these tailwater fisheries have experienced declines in the abundance and condition of large trout species, yet the causes of these declines remain uncertain. Here, we develop, assess, and apply a drift-foraging bioenergetics model to identify the factors limiting rainbow trout (Oncorhynchus mykiss) growth in a large tailwater. We explored the relative importance of temperature, prey quantity, and prey size by constructing scenarios where these variables, both singly and in combination, were altered. Predicted growth matched empirical mass-at-age estimates, particularly for younger ages, demonstrating that the model accurately describes how current temperature and prey conditions interact to determine rainbow trout growth. Modeling scenarios that artificially inflated prey size and abundance demonstrate that rainbow trout growth is limited by the scarcity of large prey items and overall prey availability. For example, shifting 10% of the prey biomass to the 13 mm (large) length class, without increasing overall prey biomass, increased lifetime maximum mass of rainbow trout by 88%. Additionally, warmer temperatures resulted in lower predicted growth at current and lower levels of prey availability; however, growth was similar across all temperatures at higher levels of prey availability. Climate change will likely alter flow and temperature regimes in large rivers with corresponding changes to invertebrate prey resources used by fish. Broader application of drift-foraging bioenergetics models to build a mechanistic understanding of how changes to habitat conditions and prey resources affect growth of salmonids will benefit management of tailwater fisheries.
5 CFR 550.105 - Biweekly maximum earnings limitation.
2010-01-01
§ 550.105 Biweekly maximum earnings limitation (Office of Personnel Management, Civil Service Regulations, Pay Administration (General), Premium Pay, Maximum Earnings Limitations)...
5 CFR 550.106 - Annual maximum earnings limitation.
2010-01-01
§ 550.106 Annual maximum earnings limitation (Office of Personnel Management, Civil Service Regulations, Pay Administration (General), Premium Pay, Maximum Earnings Limitations)...
Radiation Pressure Acceleration: the factors limiting maximum attainable ion energy
Bulanov, S S; Schroeder, C B; Bulanov, S V; Esirkepov, T Zh; Kando, M; Pegoraro, F; Leemans, W P
2016-01-01
Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near-complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. Tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it trans...
Noise and physical limits to maximum resolution of PET images
Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)]; Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU 'Gregorio Maranon', E-28007 Madrid (Spain)]; Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2007-10-01
In this work we show that there is a limit to the maximum resolution achievable with a high resolution PET scanner, as well as to the best signal-to-noise ratio; both are ultimately related to the physical effects involved in the emission and detection of the radiation, and thus cannot be overcome with any particular reconstruction method. These effects prevent the high spatial frequency components of the imaged structures from being recorded by the scanner, so the information encoded in those high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, we outline how the noise level in the data limits the ability of tomographs with small crystal sizes to yield high-resolution images. These results have implications for deciding the optimal number of voxels of the reconstructed image and for designing better PET scanners.
Maximum Likelihood Position Location with a Limited Number of References
D. Munoz-Rodriguez
2011-04-01
A Position Location (PL) scheme for mobile users on the outskirts of coverage areas is presented. The proposed methodology makes it possible to obtain location information with only two land-fixed references. We introduce a general formulation and show that maximum-likelihood estimation can provide adequate PL information in this scenario. The Root Mean Square (RMS) error and error-distribution characterization are obtained for different propagation scenarios. In addition, simulation results and comparisons to another method are provided, showing the accuracy and the robustness of the proposed method. We study accuracy limits of the proposed methodology for different propagation environments and show that even in the case of mismatch in the error variances, good PL estimation is feasible.
Feedback Limits to Maximum Seed Masses of Black Holes
Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea
2017-02-01
The most massive black holes observed in the universe weigh up to ∼10^10 M⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale, the transition radius, we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M• ≳ 10^4 M⊙) hosted in small isolated halos (Mh ≲ 10^9 M⊙) accreting with relatively small radiative efficiencies (ɛ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M•–σ relation observed at z ∼ 0 cannot be established in isolated halos at high-z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10^4–10^6 M⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.
Concerning the maximum frequency limits of Gunn oscillators
R. F. Macpherson; G. M. Dunn; Ata Khalid; D. R. S. Cumming
2015-01-01
The length of the transit region of a Gunn diode determines the natural frequency at which it operates in fundamental mode: the shorter the device, the higher the frequency of operation. The long-held view on Gunn diode design is that for a functioning device the minimum length of the transit region is about 1.5 μm, limiting the devices to fundamental mode operation at frequencies of roughly 60 GHz. The authors posit that this theoretical restriction is a consequence of the limits of the hydrodynamic models by which it was determined. Study of these devices by more advanced Monte Carlo techniques, which simulate the ballistic transport and electron-phonon interactions that govern device behaviour, offers a new lower bound of 0.5 μm, which is already being approached by the experimental evidence shown in planar and vertical devices exhibiting Gunn operation at 0.6 μm and 0.7 μm. It is shown that the limits for Gunn domain operation are determined by the device length required for the transferred electron effect to occur (approximately 0.15 μm, which as demonstrated is largely field independent) and the fundamental size of the domain (approximately 0.3 μm). At this new length, operation in fundamental mode at much higher frequencies becomes possible: the Monte Carlo model used predicts power output at frequencies over 300 GHz.
Kordheili, Reza Ahmadi; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna
2014-01-01
High penetration of photovoltaic panels in the distribution grid can bring the grid to its operation limits. The main focus of the paper is to determine the maximum photovoltaic penetration level in the grid. Three main criteria were investigated for determining the maximum penetration level of PV panels... for this grid: even distribution of PV panels, aggregation of panels at the beginning of each feeder, and aggregation of panels at the end of each feeder. Load modeling is done using the Velander formula. Since PV generation is highest in the summer due to irradiation, a summer day was chosen to determine maximum...
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
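The objective described here, Gaussian maximum likelihood with an added l_1-norm penalty, is the problem that later became known as the graphical lasso. A minimal sketch using scikit-learn's GraphicalLasso solver (a later coordinate-descent implementation of the same objective, not the paper's block coordinate descent or Nesterov-based algorithms):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# l1-penalized Gaussian maximum likelihood, the objective described in the
# abstract. GraphicalLasso is a later coordinate-descent solver for the
# same problem, not the paper's algorithms. Toy data only.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))        # 200 samples, 10 variables
model = GraphicalLasso(alpha=0.1).fit(X)  # alpha weights the l1 penalty
precision = model.precision_              # estimated sparse inverse covariance
print(int((np.abs(precision) > 1e-6).sum()), "nonzero precision entries")
```

Larger values of alpha drive more off-diagonal entries of the precision matrix to zero, i.e. a sparser graphical model.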
Evaluating the time limit at maximum aerobic speed in elite swimmers. Training implications.
Renoux, J C
2001-12-01
The aim of the present study was to use the concepts of maximum aerobic speed (MAS) and time limit (tlim) to determine the relationship between these two elements, in an attempt to significantly improve both speed and swimming performance over a training season. To this end, an intermittent training model was used, adapted to the value obtained for the time limit at maximum aerobic speed. During a 12 week training period, the maximum aerobic speed of a group of 9 top-ranking varsity swimmers was measured on two occasions, as was the tlim. The values generated indicated that: 1) there was an inverse relationship between MAS and the time this speed could be maintained, confirming the studies by Billat et al. (1994b); 2) a significant increase in MAS occurred over the 12 week period, although no such evolution was seen for the tlim; 3) there was an improvement in results; 4) the time limit could be used in designing a training program based on intermittent exercises. The results of the present study should also allow swimming coaches to draw up individualized training programs for a given swimmer by taking into consideration maximum aerobic speed, time limit and propelling efficiency.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest in a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, whereas objects of interest may be moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects in surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers using features collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used to discriminate target objects in surveillance video. Our experimental results are presented in terms of success rate and segmentation precision.
75 FR 76482 - Federal Housing Administration (FHA): FHA Maximum Loan Limits for 2011
2010-12-08
... URBAN DEVELOPMENT Federal Housing Administration (FHA): FHA Maximum Loan Limits for 2011 AGENCY: Office...: This notice announces that FHA has posted on its Web site the single-family maximum loan limits for 2011. The loan limits can be found at...
Oleg Svatos
2013-01-01
In this paper we analyze the complexity of time limits found especially in the regulated processes of public administration. First we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support the capture of time limits only partially, which causes trouble for analysts and unnecessary complexity in the models. Given these unsatisfactory results, we analyze the complexity of time limits in greater detail and outline the lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, the PSD process modeling language is presented, which supports the defined lifecycles of a time limit natively and therefore keeps the models simple and easy to understand.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Research on this synchronization phenomenon is therefore key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network that exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that pairwise interactions become increasingly inadequate for capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating the behavior of large economic systems.
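A pairwise maximum entropy model over binary expansion/recession states is an Ising-type model, and for seven economies its 2^7 = 128 states can be enumerated exactly. The sketch below fits fields and couplings by moment matching on synthetic stand-in data (not the paper's G7 series):

```python
import itertools
import numpy as np

# Fit a pairwise maximum entropy (Ising-type) model to binary indicators
# s_i in {-1, +1} for n economies by exact enumeration. Fields h and
# couplings J are fitted by moment matching, i.e. gradient ascent on the
# log-likelihood. Synthetic data only.
n = 7
rng = np.random.default_rng(1)
data = rng.choice([-1, 1], size=(500, n))            # synthetic observations

states = np.array(list(itertools.product([-1, 1], repeat=n)))

def model_moments(h, J):
    E = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
    p = np.exp(E - E.max()); p /= p.sum()
    return p @ states, np.einsum("s,si,sj->ij", p, states, states)

d1, d2 = data.mean(axis=0), data.T @ data / len(data)  # empirical moments
h, J = np.zeros(n), np.zeros((n, n))
for _ in range(2000):
    m1, m2 = model_moments(h, J)
    h += 0.1 * (d1 - m1)
    J += 0.1 * (d2 - m2)
    np.fill_diagonal(J, 0.0)
print("max moment errors:", np.abs(d1 - m1).max(), np.abs(d2 - m2).max())
```

The same moment-matching logic applies to larger systems, but exact enumeration grows as 2^n, which is one practical face of the abstract's point that pairwise models become harder to justify and fit as the system grows.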
Osterloh, Frank E
2014-10-02
The Shockley-Queisser analysis provides a theoretical limit for the maximum energy conversion efficiency of single junction photovoltaic cells. But besides the semiconductor bandgap no other semiconductor properties are considered in the analysis. Here, we show that the maximum conversion efficiency is limited further by the excited state entropy of the semiconductors. The entropy loss can be estimated with the modified Sackur-Tetrode equation as a function of the curvature of the bands, the degeneracy of states near the band edges, the illumination intensity, the temperature, and the band gap. The application of the second law of thermodynamics to semiconductors provides a simple explanation for the observed high performance of group IV, III-V, and II-VI materials with strong covalent bonding and for the lower efficiency of transition metal oxides containing weakly interacting metal d orbitals. The model also predicts efficient energy conversion with quantum confined and molecular structures in the presence of a light harvesting mechanism.
Maximum Growth Potential and Periods of Resource Limitation in Apple Tree.
Reyes, Francesco; DeJong, Theodore; Franceschi, Pietro; Tagliavini, Massimo; Gianelle, Damiano
2016-01-01
Knowledge of seasonal maximum potential growth rates is important for assessing periods of resource limitation in fruit tree species. In this study we assessed the periods of resource limitation for vegetative (current year stems and woody biomass) and reproductive (fruit) organs of a major agricultural crop, the apple tree. This was done by comparing the relative growth rates (RGRs) of individual organs in trees with reduced competition for resources to those in trees grown under standard field conditions. Special attention was dedicated to disentangling the patterns and values of maximum potential growth for each organ type. The period of resource limitation for vegetative growth was much longer than in another fruit tree species (peach): from late May until harvest. Two periods of resource limitation were highlighted for fruit: from the beginning of the season until mid-June, and about one month prior to harvest. By investigating the variability in individual organ growth we identified substantial differences in RGRs among shoot categories (proleptic and epicormic) and within each group of monitored organs. Compared with the use of simple compartmental means, qualitatively different and more accurate values of growth rates for vegetative organs were estimated. Detailed, source-sink based tree growth models, which commonly require fine parameter tuning, are expected to benefit from the results of these analyses.
5 CFR 630.302 - Maximum annual leave accumulation-forty-five day limitation.
2010-01-01
§ 630.302 Maximum annual leave accumulation - forty-five day limitation (Office of Personnel Management, Civil Service Regulations, Absence and Leave, Annual Leave)...
Maximum entropy models of ecosystem functioning
Bertram, Jason, E-mail: jason.bertram@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)
2014-12-05
Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected in groups. There are many decisions to be made when constructing and estimating a model in HLM, including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Shape Modelling Using Maximum Autocorrelation Factors
Larsen, Rasmus
2001-01-01
This paper addresses the problem of generating a low dimensional representation of the shape variation present in a training set after alignment using Procrustes analysis and projection into shape tangent space. We will extend the use of principal components analysis in the original formulation of Active Shape Models by Timothy Cootes and Christopher Taylor by building new information into the model. This new information consists of two types of prior knowledge. First, in many situations we will be given an ordering of the shapes of the training set. This situation occurs when the shapes of the training set are in reality a time series, e.g. snapshots of a beating heart during the cardiac cycle, or when the shapes are slices of a 3D structure, e.g. the spinal cord. Second, in almost all applications a natural order of the landmark points along the contour of the shape is introduced...
Dhara, Chirag; Kleidon, Axel
2015-01-01
Convective and radiative cooling are the two principal mechanisms by which the Earth's surface transfers heat into the atmosphere and which shape surface temperature. However, this partitioning is not sufficiently constrained by energy and mass balances alone. We use a simple energy balance model in which convective fluxes and surface temperatures are determined with the additional thermodynamic limit of maximum convective power. We then show that the broad geographic variation of heat fluxes and surface temperatures in the climatological mean compares very well with the ERA-Interim reanalysis over land and ocean. We also show that the estimates depend considerably on the formulation of longwave radiative transfer and that a spatially uniform offset is related to the assumed cold temperature sink at which the heat engine operates.
Integer Programming Model for Maximum Clique in Graph
YUAN Xi-bo; YANG You; ZENG Xin-hai
2005-01-01
The maximum clique or maximum independent set of a graph is a classical problem in graph theory. Combining Boolean algebra and integer programming, two integer programming models for the maximum clique problem, which improve on previous results, are designed in this paper. A programming model for the maximum independent set then follows as a corollary of the main results. These two models can be easily applied in computer algorithms and software, and are suitable for graphs of any scale. Finally the models are presented as Lingo algorithms, verified and compared on several examples.
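For context, the standard 0-1 programming formulation of maximum clique, which models of this kind build on, maximizes the number of selected vertices subject to x_u + x_v <= 1 for every non-adjacent pair. A brute-force sketch of that model on a toy graph (the paper's improved Lingo formulations are not reproduced here):

```python
import itertools

# The standard 0-1 programming model for maximum clique on G = (V, E):
#   maximize   sum_v x_v
#   subject to x_u + x_v <= 1 for every non-adjacent pair {u, v},
#   x_v in {0, 1}.
# Solved here by brute-force enumeration on a toy graph.
V = range(5)
E = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}
adjacent = lambda u, v: (u, v) in E or (v, u) in E

best = max(
    (x for x in itertools.product((0, 1), repeat=len(V))
     if all(x[u] + x[v] <= 1
            for u, v in itertools.combinations(V, 2) if not adjacent(u, v))),
    key=sum,
)
print("maximum clique:", [v for v in V if best[v]])  # [0, 1, 2]
```

The complementary model for maximum independent set simply applies the same constraints to adjacent pairs instead, which is the corollary relationship the abstract mentions.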
Time-optimal excitation of maximum quantum coherence: Physical limits and pulse sequences
Köcher, S. S.; Heydenreich, T.; Zhang, Y.; Reddy, G. N. M.; Caldarelli, S.; Yuan, H.; Glaser, S. J.
2016-04-01
Here we study the optimum efficiency of the excitation of maximum quantum (MaxQ) coherence using analytical and numerical methods based on optimal control theory. The theoretical limit of the achievable MaxQ amplitude and the minimum time to achieve this limit are explored for a set of model systems consisting of up to five coupled spins. In addition to arbitrary pulse shapes, two simple pulse sequence families of practical interest are considered in the optimizations. Compared to conventional approaches, substantial gains were found both in the achieved MaxQ amplitude and in pulse sequence durations. For a model system, theoretically predicted gains of a factor of three compared to the conventional pulse sequence were demonstrated experimentally. Motivated by the numerical results, two novel analytical transfer schemes were also found: compared to conventional approaches based on non-selective pulses and delays, double-quantum coherence in two-spin systems can be created twice as fast using isotropic mixing and hard spin-selective pulses. It is also proved that in a chain of three weakly coupled spins with the same coupling constants, triple-quantum coherence can be created in a time-optimal fashion using so-called geodesic pulses.
Kim, K. M.; Smetana, P.
1990-03-01
Growth of large-diameter Czochralski (CZ) silicon crystals requires complete elimination of dislocations by means of the Dash technique, in which the seed diameter is reduced to a small size, typically 3 mm, in conjunction with an increase in the pull rate. The maximum length of a large CZ silicon crystal is estimated at the fracture stress limit of the seed neck diameter (d). The maximum lengths for 200 and 300 mm CZ crystals amount to 197 and 87 cm, respectively, with d = 0.3 cm; the estimated maximum weight is 144 kg.
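The quoted 144 kg maximum weight can be cross-checked from the quoted maximum lengths by treating the crystal as a cylinder of solid silicon (density 2.33 g/cm^3); a small sketch:

```python
import math

# Cross-check of the quoted figures: mass of a cylindrical CZ silicon
# crystal of diameter D and length L, using the density of solid silicon.
rho_si = 2.33  # g/cm^3, density of crystalline silicon

def crystal_mass_kg(diameter_cm, length_cm):
    volume_cm3 = math.pi * (diameter_cm / 2.0) ** 2 * length_cm
    return rho_si * volume_cm3 / 1000.0

print(crystal_mass_kg(20.0, 197.0))  # ~144 kg for the 200 mm crystal
print(crystal_mass_kg(30.0, 87.0))   # ~143 kg for the 300 mm crystal
```

Both crystal sizes come out at essentially the same mass, consistent with the limit being set by the fixed seed neck diameter rather than the crystal diameter.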
Maximum principle and convergence of central schemes based on slope limiters
Mehmetoglu, Orhan
2012-01-01
A maximum principle and convergence of second order central schemes is proven for scalar conservation laws in dimension one. It is well known that to establish a maximum principle a nonlinear piecewise linear reconstruction is needed and a typical choice is the minmod limiter. Unfortunately, this implies that the scheme uses a first order reconstruction at local extrema. The novelty here is that we allow local nonlinear reconstructions which do not reduce to first order at local extrema and still prove maximum principle and convergence. © 2011 American Mathematical Society.
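For reference, the minmod limiter the abstract refers to returns zero wherever the neighbouring slopes disagree in sign, i.e. at local extrema, which is exactly the first-order reduction the paper's generalized reconstructions avoid. A minimal sketch:

```python
import numpy as np

# Minmod limiter: the limited slope for the piecewise linear reconstruction
# in a second order central scheme. It returns zero wherever the forward
# and backward differences disagree in sign, i.e. at local extrema.
def minmod(a, b):
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

u = np.array([0.0, 1.0, 3.0, 2.0, 2.5])
slopes = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])  # slopes of interior cells
print(slopes)  # [1. 0. 0.] -- zero slope at the two interior extrema
```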
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn much attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative effect between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
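A two-component normal mixture of the kind described is typically fitted by the EM algorithm. A minimal sketch on synthetic data (the paper's stock market and rubber price series are not reproduced here):

```python
import numpy as np
from scipy.stats import norm

# Minimal EM sketch for maximum likelihood estimation of a two-component
# normal mixture (the model type the abstract fits). Synthetic data only.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

w  = np.array([0.5, 0.5])    # mixing weights
mu = np.array([-1.0, 1.0])   # component means (initial guesses)
sd = np.array([1.0, 1.0])    # component standard deviations
for _ in range(200):
    # E-step: posterior responsibility of each component for each point
    dens = w * norm.pdf(x[:, None], mu, sd)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted maximum likelihood updates
    nk = r.sum(axis=0)
    w  = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(w, mu, sd)  # roughly (0.6, 0.4), (-2, 3), (1, 0.5)
```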
Maximum caliber inference and the stochastic Ising model
Cafaro, Carlo; Ali, Sean Alan
2016-11-01
We investigate the maximum caliber variational principle as an inference algorithm used to predict dynamical properties of complex nonequilibrium, stationary, statistical systems in the presence of incomplete information. Specifically, we maximize the path entropy over discrete time step trajectories subject to normalization, stationarity, and detailed balance constraints together with a path-dependent dynamical information constraint reflecting a given average global behavior of the complex system. A general expression for the transition probability values associated with the stationary random Markov processes describing the nonequilibrium stationary system is computed. By virtue of our analysis, we uncover that a convenient choice of the dynamical information constraint together with a perturbative asymptotic expansion with respect to its corresponding Lagrange multiplier of the general expression for the transition probability leads to a formal overlap with the well-known Glauber hyperbolic tangent rule for the transition probability for the stochastic Ising model in the limit of very high temperatures of the heat reservoir.
Rannama, Indrek; Port, Kristjan; Bazanov, Boriss
2012-01-01
Maximum gears for youth category riders are limited. As a result, youth category riders are regularly compelled to ride in a high cadence regime. The aim of this study was to investigate whether regular work in a high cadence regime, due to the limited transmission of youth category riders, is reflected in the effective cadence at the point of maximal power generation during a 10 second sprint effort. The average maximal peak power of 24 junior and youth national team cyclists at various cadence regimes was registered...
Experimental study on prediction model for maximum rebound ratio
LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong
2007-01-01
The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values, as expected, while the fourth lies close to, and slightly above, the estimated maximum possible PPV. The comparison shows that the PPVs predicted by the proposed model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed model for estimating PPV in a rock mass with a set of joints subjected to a two-dimensional compressional wave at the boundary of a tunnel or borehole.
Improved Reliability of Single-Phase PV Inverters by Limiting the Maximum Feed-in Power
Yang, Yongheng; Wang, Huai; Blaabjerg, Frede
2014-01-01
... The CPG control strategy is activated only when the DC input power from the PV panels exceeds a specific power limit. It makes it possible to limit the maximum feed-in power to the electric grid and also to improve the utilization of PV inverters. As a further study, this paper investigates the reliability performance... of the power devices (e.g. IGBTs) used in PV inverters with the CPG control under different feed-in power limits. A long-term mission profile (i.e. solar irradiance and ambient temperature) based stress analysis approach is extended and applied to obtain the yearly electrical and thermal stresses of the power...
Maximum detection range limitation of pulse laser radar with Geiger-mode avalanche photodiode array
Luo, Hanjun; Xu, Benlian; Xu, Huigang; Chen, Jingbo; Fu, Yadan
2015-05-01
When designing and evaluating the performance of a laser radar system, the achievable maximum detection range is an essential parameter. The purpose of this paper is to propose a theoretical model of maximum detection range for simulating the ranging performance of Geiger-mode laser radar. Based on the laser radar equation and the requirement of a minimum acceptable detection probability, and assuming the primary electrons triggered by the echo photons obey Poisson statistics, a theoretical maximum-range model is established. Using the system design parameters, the influence of five main factors, namely emitted pulse energy, noise, echo position, atmospheric attenuation coefficient, and target reflectivity, on the maximum detection range is investigated. The results show that stronger emitted pulse energy, a lower noise level, an earlier echo position in the range gate, a lower atmospheric attenuation coefficient, and higher target reflectivity result in a greater maximum detection range. It is also shown that it is important to select the minimum acceptable detection probability, which is equivalent to the system signal-to-noise ratio, to obtain a greater maximum detection range and a lower false-alarm probability.
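The core of the maximum-range model described above is that Poisson-distributed primary electrons give a single-pulse detection probability P(R) = 1 - exp(-n(R)), with the mean count n(R) falling off with range through the laser radar equation; the maximum range is the largest R at which P(R) still exceeds the minimum acceptable detection probability. A sketch with illustrative constants (not the paper's system parameters):

```python
import numpy as np

# Poisson primary electrons give P(R) = 1 - exp(-n(R)); the mean electron
# count n(R) here follows a simplified laser radar equation,
# n ~ reflectivity / R^2 * exp(-2*alpha*R). All constants are assumptions.
n_ref = 50.0    # mean primary electrons at the reference range (assumed)
R_ref = 100.0   # reference range in metres (assumed)
alpha = 0.5e-3  # atmospheric attenuation coefficient, 1/m (assumed)
rho   = 0.3     # target reflectivity (assumed)
P_min = 0.95    # minimum acceptable detection probability

def detection_probability(R):
    n = n_ref * rho * (R_ref / R) ** 2 * np.exp(-2.0 * alpha * (R - R_ref))
    return 1.0 - np.exp(-n)

R = np.linspace(R_ref, 5000.0, 50001)
reachable = R[detection_probability(R) >= P_min]
print(f"maximum detection range ~ {reachable.max():.0f} m")
```

Raising the pulse energy or reflectivity scales n(R) up and pushes the crossing point outward, which is the qualitative behaviour the abstract reports.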
Heteroscedastic one-factor models and marginal maximum likelihood estimation
Hessen, D.J.; Dolan, C.V.
2009-01-01
In the present paper, a general class of heteroscedastic one-factor models is considered. In these models, the residual variances of the observed scores are explicitly modelled as parametric functions of the one-dimensional factor score. A marginal maximum likelihood procedure for parameter estimation...
Thermospheric density model biases at the 23rd sunspot maximum
Pardini, C.; Moe, K.; Anselmo, L.
2012-07-01
Uncertainties in neutral density estimation are the major source of aerodynamic drag errors and one of the main limiting factors in the accuracy of orbit prediction and determination at low altitudes. Massive efforts have been made over the years to constantly improve the existing operational density models, to create ever more precise and sophisticated tools, and to identify more appropriate solar and geomagnetic indices. However, the operational models still suffer from weaknesses. Although a number of studies have been carried out in the last few years to quantify performance improvements, further critical assessments are necessary to evaluate and compare the models at different altitudes and solar activity conditions. Taking advantage of the results of a previous study, an investigation of thermospheric density model biases during the last sunspot maximum (October 1999 - December 2002) was carried out by analyzing the semi-major axis decay of four satellites: Cosmos 2265, Cosmos 2332, SNOE and Clementine. Six thermospheric density models widely used in spacecraft operations were analyzed: JR-71, MSISE-90, NRLMSISE-00, GOST-2004, JB2006 and JB2008. For each satellite and atmospheric density model over the time span considered, a fitted drag coefficient was solved for and then compared with the calculated physical drag coefficient. It was therefore possible to derive the average density biases of the thermospheric models during the maximum of the 23rd solar cycle. Below 500 km, all the models overestimated the average atmospheric density by amounts varying between +7% and +20%. This was an inevitable consequence of constructing thermospheric models from density data obtained by assuming a fixed drag coefficient, independent of altitude. Because the uncertainty affecting the drag coefficient measurements was about 3% at both 200 km and 480 km of altitude, the calculated air density biases below 500 km were...
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as the solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties superior to those of classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also show reduced bias in the estimates of the subjective value of time and consumer surplus.
Use of Maximum Entropy Modeling in Wildlife Research
Roger A. Baldwin
2009-11-01
Maximum entropy (Maxent) modeling has great potential for identifying distributions and habitat selection of wildlife given its reliance on presence-only locations. Recent studies indicate Maxent is relatively insensitive to spatial errors associated with location data, requires few locations to construct useful models, and performs better than other presence-only modeling approaches. Further advances are needed to better define model thresholds, to test model significance, and to address model selection. Additionally, new modeling approaches are needed for using repeated sampling of known individuals to assess habitat selection. These advancements would strengthen the utility of Maxent for wildlife research and management.
Iammarino, Marco; Di Taranto, Aurelia; Muscarella, Marilena
2012-02-01
Sulphiting agents are commonly used food additives, but they are not allowed in fresh meat preparations. In this work, 2250 fresh meat samples were analysed to establish the maximum concentration of sulphites that can be considered "natural" and therefore be admitted in fresh meat preparations. The analyses were carried out by an optimised Monier-Williams method, and positive samples were confirmed by ion chromatography. Sulphite concentrations higher than the screening method LOQ (10.0 mg kg⁻¹) were found in 100 samples. Concentrations higher than 76.6 mg kg⁻¹, attributable to the addition of sulphiting agents, were registered in 40 samples; concentrations lower than 41.3 mg kg⁻¹ were registered in 60 samples. Taking into account the distribution of sulphite concentrations obtained, it is plausible to estimate a maximum allowable limit of 40.0 mg kg⁻¹ (expressed as SO₂), below which samples can be considered compliant.
Incorporating Linguistic Structure into Maximum Entropy Language Models
FANG GaoLin(方高林); GAO Wen(高文); WANG ZhaoQi(王兆其)
2003-01-01
In statistical language models, integrating diverse linguistic knowledge in a general framework for long-distance dependencies is a challenging issue. In this paper, an improved language model incorporating linguistic structure into a maximum entropy framework is presented. The proposed model combines a trigram model with structural knowledge of base phrases: the trigram captures local relations between words, while the base-phrase structure represents long-distance relations between syntactic structures. Knowledge of syntax, semantics and vocabulary is integrated into the maximum entropy framework. Experimental results show that the proposed model reduces language model perplexity by 24% and increases the sign language recognition rate by about 3% compared with the trigram model.
On the Maximum Storage Capacity of the Hopfield Model
Folli, Viola; Leonetti, Marco; Ruocco, Giancarlo
2017-01-01
Recurrent neural networks (RNN) have traditionally been of great interest for their capacity to store memories. In past years, several works have been devoted to determining the maximum storage capacity of RNN, especially for the case of the Hopfield network, the most popular kind of RNN. Analyzing the thermodynamic limit of the statistical properties of the Hamiltonian corresponding to the Hopfield neural network, it has been shown in the literature that the retrieval errors diverge when the number of stored memory patterns (P) exceeds a fraction (≈14%) of the network size N. In this paper, we study the storage performance of a generalized Hopfield model, where the diagonal elements of the connection matrix are allowed to be different from zero. We investigate this model at finite N. We give an analytical expression for the number of retrieval errors and show that, by increasing the number of stored patterns over a certain threshold, the errors start to decrease and reach values below unity for P ≫ N. We demonstrate that the strongest trade-off between efficiency and effectiveness relies on the number of patterns (P) that are stored in the network by appropriately fixing the connection weights. When P ≫ N and the diagonal elements of the adjacency matrix are not forced to be zero, the optimal storage capacity is obtained with a number of stored memories much larger than previously reported. This theory paves the way to the design of RNN with high storage capacity, able to retrieve the desired pattern without distortions.
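A minimal sketch of the setup studied here: Hebbian storage of P patterns in an N-unit Hopfield network, with the diagonal of the connection matrix either retained (the generalized model) or zeroed (the classical one). Sizes and dynamics are illustrative only:

```python
import numpy as np

# Hebbian storage of P patterns in an N-unit Hopfield network. The diagonal
# of the connection matrix is retained here (the generalized model studied
# in the paper); uncomment the fill_diagonal line for the classical model.
rng = np.random.default_rng(0)
N, P = 100, 10
patterns = rng.choice([-1, 1], size=(P, N))

W = patterns.T @ patterns / N       # Hebbian connection matrix
# np.fill_diagonal(W, 0.0)          # classical Hopfield zeroes the diagonal

state = patterns[0].copy()
state[:10] *= -1                    # corrupt 10 of the 100 units
for _ in range(20):                 # synchronous retrieval dynamics
    state = np.where(W @ state >= 0.0, 1, -1)
print("retrieval errors:", int((state != patterns[0]).sum()))  # ~0 at this load
```

At this load (P/N = 0.1, below the classical ≈14% threshold) retrieval typically succeeds; the regime the paper analyzes, P ≫ N with nonzero diagonal, lies far beyond this toy setting.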
Maximum Likelihood Estimation of Nonlinear Structural Equation Models.
Lee, Sik-Yum; Zhu, Hong-Tu
2002-01-01
Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)
Scaling of wingbeat frequency with body mass in bats and limits to maximum bat size.
Norberg, Ulla M Lindhe; Norberg, R Åke
2012-03-01
The ability to fly opens up ecological opportunities but flight mechanics and muscle energetics impose constraints, one of which is that the maximum body size must be kept below a rather low limit. The muscle power available for flight increases in proportion to flight muscle mass and wingbeat frequency. The maximum wingbeat frequency attainable among increasingly large animals decreases faster than the minimum frequency required, so eventually they coincide, thereby defining the maximum body mass at which the available power just matches up to the power required for sustained aerobic flight. Here, we report new wingbeat frequency data for 27 morphologically diverse bat species representing nine families, and additional data from the literature for another 38 species, together spanning a range from 2.0 to 870 g. For these species, wingbeat frequency decreases with increasing body mass as M_b^−0.26. We filmed 25 of our 27 species in free flight outdoors, and for these the wingbeat frequency varies as M_b^−0.30. These exponents are strikingly similar to the body mass dependency M_b^−0.27 among birds, but the wingbeat frequency is higher in birds than in bats for any given body mass. The downstroke muscle mass is also a larger proportion of the body mass in birds. We applied these empirically based scaling functions for wingbeat frequency in bats to biomechanical theories about how the power required for flight and the power available converge as animal size increases. To this end we estimated the muscle mass-specific power required for the largest flying extant bird (12-16 kg) and assumed that the largest potential bat would exert similar muscle mass-specific power. Given the observed scaling of wingbeat frequency and the proportion of the body mass that is made up by flight muscles in birds and bats, we estimated the maximum potential body mass for bats to be 1.1-2.3 kg. The largest bats, extinct or extant, weigh 1.6 kg. This is within the range expected if it...
Resolution of overlapping ambiguity strings based on maximum entropy model
ZHANG Feng; FAN Xiao-zhong
2006-01-01
The resolution of overlapping ambiguity strings (OAS) is studied based on the maximum entropy model. There are two model outputs, where either the first two characters form a word or the last two characters form a word. The features of the model include one word in the context of the OAS, the current OAS, and the word probability relation of the two kinds of segmentation results. OAS in the training text are found by combining the FMM and BMM segmentation methods. After feature tagging they are used to train the maximum entropy model. The People's Daily corpus of January 1998 is used for training and testing. Experimental results show a closed test precision of 98.64% and an open test precision of 95.01%. The open test precision is 3.76% better than that of the common word probability method.
Maximum-entropy principle as Galerkin modelling paradigm
Noack, Bernd R.; Niven, Robert K.; Rowley, Clarence W.
2012-11-01
We show how the empirical Galerkin method, leading e.g. to POD models, can be derived from maximum-entropy principles building on Noack & Niven 2012 JFM. In particular, principles are proposed (1) for the Galerkin expansion, (2) for the Galerkin system identification, and (3) for the probability distribution of the attractor. Examples will illustrate the advantages of the entropic modelling paradigm. Partially supported by the ANR Chair of Excellence TUCOROM and an ADFA/UNSW Visiting Fellowship.
A New Detection Approach Based on the Maximum Entropy Model
DONG Xiaomei; XIANG Guang; YU Ge; LI Xiaohua
2006-01-01
The maximum entropy model is introduced and a new intrusion detection approach based on it is proposed. The vector space model is adopted for data presentation, and the minimal entropy partitioning method is utilized for attribute discretization. Experiments on the KDD CUP 1999 standard data set were designed and the experimental results are shown. The receiver operating characteristic (ROC) curve analysis approach is utilized to analyze the experimental results. The analysis shows that the proposed approach is comparable to those based on support vector machines (SVM) and outperforms those based on C4.5 and naive Bayes classifiers. According to the overall evaluation, the proposed approach is slightly better than those based on SVM.
Marginal Maximum Likelihood Estimation of Item Response Models in R
Matthew S. Johnson
2007-02-01
Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
An Integrated Modeling Framework for Probable Maximum Precipitation and Flood
Gangrade, S.; Rastogi, D.; Kao, S. C.; Ashfaq, M.; Naz, B. S.; Kabela, E.; Anantharaj, V. G.; Singh, N.; Preston, B. L.; Mei, R.
2015-12-01
With the increasing frequency and magnitude of extreme precipitation and flood events projected in the future climate, there is a strong need to enhance our modeling capabilities to assess the potential risks to critical energy-water infrastructures such as major dams and nuclear power plants. In this study, an integrated modeling framework is developed through high performance computing to investigate the effects of climate change on probable maximum precipitation (PMP) and probable maximum flood (PMF). Multiple historical storms from 1981-2012 over the Alabama-Coosa-Tallapoosa River Basin near the Atlanta metropolitan area are simulated by the Weather Research and Forecasting (WRF) model using the Climate Forecast System Reanalysis (CFSR) forcings. After further WRF model tuning, these storms are used to simulate PMP through moisture maximization at initial and lateral boundaries. A high resolution hydrological model, the Distributed Hydrology-Soil-Vegetation Model, implemented at 90 m resolution and calibrated against U.S. Geological Survey streamflow observations, is then used to simulate the corresponding PMF. In addition to the control simulation driven by CFSR, multiple storms from the Community Climate System Model version 4 under the Representative Concentration Pathway 8.5 emission scenario are used to simulate PMP and PMF under projected future climate conditions. The multiple PMF scenarios developed through this integrated modeling framework may be utilized to evaluate the vulnerability of existing energy-water infrastructures with respect to various aspects of PMP and PMF.
Seymour, Roger S
2010-09-01
The effect of the size of inflorescences, flowers and cones on maximum rate of heat production is analysed allometrically in 23 species of thermogenic plants having diverse structures and ranging between 1.8 and 600 g. Total respiration rate (μmol s⁻¹) scales allometrically with spadix mass (M, g) in 15 species of Araceae. Thermal conductance (C, mW °C⁻¹) for spadices scales according to C = 18.5 M^0.73. Mass does not significantly affect the difference between floral and air temperature. Aroids with exposed appendices of high surface area have high thermal conductance, consistent with the need to vaporize attractive scents. True flowers have significantly lower heat production and thermal conductance, because closed petals retain heat that benefits resident insects. The florets on aroid spadices, either within a floral chamber or spathe, have intermediate thermal conductance, consistent with mixed roles. Mass-specific rates of respiration are variable between species, but reach 900 nmol s⁻¹ g⁻¹ in aroid male florets, exceeding the rates of all other plants and even most animals. Maximum mass-specific respiration appears to be limited by oxygen delivery through individual cells. Reducing mass-specific respiration may be one selective influence on the evolution of large size in thermogenic flowers.
Stimulus-dependent maximum entropy models of neural population codes.
Granot-Atedgi, Einat; Tkačik, Gašper; Segev, Ronen; Schneidman, Elad
2013-01-01
Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
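A minimal sketch of the general form of such a stimulus-dependent pairwise maximum entropy distribution, with exact enumeration for a toy population; the parameters are random placeholders rather than values fitted to retinal data:

```python
# Exact SDME-style distribution over binary codewords for a small population:
# P(sigma | s) ~ exp( sum_i h_i(s) sigma_i + sum_{i<j} J_ij sigma_i sigma_j ).
import itertools
import numpy as np

N = 5
rng = np.random.default_rng(0)
J = np.triu(rng.normal(0, 0.2, (N, N)), 1)   # pairwise couplings (upper triangle)
base_h = rng.normal(0, 0.5, N)
gain = rng.normal(0, 0.5, N)

def fields(stimulus):
    # linear stimulus dependence of the single-cell fields (placeholder form)
    return base_h + gain * stimulus

def sdme_distribution(stimulus):
    h = fields(stimulus)
    words = np.array(list(itertools.product([0, 1], repeat=N)))
    log_p = words @ h + np.einsum('ki,ij,kj->k', words, J, words)
    p = np.exp(log_p - log_p.max())
    return words, p / p.sum()

words, p = sdme_distribution(stimulus=1.0)
print(words[np.argmax(p)], p.max())   # most likely codeword for this stimulus
```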
Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density.
Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A
2009-06-01
We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f_0 = exp(φ_0) where φ_0 is a concave function on ℝ. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log-concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of H_k, the "lower invelope" of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of φ_0 = log f_0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f_0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values.
MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL
Vinod Kumar
2010-01-01
In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations as well as by reparametrizing the model first and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization neither reduces the bulk nor the complexity of the calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q along with its properties has also been obtained.
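A modern stand-in for the direct maximum likelihood step (not the paper's own procedure) is SciPy's generic MLE for the generalized gamma family:

```python
# Direct MLE for a generalized gamma model via scipy.stats.gengamma.fit;
# synthetic data, with the location parameter fixed at zero so only the
# two shape parameters and the scale are estimated.
import numpy as np
from scipy.stats import gengamma

rng = np.random.default_rng(1)
sample = gengamma.rvs(a=2.0, c=1.5, scale=3.0, size=500, random_state=rng)

a_hat, c_hat, loc_hat, scale_hat = gengamma.fit(sample, floc=0)
print(a_hat, c_hat, scale_hat)
```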
A Maximum Entropy Modelling of the Rain Drop Size Distribution
Francisco J. Tapiador
2011-01-01
This paper presents a maximum entropy approach to Rain Drop Size Distribution (RDSD) modelling. It is shown that this approach allows (1) the use of a physically consistent rationale to select a particular probability density function (pdf), (2) an alternative method for parameter estimation based on expectations of the population instead of sample moments, and (3) a progressive method of modelling by updating the pdf as new empirical information becomes available. The method is illustrated with both synthetic and real RDSD data, the latter coming from a laser disdrometer network specifically designed to measure the spatial variability of the RDSD.
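For reference, the standard maximum entropy construction underlying such RDSD models: maximizing Shannon entropy subject to expectation constraints yields an exponential-family pdf (notation ours, not the paper's).

```latex
% Maximizing H[p] = -\int p \ln p \, dD subject to E[f_k(D)] = \mu_k yields
\[
  p(D) = \frac{1}{Z(\lambda)} \exp\!\Big(-\sum_{k} \lambda_k f_k(D)\Big),
  \qquad
  Z(\lambda) = \int \exp\!\Big(-\sum_{k} \lambda_k f_k(D)\Big)\, dD ,
\]
% with the Lagrange multipliers \lambda_k chosen so that the constraints hold.
```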
Understanding Peripheral Bat Populations Using Maximum-Entropy Suitability Modeling
Barnhart, Paul R.; Gillam, Erin H.
2016-01-01
Individuals along the periphery of a species distribution regularly encounter more challenging environmental and climatic conditions than conspecifics near the center of the distribution. Due to these potential constraints, individuals in peripheral margins are expected to change their habitat and behavioral characteristics. Managers typically rely on species distribution maps when developing adequate management practices. However, these range maps are often too simplistic and do not provide adequate information as to what fine-scale biotic and abiotic factors are driving a species' occurrence. In the last decade, habitat suitability modelling has become widely used as a substitute for simplistic distribution mapping, giving regional managers the ability to fine-tune management resources. The objectives of this study were to use maximum-entropy modeling to produce habitat suitability models for seven species that have a peripheral margin intersecting the state of North Dakota, according to current IUCN distributions, and to determine the vegetative and climatic characteristics driving these models. Mistnetting resulted in the documentation of five species outside the IUCN distribution in North Dakota, indicating that current range maps for North Dakota, and potentially the northern Great Plains, are in need of updating. Maximum-entropy modeling showed that temperature, not precipitation, was the variable most important for model production. This fine-scale result highlights the importance of habitat suitability modelling, as this information cannot be extracted from distribution maps. Our results provide baseline information needed for future research about how and why individuals residing in the peripheral margins of a species' distribution may show marked differences in habitat use as a result of urban expansion, habitat loss, and climate change compared to more centralized populations. PMID:27935936
Curtis, Gary P.; Lu, Dan; Ye, Ming
2015-01-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the
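A minimal sketch of the model-averaging step in MLBMA-style analyses: posterior model weights from information-criterion values (e.g. KIC), then a weighted prediction; the numbers below are placeholders, not results from the study.

```python
# Posterior model probabilities from KIC differences, then a BMA prediction:
# p(M_k | D) ~ exp(-0.5 * (KIC_k - KIC_min)) * p(M_k), renormalized.
import numpy as np

kic = np.array([112.3, 108.7, 115.9])          # one value per alternative model
prior = np.array([1/3, 1/3, 1/3])              # prior model probabilities
w = np.exp(-0.5 * (kic - kic.min())) * prior   # likelihood-based weights
w /= w.sum()                                   # posterior model probabilities

predictions = np.array([4.2, 3.8, 5.1])        # each model's prediction
print(w, w @ predictions)                      # BMA-weighted prediction
```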
Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.
Chatzis, Sotirios P; Andreou, Andreas S
2015-11-01
Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics, and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.
A maximum entropy model for opinions in social groups
Davis, Sergio; Navarrete, Yasmín; Gutiérrez, Gonzalo
2014-04-01
We study how the opinions of a group of individuals determine their spatial distribution and connectivity, through an agent-based model. The interaction between agents is described by a Hamiltonian in which agents are allowed to move freely without an underlying lattice (the average network topology connecting them is determined from the parameters). This kind of model was derived using maximum entropy statistical inference under fixed expectation values of certain probabilities that (we propose) are relevant to social organization. Control parameters emerge as Lagrange multipliers of the maximum entropy problem, and they can be associated with the level of consequence between personal beliefs and external opinions, and the tendency to socialize with peers of similar or opposing views. These parameters define a phase diagram for the social system, which we studied using Monte Carlo Metropolis simulations. Our model presents both first- and second-order phase transitions, depending on the ratio between the internal consequence and the interaction with others. We have found a critical value for the level of internal consequence, below which the personal beliefs of the agents seem to be irrelevant.
Promoter recognition based on the maximum entropy hidden Markov model.
Zhao, Xiao-yu; Zhang, Jin; Chen, Yuan-yuan; Li, Qiang; Yang, Tao; Pian, Cong; Zhang, Liang-yun
2014-08-01
Since the fast development of genome sequencing has produced large-scale data, the current work uses bioinformatics methods to recognize different gene regions, such as exons, introns and promoters, which play an important role in gene regulation. In this paper, we introduce a new method based on the maximum entropy Markov model (MEMM) to recognize the promoter, which conditions on the biological features of the promoter. However, it leads to a high false positive rate (FPR). In order to reduce the FPR, we provide another new method based on the maximum entropy hidden Markov model (ME-HMM) without the independence assumption, which can also accommodate the biological features effectively. To demonstrate the precision, the new methods are implemented in the R language, and the hidden Markov model (HMM) is introduced for comparison. The experimental results show that the new methods not only overcome the shortcomings of HMM, but also have their own advantages. The results indicate that MEMM is excellent for identifying conserved signals, and ME-HMM can demonstrably improve the true positive rate.
Marczak Monika
2015-09-01
The aim of this study was to determine the correlation between lipophilicity and the maximum residue limit (MRL) value specified for veterinary drugs in the fatty tissue of various animal species. The analysis was performed on a group of 73 compounds with different modes of action and MRL values determined for the fatty tissue of animals. Additionally, the logarithm of the water/organic phase partition ratio (LogP) and the ratio of ionised and unionised substance in buffer at pH 7.4 (LogD7.4) were calculated. The main analysis was performed after division of the whole group into six fractions. Linear correlation and regression analysis were used to determine the indirect relationship between the arithmetic mean value of LogP or LogD7.4 in selected fractions and the related LogMRL of the drugs tested. The calculations revealed a linear correlation between fractioned lipophilicity and LogMRL values for the analysed compounds. The existence of an indirect relationship between lipophilicity and MRL values determined for fatty tissue was confirmed.
Maximum Parsimony and the Skewness Test: A Simulation Study of the Limits of Applicability
Määttä, Jussi; Roos, Teemu
2016-01-01
The maximum parsimony (MP) method for inferring phylogenies is widely used, but little is known about its limitations in non-asymptotic situations. This study employs large-scale computations with simulated phylogenetic data to estimate the probability that MP succeeds in finding the true phylogeny for up to twelve taxa and 256 characters. The set of candidate phylogenies is taken to be unrooted binary trees; for each simulated data set, the tree lengths of all (2n − 5)!! candidates are computed to evaluate quantities related to the performance of MP, such as the probability of finding the true phylogeny, the probability that the tree with the shortest length is unique, the probability that the true phylogeny has the shortest tree length, and the expected inverse of the number of trees sharing the shortest length. The tree length distributions are also used to evaluate and extend the skewness test of Hillis for distinguishing between random and phylogenetic data. The results indicate, for example, that the critical point after which MP achieves a success probability of at least 0.9 is roughly around 128 characters. The skewness test is found to perform well on simulated data and the study extends its scope to up to twelve taxa. PMID:27035667
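The (2n − 5)!! count of unrooted binary trees explains why exhaustive evaluation is feasible only up to about twelve taxa; a small helper makes the growth explicit:

```python
# The number of unrooted binary trees on n labelled taxa is (2n - 5)!!.
def double_factorial(k: int) -> int:
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

for n in range(4, 13):
    print(n, double_factorial(2 * n - 5))
# n = 12 gives 654,729,075 candidate trees
```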
Pesticide food safety standards as companions to tolerances and maximum residue limits
Carl K Winter; Elizabeth A Jara
2015-01-01
Allowable levels for pesticide residues in foods, known as tolerances in the US and as maximum residue limits (MRLs) in much of the world, are widely yet inappropriately perceived as levels of safety concern. A novel approach to develop scientifically defensible levels of safety concern is presented, and an example to determine acute and chronic pesticide food safety standard (PFSS) levels for the fungicide captan on strawberries is provided. Using this approach, the chronic PFSS level for captan on strawberries was determined to be 2000 mg kg⁻¹ and the acute PFSS level was determined to be 250 mg kg⁻¹. Both levels are far above the existing tolerance and MRLs, which commonly range from 3 to 20 mg kg⁻¹, and provide evidence that captan residues detected at levels greater than the tolerance or MRLs are not of acute or chronic health concern even though they represent violative residues. The benefits of developing the PFSS approach to serve as a companion to existing tolerances/MRLs include a greater understanding of the health significance, if any, of exposure to violative pesticide residues. In addition, the PFSS approach can be universally applied to all potential pesticide residues on all food commodities, can be modified by specific jurisdictions to take into account differences in food consumption practices, and can help prioritize food residue monitoring by identifying the pesticide/commodity combinations of the greatest potential food safety concern and guiding development of field-level analytical methods to detect pesticide residues on prioritized pesticide/commodity combinations.
Inferential permutation tests for maximum entropy models in ecology.
Shipley, Bill
2010-09-01
Maximum entropy (maxent) models assign probabilities to states that (1) agree with measured macroscopic constraints on attributes of the states and (2) are otherwise maximally uninformative and are thus as close as possible to a specified prior distribution. Such models have recently become popular in ecology, but classical inferential statistical tests require assumptions of independence during the allocation of entities to states that are rarely fulfilled in ecology. This paper describes a new permutation test for such maxent models that is appropriate for very general prior distributions and for cases in which many states have zero abundance and that can be used to test for conditional relevance of subsets of constraints. Simulations show that the test gives correct probability estimates under the null hypothesis. Power under the alternative hypothesis depends primarily on the number and strength of the constraints and on the number of states in the model; the number of empty states has only a small effect on power. The test is illustrated using two empirical data sets to test the community assembly model of B. Shipley, D. Vile, and E. Garnier and the species abundance distribution models of S. Pueyo, F. He, and T. Zillio.
Maximum likelihood estimation for semiparametric density ratio model.
Diao, Guoqing; Ning, Jing; Qin, Jing
2012-06-27
In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.
Constructing Maximum Entropy Language Models for Movie Review Subjectivity Analysis
Bo Chen; Hui He; Jun Guo
2008-01-01
Document subjectivity analysis has become an important aspect of web text content mining. This problem is similar to traditional text categorization, so many related classification techniques can be adapted here. However, there is one significant difference: more language or semantic information is required for better estimating the subjectivity of a document. Therefore, in this paper, we focus mainly on two aspects. One is how to extract useful and meaningful language features, and the other is how to construct appropriate language models efficiently for this special task. For the first issue, we apply a Global-Filtering and Local-Weighting strategy to select and evaluate language features in a series of n-grams with different orders and within various distance-windows. For the second issue, we adopt Maximum Entropy (MaxEnt) modeling methods to construct our language model framework. Besides the classical MaxEnt models, we have also constructed two kinds of improved models with Gaussian and exponential priors, respectively. Detailed experiments given in this paper show that with well selected and weighted language features, MaxEnt models with exponential priors are significantly more suitable for the text subjectivity analysis task.
The Betz-Joukowsky limit for the maximum power coefficient of wind turbines
Okulov, Valery; van Kuik, G.A.M.
2009-01-01
The article addresses the history of an important scientific result in wind energy. The maximum efficiency of an ideal wind turbine rotor is well known as the 'Betz limit', named after the German scientist who formulated this maximum in 1920. Lanchester, a British scientist, is also associated...
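The result itself is classical and easy to reproduce; a compact actuator-disc derivation (standard notation, not taken from the article):

```latex
% With axial induction factor a, the power coefficient of an ideal rotor is
\[
  C_P(a) = 4a(1-a)^2, \qquad
  \frac{dC_P}{da} = 4(1-a)(1-3a) = 0 \;\Rightarrow\; a = \tfrac{1}{3},
\]
\[
  C_{P,\max} = 4 \cdot \tfrac{1}{3}\left(1 - \tfrac{1}{3}\right)^{2}
             = \frac{16}{27} \approx 0.593 .
\]
```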
Modeling Mediterranean ocean climate of the Last Glacial Maximum
U. Mikolajewicz
2010-10-01
A regional ocean general circulation model of the Mediterranean is used to study the climate of the Last Glacial Maximum. The atmospheric forcing for these simulations has been derived from simulations with an atmospheric general circulation model, which in turn was forced with surface conditions from a coarse resolution earth system model. The model is successful in reproducing the general patterns of reconstructed sea surface temperature anomalies with the strongest cooling in summer in the northwestern Mediterranean and weak cooling in the Levantine, although the model underestimates the extent of the summer cooling in the western Mediterranean. However, there is a strong vertical gradient associated with this pattern of summer cooling, which makes the comparison with reconstructions nontrivial. The exchange with the Atlantic is decreased to roughly one half of its present value, which can be explained by the shallower Strait of Gibraltar as a consequence of lower global sea level. This reduced exchange causes a strong increase of the salinity in the Mediterranean in spite of reduced net evaporation.
Modeling Mediterranean Ocean climate of the Last Glacial Maximum
U. Mikolajewicz
2011-03-01
A regional ocean general circulation model of the Mediterranean is used to study the climate of the Last Glacial Maximum. The atmospheric forcing for these simulations has been derived from simulations with an atmospheric general circulation model, which in turn was forced with surface conditions from a coarse resolution earth system model. The model is successful in reproducing the general patterns of reconstructed sea surface temperature anomalies with the strongest cooling in summer in the northwestern Mediterranean and weak cooling in the Levantine, although the model underestimates the extent of the summer cooling in the western Mediterranean. However, there is a strong vertical gradient associated with this pattern of summer cooling, which makes the comparison with reconstructions complicated. The exchange with the Atlantic is decreased to roughly one half of its present value, which can be explained by the shallower Strait of Gibraltar as a consequence of lower global sea level. This reduced exchange causes a strong increase of salinity in the Mediterranean in spite of reduced net evaporation.
On the maximum-entropy/autoregressive modeling of time series
Chao, B. F.
1984-01-01
The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand, to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, but ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
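A minimal sketch of ME/AR spectral estimation in this spirit: Yule-Walker AR coefficients from sample autocovariances, then the spectrum evaluated on the unit circle (synthetic data; the model order p = 4 is an arbitrary choice):

```python
# ME/AR spectrum: fit AR coefficients via the Yule-Walker equations, then
# evaluate sigma^2 / |1 - sum_k a_k e^{-i omega k}|^2 over frequency.
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(2)
x = np.sin(0.3 * np.arange(512)) + 0.5 * rng.standard_normal(512)
p = 4

# Biased autocovariance estimates r_0 ... r_p
r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(p + 1)])
a = solve_toeplitz(r[:p], r[1:p + 1])        # AR coefficients
sigma2 = r[0] - np.dot(a, r[1:p + 1])        # innovation variance

omega = np.linspace(0, np.pi, 256)
denom = np.abs(1 - np.exp(-1j * np.outer(omega, np.arange(1, p + 1))) @ a)**2
spectrum = sigma2 / denom
print(omega[np.argmax(spectrum)])            # peak near 0.3 rad/sample
```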
Joint modelling of annual maximum drought severity and corresponding duration
Tosunoglu, Fatih; Kisi, Ozgur
2016-12-01
In recent years, the joint distribution properties of drought characteristics (e.g. severity, duration and intensity) have been widely evaluated using copulas. However, the history of copulas in modelling drought characteristics obtained from streamflow data is still short, especially in semi-arid regions such as Turkey. In this study, unlike previous studies, drought events are characterized by annual maximum severity (AMS) and corresponding duration (CD), which are extracted from daily streamflow of the seven gauge stations located in the Çoruh Basin, Turkey. On evaluation of the various univariate distributions, the Exponential, Weibull and Logistic distributions are identified as marginal distributions for the AMS and CD series. Archimedean copulas, namely Ali-Mikhail-Haq, Clayton, Frank and Gumbel-Hougaard, are then employed to model the joint distribution of the AMS and CD series. With respect to the Anderson-Darling and Cramér-von Mises statistical tests and the tail dependence assessment, the Gumbel-Hougaard copula is identified as the most suitable model for joint modelling of the AMS and CD series at each station. Furthermore, the developed Gumbel-Hougaard copulas are used to derive the conditional and joint return periods of the AMS and CD series, which can be useful for the design and management of reservoirs in the basin.
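A hedged sketch of the Gumbel-Hougaard copula and a joint "and" return period for severity and duration; the dependence parameter and marginal probabilities below are illustrative, not the fitted station values:

```python
# Gumbel-Hougaard copula and a joint return period for S > s AND D > d.
import numpy as np

def gumbel_hougaard(u, v, theta):
    """C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)), theta >= 1."""
    return np.exp(-(((-np.log(u))**theta + (-np.log(v))**theta)**(1.0 / theta)))

theta = 2.5                     # dependence parameter (placeholder)
u, v = 0.98, 0.95               # marginal non-exceedance probabilities of S and D
c = gumbel_hougaard(u, v, theta)

mu = 1.0                        # mean interarrival time (annual maxima -> 1 year)
T_and = mu / (1 - u - v + c)    # P(S > s, D > d) = 1 - u - v + C(u, v)
print(c, T_and)
```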
Approximate Maximum Likelihood Commercial Bank Loan Management Model
Godwin N.O. Asemota
2009-01-01
Problem statement: Loan management is a very complex and yet vitally important aspect of any commercial bank's operations. The balance sheet position shows the main sources of funds as deposits and shareholders' contributions. Approach: In order to operate profitably, remain solvent and consequently grow, a commercial bank needs to properly manage its excess cash to yield returns in the form of loans. Results: The above are achieved if the bank can honor depositors' withdrawals at all times and also grant loans to credible borrowers. This is so because loans are the main portfolios of a commercial bank that yield the highest rate of returns. Commercial banks and the environment in which they operate are dynamic, so any attempt to model their behavior without including some element of uncertainty would be less than desirable. The inclusion of an uncertainty factor is now possible with the advent of stochastic optimal control theories. Thus, an approximate maximum likelihood algorithm with variable forgetting factor was used to model the loan management behavior of a commercial bank in this study. Conclusion: The results showed that the uncertainty factor employed in the stochastic modeling enables us to adaptively control loan demand as well as fluctuating cash balances in the bank. This loan model can also visually aid commercial bank managers' planning decisions by allowing them to competently determine excess cash and invest it as loans to earn more assets without jeopardizing public confidence.
Zile, M R; Izzi, G; Gaasch, W H
1991-02-01
We tested the hypothesis that maximum systolic elastance (Emax) fails to detect a decline in left ventricular (LV) contractile function when diastolic dysfunction is present. Canine hearts were studied in an isolated blood-perfused heart apparatus (isovolumic LV); contractile dysfunction was produced by 60 or 90 minutes of global ischemia, followed by 90 minutes of reperfusion. Nine normal hearts underwent 60 minutes of ischemia, and five underwent 90 minutes of ischemia. After the ischemia-reperfusion sequence, developed pressure, pressure-volume area, and myocardial ATP level were significantly less than those at baseline in all 14 hearts. In the group undergoing 60 minutes of ischemia, LV diastolic pressure did not increase, whereas Emax decreased from 5.2 ± 2.5 to 2.9 ± 1.4 mm Hg/ml (p < 0.05). In the group undergoing 90 minutes of ischemia, diastolic pressure increased (from 10 ± 2 to 37 ± 20 mm Hg, p < 0.05), and Emax did not change significantly (from 5.1 ± 4.3 to 4.3 ± 2.5 mm Hg/ml). A second series of experiments was performed in 13 hearts with pressure-overload hypertrophy (aortic-band model with echocardiography and catheterization studies before the ischemia-reperfusion protocol). Five had evidence for pump failure, whereas eight remained compensated. After 60 minutes of ischemia and 90 minutes of reperfusion, developed pressure, pressure-volume area, and myocardial ATP level were significantly less than those at baseline in all 13 hearts. In the group with compensated LV hypertrophy, LV diastolic pressure did not change, whereas Emax decreased from 6.9 ± 3.0 to 3.1 ± 2.3 mm Hg/ml (p < 0.05). (ABSTRACT TRUNCATED AT 250 WORDS)
Louis de Grange
2010-09-01
Maximum entropy models are often used to describe supply and demand behavior in urban transportation and land use systems. However, they have been criticized for not representing behavioral rules of system agents and because their parameters seem to adjust only to modeler-imposed constraints. In response, it is demonstrated that the solution to the entropy maximization problem with linear constraints is a multinomial logit model whose parameters solve the likelihood maximization problem of this probabilistic model. But this result neither provides a microeconomic interpretation of the entropy maximization problem nor explains the equivalence of these two optimization problems. This work demonstrates that an analysis of the dual of the entropy maximization problem yields two useful alternative explanations of its solution. The first shows that the maximum entropy estimators of the multinomial logit model parameters reproduce rational user behavior, while the second shows that the likelihood maximization problem for multinomial logit models is the dual of the entropy maximization problem.
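The duality sketched in the abstract can be stated compactly (our notation):

```latex
% Maximizing entropy subject to linear constraints yields multinomial logit.
\[
  \max_{p} \; -\sum_{i} p_i \ln p_i
  \quad \text{s.t.} \quad \sum_i p_i = 1, \;\; \sum_i p_i x_{ik} = \bar{x}_k
  \;\;\Longrightarrow\;\;
  p_i = \frac{\exp\big(\sum_k \beta_k x_{ik}\big)}
             {\sum_j \exp\big(\sum_k \beta_k x_{jk}\big)},
\]
% where the \beta_k arise as Lagrange multipliers of the constraints.
```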
Shurtleff, Amy C; Garza, Nicole; Lackemeyer, Matthew; Carrion, Ricardo; Griffiths, Anthony; Patterson, Jean; Edwin, Samuel S; Bavari, Sina
2012-12-01
We describe herein limitations on research at biosafety level 4 (BSL-4) containment laboratories, with regard to biosecurity regulations, safety considerations, research space limitations, and physical constraints in executing experimental procedures. These limitations can severely impact the number of collaborations and size of research projects investigating microbial pathogens of biodefense concern. Acquisition, use, storage, and transfer of biological select agents and toxins (BSAT) are highly regulated due to their potential to pose a severe threat to public health and safety. All federal, state, city, and local regulations must be followed to obtain and maintain registration for the institution to conduct research involving BSAT. These include initial screening and continuous monitoring of personnel, controlled access to containment laboratories, and accurate and current BSAT inventory records. Safety considerations are paramount in BSL-4 containment laboratories while considering the types of research tools, workflow and time required for conducting both in vivo and in vitro experiments in limited space. Required use of a positive-pressure encapsulating suit imposes tremendous physical limitations on the researcher. Successful mitigation of these constraints requires additional time, effort, good communication, and creative solutions. Test and evaluation of novel vaccines and therapeutics conducted under good laboratory practice (GLP) conditions for FDA approval are prioritized and frequently share the same physical space with important ongoing basic research studies. The possibilities and limitations of biomedical research involving microbial pathogens of biodefense concern in BSL-4 containment laboratories are explored in this review.
Modeling the East Asian climate during the last glacial maximum
ZHAO Ping (赵平); CHEN Longxun (陈隆勋); ZHOU Xiuji (周秀骥); GONG Yuanfa (巩远发); HAN Yu (韩余)
2003-01-01
Using the CCM3 global climate model of the National Center for Atmospheric Research (NCAR), this paper comparatively analyzes the characteristics of the East Asian monsoon, surface water conditions, and the expansion of glaciers on the Qinghai-Xizang (Tibetan) Plateau (QXP) between the present and the last glacial maximum (LGM). It is found that the winter monsoon is remarkably stronger during the LGM than at present in the north part of China and the western Pacific, but varies little in the south part of China. The summer monsoon remarkably weakens in the South China Sea and the south part of China during the LGM, and shows no remarkable changes in the north part of China between the present and the LGM. Due to the alternations of the monsoons during the LGM, the annual mean precipitation significantly decreases in the northeast of China, most of north China, the Loess Plateau and the eastern QXP, which makes the land surface lose more water and become dry, especially in the eastern QXP and the western Loess Plateau. In some areas of the middle QXP, the decrease of evaporation at the land surface causes soil to become wetter during the LGM than at present, which favors a rise in the water level of local lakes during the LGM. Additionally, compared to the present, the depth of snow cover increases remarkably over most of the QXP during the LGM winter. The analysis of the equilibrium line altitude (ELA) of glaciers on the QXP, calculated on the basis of the simulated temperature and precipitation, shows that although a smaller decrease of air temperature was simulated during the LGM in this paper, the balance between precipitation and air temperature associated with the atmospheric physical processes in the model makes the ELA 300-900 m lower during the LGM than at present, namely going down from the present ELA above 5400 m to 4600-5200 m during the LGM, indicating a unified ice sheet on the QXP during the LGM.
Maximum acceptable inherent buoyancy limit for aircrew/passenger helicopter immersion suit systems.
Brooks, C J
1988-12-01
Helicopter crew and passengers flying over cold water wear immersion suits to provide hypothermic protection in case of ditching in cold water. The suits and linings have trapped air in the material to provide the necessary insulation and are thus very buoyant. Paradoxically, this buoyancy may be too much for a survivor to overcome in escaping from the cabin of a rapidly sinking inverted helicopter. The Canadian General Standards Board requested that research be conducted to investigate what should be the maximum inherent buoyancy in an immersion suit that would not inhibit escape, yet would provide adequate thermal insulation. This experiment reports on 12 subjects who safely escaped with 146 N (33 lbf) of added buoyancy from a helicopter underwater escape trainer. It discusses the rationale for the recommendation that the inherent buoyancy in a helicopter crew/passenger immersion suit system should not exceed this figure.
Maximum precision closed-form solution for localizing diffraction-limited spots in noisy images.
Larkin, Joshua D; Cook, Peter R
2012-07-30
Super-resolution techniques like PALM and STORM require accurate localization of single fluorophores detected using a CCD. Popular localization algorithms inefficiently assume each photon registered by a pixel can only come from an area in the specimen corresponding to that pixel (not from neighboring areas), before iteratively (slowly) fitting a Gaussian to pixel intensity; they fail with noisy images. We present an alternative; a probability distribution extending over many pixels is assigned to each photon, and independent distributions are joined to describe emitter location. We compare algorithms, and recommend which serves best under different conditions. At low signal-to-noise ratios, ours is 2-fold more precise than others, and 2 orders of magnitude faster; at high ratios, it closely approximates the maximum likelihood estimate.
Adaptive Statistical Language Modeling: A Maximum Entropy Approach
1994-04-19
recognition systems were built that could recognize vowels or digits, but they could not be successfully extended to handle more realistic language... maximum likelihood of generating the training data. The identity of the ML and ME solutions, apart from being aesthetically pleasing, is extremely...
Terror birds on the run: a mechanical model to estimate its maximum running speed
Blanco, R. Ernesto; Jones, Washington W
2005-01-01
‘Terror bird’ is a common name for the family Phorusrhacidae. These large terrestrial birds were probably the dominant carnivores on the South American continent from the Middle Palaeocene to the Pliocene–Pleistocene limit. Here we use a mechanical model based on tibiotarsal strength to estimate maximum running speeds of three species of terror birds: Mesembriornis milneedwardsi, Patagornis marshi and a specimen of Phorusrhacinae gen. The model is validated on three living species of large terrestrial birds. On the basis of the tibiotarsal strength we propose that Mesembriornis could have used its legs to break long bones and access their marrow. PMID:16096087
Longitudinal Examination of Age-Predicted Symptom-Limited Exercise Maximum Heart Rate
Zhu, Na; Suarez, Jose; Sidney, Steve; Sternfeld, Barbara; Schreiner, Pamela J.; Carnethon, Mercedes R.; Lewis, Cora E.; Crow, Richard S.; Bouchard, Claude; Haskell, William; Jacobs, David R.
2010-01-01
Purpose: To estimate the association of age with maximal heart rate (MHR). Methods: Data were obtained in the Coronary Artery Risk Development in Young Adults (CARDIA) study. Participants were black and white men and women aged 18-30 years in 1985-86 (year 0). A symptom-limited maximal graded exercise test was completed at years 0, 7, and 20 by 4969, 2583, and 2870 participants, respectively. After exclusions, 9622 eligible tests remained. Results: In all 9622 tests, estimated MHR (eMHR, beats/minute) had a quadratic relation to age in the age range 18 to 50 years: eMHR = 179 + 0.29·age − 0.011·age². The age-MHR association was approximately linear in the restricted age ranges of consecutive tests. In 2215 people who completed both the year 0 and year 7 tests (age range 18 to 37), eMHR = 189 − 0.35·age; in 1574 people who completed both the year 7 and year 20 tests (age range 25 to 50), eMHR = 199 − 0.63·age. In the lowest baseline BMI quartile, the rate of decline was 0.20 beats/minute/year between years 0-7 and 0.51 beats/minute/year between years 7-20, while in the highest baseline BMI quartile there was a linear rate of decline of approximately 0.7 beats/minute/year over the full age range of 18 to 50 years. Conclusion: Clinicians making exercise prescriptions should be aware that the loss of symptom-limited MHR is much slower in young adulthood and more pronounced in later adulthood. In particular, MHR loss is very slow in those with the lowest BMI below age 40. PMID:20639723
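The reported regression equations are easy to evaluate side by side; a quick check that the quadratic and the two age-restricted linear fits agree in their overlapping ranges:

```python
# Evaluating the regression equations reported in the abstract (beats/minute).
def emhr_quadratic(age):      # ages 18-50
    return 179 + 0.29 * age - 0.011 * age**2

def emhr_young(age):          # ages 18-37 (years 0 and 7)
    return 189 - 0.35 * age

def emhr_older(age):          # ages 25-50 (years 7 and 20)
    return 199 - 0.63 * age

for age in (20, 30, 40, 50):
    print(age, round(emhr_quadratic(age), 1),
          round(emhr_young(age), 1) if age <= 37 else "-",
          round(emhr_older(age), 1) if age >= 25 else "-")
```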
Extracting maximum petrophysical and geological information from a limited reservoir database
Ali, M.; Chawathe, A.; Ouenes, A. [New Mexico Institute of Mining and Technology, Socorro, NM (United States)]; and others
1997-08-01
The characterization of old fields lacking sufficient core and log data is a challenging task. This paper describes a methodology that uses new and conventional tools to build a reliable reservoir model for the Sulimar Queen field. At the fine scale, permeability measured on a fine grid with a minipermeameter was used in conjunction with petrographic data collected on multiple thin sections. The use of regression analysis and a newly developed fuzzy logic algorithm led to the identification of key petrographic elements which control permeability. At the log scale, old gamma ray logs were first rescaled/calibrated throughout the entire field for consistency and reliability using only four modern logs. Using data from one cored well and the rescaled gamma ray logs, correlations between core porosity, permeability, total water content and gamma ray were developed to complete the small-scale characterization. At the reservoir scale, outcrop data and the rescaled gamma logs were used to define the reservoir structure over an area of ten square miles where only 36 wells were available. Given the structure, the rescaled gamma ray logs were used to build the reservoir volume by identifying the flow units and their continuity. Finally, history-matching results constrained to the primary production were used to estimate the dynamic reservoir properties, such as relative permeabilities, to complete the characterization. The obtained reservoir model was tested by forecasting the waterflood performance, which was in good agreement with the actual performance.
Quasi Maximum Likelihood Analysis of High Dimensional Constrained Factor Models
Li, Kunpeng; Li, Qi; Lu, Lina
2016-01-01
Factor models have been widely used in practice. However, an undesirable feature of a high dimensional factor model is that the model has too many parameters. An effective way to address this issue, proposed in a seminal work by Tsai and Tsay (2010), is to decompose the loadings matrix into a high-dimensional known matrix multiplied by a low-dimensional unknown matrix, which Tsai and Tsay (2010) call constrained factor models. This paper investigates the estimation and inferential theory ...
Pushing desalination recovery to the maximum limit: Membrane and thermal processes integration
Shahzad, Muhammad Wakil
2017-05-05
The economics of seawater desalination processes has been continuously improving as a result of desalination market expansion. Presently, reverse osmosis (RO) processes lead global desalination with a 53% share, followed by thermally driven technologies at 33%; in Gulf Cooperation Council (GCC) countries, however, their shares are 42% and 56%, respectively, due to severe feed water quality. In RO processes, intake, pretreatment and brine disposal account for 25% of total desalination cost at 30-35% recovery. We propose a tri-hybrid system to enhance overall recovery up to 81%. The conditioned brine leaving the RO process is supplied to the proposed multi-evaporator adsorption cycle driven by low-temperature industrial waste heat or solar energy. RO membrane simulation has been performed using the commercial software packages WinFlow and IMSDesign, developed by GE and Nitto. A detailed mathematical model of the overall system is developed and simulated in FORTRAN. The final brine reject concentration from the tri-hybrid cycle can vary from 166,000 ppm to 222,000 ppm as the RO retentate concentration varies from 45,000 ppm to 60,000 ppm. We also conducted an economic analysis and showed that the proposed tri-hybrid cycle can achieve the highest recovery, 81%, and the lowest energy consumption, 1.76 kWh_elec/m³, reported for desalination in the literature up to now.
MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
Maximum Likelihood Estimation in Meta-Analytic Structural Equation Modeling
Oort, Frans J.; Jak, Suzanne
2016-01-01
Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients that are reported by a number of independent studies. MASEM typically consist of two stages. The method that has been found to perform best in terms of statistical…
Limits with modeling data and modeling data with limits
Lionello Pogliani
2006-01-01
Modeling of the solubility of amino acids and purine and pyrimidine bases with a set of sixteen molecular descriptors has been thoroughly analyzed to detect and understand the reasons for anomalies in the description of this property for these two classes of compounds. Unsatisfactory modeling can be ascribed to incomplete collateral data, i.e., to the fact that there is insufficient data known about the behavior of these compounds in solution. This is usually because intermolecular forces cannot be modeled. The anomalous modeling can be detected from the rather large values of the standard deviation of the estimates for the whole set of compounds, and from the unsatisfactory modeling of some of the subsets of these compounds. Thus the detected abnormalities can be used (i) to get an idea about weak intermolecular interactions such as hydration, self-association, and hydrogen-bond phenomena in solution, and (ii) to reshape the molecular descriptors with the introduction of parameters that allow better modeling. This last procedure should be used with care, bearing in mind that the solubility phenomenon is rather complex.
Maximum Leaf Spanning Trees of Growing Sierpinski Networks Models
Yao, Bing; Xu, Jin
2016-01-01
The dynamical phenomena of complex networks are very difficult to predict from local information due to the rich microstructures and corresponding complex dynamics. On the other hand, it is a formidable task to compute some stochastic parameters of a large network having thousands and thousands of nodes. We design several recursive algorithms for finding spanning trees having maximal leaves (MLS-trees) in an investigation of the topological structures of Sierpinski growing network models, and use MLS-trees to determine the kernels, dominating and balanced sets of the models. We propose a new stochastic method for the models, called the edge-cumulative distribution, and show that it obeys a power law distribution.
Maximum likelihood reconstruction for Ising models with asynchronous updates
Zeng, Hong-Li; Aurell, Erik; Hertz, John; Roudi, Yasser
2012-01-01
We describe how the couplings in a non-equilibrium Ising model can be inferred from observing the model history. Two cases of an asynchronous update scheme are considered: one in which we know both the spin history and the update times (times at which an attempt was made to flip a spin) and one in which we only know the spin history (i.e., the times at which spins were actually flipped). In both cases, maximizing the likelihood of the data leads to exact learning rules for the couplings in the model. For the first case, we show that one can average over all possible choices of update times to obtain a learning rule that depends only on spin correlations and not on the specific spin history. For the second case, the same rule can be derived within a further decoupling approximation. We study all methods numerically for fully asymmetric Sherrington-Kirkpatrick models, varying the data length, system size, temperature, and external field. Good convergence is observed in accordance with theoretical expectations.
Limited dependent variable models for panel data
Charlier, E.
1997-01-01
Many economic phenomena require limited dependent variable models for an appropriate treatment. In addition, panel data models allow the inclusion of unobserved individual-specific effects. These models are combined in this thesis. Distributional assumptions in the limited dependent variable models are ...
Sung Woo Park
2015-03-01
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.
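A generic Euler-Bernoulli sketch (not the paper's exact estimation model) of recovering stress and curvature from paired VWSG readings at the extreme fibers of a section; the modulus and depth are assumed values:

```python
# A minimal sketch, assuming generic Euler-Bernoulli beam behaviour.
E = 205e9            # Pa, steel Young's modulus (assumed)
h = 0.30             # m, section depth (assumed)

def stress_and_curvature(eps_top, eps_bot):
    """Extreme-fiber stress and curvature from top/bottom average strains."""
    curvature = (eps_top - eps_bot) / h              # 1/m
    # With gauges at the extreme fibers, the largest strain magnitude is
    # simply the larger of the two readings; Hooke's law gives the stress.
    sigma_max = E * max(abs(eps_top), abs(eps_bot))  # Pa
    return sigma_max, curvature

sigma, kappa = stress_and_curvature(eps_top=-250e-6, eps_bot=180e-6)
print(sigma / 1e6, "MPa;", kappa, "1/m")
```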
Pier A Mello; Eugene Kogan
2002-02-01
We present a maximum-entropy model for the transport of waves through a classically chaotic cavity in the presence of absorption. The entropy of the S-matrix statistical distribution is maximized, with the constraint ⟨Tr SS†⟩ = αn, where n is the dimensionality of S and 0 ≤ α ≤ 1. For α = 1 the S-matrix distribution concentrates on the unitarity sphere and we have no absorption; for α = 0 the distribution becomes a delta function at the origin and we have complete absorption. For strong absorption our result agrees with a number of analytical calculations already given in the literature. In that limit, the distribution of the individual (angular) transmission and reflection coefficients becomes exponential (Rayleigh statistics) even for n = 1. For n ≫ 1, Rayleigh statistics is attained even with no absorption; here we extend the study to α < 1. The model is compared with random-matrix-theory numerical simulations: it describes the problem very well for strong absorption, but fails for moderate and weak absorptions. The success of the model for strong absorption is understood in the light of a central-limit theorem. For weak absorption, some important physical constraint is missing in the construction of the model.
Grujicic, M.; Ramaswami, S.; Snipes, J. S.; Yavari, R.; Yen, C.-F.; Cheeseman, B. A.
2015-01-01
Our recently developed multi-physics computational model for the conventional gas metal arc welding (GMAW) joining process has been upgraded with respect to its predictive capabilities regarding the process optimization for the attainment of maximum ballistic limit within the weld. The original model consists of six modules, each dedicated to handling a specific aspect of the GMAW process, i.e., (a) electro-dynamics of the welding gun; (b) radiation-/convection-controlled heat transfer from the electric arc to the workpiece and mass transfer from the filler metal consumable electrode to the weld; (c) prediction of the temporal evolution and the spatial distribution of thermal and mechanical fields within the weld region during the GMAW joining process; (d) the resulting temporal evolution and spatial distribution of the material microstructure throughout the weld region; (e) spatial distribution of the as-welded material mechanical properties; and (f) spatial distribution of the material ballistic limit. In the present work, the model is upgraded through the introduction of the seventh module in recognition of the fact that identification of the optimum GMAW process parameters relative to the attainment of the maximum ballistic limit within the weld region entails the use of advanced optimization and statistical sensitivity analysis methods and tools. The upgraded GMAW process model is next applied to the case of butt welding of MIL A46100 (a prototypical high-hardness armor-grade martensitic steel) workpieces using filler metal electrodes made of the same material. The predictions of the upgraded GMAW process model pertaining to the spatial distribution of the material microstructure and ballistic limit-controlling mechanical properties within the MIL A46100 butt weld are found to be consistent with general expectations and prior observations.
Zhou, W.; Qiu, G. Y.; Oodo, S. O.; He, H.
2013-03-01
An increasing interest in wind energy and the advance of related technologies have increased the connection of wind power generation into electrical grids. This paper proposes an optimization model for determining the maximum capacity of wind farms in a power system. In this model, generator power output limits, voltage limits and thermal limits of branches in the grid system were considered in order to limit the steady-state security influence of wind generators on the power system. The optimization model was solved by a nonlinear primal-dual interior-point method. An IEEE-30 bus system with two wind farms was tested through simulation studies, and an analysis was conducted to verify the effectiveness of the proposed model. The results indicated that the model is efficient and reasonable.
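As a sketch of this problem class (not of the paper's IEEE-30 test case or its primal-dual implementation), the toy below maximizes total wind-farm capacity subject to invented, linearized branch-flow and voltage constraints, using SciPy's interior-point-style trust-constr solver:

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

# Hypothetical linearized sensitivities of branch flows and bus voltage
# to injected wind power (stand-ins for a full AC network model).
flow_sens = np.array([[0.6, 0.2],       # branch 1
                      [0.3, 0.7]])      # branch 2
flow_limits = np.array([40.0, 50.0])    # thermal limits (MW)
volt_sens = np.array([[0.004, 0.006]])  # p.u. voltage rise per MW
volt_margin = 0.05                      # allowed p.u. deviation

constraints = [LinearConstraint(flow_sens, -flow_limits, flow_limits),
               LinearConstraint(volt_sens, -volt_margin, volt_margin)]

# Maximize total wind capacity = minimize its negative.
res = minimize(lambda w: -w.sum(), x0=np.array([1.0, 1.0]),
               bounds=[(0.0, None), (0.0, None)],
               constraints=constraints, method='trust-constr')
print(res.x, -res.fun)  # per-farm capacities and total (MW)
```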
Reina-Campos, Marta; Kruijssen, J. M. Diederik
2017-08-01
We present a simple, self-consistent model to predict the maximum masses of giant molecular clouds (GMCs), stellar clusters and high-redshift clumps as a function of the galactic environment. Recent works have proposed that these maximum masses are set by shearing motions and centrifugal forces, but we show that this idea is inconsistent with the low masses observed across an important range of local-Universe environments, such as low-surface density galaxies and galaxy outskirts. Instead, we propose that feedback from young stars can disrupt clouds before the global collapse of the shear-limited area is completed. We develop a shear-feedback hybrid model that depends on three observable quantities: the gas surface density, the epicyclic frequency and the Toomre parameter. The model is tested in four galactic environments: the Milky Way, the Local Group galaxy M31, the spiral galaxy M83 and the high-redshift galaxy zC406690. We demonstrate that our model simultaneously reproduces the observed maximum masses of GMCs, clumps and clusters in each of these environments. We find that clouds and clusters in M31 and in the Milky Way are feedback-limited beyond radii of 8.4 and 4 kpc, respectively, whereas the masses in M83 and zC406690 are shear-limited at all radii. In zC406690, the maximum cluster masses decrease further due to their inspiral by dynamical friction. These results illustrate that the maximum masses change from being shear-limited to being feedback-limited as galaxies become less gas rich and evolve towards low shear. This explains why high-redshift clumps are more massive than GMCs in the local Universe.
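For orientation, the shear-limited ceiling in models of this kind is commonly identified with the Toomre mass, M_T = 4π⁵G²Σ³/κ⁴, set by the gas surface density Σ and the epicyclic frequency κ. The sketch below evaluates it for illustrative solar-neighbourhood-like numbers (assumed, not taken from the paper) and omits the feedback-limited branch, which requires additional timescale arguments:

```python
import numpy as np

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def toomre_mass(sigma_gas, kappa):
    """Shear-limited (Toomre) mass in Msun.

    sigma_gas : gas surface density [Msun / pc^2]
    kappa     : epicyclic frequency [km / s / pc]
    """
    return 4.0 * np.pi**5 * G**2 * sigma_gas**3 / kappa**4

# Illustrative values: Sigma ~ 10 Msun/pc^2, kappa ~ 0.04 km/s/pc.
print(f"M_T ~ {toomre_mass(10.0, 0.04):.1e} Msun")  # ~1e7 Msun
```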
Kinoshita, Takashi, E-mail: tkino@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Kawayama, Tomotaka, E-mail: kawayama_tomotaka@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Imamura, Youhei, E-mail: mamura_youhei@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Sakazaki, Yuki, E-mail: sakazaki@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Hirai, Ryo, E-mail: hirai_ryou@kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Ishii, Hidenobu, E-mail: shii_hidenobu@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Suetomo, Masashi, E-mail: jin_t_f_c@yahoo.co.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Matsunaga, Kazuko, E-mail: kmatsunaga@kouhoukai.or.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Azuma, Koichi, E-mail: azuma@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Fujimoto, Kiminori, E-mail: kimichan@med.kurume-u.ac.jp [Department of Radiology, Kurume University School of Medicine, Kurume (Japan); Hoshino, Tomoaki, E-mail: hoshino@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan)
2015-04-15
Highlights: • Computed tomography (CT) is often used for the diagnosis of chronic obstructive pulmonary disease. • CT scanning, however, is more expensive. • Plain chest radiography is simpler and cheaper; it is useful for detecting pulmonary emphysema, but not airflow limitation. • Our study demonstrated that the maximum inspiratory and expiratory plain chest radiography technique could detect severe airflow limitation. • We believe that the technique is helpful in diagnosing patients with chronic obstructive pulmonary disease. -- Abstract: Background: The usefulness of paired maximum inspiratory and expiratory (I/E) plain chest radiography (pCR) for the diagnosis of chronic obstructive pulmonary disease (COPD) is still unclear. Objectives: We examined whether measurement of the I/E ratio using paired I/E pCR could be used for detection of airflow limitation in patients with COPD. Methods: Eighty patients with COPD (GOLD stage I = 23, stage II = 32, stage III = 15, stage IV = 10) and 34 control subjects were enrolled. The I/E ratios of frontal and lateral lung areas, and lung distance between the apex and base on pCR views were analyzed quantitatively. Pulmonary function parameters were measured at the same time. Results: The I/E ratios for the frontal lung area (1.25 ± 0.01), the lateral lung area (1.29 ± 0.01), and the lung distance (1.18 ± 0.01) were significantly (p < 0.05) reduced in COPD patients compared with controls (1.31 ± 0.02 and 1.38 ± 0.02, and 1.22 ± 0.01, respectively). The I/E ratios in frontal and lateral areas, and lung distance were significantly (p < 0.05) reduced in severe (GOLD stage III) and very severe (GOLD stage IV) COPD as compared to control subjects, although the I/E ratios did not differ significantly between severe and very severe COPD. Moreover, the I/E ratios were significantly correlated with pulmonary function parameters. Conclusions: Measurement of I/E ratios on paired I/E pCR is simple and
Optimal Disturbance Accommodation with Limited Model Information
Farokhi, F; Johansson, K H
2011-01-01
The design of an optimal dynamic disturbance-accommodation controller with limited model information is considered. We adapt the family of limited model information control design strategies, defined earlier by the authors, to handle dynamic controllers. This family of limited model information design strategies constructs subcontrollers distributedly by accessing only local plant model information. The closed-loop performance of the dynamic controllers that they can produce is studied using a performance metric called the competitive ratio, which is the worst-case ratio of the cost of a control design strategy to the cost of the optimal control design with full model information.
Molecular Sticker Model Stimulation on Silicon for a Maximum Clique Problem.
Ning, Jianguo; Li, Yanmei; Yu, Wen
2015-06-12
Molecular computers (also called DNA computers), as an alternative to traditional electronic computers, are smaller in size but more energy efficient, and have massive parallel processing capacity. However, DNA computers may not outperform electronic computers owing to their higher error rates and some limitations of the biological laboratory. The stickers model, as a typical DNA-based computer, is computationally complete and universal, and can be viewed as a bit-vertically operating machine. This makes it attractive for silicon implementation. Inspired by the information processing method on the stickers computer, we propose a novel parallel computing model called DEM (DNA Electronic Computing Model) on System-on-a-Programmable-Chip (SOPC) architecture. Except for the significant difference in the computing medium--transistor chips rather than bio-molecules--the DEM works similarly to DNA computers in immense parallel information processing. Additionally, a plasma display panel (PDP) is used to show the change of solutions, and helps us directly see the distribution of assignments. The feasibility of the DEM is tested by applying it to compute a maximum clique problem (MCP) with eight vertices. Owing to the limited computing sources on SOPC architecture, the DEM could solve moderate-size problems in polynomial time.
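The bit-vertical character of the stickers machine has a direct software analogue: with eight vertices, every candidate vertex subset fits in a single byte, and all 2⁸ subsets can be scanned exhaustively. A sketch on a hypothetical 8-vertex graph (the paper's actual instance is not given here):

```python
from itertools import combinations

# Hypothetical 8-vertex graph given by its edge set.
edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4),
         (4, 5), (4, 6), (5, 6), (5, 7), (6, 7)}

def is_clique(vertices):
    return all((min(i, j), max(i, j)) in edges
               for i, j in combinations(vertices, 2))

# Scan all 2^8 subsets encoded as bit masks (the electronic analogue
# of the sticker model's bit-vertical memory words).
cliques = (m for m in range(256)
           if is_clique([v for v in range(8) if m >> v & 1]))
best = max(cliques, key=lambda m: bin(m).count("1"))
print(sorted(v for v in range(8) if best >> v & 1))  # e.g. [0, 1, 2]
```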
Asymptotic properties of maximum likelihood estimators in models with multiple change points
He, Heping; 10.3150/09-BEJ232
2011-01-01
Models with multiple change points are used in many fields; however, the theoretical properties of maximum likelihood estimators of such models have received relatively little attention. The goal of this paper is to establish the asymptotic properties of maximum likelihood estimators of the parameters of a multiple change-point model for a general class of models in which the form of the distribution can change from segment to segment and in which, possibly, there are parameters that are common to all segments. Consistency of the maximum likelihood estimators of the change points is established and the rate of convergence is determined; the asymptotic distribution of the maximum likelihood estimators of the parameters of the within-segment distributions is also derived. Since the approach used in single change-point models is not easily extended to multiple change-point models, these results require the introduction of new tools for analyzing the likelihood function in a multiple change-point model.
Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model
Roberts, James S.; Thompson, Vanessa M.
2011-01-01
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
Larsen, Ulrik; Pierobon, Leonardo; Wronski, Jorrit;
2014-01-01
to power. In this study we propose four linear regression models to predict the maximum obtainable thermal efficiency for simple and recuperated ORCs. A previously derived methodology is able to determine the maximum thermal efficiency among many combinations of fluids and processes, given the boundary...
GA-BASED MAXIMUM POWER DISSIPATION ESTIMATION OF VLSI SEQUENTIAL CIRCUITS OF ARBITRARY DELAY MODELS
Lu Junming; Lin Zhenghui
2002-01-01
In this paper, the glitching activity and process variations in the maximum power dissipation estimation of CMOS circuits are introduced. Given a circuit and the gate library,a new Genetic Algorithm (GA)-based technique is developed to determine the maximum power dissipation from a statistical point of view. The simulation on ISCAS-89 benchmarks shows that the ratio of the maximum power dissipation with glitching activity over the maximum power under zero-delay model ranges from 1.18 to 4.02. Compared with the traditional Monte Carlo-based technique, the new approach presented in this paper is more effective.
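A stripped-down illustration of the search formulation, with an invented toy netlist and zero-delay toggles as the fitness (rather than the paper's delay models and ISCAS-89 circuits): a GA evolves pairs of consecutive input vectors to maximize gate-output switching activity.

```python
import random

random.seed(1)
N = 8  # number of primary inputs

def gate_outputs(x):
    """Toy combinational circuit; returns all internal gate outputs."""
    g1 = x[0] & x[1]
    g2 = x[2] | x[3]
    g3 = g1 ^ g2
    g4 = x[4] & (1 - x[5])          # AND with inverted input
    g5 = g3 | g4
    g6 = g5 ^ x[6] ^ x[7]
    return [g1, g2, g3, g4, g5, g6]

def toggles(pair):
    """Fitness: gate-output toggles between two consecutive vectors."""
    a, b = pair[:N], pair[N:]
    return sum(u != v for u, v in zip(gate_outputs(a), gate_outputs(b)))

pop = [[random.randint(0, 1) for _ in range(2 * N)] for _ in range(30)]
for _ in range(50):                        # generations
    pop.sort(key=toggles, reverse=True)
    parents = pop[:10]                     # truncation selection
    children = []
    while len(children) < 20:
        p, q = random.sample(parents, 2)
        cut = random.randrange(1, 2 * N)   # one-point crossover
        child = p[:cut] + q[cut:]
        child[random.randrange(2 * N)] ^= 1  # point mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=toggles)
print(toggles(best), best)                 # best toggle count found
```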
Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai
2014-07-07
In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates.
Conditional maximum likelihood estimation in semiparametric transformation model with LTRC data.
Chen, Chyong-Mei; Shen, Pao-Sheng
2017-02-06
Left-truncated data often arise in epidemiology and individual follow-up studies due to a biased sampling plan, since subjects with shorter survival times tend to be excluded from the sample. Moreover, the survival times of recruited subjects are often subject to right censoring. In this article, a general class of semiparametric transformation models that includes the proportional hazards model and the proportional odds model as special cases is studied for the analysis of left-truncated and right-censored data. We propose a conditional likelihood approach and develop the conditional maximum likelihood estimators (cMLE) for the regression parameters and cumulative hazard function of these models. The derived score equations for the regression parameters and the infinite-dimensional function suggest an iterative algorithm for the cMLE. The cMLE is shown to be consistent and asymptotically normal. The limiting variances of the estimators can be consistently estimated using the inverse of the negative Hessian matrix. Intensive simulation studies are conducted to investigate the performance of the cMLE. An application to the Channing House data is given to illustrate the methodology.
Modeling the Maximum Spreading of Liquid Droplets Impacting Wetting and Nonwetting Surfaces.
Lee, Jae Bong; Derome, Dominique; Guyer, Robert; Carmeliet, Jan
2016-02-09
Droplet impact has been imaged on different rigid, smooth, and rough substrates for three liquids with different viscosity and surface tension, with special attention to the lower impact velocity range. Of all studied parameters, only surface tension and viscosity, thus the liquid properties, clearly play a role in terms of the attained maximum spreading ratio of the impacting droplet. Surface roughness and type of surface (steel, aluminum, and parafilm) slightly affect the dynamic wettability and maximum spreading at low impact velocity. The dynamic contact angle at maximum spreading has been identified to properly characterize this dynamic spreading process, especially at low impact velocity where dynamic wetting plays an important role. The dynamic contact angle is found to be generally higher than the equilibrium contact angle, showing that statically wetting surfaces can become less wetting or even nonwetting under dynamic droplet impact. An improved energy balance model for maximum spreading ratio is proposed based on a correct analytical modeling of the time at maximum spreading, which determines the viscous dissipation. Experiments show that the time at maximum spreading decreases with impact velocity depending on the surface tension of the liquid, and a scaling with maximum spreading diameter and surface tension is proposed. A second improvement is based on the use of the dynamic contact angle at maximum spreading, instead of quasi-static contact angles, to describe the dynamic wetting process at low impact velocity. This improved model showed good agreement with experiments for the maximum spreading ratio versus impact velocity for different liquids, and a better prediction compared to other models in literature. In particular, scaling according to We^(1/2) is found invalid for low velocities, since the curves bend over to higher maximum spreading ratios due to the dynamic wetting process.
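For comparison, a widely quoted closed-form energy-balance estimate of the maximum spreading ratio is that of Pasandideh-Fard et al. (1996), β_max = sqrt((We + 12) / (3(1 − cos θ_a) + 4We/√Re)); unlike the improved model described above, it uses a quasi-static advancing contact angle θ_a and a fixed spreading-time scaling:

```python
import numpy as np

def beta_max(We, Re, theta_a_deg):
    """Pasandideh-Fard et al. (1996) energy-balance estimate of the
    maximum spreading ratio D_max / D_0 for droplet impact."""
    th = np.radians(theta_a_deg)
    return np.sqrt((We + 12.0) /
                   (3.0 * (1.0 - np.cos(th)) + 4.0 * We / np.sqrt(Re)))

# Example: 2 mm water droplet impacting at 1 m/s, 90 deg advancing angle.
rho, sigma, mu, D0, v = 1000.0, 0.072, 1.0e-3, 2.0e-3, 1.0
We = rho * v**2 * D0 / sigma   # Weber number, ~28
Re = rho * v * D0 / mu         # Reynolds number, 2000
print(beta_max(We, Re, 90.0))  # ~2.7
```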
A MODEL FOR THE "MAXIMUM CAPACITY" OF ROOMS OR OF SPACE
Zi-yan WU
2003-01-01
A discrete optimum mathematical model to derive the "maximum capacity" of people in a room or in a space used for public gatherings is developed. There are two outcomes in the model. One is focused on whether the person farthest from exits can escape from the room. The other concentrates on the evacuation time of all the people in the room. According to the results of the two outcomes, a more reasonable "maximum capacity" can be worked out in a simple way.
Applying the maximum information principle to cell transmission model of traffic flow
刘喜敏; 卢守峰
2013-01-01
This paper integrates the maximum information principle with the Cell Transmission Model (CTM) to formulate the velocity distribution evolution of vehicle traffic flow. The proposed discrete traffic kinetic model uses the cell transmission model to calculate the macroscopic variables of the vehicle transmission, and the maximum information principle to examine the velocity distribution in each cell. The velocity distribution based on the maximum information principle is solved by the Lagrange multiplier method. The advantage of the proposed model is that it can simultaneously calculate the hydrodynamic variables and velocity distribution at the cell level. An example shows how the proposed model works. The proposed model is a hybrid traffic simulation model, which can be used to understand the self-organization phenomena in traffic flows and predict the traffic evolution.
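The per-cell entropy maximization has the classical Gibbs form: maximizing −Σᵢ pᵢ ln pᵢ subject to normalization and a prescribed mean velocity yields pᵢ ∝ exp(−λvᵢ), with the single Lagrange multiplier λ fixed by the mean. A sketch with an invented velocity grid and a target mean as it might be handed over by the CTM step:

```python
import numpy as np
from scipy.optimize import brentq

v = np.linspace(0.0, 30.0, 16)   # velocity grid of one cell (m/s), assumed
v_mean = 12.0                    # mean velocity from the CTM step, assumed

def mean_velocity(lam):
    w = np.exp(-lam * v)         # Gibbs weights exp(-lambda * v)
    return (w / w.sum()) @ v

# Solve for the Lagrange multiplier that reproduces the target mean;
# normalization is enforced explicitly above.
lam = brentq(lambda l: mean_velocity(l) - v_mean, -5.0, 5.0)
p = np.exp(-lam * v)
p /= p.sum()
print(lam, p.round(4))
```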
Beltrán, M C; Romero, T; Althaus, R L; Molina, M P
2013-01-01
The Charm maximum residue limit β-lactam and tetracycline test (Charm MRL BLTET; Charm Sciences Inc., Lawrence, MA) is an immunoreceptor assay utilizing Rapid One-Step Assay lateral flow technology that detects...
A MAXIMUM ENTROPY CHUNKING MODEL WITH N-FOLD TEMPLATE CORRECTION
Anonymous
2007-01-01
This letter presents a new chunking method based on a Maximum Entropy (ME) model with an N-fold template correction model. First, two types of machine learning models are described. Based on the analysis of the two models, a chunking model which combines the benefits of the conditional probability model and the rule-based model is then proposed. The selection of features and rule templates in the chunking model is discussed. Experimental results for the CoNLL-2000 corpus show that this approach achieves impressive accuracy in terms of the F-score: 92.93%. Compared with the ME model and the ME Markov model, the new chunking model achieves better performance.
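Maximum-entropy classification over feature templates is mathematically the same as multinomial logistic regression, so a minimal chunker skeleton can be put together with standard tooling; the features and training fragments below are toys, and the letter's N-fold template correction step is not reproduced:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def feats(tokens, i):
    """Tiny per-token feature template (toy stand-in)."""
    return {"w": tokens[i].lower(),
            "prev": tokens[i - 1].lower() if i else "<s>",
            "cap": tokens[i][0].isupper()}

train = [(["He", "saw", "the", "big", "dog"],
          ["B-NP", "B-VP", "B-NP", "I-NP", "I-NP"]),
         (["The", "dog", "barked"],
          ["B-NP", "I-NP", "B-VP"])]
X = [feats(toks, i) for toks, _ in train for i in range(len(toks))]
y = [tag for _, tags in train for tag in tags]

# Multinomial logistic regression == a maximum-entropy classifier.
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.predict([feats(["The", "big", "cat"], i) for i in range(3)]))
```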
Potential distribution of Xylella fastidiosa in Italy: a maximum entropy model
Luciano BOSSO
2016-05-01
Full Text Available Species distribution models may provide realistic scenarios to explain the influence of bioclimatic variables in the context of emerging plant pathogens. Xylella fastidiosa is a xylem-limited Gram-negative bacterium causing severe diseases in many plant species. We developed a maximum entropy model for X. fastidiosa in Italy. Our objectives were to carry out a preliminary analysis of the species’ potential geographical distribution and determine which eco-geographical variables may favour its presence in other Italian regions besides Apulia. The analysis of single variable contribution showed that precipitation of the driest (40.3%) and wettest (30.4%) months were the main factors influencing model performance. Altitude, precipitation of warmest quarter, mean temperature of coldest quarter, and land cover provided a total contribution of 19.5%. Based on the model predictions, X. fastidiosa has a high probability (> 0.8) of colonizing areas characterized by: (i) low altitude (0–150 m a.s.l.); (ii) precipitation in the driest month < 10 mm, in the wettest month ranging between 80–110 mm and during the warmest quarter < 60 mm; (iii) mean temperature of coldest quarter ≥ 8°C; (iv) agricultural areas comprising intensive agriculture, complex cultivation patterns, olive groves, annual crops associated with permanent crops, orchards and vineyards; forest (essentially oak woodland); and Mediterranean shrubland. Species distribution models showed a high probability of X. fastidiosa occurrence in the regions of Apulia, Calabria, Basilicata, Sicily, Sardinia and coastal areas of Campania, Lazio and south Tuscany. Maxent models achieved excellent levels of predictive performance according to area under curve (AUC), true skill statistic (TSS) and minimum difference between training and testing AUC data (AUCdiff). Our study indicated that X. fastidiosa has the potential to overcome the current boundaries of distribution and affect areas of Italy outside Apulia.
Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM
Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman
2012-01-01
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…
A note on the maximum likelihood estimator in the gamma regression model
Jerzy P. Rydlewski
2009-01-01
Full Text Available This paper considers a nonlinear regression model, in which the dependent variable has the gamma distribution. A model is considered in which the shape parameter of the random variable is the sum of continuous and algebraically independent functions. The paper proves that there is exactly one maximum likelihood estimator for the gamma regression model.
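For the more familiar special case in which the mean (rather than the shape) is modelled through a link function, the gamma-regression MLE is available off the shelf; a minimal sketch on simulated data (the paper's setting, with the shape parameter a sum of functions, would need a custom likelihood):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 2.0, n)
X = sm.add_constant(x)
mu = np.exp(0.5 + 0.8 * x)                 # true mean under a log link
y = rng.gamma(shape=3.0, scale=mu / 3.0)   # gamma responses, shape k = 3

# Maximum likelihood fit of a gamma GLM with log link (via IRLS).
fit = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(fit.params)  # should be close to (0.5, 0.8)
```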
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
Modelling non-stationary annual maximum flood heights in the lower Limpopo River basin of Mozambique
Daniel Maposa
2016-03-01
Full Text Available In this article we fit a time-dependent generalised extreme value (GEV) distribution to annual maximum flood heights at three sites: Chokwe, Sicacate and Combomune in the lower Limpopo River basin of Mozambique. A GEV distribution is fitted to six annual maximum time series models at each site, namely: annual daily maximum (AM1), annual 2-day maximum (AM2), annual 5-day maximum (AM5), annual 7-day maximum (AM7), annual 10-day maximum (AM10) and annual 30-day maximum (AM30). Non-stationary time-dependent GEV models with a linear trend in location and scale parameters are considered in this study. The results show lack of sufficient evidence to indicate a linear trend in the location parameter at all three sites. On the other hand, the findings in this study reveal strong evidence of the existence of a linear trend in the scale parameter at Combomune and Sicacate, whilst the scale parameter had no significant linear trend at Chokwe. Further investigation in this study also reveals that the location parameter at Sicacate can be modelled by a nonlinear quadratic trend; however, the complexity of the overall model is not worthwhile in fit over a time-homogeneous model. This study shows the importance of extending the time-homogeneous GEV model to incorporate climate change factors such as trend in the lower Limpopo River basin, particularly in this era of global warming and a changing climate. Keywords: nonstationary extremes; annual maxima; lower Limpopo River; generalised extreme value
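A time-dependent GEV of the kind fitted here can be estimated by direct numerical maximum likelihood. The sketch below allows a linear trend in the location parameter, μ(t) = μ₀ + μ₁t, on synthetic annual maxima; a trend in the scale parameter would enter analogously, e.g. σ(t) = exp(σ₀ + σ₁t):

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(42)
t = np.arange(50)  # years
# Synthetic annual maxima with a linear trend in location.
z = genextreme.rvs(c=-0.1, loc=100.0 + 0.5 * t, scale=15.0,
                   random_state=rng)

def nll(theta):
    mu0, mu1, log_sigma, xi = theta
    # scipy's shape c is the negative of the usual GEV xi convention.
    return -genextreme.logpdf(z, c=-xi, loc=mu0 + mu1 * t,
                              scale=np.exp(log_sigma)).sum()

res = minimize(nll, x0=[z.mean(), 0.0, np.log(z.std()), 0.1],
               method='Nelder-Mead')
mu0, mu1, log_sigma, xi = res.x
print(mu1, np.exp(log_sigma), xi)  # trend, scale and shape estimates
```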
Matrix model calculations beyond the spherical limit
Ambjoern, J. (Niels Bohr Institute, Copenhagen (Denmark)); Chekhov, L. (L.P.T.H.E., Universite Pierre et Marie Curie, 75 - Paris (France)); Kristjansen, C.F. (Niels Bohr Institute, Copenhagen (Denmark)); Makeenko, Yu. (Institute of Theoretical and Experimental Physics, Moscow (Russian Federation))
1993-08-30
We propose an improved iterative scheme for calculating higher genus contributions to the multi-loop (or multi-point) correlators and the partition function of the hermitian one matrix model. We present explicit results up to genus two. We develop a version which gives directly the result in the double scaling limit and present explicit results up to genus four. Using the latter version we prove that the hermitian and the complex matrix model are equivalent in the double scaling limit and that in this limit they are both equivalent to the Kontsevich model. We discuss how our results away from the double scaling limit are related to the structure of moduli space. (orig.)
The problem of the maximum volumes and particle horizon in the Friedmann universe model
Gong, S. M.
1989-08-01
The maximum volume of the closed Friedmann universe is further investigated and is shown to be 2π²R³(t), instead of π²R³(t) as found previously. This discrepancy comes from the incomplete use of the volume formula of 3-dimensional spherical space in the astronomical literature. Mathematically, the maximum volume exists at any cosmic time t in a 3-dimensional spherical case. However, the Friedmann closed universe in expansion reaches its maximum volume only at the time of the maximum scale factor. The particle horizon places no limitation on the farthest objects in the closed Friedmann universe if the proper distance of objects is compared with the particle horizon, as it should be. Absurdity arises if the luminosity distance of objects is compared with the proper distance of the particle horizon.
Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models
Rasmussen, Klaus Bolding
1994-01-01
The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model
Bremner, Paul G.; Vazquez, Gabriel; Christiano, Daniel J.; Trout, Dawn H.
2016-01-01
Prediction of the maximum expected electromagnetic pick-up of conductors inside a realistic shielding enclosure is an important canonical problem for system-level EMC design of spacecraft, launch vehicles, aircraft and automobiles. This paper introduces a simple statistical power balance model for prediction of the maximum expected current in a wire conductor inside an aperture enclosure. It calculates both the statistical mean and variance of the immission from the physical design parameters of the problem. Familiar probability density functions can then be used to predict the maximum expected immission for design purposes. The statistical power balance model requires minimal EMC design information and solves orders of magnitude faster than existing numerical models, making it ultimately viable for scaled-up, full system-level modeling. Both experimental test results and full wave simulation results are used to validate the foundational model.
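The last step above, turning a power-balance mean into a maximum expected value, can be illustrated under the standard reverberant-cavity assumption that received power is exponentially distributed about its mean; percentile and expected-maximum estimates then follow in closed form (the mean power and sample count below are invented):

```python
import numpy as np

p_mean = 2.0e-6  # power-balance mean received power (W), assumed
N = 100          # independent samples (stir states / frequency steps)

# Exponential model: Prob(P > p) = exp(-p / p_mean).
p95 = -p_mean * np.log(1.0 - 0.95)                  # 95th percentile
p_max = p_mean * (1.0 / np.arange(1, N + 1)).sum()  # E[max of N] = mean * H_N

print(f"95th percentile:      {p95:.3e} W")
print(f"expected max of {N}: {p_max:.3e} W")
```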
Hyland, D. C.
1983-01-01
A stochastic structural control model is described. In contrast to the customary deterministic model, the stochastic minimum data/maximum entropy model directly incorporates the least possible a priori parameter information. The approach is to adopt this model as the basic design model, thus incorporating the effects of parameter uncertainty at a fundamental level, and design mean-square optimal controls (that is, choose the control law to minimize the average of a quadratic performance index over the parameter ensemble).
Matsumoto, Atsushi; Hasegawa, Masaru; Matsui, Keiju
In this paper, a novel position sensorless control method for interior permanent magnet synchronous motors (IPMSMs) that is based on a novel flux model suitable for maximum torque control has been proposed. Maximum torque per ampere (MTPA) control is often utilized for driving IPMSMs with the maximum efficiency. In order to implement this control, generally, the parameters are required to be accurate. However, the inductance varies dramatically because of magnetic saturation, which has been one of the most important problems in recent years. Therefore, the conventional MTPA control method fails to achieve maximum efficiency for IPMSMs because of parameter mismatches. In this paper, first, a novel flux model has been proposed for realizing the position sensorless control of IPMSMs, which is insensitive to Lq. In addition, in this paper, it has been shown that the proposed flux model can approximately estimate the maximum torque control (MTC) frame, which is a new coordinate frame aligned with the current vector for MTPA control. Next, in this paper, a precise estimation method for the MTC frame has been proposed. By this method, highly accurate maximum torque control can be achieved. A decoupling control algorithm based on the proposed model has also been addressed in this paper. Finally, some experimental results demonstrate the feasibility and effectiveness of the proposed method.
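For reference, the constant-parameter MTPA condition that saturation undermines: with torque T = (3/2)P[ψ_a i_q + (L_d − L_q) i_d i_q], maximizing torque per ampere gives i_d = ψ_a/(2(L_q − L_d)) − sqrt(ψ_a²/(4(L_q − L_d)²) + i_q²). A sketch with hypothetical machine parameters (when L_q varies with saturation, as the paper stresses, this locus drifts away from the true optimum):

```python
import numpy as np

# Hypothetical IPMSM parameters (not from the paper).
psi_a, Ld, Lq, P = 0.1, 2.0e-3, 5.0e-3, 4  # Wb, H, H, pole pairs

def mtpa_id(iq):
    """Constant-parameter MTPA d-axis current for a given iq."""
    dL = Lq - Ld
    return psi_a / (2.0 * dL) - np.sqrt(psi_a**2 / (4.0 * dL**2) + iq**2)

def torque(i_d, i_q):
    return 1.5 * P * (psi_a * i_q + (Ld - Lq) * i_d * i_q)

for iq in (5.0, 10.0, 20.0):
    i_d = mtpa_id(iq)
    print(f"iq={iq:5.1f} A  id={i_d:7.2f} A  T={torque(i_d, iq):6.2f} N*m")
```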
Kimble, Michael C.; White, Ralph E.
1991-01-01
A mathematical model of a hydrogen/oxygen alkaline fuel cell is presented that can be used to predict the polarization behavior under various power loads. The major limitations to achieving high power densities are indicated and methods to increase the maximum attainable power density are suggested. The alkaline fuel cell model describes the phenomena occurring in the solid, liquid, and gaseous phases of the anode, separator, and cathode regions based on porous electrode theory applied to three phases. Fundamental equations of chemical engineering that describe conservation of mass and charge, species transport, and kinetic phenomena are used to develop the model by treating all phases as a homogeneous continuum.
Parameter Estimation for an Electric Arc Furnace Model Using Maximum Likelihood
Jesser J. Marulanda-Durango
2012-12-01
Full Text Available In this paper, we present a methodology for estimating the parameters of a model for an electrical arc furnace by using maximum likelihood estimation. Maximum likelihood estimation is one of the most employed methods for parameter estimation in practical settings. The model for the electrical arc furnace that we consider takes into account the non-periodic and non-linear variations in the voltage-current characteristic. We use NETLAB, an open source MATLAB® toolbox, for solving a set of non-linear algebraic equations that relate all the parameters to be estimated. Results obtained through simulation of the model in PSCAD™ are contrasted against real measurements taken during the furnace's most critical operating point. We show how the model for the electrical arc furnace, with appropriate parameter tuning, captures in great detail the real voltage and current waveforms generated by the system. Results obtained show a maximum error of 5% for the current's root mean square value.
Exact computation of the maximum-entropy potential of spiking neural-network models.
Cofré, R; Cessac, B
2014-05-01
Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The maximum-entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuromimetic models) provide a probabilistic mapping between the stimulus, network architecture, and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuromimetic and maximum-entropy models.
NONSMOOTH MODEL FOR PLASTIC LIMIT ANALYSIS AND ITS SMOOTHING ALGORITHM
LI Jian-yu; PAN Shao-hua; LI Xing-si
2006-01-01
By means of the Lagrange duality theory of the convex program, a dual problem of Hill's maximum plastic work principle under Mises' yield condition has been derived, whereby a non-differentiable convex optimization model for the limit analysis is developed. With this model, it is not necessary to linearize the yield condition, and its discrete form becomes a minimization problem of the sum of Euclidean norms subject to linear constraints. Aimed at resolving the non-differentiability of Euclidean norms, a smoothing algorithm for the limit analysis of perfect-plastic continuum media is proposed. Its efficiency is demonstrated by computing the limit load factor and the collapse state for some plane stress and plane strain problems.
On the Existence and Uniqueness of Maximum-Likelihood Estimates in the Rasch Model.
Fischer, Gerhard H.
1981-01-01
Necessary and sufficient conditions for the existence and uniqueness of a solution of the so-called "unconditional" and the "conditional" maximum-likelihood estimation equations in the dichotomous Rasch model are given. It is shown how to apply the results in practical uses of the Rasch model. (Author/JKS)
Genetic Analysis of Daily Maximum Milking Speed by a Random Walk Model in Dairy Cows
Karacaören, Burak; Janss, Luc; Kadarmideen, Haja
Data were obtained from dairy cows stationed at research farm ETH Zurich for maximum milking speed. The main aims of this paper are a) to evaluate if the Wood curve is suitable to model mean lactation curve b) to predict longitudinal breeding values by random regression and random walk models...
Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables
Song, Xin-Yuan; Lee, Sik-Yum
2005-01-01
In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…
Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates
Lee, Sik-Yum; Song, Xin-Yuan
2005-01-01
In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…
Marginal Maximum Likelihood Estimation of a Latent Variable Model with Interaction
Cudeck, Robert; Harring, Jeffrey R.; du Toit, Stephen H. C.
2009-01-01
There has been considerable interest in nonlinear latent variable models specifying interaction between latent variables. Although it seems to be only slightly more complex than linear regression without the interaction, the model that includes a product of latent variables cannot be estimated by maximum likelihood assuming normality.…
Modelling the maximum voluntary joint torque/angular velocity relationship in human movement.
Yeadon, Maurice R; King, Mark A; Wilson, Cassie
2006-01-01
The force exerted by a muscle is a function of the activation level and the maximum (tetanic) muscle force. In "maximum" voluntary knee extensions muscle activation is lower for eccentric muscle velocities than for concentric velocities. The aim of this study was to model this "differential activation" in order to calculate the maximum voluntary knee extensor torque as a function of knee angular velocity. Torque data were collected on two subjects during maximal eccentric-concentric knee extensions using an isovelocity dynamometer with crank angular velocities ranging from 50 to 450° s⁻¹. The theoretical tetanic torque/angular velocity relationship was modelled using a four parameter function comprising two rectangular hyperbolas while the activation/angular velocity relationship was modelled using a three parameter function that rose from submaximal activation for eccentric velocities to full activation for high concentric velocities. The product of these two functions gave a seven parameter function which was fitted to the joint torque/angular velocity data, giving unbiased root mean square differences of 1.9% and 3.3% of the maximum torques achieved. Differential activation accounts for the non-hyperbolic behaviour of the torque/angular velocity data for low concentric velocities. The maximum voluntary knee extensor torque that can be exerted may be modelled accurately as the product of functions defining the maximum torque and the maximum voluntary activation level. Failure to include differential activation considerations when modelling maximal movements will lead to errors in the estimation of joint torque in the eccentric phase and low velocity concentric phase.
Animal models of preeclampsia; uses and limitations.
McCarthy, F P
2012-01-31
Preeclampsia remains a leading cause of maternal and fetal morbidity and mortality and has an unknown etiology. The limited progress made regarding new treatments to reduce the incidence and severity of preeclampsia has been attributed to the difficulties faced in the development of suitable animal models for the mechanistic research of this disease. In addition, animal models need hypotheses on which to be based and the slow development of testable hypotheses has also contributed to this poor progress. The past decade has seen significant advances in our understanding of preeclampsia and the development of viable reproducible animal models has contributed significantly to these advances. Although many of these models have features of preeclampsia, they are still poor overall models of the human disease and limited due to lack of reproducibility and because they do not include the complete spectrum of pathophysiological changes associated with preeclampsia. This review aims to provide a succinct and comprehensive assessment of current animal models of preeclampsia, their uses and limitations with particular attention paid to the best validated and most comprehensive models, in addition to those models which have been utilized to investigate potential therapeutic interventions for the treatment or prevention of preeclampsia.
Mikosch, Thomas Valentin; Moser, Martin
2013-01-01
We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable.
Maja Olsbjerg
2015-10-01
Full Text Available Item response theory models are often applied when a number of items are used to measure a unidimensional latent variable. Originally proposed and used within educational research, they are also used when the focus is on physical functioning or psychological wellbeing. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a SAS macro that fits two-dimensional polytomous Rasch models using a specification of the model that is sufficiently flexible to accommodate longitudinal Rasch models. The macro estimates item parameters using marginal maximum likelihood estimation. A graphical presentation of item characteristic curves is included.
Efficiency at maximum power and efficiency fluctuations in a linear Brownian heat-engine model
Park, Jong-Min; Chun, Hyun-Myung; Noh, Jae Dong
2016-07-01
We investigate the stochastic thermodynamics of a two-particle Langevin system. Each particle is in contact with a heat bath at a different temperature, T1 and T2 (< T1), respectively, so that the system operates as an autonomous heat engine performing work against the external driving force. Linearity of the system enables us to examine the thermodynamic properties of the engine analytically. We find that the efficiency of the engine at maximum power η_MP is given by η_MP = 1 − √(T2/T1). This universal form has been known as a characteristic of endoreversible heat engines. Our result extends the universal behavior of η_MP to non-endoreversible engines. We also obtain the large deviation function of the probability distribution for the stochastic efficiency in the overdamped limit. The large deviation function takes its minimum value at the macroscopic efficiency η = η̄ and increases monotonically until it reaches plateaus when η ≤ η_L and η ≥ η_R, with model-dependent parameters η_R and η_L.
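A quick numerical reading of the main result, comparing η_MP with the Carnot bound for the same baths:

```python
import numpy as np

T1, T2 = 600.0, 300.0            # bath temperatures (K), illustrative
eta_carnot = 1.0 - T2 / T1       # = 0.5
eta_mp = 1.0 - np.sqrt(T2 / T1)  # efficiency at maximum power, ~0.293
print(eta_carnot, eta_mp)
```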
Incorporating the Hayflick Limit into a model of Telomere Dynamics
Cyrenne, Benoit M
2013-01-01
A model of telomere dynamics is proposed and examined. Our model, which extends a previously introduced two-compartment model that incorporates stem cells as progenitors of new cells, imposes the Hayflick Limit, the maximum number of cell divisions that are possible. This new model leads to cell populations for which the average telomere length is not necessarily a monotonically decreasing function of time, in contrast to previously published models. We provide a phase diagram indicating where such results would be expected. In addition, qualitatively different results are obtained for the evolution of the total cell population. Last, in comparison to available leukocyte baboon data, this new model is shown to provide a better fit to biological data.
Maximum Likelihood Estimation of Time-Varying Loadings in High-Dimensional Factor Models
Mikkelsen, Jakob Guldbæk; Hillebrand, Eric; Urga, Giovanni
In this paper, we develop a maximum likelihood estimator of time-varying loadings in high-dimensional factor models. We specify the loadings to evolve as stationary vector autoregressions (VAR) and show that consistent estimates of the loadings parameters can be obtained by a two-step maximum likelihood estimation procedure. In the first step, principal components are extracted from the data to form factor estimates. In the second step, the parameters of the loadings VARs are estimated as a set of univariate regression models with time-varying coefficients. We document the finite
Madsen, Henrik; Rasmussen, Peter F.; Rosbjerg, Dan
1997-01-01
Two different models for analyzing extreme hydrologic events, based on, respectively, partial duration series (PDS) and annual maximum series (AMS), are compared. The PDS model assumes a generalized Pareto distribution for modeling threshold exceedances, corresponding to a generalized extreme value distribution for annual maxima. The performance of the two models in terms of the uncertainty of the T-year event estimator is evaluated in the cases of estimation with, respectively, the maximum likelihood (ML) method, the method of moments (MOM), and the method of probability weighted moments (PWM)... model with ML estimation for large positive shape parameters. Since heavy-tailed distributions, corresponding to negative shape parameters, are far the most common in hydrology, the PDS model generally is to be preferred for at-site quantile estimation.
Weak diffusion limits of dynamic conditional correlation models
Hafner, Christian M.; Laurent, Sebastien; Violante, Francesco
The properties of dynamic conditional correlation (DCC) models are still not entirely understood. This paper fills one of the gaps by deriving weak diffusion limits of a modified version of the classical DCC model. The limiting system of stochastic differential equations is characterized by a diffusion matrix of reduced rank. The degeneracy is due to perfect collinearity between the innovations of the volatility and correlation dynamics. For the special case of constant conditional correlations, a non-degenerate diffusion limit can be obtained. Alternative sets of conditions are considered for the rate of convergence of the parameters, obtaining time-varying but deterministic variances and/or correlations. A Monte Carlo experiment confirms that the quasi approximate maximum likelihood (QAML) method to estimate the diffusion parameters is inconsistent for any fixed frequency, but that it may
Mona Nazeri; Kamaruzaman Jusoff; Nima Madani; Ahmad Rodzi Mahmud; Abdul Rani Bahman; Lalit Kumar
2012-01-01
Species distribution models are among the available tools for mapping the geographical distribution and potentially suitable habitats of a species. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify ...
A technique for estimating maximum harvesting effort in a stochastic fishery model
Ram Rup Sarkar; J Chattopadhayay
2003-06-01
Exploitation of biological resources and the harvest of population species are commonly practiced in fisheries, forestry and wildlife management. Estimation of maximum harvesting effort has a great impact on the economics of fisheries and other bio-resources. The present paper deals with the problem of a bioeconomic fishery model under environmental variability. A technique for finding the maximum harvesting effort in a fluctuating environment has been developed for a two-species competitive system, which shows that under realistic environmental variability the maximum harvesting effort is less than what is estimated in the deterministic model. This method also enables us to find out the safe regions in the parametric space for which the chance of extinction of the species is minimized. A real life fishery problem has been considered to obtain the inaccessible parameters of the system in a systematic way. Such studies may help resource managers to get an idea for controlling the system.
Maximum Likelihood Inference for the Cox Regression Model with Applications to Missing Covariates.
Chen, Ming-Hui; Ibrahim, Joseph G; Shao, Qi-Man
2009-10-01
In this paper, we carry out an in-depth theoretical investigation for existence of maximum likelihood estimates for the Cox model (Cox, 1972, 1975) both in the full data setting as well as in the presence of missing covariate data. The main motivation for this work arises from missing data problems, where models can easily become difficult to estimate with certain missing data configurations or large missing data fractions. We establish necessary and sufficient conditions for existence of the maximum partial likelihood estimate (MPLE) for completely observed data (i.e., no missing data) settings as well as sufficient conditions for existence of the maximum likelihood estimate (MLE) for survival data with missing covariates via a profile likelihood method. Several theorems are given to establish these conditions. A real dataset from a cancer clinical trial is presented to further illustrate the proposed methodology.
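The nonexistence issue is the survival-data analogue of separation in logistic regression: when, say, the subjects with a binary covariate equal to one all fail earliest, the partial likelihood increases without bound in that coefficient. A self-contained numerical illustration with invented data:

```python
import numpy as np

# Hypothetical data with "monotone likelihood": the two subjects with
# x = 1 fail before all of the x = 0 subjects.
time  = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
event = np.array([1,   1,   1,   1,   0,   1])
x     = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0.0])

def neg_log_partial_likelihood(beta):
    nll = 0.0
    for i in np.where(event == 1)[0]:
        at_risk = time >= time[i]            # risk set at this event time
        nll -= beta * x[i] - np.log(np.exp(beta * x[at_risk]).sum())
    return nll

for b in (0.0, 1.0, 5.0, 10.0, 20.0):
    print(b, neg_log_partial_likelihood(b))  # decreases in beta: no MPLE
```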
Kiviet, J.F.; Phillips, G.D.A.
2014-01-01
In dynamic regression models, conditional maximum likelihood (least-squares) coefficient and variance estimators are biased. Using expansion techniques, an approximation to the bias in variance estimation is obtained, yielding a bias-corrected variance estimator. This is achieved for both the standard
Maximum entropy production: Can it be used to constrain conceptual hydrological models?
M.C. Westhoff; E. Zehe
2013-01-01
In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...
Maximum power point tracking for photovoltaic system using model predictive control
Ma, Chao; Li, Ning; Li, Shaoyuan [Shanghai Jiao Tong Univ., Shanghai (China). Key Lab. of System Control and Information Processing
2013-07-01
In this paper, a T-G-P model is built to find the maximum power point according to light intensity and temperature, making it easier and clearer for the photovoltaic system to track the MPP. A predictive controller considering constraints for safe operation is designed. The simulation results show that the system can track the MPP quickly, accurately and effectively.
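For contrast, the classical model-free perturb-and-observe loop that model-based MPPT schemes are typically benchmarked against, run on an invented single-peak PV power curve:

```python
def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy PV curve: current falls off sharply near open circuit."""
    if not 0.0 <= v <= v_oc:
        return 0.0
    return v * i_sc * (1.0 - (v / v_oc) ** 8)

# Perturb & observe: keep stepping the voltage in the direction that
# increased power; reverse when power drops.
v, dv, p_prev = 20.0, 0.5, 0.0
for _ in range(60):
    p = pv_power(v)
    if p < p_prev:
        dv = -dv
    p_prev = p
    v += dv
print(f"MPP near v = {v:.1f} V, P = {pv_power(v):.1f} W")  # ~30 V, ~216 W
```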
Jie Li DING; Xi Ru CHEN
2006-01-01
For generalized linear models (GLM), in the case where the regressors are stochastic and have different distributions, the asymptotic properties of the maximum likelihood estimate (MLE) β̂_n of the parameters are studied. Under reasonable conditions, we prove the weak and strong consistency and the asymptotic normality of β̂_n.
Uri UDIN
2014-06-01
Full Text Available This article proposes the use of the Pontryagin maximum principle for parametric identification of a mathematical vessel model. The proposed method is particularly promising for identification in real-time mode, when the identified parameters can be used for forecasting upcoming maneuvers.
Klein, Andreas G.; Muthen, Bengt O.
2007-01-01
In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…
Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data
Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.
2003-01-01
The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…
Enders, Craig K.
2001-01-01
Examined the performance of a recently available full information maximum likelihood (FIML) estimator in a multiple regression model with missing data using Monte Carlo simulation and considering the effects of four independent variables. Results indicate that FIML estimation was superior to that of three ad hoc techniques, with less bias and less…
Limitations of Animal Models of Parkinson's Disease
J. A. Potashkin
2011-01-01
Full Text Available Most cases of Parkinson's disease (PD) are sporadic. When choosing an animal model for idiopathic PD, one must consider the extent of similarity or divergence between the physiology, anatomy, behavior, and regulation of gene expression between humans and the animal. Rodents and nonhuman primates are used most frequently in PD research because when a Parkinsonian state is induced, they mimic many aspects of idiopathic PD. These models have been useful in our understanding of the etiology of the disease and provide a means for testing new treatments. However, the current animal models often fall short in replicating the true pathophysiology occurring in idiopathic PD, and thus results from animal models often do not translate to the clinic. In this paper we will explain the limitations of animal models of PD and why their use is inappropriate for the study of some aspects of PD.
Applying rough sets in word segmentation disambiguation based on maximum entropy model
Anonymous
2006-01-01
To solve the complicated feature extraction and long distance dependency problems in Word Segmentation Disambiguation (WSD), this paper proposes to apply rough sets in WSD based on the Maximum Entropy model. Firstly, rough set theory is applied to extract the complicated features and long distance features, even from noisy or inconsistent corpora. Secondly, these features are added into the Maximum Entropy model, and consequently, the feature weights can be assigned according to the performance of the whole disambiguation model. Finally, the semantic lexicon is adopted to build class-based rough set features to overcome data sparseness. The experiment indicated that our method performed better than previous models, which got the top rank in WSD in the 863 Evaluation in 2003. This system ranked first and second respectively in the MSR and PKU open tests in the Second International Chinese Word Segmentation Bakeoff held in 2005.
Sullivan, Terry [Brookhaven National Lab. (BNL), Upton, NY (United States)
2016-02-22
The objectives of this report are: (a) to present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; (b) to provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building for use in dose assessment calculations; (c) to estimate the maximum concentration in a well located outside of the fill material; and (d) to perform a sensitivity analysis of key parameters.
SIMULATING MODEL OF SYSTEM FOR MAXIMUM OUTPUT POWER OF SOLAR BATTERY
Abdul Majid Al-Khatib
2005-01-01
A simulation model and a control algorithm for the electric power converter of a solar battery are proposed in the paper. The control device of a DC step-down converter with pulse-width modulation is designed on a microprocessor basis. The simulation model makes it possible to investigate various operational modes of a solar battery, demonstrates operation in maximum power mode, and features a convenient user interface.
Simple Statistical Model to Quantify Maximum Expected EMC in Spacecraft and Avionics Boxes
Trout, Dawn H.; Bremner, Paul
2014-01-01
This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. Correlation between test and model data is shown. In addition, this presentation shows application of the power balance method and its extension to predict the variance and maximum expected mean of the E-field data. This is valuable for large-scale evaluations of transmission inside cavities.
Che Wan Jasimah bt Wan Mohamed Radzi; Huang Hui; Hashem Salarzadeh Jenatabadi
2016-01-01
Several factors may influence children’s lifestyle. The main purpose of this study is to introduce a children’s lifestyle index framework and model it based on structural equation modeling (SEM) with Maximum likelihood (ML) and Bayesian predictors. This framework includes parental socioeconomic status, household food security, parental lifestyle, and children’s lifestyle. The sample for this study involves 452 volunteer Chinese families with children 7–12 years old. The experimental results a...
The effect of coupling hydrologic and hydrodynamic models on probable maximum flood estimation
Felder, Guido; Zischg, Andreas; Weingartner, Rolf
2017-07-01
Deterministic rainfall-runoff modelling usually assumes a stationary hydrological system, as model parameters are calibrated with, and are therefore dependent on, observed data. However, runoff processes are probably not stationary in the case of a probable maximum flood (PMF), where discharge greatly exceeds observed flood peaks. Developing hydrodynamic models and using them to build coupled hydrologic-hydrodynamic models can potentially improve the plausibility of PMF estimations. This study aims to assess the potential benefits and constraints of coupled modelling compared to standard deterministic hydrologic modelling when it comes to PMF estimation. The two modelling approaches are applied using a set of 100 spatio-temporal probable maximum precipitation (PMP) distribution scenarios. The resulting hydrographs and peak discharges, as well as the reliability and plausibility of the estimates, are evaluated. The discussion of the results shows that coupling hydrologic and hydrodynamic models substantially improves the physical plausibility of PMF modelling, although both modelling approaches lead to PMF estimations for the catchment outlet that fall within a similar range. Using a coupled model is particularly suggested in cases where considerable flood-prone areas are situated within a catchment.
Domire, Zachary J; Challis, John H
2010-12-01
The maximum velocity of shortening of a muscle is an important parameter in musculoskeletal models. The most commonly used values are derived from animal studies; however, these values are well above the values that have been reported for human muscle. The purpose of this study was to examine the sensitivity of simulations of maximum vertical jumping performance to the parameters describing the force-velocity properties of muscle. Simulations performed with parameters derived from animal studies produced jump heights similar to those measured in previous experimental studies, whereas simulations performed with parameters derived from human muscle produced jump heights much lower than previously measured. If current measurements of maximum shortening velocity in human muscle are correct, a compensating error must exist. Of the possible compensating errors that could produce this discrepancy, it was concluded that reduced muscle fibre excursion is the most likely candidate.
Optimal Control Design with Limited Model Information
Farokhi, F; Johansson, K H
2011-01-01
We introduce the family of limited model information control design methods, which construct controllers by accessing the plant's model in a constrained way, according to a given design graph. We investigate the achievable closed-loop performance of discrete-time linear time-invariant plants under a separable quadratic cost performance measure with structured static state-feedback controllers. We find the optimal control design strategy (in terms of the competitive ratio and domination metrics) when the control designer has access to the local model information and the global interconnection structure of the plant-to-be-controlled. Finally, we study the trade-off between the amount of model information exploited by a control design method and the best closed-loop performance (in terms of the competitive ratio) of controllers it can produce.
Yatracos, Yannis G.
2013-01-01
The inherent bias pathology of the maximum likelihood (ML) estimation method is confirmed for models with unknown parameters $\theta$ and $\psi$ when the MLE $\hat \psi$ is a function of the MLE $\hat \theta.$ To reduce $\hat \psi$'s bias, the likelihood equation to be solved for $\psi$ is updated using the model for the data $Y$ in it. The model-updated (MU) MLE, $\hat \psi_{MU},$ often reduces $\hat \psi$'s bias either totally or partially when estimating the shape parameter $\psi.$ For the Pareto model $\hat...
Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model
Yang, Yuefang; Gan, Chunhui; Shen, Tingting
2017-05-01
In the study of the configuration of tankers in a chemical logistics park, the minimum cost maximum flow model is adopted. Firstly, the transport capacity of the park loading and unloading area and the transportation demand for dangerous goods are taken as the constraint conditions of the model; then the transport arc capacity, the transport arc flow and the transport arc edge weight are determined in the transportation network diagram; finally, the model is solved by software computation. The calculation results show that the tanker configuration problem can be effectively solved by the minimum cost maximum flow model, which has theoretical and practical application value for tanker management in railway transportation of dangerous goods in chemical logistics parks.
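As a concrete illustration of such a minimum cost maximum flow formulation, a minimal Python sketch using networkx is given below; the network layout, capacities and unit costs are illustrative placeholders, not values from the study.

    import networkx as nx

    G = nx.DiGraph()
    # Source -> loading/unloading areas: capacity = handling throughput (assumed)
    G.add_edge("source", "loading_A", capacity=40, weight=0)
    G.add_edge("source", "loading_B", capacity=30, weight=0)
    # Transport arcs: capacity and edge weight (unit transport cost, assumed)
    G.add_edge("loading_A", "yard", capacity=35, weight=2)
    G.add_edge("loading_B", "yard", capacity=25, weight=3)
    # Yard -> sink: capacity = transport demand for dangerous goods (assumed)
    G.add_edge("yard", "sink", capacity=50, weight=1)

    flow = nx.max_flow_min_cost(G, "source", "sink")  # maximum flow at minimum total cost
    print("tanker flows per arc:", flow)
    print("total cost:", nx.cost_of_flow(G, flow))

The same pattern scales to a realistic network: each physical constraint from the loading area and the demand side enters as an arc capacity, and the tanker operating cost enters as the arc weight.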
Urban expressway traffic state forecasting based on multimode maximum entropy model
Anonymous
2010-01-01
Accurate and timely traffic state prediction has become increasingly important for traffic participants, especially for traffic management. In this paper, the traffic state is described by Micro-LOS, and a direct prediction method is introduced. The proposed method is based on Maximum Entropy (ME) models trained for multiple modes. In the Multimode Maximum Entropy (MME) framework, different features of traffic systems, such as temporal and spatial features and the regional traffic state, are integrated simultaneously, and the different state behaviors of 14 traffic modes, defined by average speed according to a date-time division, are also dealt with. Experiments based on real data from a Beijing expressway show that the MME models outperform the existing model in both effectiveness and robustness.
Exact computation of the Maximum Entropy Potential of spiking neural networks models
Cofre, Rodrigo
2014-01-01
Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The Maximum Entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. But, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuro-mimetic models) provide a probabilistic mapping between stimulus, network architecture and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuro-mimetic and Maximum Entropy models.
Limitations of modeling snow in ski resorts
Steiger, Robert; Abegg, Bruno
2016-04-01
The body of literature on snow modeling in a ski area operations context has been growing at an accelerating pace over the last decades. The majority of snow model applications for ski areas can be found in the climate change impacts literature. These studies differ in many aspects: the type of model used; the meteorological variables used in the models; the spatial and temporal resolution of the meteorological variables; the method by which the climate change signal is derived and applied in the model concept; the number of climate models and emission scenarios used and, consequently, the handling of uncertainties; the indicators used to interpret the impacts for the skiing tourism industry; the incorporation of adaptation measures (e.g. snowmaking); and the geographical scale of analysis. In this contribution we present a review of approaches used for modeling snow conditions in a ski area context. The major limitations, from both a scientific and a user's perspective, are discussed, and solutions for shortcomings of existing approaches are presented.
Refsnider, Kurt A.; Laabs, Benjamin J. C.; Plummer, Mitchell A.; Mickelson, David M.; Singer, Bradley S.; Caffee, Marc W.
2008-01-01
During the last glacial maximum (LGM), the western Uinta Mountains of northeastern Utah were occupied by the Western Uinta Ice Field. Cosmogenic 10Be surface-exposure ages from the terminal moraine in the North Fork Provo Valley and paired 26Al and 10Be ages from striated bedrock at Bald Mountain Pass set limits on the timing of the local LGM. Moraine boulder ages suggest that ice reached its maximum extent by 17.4 ± 0.5 ka (± 2σ). 10Be and 26Al measurements on striated bedrock from Bald Mountain Pass, situated near the former center of the ice field, yield a mean 26Al/10Be ratio of 5.7 ± 0.8 and a mean exposure age of 14.0 ± 0.5 ka, which places a minimum-limiting age on when the ice field melted completely. We also applied a mass/energy-balance and ice-flow model to investigate the LGM climate of the western Uinta Mountains. Results suggest that temperatures were likely 5 to 7°C cooler than present and precipitation was 2 to 3.5 times greater than modern, and the western-most glaciers in the range generally received more precipitation when expanding to their maximum extent than glaciers farther east. This scenario is consistent with the hypothesis that precipitation in the western Uintas was enhanced by pluvial Lake Bonneville during the last glaciation.
Potvin, Jean; Goldbogen, Jeremy A; Shadwick, Robert E
2012-01-01
Bulk-filter feeding is an energetically efficient strategy for resource acquisition and assimilation, and facilitates the maintenance of extreme body size as exemplified by baleen whales (Mysticeti) and multiple lineages of bony and cartilaginous fishes. Among mysticetes, rorqual whales (Balaenopteridae) exhibit an intermittent ram filter feeding mode, lunge feeding, which requires the abandonment of body-streamlining in favor of a high-drag, mouth-open configuration aimed at engulfing a very large amount of prey-laden water. Particularly while lunge feeding on krill (the most widespread prey preference among rorquals), the effort required during engulfment involves short bouts of high-intensity muscle activity that demand high metabolic output. We used computational modeling together with morphological and kinematic data on humpback (Megaptera novaeangliae), fin (Balaenoptera physalus), blue (Balaenoptera musculus) and minke (Balaenoptera acutorostrata) whales to estimate engulfment power output in comparison with standard metrics of metabolic rate. The simulations reveal that engulfment metabolism increases across the full body size of the larger rorqual species to nearly 50 times the basal metabolic rate of terrestrial mammals of the same body mass. Moreover, they suggest that the metabolism of the largest body sizes runs with significant oxygen deficits during mouth opening, namely, 20% over maximum VO2 at the size of the largest blue whales, thus requiring significant contributions from anaerobic catabolism during a lunge and significant recovery after a lunge. Our analyses show that engulfment metabolism is also significantly lower for smaller adults, typically one-tenth to one-half VO2max. These results not only point to a physiological limit on maximum body size in this lineage, but also have major implications for the ontogeny of extant rorquals as well as the evolutionary pathways used by ancestral toothed whales to transition from hunting individual prey…
Toward a mechanistic modeling of nitrogen limitation for photosynthesis
Xu, C.; Fisher, R. A.; Travis, B. J.; Wilson, C. J.; McDowell, N. G.
2011-12-01
Nitrogen limitation is an important regulator of vegetation growth and the global carbon cycle. Most current ecosystem process models simulate nitrogen effects on photosynthesis based on a prescribed relationship between leaf nitrogen and photosynthesis; however, there is a large amount of variability in this relationship under different light, temperature, nitrogen availability and CO2 conditions, which can affect the reliability of photosynthesis predictions under future climate conditions. To account for this variability, in this study we developed a mechanistic model of nitrogen limitation for photosynthesis based on nitrogen trade-offs among light absorption, electron transport, carboxylation and the carbon sink. Our model shows that the strategy of nitrogen storage allocation, as determined by the trade-off between growth and persistence, is a key factor contributing to the variability in the relationship between leaf nitrogen and photosynthesis. Nitrogen fertilization substantially increases the proportion of nitrogen in storage for coniferous trees but much less so for deciduous trees, suggesting that coniferous trees allocate more nitrogen toward persistence than deciduous trees. CO2 fertilization will cause lower nitrogen allocation to carboxylation but higher nitrogen allocation to storage, which leads to a weaker relationship between leaf nitrogen and maximum photosynthesis rate. Lower radiation will cause higher nitrogen allocation to light absorption and electron transport but less to carboxylation and storage, which also leads to a weaker relationship between leaf nitrogen and maximum photosynthesis rate. At the same time, lower growing temperature will cause higher nitrogen allocation to carboxylation but lower allocation to light absorption, electron transport and storage, which leads to a stronger relationship between leaf nitrogen and maximum…
A pairwise maximum entropy model accurately describes resting-state human brain networks.
Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki
2013-01-01
The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks.
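The fitting procedure behind such a pairwise maximum entropy model can be sketched compactly. The following Python example fits fields h_i and couplings J_ij to synthetic binarized activity by exact moment matching, feasible here only because the number of regions is tiny; the data are stand-ins, not fMRI recordings.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4                                              # number of regions (small, so exact enumeration works)
    data = (rng.random((500, n)) < 0.4).astype(float)  # synthetic binarized "activity"

    states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    h, J = np.zeros(n), np.zeros((n, n))
    emp_mean = data.mean(0)                            # empirical activation rates
    emp_corr = data.T @ data / len(data)               # empirical pairwise co-activations

    for _ in range(2000):
        E = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
        p = np.exp(E - E.max()); p /= p.sum()          # model distribution over all 2^n states
        mod_mean = p @ states
        mod_corr = states.T @ (states * p[:, None])
        h += 0.1 * (emp_mean - mod_mean)               # ML gradient = moment mismatch
        dJ = 0.1 * (emp_corr - mod_corr)
        np.fill_diagonal(dJ, 0.0)                      # diagonal is absorbed by h for binary units
        J += dJ

    print("max pairwise moment mismatch:", np.abs(emp_corr - mod_corr).max())

For realistic numbers of regions, the 2^n enumeration must be replaced by sampling or mean-field approximations, but the moment-matching logic is the same.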
Madsen, Henrik; Pearson, Charles P.; Rosbjerg, Dan
1997-01-01
Two regional estimation schemes, based on, respectively, partial duration series (PDS) and annual maximum series (AMS), are compared. The PDS model assumes a generalized Pareto (GP) distribution for modeling threshold exceedances, corresponding to a generalized extreme value (GEV) distribution for annual maxima. First, the accuracy of PDS/GP and AMS/GEV regional index-flood T-year event estimators is compared using Monte Carlo simulations. For estimation in typical regions assuming a realistic degree of heterogeneity, the PDS/GP index-flood model is more efficient. The regional PDS and AMS…
Nonlinear Random Effects Mixture Models: Maximum Likelihood Estimation via the EM Algorithm.
Wang, Xiaoning; Schumitzky, Alan; D'Argenio, David Z
2007-08-15
Nonlinear random effects models with finite mixture structures are used to identify polymorphism in pharmacokinetic/pharmacodynamic phenotypes. An EM algorithm for maximum likelihood estimation is developed; it uses sampling-based methods to implement the expectation step, which results in an analytically tractable maximization step. A benefit of the approach is that no model linearization is performed and the estimation precision can be arbitrarily controlled by the sampling process. A detailed simulation study illustrates the feasibility of the estimation approach and evaluates its performance. Applications of the proposed nonlinear random effects mixture model approach to other population pharmacokinetic/pharmacodynamic problems will be of interest for future investigation.
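A minimal sketch of the EM logic follows, assuming the simplest finite-mixture case (two Gaussian components), where the E-step happens to be available in closed form; the paper's sampling-based (Monte Carlo) E-step for nonlinear random effects would replace the responsibility computation below.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1.5, 200)])  # synthetic mixed data

    w, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    for _ in range(200):
        pdf = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
        # E-step: posterior responsibility of component 2 for each observation
        r = w * pdf(mu[1], sd[1]) / ((1 - w) * pdf(mu[0], sd[0]) + w * pdf(mu[1], sd[1]))
        # M-step: weighted maximum-likelihood updates
        w = r.mean()
        mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r), np.sum(r * x) / np.sum(r)])
        sd = np.array([np.sqrt(np.sum((1 - r) * (x - mu[0]) ** 2) / np.sum(1 - r)),
                       np.sqrt(np.sum(r * (x - mu[1]) ** 2) / np.sum(r))])

    print("weight, means, sds:", w, mu, sd)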
Modelling and Simulation of Seasonal Rainfall Using the Principle of Maximum Entropy
Jonathan Borwein
2014-02-01
We use the principle of maximum entropy to propose a parsimonious model for the generation of simulated rainfall during the wettest three-month season at a typical location on the east coast of Australia. The model uses a checkerboard copula of maximum entropy to model the joint probability distribution for total seasonal rainfall and a set of two-parameter gamma distributions to model each of the marginal monthly rainfall totals. The model allows us to match the grade correlation coefficients for the checkerboard copula to the observed Spearman rank correlation coefficients for the monthly rainfalls and, hence, provides a model that correctly describes the mean and variance for each of the monthly totals and also for the overall seasonal total. Thus, we avoid the need for a posteriori adjustment of simulated monthly totals in order to correctly simulate the observed seasonal statistics. Detailed results are presented for the modelling and simulation of seasonal rainfall in the town of Kempsey on the mid-north coast of New South Wales. Empirical evidence from extensive simulations is used to validate this application of the model. A similar analysis for Sydney is also described.
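A minimal sketch of the simulation idea follows, with a Gaussian copula standing in for the checkerboard copula of maximum entropy, and with illustrative (assumed) gamma parameters and correlation structure:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    shape = np.array([2.0, 1.8, 2.2])        # gamma shape per month (assumed)
    scale = np.array([60.0, 75.0, 55.0])     # gamma scale per month, mm (assumed)
    corr = np.array([[1.0, 0.3, 0.2],
                     [0.3, 1.0, 0.4],
                     [0.2, 0.4, 1.0]])       # correlation structure of the copula (assumed)

    z = rng.multivariate_normal(np.zeros(3), corr, size=10000)
    u = stats.norm.cdf(z)                                # correlated uniform marginals
    monthly = stats.gamma.ppf(u, a=shape, scale=scale)   # back-transform to gamma marginals
    seasonal = monthly.sum(axis=1)
    print("seasonal mean and std:", seasonal.mean(), seasonal.std())

Because the marginals are imposed exactly, the monthly means and variances are matched by construction, and the seasonal total inherits the dependence encoded in the copula, which is the property the authors exploit to avoid a posteriori adjustment.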
Using maximum topology matching to explore differences in species distribution models
Poco, Jorge; Doraiswamy, Harish; Talbert, Marian K.; Morisette, Jeffrey; Silva, Claudio
2015-01-01
Species distribution models (SDM) are used to help understand what drives the distribution of various plant and animal species. These models are typically high-dimensional scalar functions, where the dimensions of the domain correspond to predictor variables of the model algorithm. Understanding and exploring the differences between models helps ecologists understand areas where their data or understanding of the system is incomplete and helps guide further investigation in these regions. These differences can also indicate an important source of model-to-model uncertainty. However, it is cumbersome and often impractical to perform this analysis using existing tools, which allow only manual exploration of the models, usually as 1-dimensional curves. In this paper, we propose a topology-based framework to help ecologists explore the differences in various SDMs directly in the high-dimensional domain. In order to accomplish this, we introduce the concept of maximum topology matching, which computes a locality-aware correspondence between similar extrema of two scalar functions. The matching is then used to compute the similarity between the two functions. We also design a visualization interface that allows ecologists to explore SDMs using their topological features and to study the differences between pairs of models found using maximum topology matching. We demonstrate the utility of the proposed framework through several use cases using different data sets and report the feedback obtained from ecologists.
Animal models of epilepsy: use and limitations
Kandratavicius L
2014-09-01
Ludmyla Kandratavicius,(1) Priscila Alves Balista,(1) Cleiton Lopes-Aguiar,(1) Rafael Naime Ruggiero,(1) Eduardo Henrique Umeoka,(2) Norberto Garcia-Cairasco,(2) Lezio Soares Bueno-Junior,(1) Joao Pereira Leite(1); (1) Department of Neurosciences and Behavior, (2) Department of Physiology, Ribeirao Preto School of Medicine, University of Sao Paulo, Ribeirao Preto, Brazil. Abstract: Epilepsy is a chronic neurological condition characterized by recurrent seizures that affects millions of people worldwide. Comprehension of the complex mechanisms underlying epileptogenesis and seizure generation in temporal lobe epilepsy and other forms of epilepsy cannot be fully acquired in clinical studies with humans. As a result, the use of appropriate animal models is essential. Some of these models replicate the natural history of symptomatic focal epilepsy, with an initial epileptogenic insult followed by an apparent latent period and by a subsequent period of chronic spontaneous seizures. Seizures are a combination of electrical and behavioral events that are able to induce chemical, molecular, and anatomic alterations. In this review, we summarize the most frequently used models of chronic epilepsy and models of acute seizures induced by chemoconvulsants, traumatic brain injury, and electrical or sound stimuli. Genetic models of absence seizures and models of seizures and status epilepticus in the immature brain were also examined. Major uses and limitations were highlighted, and neuropathological, behavioral, and neurophysiological similarities and differences between the model and the human equivalent were considered. The quest for seizure mechanisms can provide insights into overall brain functions and consciousness, and animal models of epilepsy will continue to promote the progress of both epilepsy and neurophysiology research. Keywords: epilepsy, animal model, pilocarpine, kindling, neurodevelopment
Donato, David I.
2012-01-01
This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
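The computational core described above can be illustrated generically. The sketch below applies Newton-Raphson iteration to a maximum-likelihood problem with the same structure (a score equation solved by repeated linearization), using logistic regression on synthetic data rather than the NDMMF itself.

    import numpy as np

    rng = np.random.default_rng(3)
    X = np.column_stack([np.ones(200), rng.normal(size=200)])
    y = (rng.random(200) < 1 / (1 + np.exp(-(0.5 + 1.2 * X[:, 1])))).astype(float)

    beta = np.zeros(2)
    for _ in range(25):
        p = 1 / (1 + np.exp(-X @ beta))
        score = X.T @ (y - p)                           # gradient of the log-likelihood
        hessian = -(X * (p * (1 - p))[:, None]).T @ X   # matrix of second derivatives
        step = np.linalg.solve(hessian, -score)         # Newton-Raphson update
        beta += step
        if np.max(np.abs(step)) < 1e-10:                # stop when the equations are solved
            break

    print("ML estimates:", beta)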
Maximum efficiency of state-space models of nanoscale energy conversion devices.
Einax, Mario; Nitzan, Abraham
2016-07-07
The performance of nano-scale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield the Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.
Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models.
Stein, Richard R; Marks, Debora S; Sander, Chris
2015-07-01
Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene-gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design.
YIN Changming; ZHAO Lincheng; WEI Chengdong
2006-01-01
In a generalized linear model with q × 1 responses, bounded and fixed (or adaptive) p × q regressors Z_i, and a general link function, under the most general assumption on the minimum eigenvalue of Σ_{i=1}^n Z_i Z_i', a moment condition on the responses as weak as possible, and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates of the regression parameter vector are asymptotically normal and strongly consistent.
ASYMPTOTIC NORMALITY OF QUASI MAXIMUM LIKELIHOOD ESTIMATE IN GENERALIZED LINEAR MODELS
YUE LI; CHEN XIRU
2005-01-01
For the Generalized Linear Model (GLM), under some conditions, including that the specification of the expectation is correct, it is shown that the Quasi Maximum Likelihood Estimate (QMLE) of the parameter vector is asymptotically normal. It is also shown that the asymptotic covariance matrix of the QMLE reaches its minimum (in the positive-definite sense) in the case that the specification of the covariance matrix is correct.
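A minimal quasi-maximum-likelihood sketch in Python using statsmodels: a Poisson working likelihood is combined with a robust (sandwich) covariance, the practical counterpart of the asymptotic results above when only the mean specification is trusted. The data are synthetic.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    x = rng.normal(size=300)
    y = rng.poisson(np.exp(0.3 + 0.7 * x))   # substitute over-dispersed counts to see QMLE at work

    X = sm.add_constant(x)
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")  # robust covariance
    print(fit.params, fit.bse)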
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Xu, Yadong; Serre, Marc L; Reyes, Jeanette; Vizuete, William
2016-04-19
To improve ozone exposure estimates for ambient concentrations at a national scale, we introduce our novel Regionalized Air Quality Model Performance (RAMP) approach to integrate chemical transport model (CTM) predictions with the available ozone observations using the Bayesian Maximum Entropy (BME) framework. The framework models the nonlinear and nonhomoscedastic relation between air pollution observations and CTM predictions and, for the first time, accounts for variability in CTM model performance. A validation analysis using only noncollocated data outside of a validation radius rv was performed, and the R² between observations and re-estimated values for two daily metrics, the daily maximum 8-h average (DM8A) and the daily 24-h average (D24A) ozone concentrations, was obtained for the OBS scenario using ozone observations only, in contrast with the RAMP and Constant Air Quality Model Performance (CAMP) scenarios. We show that, by accounting for the spatial and temporal variability in model performance, our novel RAMP approach is able to extract more information from CTM predictions than the CAMP approach, which assumes that model performance does not change across space and time, increasing R² by a factor of over 12 for the DM8A and over 3.5 for the D24A ozone concentrations.
Modelling streambank erosion potential using maximum entropy in a central Appalachian watershed
Pitchford, J.; Strager, M.; Riley, A.; Lin, L.; Anderson, J.
2015-03-01
We used maximum entropy to model streambank erosion potential (SEP) in a central Appalachian watershed to help prioritize sites for management. Model development included measuring erosion rates, application of a quantitative approach to locate Target Eroding Areas (TEAs), and creation of maps of boundary conditions. We successfully constructed a probability distribution of TEAs using the program Maxent. All model evaluation procedures indicated that the model was an excellent predictor, and that the major environmental variables controlling these processes were streambank slope, soil characteristics, bank position, and underlying geology. A classification scheme with low, moderate, and high levels of SEP derived from logistic model output was able to differentiate sites with low erosion potential from sites with moderate and high erosion potential. A major application of this type of modelling framework is to address uncertainty in stream restoration planning, ultimately helping to bridge the gap between restoration science and practice.
The SIS and SIR stochastic epidemic models: a maximum entropy approach.
Artalejo, J R; Lopez-Herrero, M J
2011-12-01
We analyze the dynamics of infectious disease spread by formulating the maximum entropy (ME) solutions of the susceptible-infected-susceptible (SIS) and the susceptible-infected-removed (SIR) stochastic models. Several scenarios providing helpful insight into the use of the ME formalism for epidemic modeling are identified. The ME results are illustrated with respect to several descriptors, including the number of recovered individuals and the time to extinction. An application to infectious data from outbreaks of extended spectrum beta lactamase (ESBL) in a hospital is also considered.
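For comparison with such ME descriptors, realizations of the SIS model are easy to generate directly. A minimal Gillespie-style simulation, with illustrative (assumed) rates, yields the time to extinction for one trajectory:

    import numpy as np

    rng = np.random.default_rng(5)
    N, beta, gamma = 100, 0.25, 0.1          # population size, contact rate, recovery rate (assumed)
    t, i = 0.0, 5                            # time and initial number of infectives
    while i > 0 and t < 1000:
        infect = beta * i * (N - i) / N      # S -> I transition rate
        recover = gamma * i                  # I -> S transition rate
        total = infect + recover
        t += rng.exponential(1 / total)      # waiting time to the next event
        i += 1 if rng.random() < infect / total else -1

    print("time to extinction:", t)

Averaging many such trajectories gives empirical counterparts of the descriptors (e.g., the time to extinction) whose maximum entropy solutions the paper derives.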
Recent developments in maximum likelihood estimation of MTMM models for categorical data
Minjeong Jeon
2014-04-01
Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods that are eligible for estimating MTMM models with categorical responses: variational maximization-maximization, alternating imputation posterior, and Monte Carlo local likelihood. Each method is briefly described and its applicability to MTMM models with categorical data is discussed. An illustration is provided using an empirical example.
Maximum Effective Hole Mathematical Model and Exact Solution for Commingled Reservoir
孙贺东; 刘磊; 周芳德; 高承泰
2003-01-01
The maximum effective hole-diameter mathematical model describing the flow of slightly compressible fluid through a commingled reservoir was solved rigorously with consideration of wellbore storage and different skin factors. The exact solutions for wellbore pressure and the production rate obtained from layer j, for a well producing at a constant rate from a radial drainage area under infinite, constant-pressure, and no-flow outer boundary conditions, were expressed in terms of ordinary Bessel functions. These solutions were computed numerically by Crump's numerical inversion method, and the behavior of the systems was studied as a function of various reservoir parameters. The model was compared with the real wellbore radius model. The new model is numerically stable for both positive and negative skin factors, whereas the real wellbore radius model is numerically stable only when the skin factor is positive.
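The numerical Laplace inversion step can be illustrated with mpmath, whose Stehfest routine stands in here for the Crump method used in the study; the transform below is a simple line-source-type example (K0(sqrt(s))/s), chosen for brevity, not the paper's commingled-reservoir solution.

    import mpmath as mp

    F = lambda s: mp.besselk(0, mp.sqrt(s)) / s          # illustrative Laplace-domain pressure
    for t in [0.1, 1.0, 10.0]:
        print(t, mp.invertlaplace(F, t, method="stehfest"))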
The early maximum likelihood estimation model of audiovisual integration in speech perception
Andersen, Tobias
2015-01-01
Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but has also been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration and shows how cross-validation can evaluate models of audiovisual integration based on typical data sets, taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE, while more conventional error measures…
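The core MLE integration rule that gives the model its name is compact enough to state directly: each modality is weighted by its reliability (inverse variance). A minimal sketch with illustrative numbers follows.

    # Maximum-likelihood (reliability-weighted) cue integration; all values are illustrative.
    aud, var_aud = 2.0, 1.0      # auditory estimate and its variance (assumed)
    vis, var_vis = 3.0, 0.25     # visual estimate and its variance (assumed)

    w_vis = (1 / var_vis) / (1 / var_aud + 1 / var_vis)
    fused = (1 - w_vis) * aud + w_vis * vis              # pulled toward the more reliable cue
    var_fused = 1 / (1 / var_aud + 1 / var_vis)          # fused variance is always smaller
    print(fused, var_fused)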
GaoChunwen; XuJingzhen; RichardSinding-Larsen
2005-01-01
A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and from the discovery sequence, according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify his subjective estimates of the required parameters and his degree of uncertainty about the estimates in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same North Sea data on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach improves on the overly pessimistic results and downward bias of the maximum likelihood procedure.
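The sampling strategy described above can be sketched with a basic Metropolis-Hastings loop; the toy target (normal likelihood with a normal prior) stands in for the discovery process model, and all numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(6)
    data = rng.normal(3.0, 1.0, size=50)                 # synthetic observations

    def log_post(theta):
        log_prior = -0.5 * (theta - 2.0) ** 2 / 4.0      # N(2, 2^2) prior (assumed)
        log_lik = -0.5 * np.sum((data - theta) ** 2)     # unit-variance normal likelihood
        return log_prior + log_lik

    theta, samples = 0.0, []
    for _ in range(5000):
        prop = theta + rng.normal(0, 0.5)                # random-walk proposal
        if np.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop                                 # accept the proposal
        samples.append(theta)

    print("posterior mean:", np.mean(samples[1000:]))    # discard burn-in

All posterior quantities, together with their Monte Carlo errors, are then computed from the retained samples, exactly as the abstract describes for the resource parameters.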
Wenjun Huang
2017-01-01
Mechanical extending limit in horizontal drilling means the maximum horizontal length to which a well can be extended under given ground and down-hole mechanical constraint conditions. Around this concept, a constrained optimization model of mechanical extending limits is built, and simplified analytical results for pick-up and slack-off operations are deduced. The horizontal extending limits for various tubular strings under different drilling parameters are calculated and plotted. To improve extending limits, an optimal design model of drill strings is built and applied to a case study. The results indicate that horizontal extending limits are substantially underestimated when the effects of friction force on critical helical buckling loads are neglected. Horizontal extending limits first increase with vertical depth and then tend to stable values. For tubular strings of smaller modulus-weight ratio, horizontal extending limits increase faster with horizontal pushing force but ultimately reach smaller values. Sliding slack-off is the main limiting operation, and high axial friction is the main factor constraining horizontal extending limits. A sophisticated installation of multiple tubular strings can greatly inhibit helical buckling and increase horizontal extending limits. The optimal design model is called only once to obtain design results, which greatly increases calculation efficiency.
MARSpline model for lead seven-day maximum and minimum air temperature prediction in Chennai, India
K Ramesh; R Anitha
2014-06-01
In this study, a Multivariate Adaptive Regression Spline (MARS) based system for predicting minimum and maximum surface air temperature up to seven days ahead is modelled for the station Chennai, India. To demonstrate the effectiveness of the proposed system, a comparison is made with models created using the statistical learning technique Support Vector Machine Regression (SVMr). The analysis highlights that the prediction accuracy of the MARS models for the minimum temperature forecast is promising in the short term (lead days 1 to 3), with a mean absolute error (MAE) of less than 1°C, while the prediction efficiency and skill degrade in the medium term (lead days 4 to 7), with MAE slightly above 1°C. The MAE of the maximum temperature forecast is a little higher than that of the minimum temperature forecast, varying from 0.87°C for day one to 1.27°C for day seven with the MARS approach. The statistical error analysis emphasizes that the MARS models perform well, with an average 0.2°C reduction in MAE over the SVMr models across all seven lead days, and provide significant guidance for the prediction of temperature events. The study also suggests that the correlation between the atmospheric parameters used as predictors and the temperature event decreases as the lead time increases with both approaches.
Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda
2016-08-01
With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets.
Sullivan, Terry [Brookhaven National Lab. (BNL), Upton, NY (United States). Biological, Environmental, and Climate Sciences Dept.
2014-12-02
ZionSolutions is in the process of decommissioning the Zion Nuclear Power Plant in order to establish a new water treatment plant. Residual radioactive particles from the plant need to be brought down to levels such that an individual who receives water from the new treatment plant does not receive a radioactive dose in excess of 25 mrem/y. The objectives of this report are: (a) to present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; (b) to provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building for use in dose assessment calculations; (c) to estimate the maximum concentration in a well located outside of the fill material; and (d) to perform a sensitivity analysis of key parameters.
Iden, Sascha C.; Peters, Andre; Durner, Wolfgang
2015-11-01
The prediction of unsaturated hydraulic conductivity from the soil water retention curve by pore-bundle models is a cost-effective and widely applied technique. One problem for conductivity predictions from retention functions with continuous derivatives, i.e. continuous water capacity functions, is that the hydraulic conductivity curve exhibits a sharp drop close to water saturation if the pore-size distribution is wide. So far this artifact has been ignored or removed by introducing an explicit air-entry value into the capillary saturation function. However, this correction leads to a retention function which is not continuously differentiable. We present a new parameterization of the hydraulic properties which uses the original saturation function (e.g. of van Genuchten) and introduces a maximum pore radius only in the pore-bundle model. In contrast to models using an explicit air entry, the resulting conductivity function is smooth and increases monotonically close to saturation. The model concept can easily be applied to any combination of retention curve and pore-bundle model. We derive closed-form expressions for the unimodal and multimodal van Genuchten-Mualem models and apply the model concept to curve fitting and inverse modeling of a transient outflow experiment. Since the new model retains the smoothness and continuous differentiability of the retention model and eliminates the sharp drop in conductivity close to saturation, the resulting hydraulic functions are physically more reasonable and ideal for numerical simulations with the Richards equation or multiphase flow models.
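For reference, the unmodified van Genuchten-Mualem prediction that the study starts from can be written in a few lines; the parameter values below are illustrative, and the study's maximum-pore-radius modification is not reproduced here.

    import numpy as np

    def vgm_relative_conductivity(h, alpha=0.05, n=1.8, l=0.5):
        """Relative conductivity K_r(h) for suction head h > 0 (van Genuchten-Mualem)."""
        m = 1 - 1 / n
        se = (1 + (alpha * h) ** n) ** (-m)                   # effective saturation
        return se ** l * (1 - (1 - se ** (1 / m)) ** m) ** 2  # Mualem pore-bundle prediction

    print(vgm_relative_conductivity(np.logspace(-2, 3, 6)))   # suction heads in, e.g., cm

The sharp drop near saturation discussed above appears when n is close to 1 (a wide pore-size distribution), which is exactly the regime the new parameterization is designed to handle smoothly.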
Maximum entropy production: can it be used to constrain conceptual hydrological models?
M. C. Westhoff
2013-08-01
In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one such principle and is the subject of this study. It states that a steady-state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in the literature, generally little guidance has been given on how to apply the principle. The aim of this paper is to use the maximum power principle, which is closely related to MEP, to constrain the parameters of a simple conceptual (bucket) model. Although we had to conclude that conceptual bucket models could not be constrained with respect to maximum power, this study sheds more light on how to use, and how not to use, the principle. Several of these issues have been correctly applied in other studies, but have not been explained or discussed as such. While other studies were based on resistance formulations, where the quantity to be optimized is a linear function of the resistance to be identified, our study shows that the approach also works for formulations that are only linear in the log-transformed space. Moreover, we showed that parameters describing process thresholds or influencing boundary conditions cannot be constrained. We furthermore conclude that, in order to apply the principle correctly, the model should be (1) physically based, i.e. fluxes should be defined as a gradient divided by a resistance; (2) the optimized flux should have a feedback on the gradient, i.e. the influence of boundary conditions on gradients should be minimal; (3) the temporal scale of the model should be chosen in such a way that the parameter that is optimized is constant over the modelling period; (4) only when the correct feedbacks are implemented can the fluxes be correctly optimized; and (5) there should be a trade-off between two or more fluxes. Although our application of the maximum power principle did…
RESEARCH OF PINYIN-TO-CHARACTER CONVERSION BASED ON MAXIMUM ENTROPY MODEL
Zhao Yan; Wang Xiaolong; Liu Bingquan; Guan Yi
2006-01-01
This paper applies the Maximum Entropy (ME) model to Pinyin-to-Character (PTC) conversion, instead of the Hidden Markov Model (HMM), which cannot include complicated and long-distance lexical information. Two ME models were built based on simple and complex templates respectively, and the complex one gave better conversion results. Furthermore, conversion trigger pairs of the form yA → yB/cB were proposed to extract long-distance constraint features from the corpus; Average Mutual Information (AMI) was then used to select the conversion trigger pair features added to the ME model. Experiments show that the conversion error of the ME model with conversion trigger pairs is reduced by 4% on a small training corpus, compared with an HMM smoothed by absolute smoothing.
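The mutual-information scoring behind such trigger-pair selection can be illustrated with a toy corpus; the sentence-level pair extraction and the smoothing constant below are illustrative simplifications, not the paper's exact AMI procedure.

    import math
    from collections import Counter

    sentences = [["open", "bank", "account"], ["river", "bank", "water"],
                 ["open", "account", "today"], ["river", "water", "flow"]]
    n = len(sentences)
    occ, co = Counter(), Counter()
    for s in sentences:
        uniq = set(s)
        for w in uniq:
            occ[w] += 1
        for a in uniq:
            for b in uniq:
                if a < b:
                    co[(a, b)] += 1                           # sentence-level co-occurrence

    def mi(a, b):
        p_ab = (co[(min(a, b), max(a, b))] + 0.5) / (n + 1)   # smoothed joint probability
        return math.log(p_ab / ((occ[a] / n) * (occ[b] / n)))

    print(mi("river", "water"))   # high: a useful long-distance trigger pair
    print(mi("open", "water"))    # low: uninformative pair, filtered out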
Maximum Simplified Dynamic Model of Grass Field Ecosystem With Two Variables
曾庆存; 卢佩生; 曾晓东
1994-01-01
Based on general considerations and analysis, a maximally simplified dynamic model of a grass field ecosystem with a single species is developed. The model consists of two variables, the grass biomass per unit area and the soil wetness, and is suitable for describing their mutual interaction. Other factors, such as the physical-chemical characteristics of the soil, precipitation, irrigation, sunlight, temperature and consumers, are taken into account as parameters of the dynamical system. Qualitative analysis of the model shows that the grass biomass of a possible ecological regime is determined by the stable equilibrium state of the dynamical system. For a grass species interacting weakly with soil wetness, the grass biomass depends continuously on the precipitation, while for a species interacting strongly with soil wetness, grass biomass is abundant if precipitation is larger than some critical value; otherwise, the system becomes a desertification regime with very little or even zero grass biomass. The model also sh…
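A minimal two-variable sketch in the spirit of such a model follows, with grass biomass B and soil wetness W coupled through growth and water uptake; the specific terms and parameter values are illustrative assumptions, not the paper's equations.

    import numpy as np
    from scipy.integrate import solve_ivp

    P = 0.8                                    # precipitation forcing (assumed units)

    def rhs(t, y):
        B, W = y
        dB = 0.5 * W * B * (1 - B) - 0.1 * B   # wetness-limited growth minus decay
        dW = P - 0.3 * W - 0.4 * B * W         # input minus evaporation and plant uptake
        return [dB, dW]

    sol = solve_ivp(rhs, (0, 200), [0.1, 0.5])
    print("equilibrium estimate (B, W):", sol.y[:, -1])

Sweeping the precipitation forcing P and recording the equilibrium biomass reproduces the kind of qualitative analysis the abstract describes.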
Kaiadi, Mehrzad; Tunestål, Per; Johansson, Bengt
2010-01-01
High EGR rates combined with turbocharging have been identified as a promising way to increase the maximum load and efficiency of heavy-duty spark-ignition natural gas engines. With stoichiometric conditions, a three-way catalyst can be used, which means that regulated emissions can be kept at very low levels. Most heavy-duty NG engines are diesel engines that have been converted for SI operation. These engines share components with the diesel engine, which puts limits on higher exh…
A limit model for thermoelectric equations
Consiglieri, Luisa
2010-01-01
We analyze the asymptotic behavior corresponding to arbitrarily high heat conductivity in thermoelectric devices. This work deals with a steady-state multidimensional thermistor problem, considering the Joule effect and both spatially and temperature dependent transport coefficients under some realistic boundary conditions in accordance with the Seebeck-Peltier-Thomson cross-effects. Our first purpose is to show that the existence of a weak solution holds true under minimal assumptions on the data, in particular for convex domains with Lipschitz boundary. The proof is based on a fixed point argument, compactness methods, and existence and regularity theory for elliptic scalar equations. In this process, we prove W^{1,p}-regularity for the Neumann problem for an elliptic second-order equation in divergence form with discontinuous coefficient by using potential theory. The second purpose is to show the existence of a limit model illustrating the asymptotic situation.
IN VITRO COMPARISON OF MAXIMUM PRESSURE DEVELOPED BY IRRIGATION SYSTEMS IN A KIDNEY MODEL.
Proietti, Silvia; Dragos, Laurian; Somani, Bhaskar K; Butticè, Salvatore; Talso, Michele; Emiliani, Esteban; Baghdadi, Mohammed; Giusti, Guido; Traxer, Olivier
2017-04-05
To evaluate in vitro the maximum pressure generated in an artificial kidney model when people of different levels of strength use various irrigation systems. Fifteen people were enrolled and divided into 3 groups based on their strength. Individual strength was evaluated according to the maximum pressure each participant was able to achieve using an Encore™ Inflator. The irrigation systems evaluated were: T-Flow™ Dual Port, Hiline™, the continuous flow single action pumping system (SAPS™) with the system closed and open, Irri-flo II™, a simple 60-ml syringe, and Peditrol™. Each irrigation system was connected to a URF-V2 ureteroscope, which was inserted into an artificial kidney model. Each participant was asked to produce the maximum pressure possible with every irrigation device. Pressure was measured with the working channel (WC) empty, with a laser fiber inside, and with a basket inside. The highest pressure was achieved with the 60-ml syringe and the lowest with the continuous version of the SAPS (with continuous irrigation open), compared to the other irrigation devices (p < 0.0001). Irrespective of the irrigation system, there was a significant difference in pressure between the WC empty and the WC occupied by the laser fiber or the basket (p < 0.0001). The stratification between the groups showed that the strongest group could produce the highest pressure in the kidney model with all the irrigation devices in almost any situation. The exception was the T-Flow system, the only device for which no statistical differences were detected among the groups. The use of irrigation systems can often generate excessive pressure in an artificial kidney model, especially with an unoccupied WC of the ureteroscope. Depending on the strength of the force applied, very high pressure can be generated by most irrigation devices, irrespective of whether the scope is occupied or not.
The Research of Car-Following Model Based on Real-Time Maximum Deceleration
Longhai Yang
2015-01-01
This paper is concerned with the effect of real-time maximum deceleration in car-following. The real-time maximum deceleration is estimated from vehicle dynamics. It is known that the intelligent driver model (IDM) can control adaptive cruise control (ACC) well; the disadvantages of the IDM at high and constant speeds are analyzed here. Accordingly, a new car-following model applicable to ACC is established by modifying the desired minimum gap and the structure of the IDM. We simulated the new car-following model and the IDM under two different kinds of road conditions. In the first, the vehicles drive on a single road surface, taking dry asphalt as the example in this paper. In the second, vehicles drive onto a different surface; this paper analyzes the situation in which vehicles drive from a dry asphalt road onto an icy road. The simulations show that the new car-following model can not only ensure driving safety and comfort but also maintain steady driving of the vehicle with a smaller time headway than the IDM.
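For reference, the baseline IDM acceleration that the study modifies can be written directly from its standard definition; the parameter values below are typical defaults, not the paper's, and the proposed real-time maximum deceleration term is not reproduced here.

    import math

    def idm_acceleration(v, dv, s, v0=33.3, T=1.5, a=1.0, b=1.5, delta=4, s0=2.0):
        """v: speed, dv: approach rate to the leader, s: gap to the leader (SI units)."""
        s_star = s0 + v * T + v * dv / (2 * math.sqrt(a * b))   # desired minimum gap
        return a * (1 - (v / v0) ** delta - (s_star / s) ** 2)

    print(idm_acceleration(v=25.0, dv=2.0, s=40.0))

In the study's spirit, the fixed comfortable deceleration b would be replaced by a value estimated in real time from vehicle dynamics and road friction, so that the desired gap adapts when the vehicle moves, for example, from dry asphalt onto ice.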
Renal versus splenic maximum slope based perfusion CT modelling in patients with portal-hypertension
Fischer, Michael A. [University Hospital Zurich, Department of Diagnostic and Interventional Radiology, Zurich (Switzerland); Karolinska Institutet, Division of Medical Imaging and Technology, Department of Clinical Science, Intervention and Technology (CLINTEC), Stockholm (Sweden); Brehmer, Katharina [Karolinska University Hospital Huddinge, Department of Radiology, Stockholm (Sweden); Svensson, Anders; Aspelin, Peter; Brismar, Torkel B. [Karolinska Institutet, Division of Medical Imaging and Technology, Department of Clinical Science, Intervention and Technology (CLINTEC), Stockholm (Sweden); Karolinska University Hospital Huddinge, Department of Radiology, Stockholm (Sweden)
2016-11-15
To assess liver perfusion-CT (P-CT) parameters derived from peak-splenic (PSE) versus peak-renal enhancement (PRE) maximum slope-based modelling at different levels of portal-venous hypertension (PVH). Twenty-four patients (16 men; mean age 68 ± 10 years) who underwent dynamic P-CT for detection of hepatocellular carcinoma (HCC) were retrospectively divided into three groups: (1) without PVH (n = 8), (2) with PVH (n = 8), (3) with PVH and thrombosis (n = 8). Time to PSE and PRE, as well as arterial liver perfusion (ALP), portal-venous liver perfusion (PLP) and hepatic perfusion index (HPI) of the liver and HCC derived from PSE- versus PRE-based modelling, were compared between the groups. Time to PSE was significantly longer in PVH groups 2 and 3 (P = 0.02), whereas time to PRE was similar in groups 1, 2 and 3 (P > 0.05). In group 1, liver and HCC perfusion parameters were similar for PSE- and PRE-based modelling (all P > 0.05), whereas significant differences were seen for PLP and HPI (liver only) in group 2 and ALP in group 3 (all P < 0.05). PSE is delayed in patients with PVH, resulting in miscalculation of PSE-based P-CT parameters. Maximum slope-based P-CT might be improved by replacing PSE- with PRE-based modelling, whereas the difference between PSE and PRE might serve as a non-invasive biomarker of PVH.
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
Anonymous
2004-01-01
[1] McCullagh, P., Nelder, J. A., Generalized Linear Models, New York: Chapman and Hall, 1989. [2] Wedderburn, R. W. M., Quasi-likelihood functions, generalized linear models and the Gauss-Newton method, Biometrika, 1974, 61: 439-447. [3] Fahrmeir, L., Maximum likelihood estimation in misspecified generalized linear models, Statistics, 1990, 21: 487-502. [4] Fahrmeir, L., Kaufmann, H., Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models, Ann. Statist., 1985, 13: 342-368. [5] Nelder, J. A., Pregibon, D., An extended quasi-likelihood function, Biometrika, 1987, 74: 221-232. [6] Bennett, G., Probability inequalities for the sum of independent random variables, JASA, 1962, 57: 33-45. [7] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974. [8] Petrov, V. V., Sums of Independent Random Variables, Berlin, New York: Springer-Verlag, 1975.
Comparative study on maximum residue limit standards of pesticides in peanuts
丁小霞; 李培武; 周海燕; 李娟; 白艺珍
2011-01-01
Developing and implementing scientific and applicable maximum residue limit (MRL) standards for pesticides is an important means of protecting consumer health and regulating international trade in agricultural products. A comparative study of maximum residue limit standards of pesticides in peanuts was carried out among China, the Codex Alimentarius Commission (CAC), the United States, Japan, and the European Union. Corresponding suggestions are put forward after analyzing the problems in China's peanut pesticide MRL standards.
Gupta, Kinjal Dhar; Vilalta, Ricardo; Asadourian, Vicken; Macri, Lucas
2014-05-01
We describe an approach to automate the classification of Cepheid variable stars into two subtypes according to their pulsation mode. Automating such classification is relevant to obtaining a precise determination of distances to nearby galaxies, which in turn helps reduce the uncertainty in the current expansion rate of the universe. One main difficulty lies in the compatibility of models trained using different galaxy datasets; a model trained on one dataset may be ineffectual on a testing set from another. A solution to this difficulty is to adapt predictive models across domains; this is necessary when the training and testing sets do not follow the same distribution. The gist of our methodology is to train a predictive model on a nearby galaxy (e.g., the Large Magellanic Cloud), followed by a model-adaptation step to make the model operable on other nearby galaxies. We follow a parametric approach to density estimation by modeling the training data (anchor galaxy) using a mixture of linear models. We then use maximum likelihood to compute the right amount of variable displacement, until the testing data closely overlaps the training data. At that point, the model can be directly used on the testing data (target galaxy).
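A toy sketch of the displacement search under maximum likelihood; a Gaussian mixture stands in for the paper's mixture of linear models, and all data and settings are invented:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=(500, 1))  # stand-in for anchor-galaxy features
test = rng.normal(loc=0.8, scale=1.0, size=(300, 1))   # same shape, displaced distribution

gm = GaussianMixture(n_components=2, random_state=0).fit(train)

# Find the displacement d that maximizes the average log-likelihood of the
# shifted test data under the model fitted on the anchor galaxy.
res = minimize_scalar(lambda d: -gm.score(test - d))
print("estimated displacement:", res.x)
```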
Robust maximum likelihood estimation for stochastic state space model with observation outliers
AlMutawa, J.
2016-08-01
The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model, via the expectation-maximisation algorithm, to cope with observation outliers. Two types of outliers and their influence are studied in this paper: the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement, with weights estimated from the data; however, it is still sensitive to IO and to patches of AO outliers. The TMLE, on the other hand, reduces to a combinatorial optimisation problem and is hard to implement, but it is robust to both types of outliers considered here. To overcome the computational difficulty, we apply a parallel randomised algorithm that has a low computational cost. Monte Carlo simulation results show the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.
Bravo, J. L [Instituto de Geofisica, UNAM, Mexico, D.F. (Mexico); Nava, M. M [Instituto Mexicano del Petroleo, Mexico, D.F. (Mexico); Gay, C [Centro de Ciencias de la Atmosfera, UNAM, Mexico, D.F. (Mexico)
2001-07-01
We developed a procedure to forecast the daily maximum of surface ozone concentrations 2 to 3 hours in advance. It involves fitting Autoregressive Integrated Moving Average (ARIMA) models to daily ozone maximum concentrations at 10 atmospheric monitoring stations in Mexico City over a one-year period. A one-day forecast is made and then adjusted with the meteorological and solar radiation information acquired at least 3 hours before the expected occurrence of the maximum value. The relative importance for forecasting of the history of the process and of prior meteorological conditions is evaluated. Finally, the daily probability of exceeding a given ozone level is estimated.
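A schematic of the ARIMA step only (the same-day meteorological adjustment is not shown); the series and the (p, d, q) order are invented for illustration:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for a year of daily ozone maxima at one station (ppb-like units)
rng = np.random.default_rng(1)
ozone_max = 120 + 0.1 * np.cumsum(rng.normal(0, 3, size=365)) + rng.normal(0, 8, size=365)

model = ARIMA(ozone_max, order=(1, 1, 1))  # (p, d, q) chosen for illustration only
fit = model.fit()
print("one-day-ahead forecast:", fit.forecast(steps=1))
```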
A maximum likelihood estimation framework for delay logistic differential equation model
Mahmoud, Ahmed Adly; Dass, Sarat Chandra; Muthuvalu, Mohana S.
2016-11-01
This paper introduces maximum likelihood estimation for a delay differential equation model governed by an unknown delay and other parameters of interest, coupled with a numerical solver. As an example we consider the delayed logistic differential equation. A grid-based estimation framework is proposed. Our methodology correctly estimates the delay parameter as well as the initial value of the dynamical system from simulated data. The computations were carried out with the mathematical software MATLAB® 8.0 (R2012b).
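A minimal sketch of the grid-based idea, assuming i.i.d. Gaussian measurement noise so that minimizing squared error over the delay grid coincides with maximizing the likelihood; the Euler integrator and all parameter values are stand-ins for the paper's numerical solver:

```python
import numpy as np

def simulate_delayed_logistic(r, K, tau, y0, t_end, dt=0.01):
    """Euler integration of y'(t) = r*y(t)*(1 - y(t - tau)/K) with constant history y0."""
    n = int(t_end / dt)
    lag = int(round(tau / dt))
    y = np.empty(n + 1)
    y[0] = y0
    for i in range(n):
        y_lag = y[i - lag] if i >= lag else y0  # constant pre-history before t = 0
        y[i + 1] = y[i] + dt * r * y[i] * (1 - y_lag / K)
    return y

# Generate noisy "observations" from a known delay, then grid-search the delay
true = simulate_delayed_logistic(r=0.8, K=10.0, tau=1.5, y0=0.5, t_end=20)
data = true + np.random.default_rng(2).normal(0, 0.05, size=true.size)
grid = np.arange(0.5, 3.01, 0.1)
sse = [np.sum((simulate_delayed_logistic(0.8, 10.0, tau, 0.5, 20) - data) ** 2)
       for tau in grid]
print("estimated delay:", grid[int(np.argmin(sse))])
```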
Qibing GAO; Yaohua WU; Chunhua ZHU; Zhanfeng WANG
2008-01-01
In generalized linear models with fixed design, under the assumption λ_n → ∞ and other regularity conditions, the asymptotic normality of the maximum quasi-likelihood estimator β̂_n, which is the root of the quasi-likelihood equation with natural link function ∑_{i=1}^n X_i (y_i − μ(X_i^T β)) = 0, is obtained, where λ_n denotes the minimum eigenvalue of ∑_{i=1}^n X_i X_i^T, the X_i are bounded p × q regressors, and the y_i are q × 1 responses.
Crimi, Alessandro; Lillholm, Martin; Nielsen, Mads
2011-01-01
Maximum likelihood (ML) covariance estimation from limited data can be ill-posed and may lead to unreliable results. In this paper, we discuss regularization by prior knowledge using maximum a posteriori (MAP) estimates. We compare ML to MAP using a number of priors and to Tikhonov regularization. We evaluate the covariance estimates on both synthetic and real data, and we analyze the estimates' influence on a missing-data reconstruction task, where high-resolution vertebra and cartilage models are reconstructed from incomplete and lower-dimensional representations. Our results demonstrate that our methods outperform the traditional ML method and Tikhonov regularization.
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
Scheike, Thomas; Juul, Anders
2004-01-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM-aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin…
Meyer, Karin
2007-11-01
WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, is accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large-scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html.
Facility Location Using Maximum Covering Model: An Application In Retail Sector
Çiğdem Alabaş Uslu
2012-06-01
In this study, a store location problem has been addressed for the service sector, and a real problem for a leading firm in the modern retail sector in Turkey has been solved by mathematical programming. Since a good store location is hard for competitors to imitate and provides an important competitive edge, determining accurate store locations is a critical issue for management. The application problem solved in this study is to choose appropriate areas for opening stores of different capacities and quantities in Umraniye, Istanbul. The problem has been converted to a maximum set covering model by considering sales forecasts and several decision criteria foreseen by the management. Optimum solutions of the developed model for different scenarios have been obtained using a package program and presented to the firm's management.
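A toy brute-force version of the maximal covering idea (invented demand weights and coverage sets; the study's real instance is solved with a mathematical-programming package):

```python
from itertools import combinations

# Hypothetical data: demand weight per zone and, for each candidate site,
# the set of zones it covers within the service distance.
demand = {"z1": 120, "z2": 80, "z3": 60, "z4": 150, "z5": 40}
covers = {
    "s1": {"z1", "z2"},
    "s2": {"z2", "z3", "z5"},
    "s3": {"z4"},
    "s4": {"z1", "z4", "z5"},
}
p = 2  # number of stores to open

# Enumerate all p-subsets of candidate sites and keep the one covering most demand
best = max(
    combinations(covers, p),
    key=lambda sites: sum(demand[z] for z in set().union(*(covers[s] for s in sites))),
)
print("open sites:", best)
```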
Model unspecific search in CMS. Model unspecific limits
Knutzen, Simon; Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Lieb, Jonas; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)
2016-07-01
The standard model of particle physics is increasingly challenged by recent discoveries and also by long-known phenomena, providing strong motivation to develop extensions of the standard model. The number of theories describing possible extensions is large and steadily growing. This presentation introduces a new approach for verifying whether a given theory beyond the standard model is consistent with data collected by the CMS detector, without the need to perform a dedicated search. To achieve this, model unspecific limits on the number of additional events above the standard model expectation are calculated in every event class produced by the MUSiC algorithm. Furthermore, a tool is provided to translate these results into limits on the signal cross section of any theory. In addition to the general procedure, first results and examples are shown using the proton-proton collision data taken at a centre-of-mass energy of 8 TeV.
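To give a flavour of what a limit on "additional events above the expectation" means, here is a textbook single-bin Poisson upper-limit construction; this is an illustration only, not the MUSiC statistical procedure (which handles systematic uncertainties and many event classes), and the numbers are invented:

```python
from scipy.stats import poisson
from scipy.optimize import brentq

def upper_limit(n_obs, background, cl=0.95):
    """Largest signal s with P(X <= n_obs | mu = background + s) >= 1 - cl."""
    f = lambda s: poisson.cdf(n_obs, background + s) - (1 - cl)
    return brentq(f, 0.0, 10 * (n_obs + background) + 50)

# Example: 12 events observed where the standard model predicts 9.5
print("95% CL limit on extra events:", upper_limit(12, 9.5))
```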
A label field fusion Bayesian model and its penalized maximum Rand estimator for image segmentation.
Mignotte, Max
2010-06-01
This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, expressed as a Gibbs distribution or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation result to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for the definition of a penalized maximum probabilistic Rand estimator, with which the fusion of simple, quickly estimated segmentation results appears as an interesting alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied to the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures, and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.
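For concreteness, here is the pairwise-agreement measure underlying the probabilistic Rand criterion, computed for two toy label fields (a sketch of the measure only, not of the fusion model itself):

```python
import numpy as np
from scipy.special import comb

def rand_index(labels_a, labels_b):
    """Rand index: fraction of pixel pairs on which two label fields agree."""
    a = np.asarray(labels_a).ravel()
    b = np.asarray(labels_b).ravel()
    n = a.size
    # Contingency table between the two segmentations
    table = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(table, (a, b), 1)
    same_ab = comb(table, 2).sum()            # pairs together in both
    same_a = comb(table.sum(axis=1), 2).sum() # pairs together in A
    same_b = comb(table.sum(axis=0), 2).sum() # pairs together in B
    total = comb(n, 2)
    return float((total + 2 * same_ab - same_a - same_b) / total)

seg1 = np.array([0, 0, 1, 1, 2, 2])
seg2 = np.array([0, 0, 1, 2, 2, 2])
print(rand_index(seg1, seg2))
```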
DONG Sheng; CHI Kun; ZHANG Qiyi; ZHANG Xiangdong
2012-01-01
Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in the estuary area. The GMM combines Grey System and Markov theory into a higher-precision model. The GMM takes advantage of the Grey System to predict the trend values and uses Markov theory to forecast the fluctuation values, and thus gives forecast results incorporating both kinds of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM(1,1) model based on the data series; 2) estimate the trend values; 3) establish a Markov model based on the relative-error series; 4) modify the relative errors from step 2 and obtain the second-order estimates of the relative errors; 5) compare the results with measured data and estimate the accuracy. The historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China are utilized to calibrate and verify the proposed model according to the above steps. Every 25 years' data are regarded as a hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to the 10 other hydrological stations in the same estuary. The forecast results for all of the hydrological stations are good or acceptable. The feasibility and effectiveness of this new forecasting model are demonstrated in this paper.
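A compact sketch of steps 1)-2) above, the GM(1,1) trend fit (the Markov correction of steps 3)-5) is not shown); the sample series is invented:

```python
import numpy as np

def gm11_fit_predict(x, n_ahead=1):
    """Fit a GM(1,1) grey model to series x and extrapolate n_ahead steps."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                                 # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])                      # background (mean) values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]   # least squares for x(k) + a*z(k) = b
    k = np.arange(len(x) + n_ahead)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a  # solution of dx1/dt + a*x1 = b
    return np.diff(x1_hat, prepend=0.0)               # back to the original series

levels = [5.42, 5.51, 5.35, 5.60, 5.48, 5.55, 5.62]  # hypothetical annual maxima (m)
print("next-year trend value:", gm11_fit_predict(levels, n_ahead=1)[-1])
```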
Anderson, L. S.; Wickert, A. D.; Colgan, W. T.; Anderson, R. S.
2014-12-01
The Last Glacial Maximum (LGM) Yellowstone Ice Cap was the largest continuous ice body in the US Rocky Mountains. Terminal moraine ages derived from cosmogenic radionuclide dating (e.g., Licciardi and Pierce, 2008) constrain the timing of maximum Ice Cap extent. Importantly, the moraine ages vary by several thousand years around the Ice Cap; ages on the eastern outlet glaciers are significantly younger than their western counterparts. In order to interpret these observations within the context of LGM climate in North America, we perform two numerical glacier modeling experiments: 1) we model the initiation and growth of the Ice Cap to steady state; and 2) we estimate the range of LGM climate states which led to the formation of the Ice Cap. We use an efficient semi-implicit 2-D glacier model coupled to a fully implicit solution for flexural isostasy, allowing for transient links between climatic forcing, ice thickness, and earth surface deflection. Independent of parameter selection, the Ice Cap initiates in the Absaroka and Beartooth mountains and then advances across the Yellowstone plateau to the west. The Ice Cap advances to its maximum extent first at the older eastern moraines and last at the younger western and northwestern moraines. This suggests that the moraine ages may reflect the timescale required for the Ice Cap to advance across the high-elevation Yellowstone plateau rather than the timing of local LGM climate. With no change in annual precipitation from the present, a mean summer temperature drop of 8-9 °C is required to form the Ice Cap. Further parameter searches provide the full range of LGM paleoclimate states that led to the Yellowstone Ice Cap. Using our preferred parameter set, we find that the timescale for the growth of the complete Ice Cap is roughly 10,000 years. Isostatic subsidence helps explain the long timescale of Ice Cap growth. The Yellowstone Ice Cap caused a maximum surface deflection of 300 m (using a constant effective elastic…
Dust in High Latitudes in the Community Earth System Model since the Last Glacial Maximum
Albani, S.; Mahowald, N. M.
2015-12-01
Earth System Models are one of the main tools in modern climate research, and they provide the means to produce future climate projections. Modeling experiments of past climates are one of the pillars of the Coupled Modelling Inter-comparison Project (CMIP) / Paleoclimate Modelling Inter-comparison Project (PMIP) general strategy, aimed at understanding the climate sensitivity to varying forcings. Physical models are useful tools for studying dust transport patterns, as they allow representing the full dust cycle from sources to sinks with an internally consistent approach. Combining information from paleodust records and climate models in coherent studies can be a fruitful approach from different points of view. Based on a new quality-controlled, size- and temporally-resolved data compilation, we used the Community Earth System Model to estimate the mass balance of and variability in the global dust cycle since the Last Glacial Maximum and throughout the Holocene. We analyze the variability of the reconstructed global dust cycle at different climate equilibrium conditions since the LGM until the pre-industrial climate, and compare with paleodust records, focusing on the high latitudes, and discuss the uncertainties and the implications for dust and iron deposition to the oceans.
Exposure-Based Cat Modeling, Available data, Advantages, & Limitations
Michel, Gero; Hosoe, Taro; Schrah, Mike; Saito, Keiko
2010-05-01
This paper discusses the advantages and disadvantages of exposure data for cat modeling and considers concepts of scale as well as the completeness of data and data scoring using field/model examples. Catastrophe modeling based on exposure data has been considered the panacea for insurance-related cat modeling since the late 1980s. Reasons for this include: • The ability to extend risk modeling to consider data beyond historical losses, • Usability across many relevant scales, • Flexibility in addressing complex structures and policy conditions, and • Ability to assess dependence of risk results on exposure attributes and exposure modifiers, such as lines of business, occupancy types, and mitigation features, at any given scale. In order to calculate related risk, monetary exposure is correlated to vulnerabilities that have been calibrated with historical results, plausibility concepts, and/or physical modeling. While exposure-based modeling is widely adopted, we also need to be aware of its limitations, which include: • Boundaries in our understanding of the distribution of exposure, • Spatial interdependence of exposure patterns and the time-dependence of exposure, • Incomplete availability of loss information to calibrate relevant exposure attributes/structure with related vulnerabilities and losses, • The scale-dependence of vulnerability, • Potential for missing or incomplete communication of assumptions made during model calibration, • Inefficiencies in the aggregation or disaggregation of vulnerabilities, and • Factors which can influence losses other than exposure, vulnerability, and hazard. Although we might assume that the higher the resolution the better, regional model calibration is often limited to lower than street-level resolution, with higher resolution being achieved by disaggregating results using topographic/roughness features with often loosely constrained and/or varying effects on losses. This suggests that higher accuracy…
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversion…
A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution
Piotrowski, Edward W.; Sładkowski, Jan
2009-03-01
The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time needed to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit from a separate transaction, because gross profit/income is the adopted/recommended benchmark. To investigate activities that have different durations we define, following queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has the unique property of attaining its maximum at a fixed point regardless of the shape of the demand curves, for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes the profit of a trader who negotiates prices with the Rest of the World (a collective opponent) possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of a demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market except his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with classical Fibonacci works and with the search for the quickest algorithm for obtaining the extremum of a…
Zhang Zhang
2009-06-01
A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
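A small sketch of the model-selection and model-averaging machinery mentioned above, using AIC and Akaike weights (the log-likelihoods and parameter counts are invented; AICc and BIC are handled analogously):

```python
import numpy as np

def aic(log_lik, k):
    """Akaike Information Criterion: 2k - 2*log-likelihood."""
    return 2 * k - 2 * log_lik

def akaike_weights(aics):
    """Relative likelihoods of the candidate models, normalized to sum to 1."""
    d = np.asarray(aics) - np.min(aics)
    w = np.exp(-0.5 * d)
    return w / w.sum()

# Hypothetical (log-likelihood, n_parameters) for three clustering models
models = [(-120.3, 2), (-115.1, 4), (-114.8, 7)]
aics = [aic(ll, k) for ll, k in models]
print(akaike_weights(aics))  # weights usable for averaging per-site profiles
```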
Esra Saatci
2010-01-01
We propose a procedure to estimate the model parameters of the presented nonlinear Resistance-Capacitance (RC) model and the widely used linear Resistance-Inductance-Capacitance (RIC) model of the respiratory system by a Maximum Likelihood Estimator (MLE). The measurement noise is assumed to be Generalized Gaussian Distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by MLE and the kurtosis method, respectively. The performance of the MLE algorithm is also demonstrated against the Cramer-Rao Lower Bound (CRLB) with artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with Chronic Obstructive Pulmonary Disease (COPD) under noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model than by the nonlinear RC model. The patient group's respiratory signals, on the other hand, fit the nonlinear RC model with lower measurement noise variance, a better-converged measurement noise shape factor, and better model parameter tracks. It is also observed that for the patient group the shape factor of the measurement noise converges to values between 1 and 2, whereas for the control group shape factor values are estimated in the super-Gaussian region.
Frequency-Domain Maximum-Likelihood Estimation of High-Voltage Pulse Transformer Model Parameters
Aguglia, D
2014-01-01
This paper presents an offline frequency-domain nonlinear and stochastic identification method for equivalent model parameter estimation of high-voltage pulse transformers. Such transformers are widely used in the pulsed-power domain, and the difficulty in deriving optimal control strategies for pulsed-power converters is directly linked to the accuracy of the equivalent circuit parameters. These components require models which take into account electric field energies represented by stray capacitances in the equivalent circuit. These capacitive elements must be accurately identified, since they greatly influence overall converter performance. A nonlinear frequency-based identification method, based on maximum-likelihood estimation, is presented, and a sensitivity analysis of the best experimental test to be considered is carried out. The procedure takes into account magnetic saturation and skin effects occurring in the windings during the frequency tests. The presented method is validated by experim…
Modeling the Mass Action Dynamics of Metabolism with Fluctuation Theorems and Maximum Entropy
Cannon, William; Thomas, Dennis; Baxter, Douglas; Zucker, Jeremy; Goh, Garrett
The laws of thermodynamics dictate the behavior of biotic and abiotic systems. Simulation methods based on statistical thermodynamics can provide a fundamental understanding of how biological systems function and are coupled to their environment. While mass action kinetic simulations are based on solving ordinary differential equations using rate parameters, analogous thermodynamic simulations of mass action dynamics are based on modeling states using chemical potentials. The latter have the advantage that standard free energies of formation/reaction and metabolite levels are much easier to determine than rate parameters, allowing one to model across a large range of scales. Bridging theory and experiment, statistical thermodynamics simulations allow us to both predict activities of metabolites and enzymes and use experimental measurements of metabolites and proteins as input data. Even if metabolite levels are not available experimentally, it is shown that a maximum entropy assumption is quite reasonable and in many cases results in both the most energetically efficient process and the highest material flux.
Skill and reliability of climate model ensembles at the Last Glacial Maximum and mid-Holocene
J. C. Hargreaves
2013-03-01
Paleoclimate simulations provide us with an opportunity to critically confront and evaluate the performance of climate models in simulating the response of the climate system to changes in radiative forcing and other boundary conditions. Hargreaves et al. (2011) analysed the reliability of the Paleoclimate Modelling Intercomparison Project (PMIP2) model ensemble with respect to the MARGO sea surface temperature data synthesis (MARGO Project Members, 2009) for the Last Glacial Maximum (LGM, 21 ka BP). Here we extend that work to include a new comprehensive collection of land surface data (Bartlein et al., 2011), and introduce a novel analysis of the predictive skill of the models. We include output from the PMIP3 experiments, from the two models for which suitable data are currently available. We also perform the same analyses for the PMIP2 mid-Holocene (6 ka BP) ensembles and available proxy data sets. Our results are predominantly positive for the LGM, suggesting that as well as the global mean change, the models can reproduce the observed pattern of change on the broadest scales, such as the overall land–sea contrast and polar amplification, although the more detailed sub-continental scale patterns of change remain elusive. In contrast, our results for the mid-Holocene are substantially negative, with the models failing to reproduce the observed changes with any degree of skill. One cause of this problem could be that the globally- and annually-averaged forcing anomaly is very weak at the mid-Holocene, so the results are dominated by the more localised regional patterns in the parts of the globe for which data are available. The root cause of the model-data mismatch at these scales is unclear. If the proxy calibration is itself reliable, then representativity error in the data-model comparison and missing climate feedbacks in the models are other possible sources of error.
Held, Louis F.; Pritchard, Ernest I.
1946-01-01
An investigation was conducted to evaluate the possibilities of utilizing the high-performance characteristics of triptane and xylidines blended with 28-R fuel in order to increase fuel economy by the use of high compression ratios and maximum-economy spark setting. Full-scale single-cylinder knock tests were run with 20 deg B.T.C. and maximum-economy spark settings at compression ratios of 6.9, 8.0, and 10.0, and with two inlet-air temperatures. The fuels tested consisted of triptane, four triptane blends and one xylidines blend with 28-R, and 28-R fuel alone. Indicated specific fuel consumption at lean mixtures was decreased approximately 17 percent at a compression ratio of 10.0 and maximum-economy spark setting, as compared to that obtained with a compression ratio of 6.9 and normal spark setting. When compression ratio was increased from 6.9 to 10.0 at an inlet-air temperature of 150 F, normal spark setting, and a fuel-air ratio of 0.065, 55-percent triptane was required with 28-R fuel to maintain the knock-limited brake power level obtained with 28-R fuel at a compression ratio of 6.9. Brake specific fuel consumption was decreased 17.5 percent at a compression ratio of 10.0 relative to that obtained at a compression ratio of 6.9. Approximately similar results were noted at an inlet-air temperature of 250 F. For concentrations up through at least 20 percent, triptane can be more efficiently used at normal than at maximum-economy spark setting to maintain a constant knock-limited power output over the range of compression ratios tested.
Le Brocq, A. M.; Bentley, M. J.; Hubbard, A.; Fogwill, C. J.; Sugden, D. E.; Whitehouse, P. L.
2011-09-01
The Weddell Sea Embayment (WSE) sector of the Antarctic ice sheet has been suggested as a potential source for a period of rapid sea-level rise - Meltwater Pulse 1a, a 20 m rise in ~500 years. Previous modelling attempts have predicted an extensive grounding line advance in the WSE, to the continental shelf break, leading to a large equivalent sea-level contribution for the sector. A range of recent field evidence suggests that the ice sheet elevation change in the WSE at the Last Glacial Maximum (LGM) is less than previously thought. This paper describes and discusses an ice flow modelling derived reconstruction of the LGM ice sheet in the WSE, constrained by the recent field evidence. The ice flow model reconstructions suggest that an ice sheet consistent with the field evidence does not support grounding line advance to the continental shelf break. A range of modelled ice sheet surfaces are instead produced, with different grounding line locations derived from a novel grounding line advance scheme. The ice sheet reconstructions which best fit the field constraints lead to a range of equivalent eustatic sea-level estimates between approximately 1.4 and 3 m for this sector. This paper describes the modelling procedure in detail, considers the assumptions and limitations associated with the modelling approach, and how the uncertainty may impact on the eustatic sea-level equivalent results for the WSE.
A strong test of a maximum entropy model of trait-based community assembly.
Shipley, Bill; Laughlin, Daniel C; Sonnier, Grégory; Otfinowski, Rafael
2011-02-01
We evaluate the predictive power and generality of Shipley's maximum entropy (maxent) model of community assembly in the context of 96 quadrats over a 120-km² area having a large (79) species pool and strong gradients. Quadrats were sampled in the herbaceous understory of ponderosa pine forests in the Coconino National Forest, Arizona, U.S.A. The maxent model accurately predicted species relative abundances when observed community-weighted mean trait values were used as model constraints. Although only 53% of the variation in observed relative abundances was associated with a combination of 12 environmental variables, the maxent model based only on the environmental variables provided highly significant predictive ability, accounting for 72% of the variation that was possible given these environmental variables. This predictive ability largely surpassed that of nonmetric multidimensional scaling (NMDS) or detrended correspondence analysis (DCA) ordinations. Using cross-validation with 1000 independent runs, the median correlation between observed and predicted relative abundances was 0.560 (the 2.5% and 97.5% quantiles were 0.045 and 0.825). The qualitative predictions of the model were also noteworthy: dominant species were correctly identified in 53% of the quadrats, 83% of rare species were correctly predicted to have a relative abundance of < 0.05, and the median predicted relative abundance of species actually absent from a quadrat was 5 × 10⁻⁵.
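A minimal sketch of the maxent calculation the paper tests: maximize the Shannon entropy of relative abundances subject to community-weighted-mean (CWM) trait constraints. The trait values and CWM here are invented, and real analyses use several traits and dedicated solvers:

```python
import numpy as np
from scipy.optimize import minimize

traits = np.array([[2.1], [3.5], [1.2], [4.0]])  # one trait, four species (hypothetical)
cwm = np.array([2.8])                             # observed community-weighted mean

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))                  # minimize -H(p)

cons = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},      # abundances sum to 1
    {"type": "eq", "fun": lambda p: traits.T @ p - cwm}, # CWM trait constraint
]
p0 = np.full(len(traits), 1.0 / len(traits))
res = minimize(neg_entropy, p0, constraints=cons, bounds=[(0, 1)] * len(traits))
print("predicted relative abundances:", res.x.round(3))
```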
Howard, A. M.; Bernardes, S.; Nibbelink, N.; Biondi, L.; Presotto, A.; Fragaszy, D. M.; Madden, M.
2012-07-01
Movement patterns of bearded capuchin monkeys (Cebus (Sapajus) libidinosus) in northeastern Brazil are likely impacted by environmental features such as elevation, vegetation density, or vegetation type. Habitat preferences of these monkeys provide insights regarding the impact of environmental features on species ecology and the degree to which they incorporate these features in movement decisions. In order to evaluate environmental features influencing movement patterns and predict areas suitable for movement, we employed a maximum entropy modelling approach, using observation points along capuchin monkey daily routes as species presence points. We combined these presence points with spatial data on important environmental features from remotely sensed data on land cover and topography. A spectral mixing analysis procedure was used to generate fraction images representing the green vegetation, shade and soil of the study area: a Landsat Thematic Mapper scene of the study area was geometrically and atmospherically corrected, a Minimum Noise Fraction (MNF) procedure was applied, and a linear spectral unmixing approach was used to generate the fraction images. These fraction images and elevation were the environmental layer inputs for our logistic MaxEnt model of capuchin movement. Our model's predictive power (test AUC) was 0.775. Areas of high elevation (>450 m) showed low probabilities of presence, and percent green vegetation was the greatest overall contributor to model AUC. This work has implications for predicting daily movement patterns of capuchins at our field site, as suitability values from our model may relate to habitat preference and ease of movement.
Dörr, Aaron; Mehdizadeh, Amirfarhang
2012-01-01
Based on the notion of a construction process consisting of the stepwise addition of particles to the pure fluid, a discrete model for the apparent viscosity as well as for the maximum packing fraction of polydisperse suspensions of spherical, non-colloidal particles is derived. The model connects the approaches of Bruggeman and Farris and is valid for large size ratios of consecutive particle classes during the construction process. Furthermore, a new general form of the well-known Krieger equation allowing for the choice of a second-order Taylor coefficient for the volume fraction is proposed and then applied as a monodisperse reference equation in the course of polydisperse modeling. By applying the polydisperse viscosity model to two different particle size distributions (Rosin-Rammler and uniform distribution), the influence of polydispersity on the apparent viscosity is examined. The extension of the model to the case of small size ratios as well as to the inclusion of shear rate effects is left for future work.
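For context, the classical Krieger-Dougherty equation and a stepwise Farris-type construction that the paper's discrete model generalizes (a sketch with assumed parameter values, not the paper's own model):

```python
def krieger_relative_viscosity(phi, phi_max=0.64, intrinsic=2.5):
    """Krieger-Dougherty relative viscosity of a monodisperse suspension.

    phi       : particle volume fraction
    phi_max   : maximum packing fraction (0.64 ~ random close packing, assumed)
    intrinsic : intrinsic viscosity, 2.5 for rigid spheres (Einstein limit)
    """
    return (1.0 - phi / phi_max) ** (-intrinsic * phi_max)

def polydisperse_viscosity(phis, phi_max=0.64):
    """Stepwise construction: each particle class, added in order of increasing
    size with large size ratios between classes, sees the previous mixture as
    its effective suspending fluid, so relative viscosities multiply (Farris)."""
    eta = 1.0
    for phi in phis:
        eta *= krieger_relative_viscosity(phi, phi_max)
    return eta

print(polydisperse_viscosity([0.2, 0.2, 0.1]))
```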
White, Ethan P; Thibault, Katherine M; Xiao, Xiao
2012-08-01
The species abundance distribution (SAD) is one of the most studied patterns in ecology due to its potential insights into commonness and rarity, community assembly, and patterns of biodiversity. It is well established that communities are composed of a few common and many rare species, and numerous theoretical models have been proposed to explain this pattern. However, no attempt has been made to determine how well these theoretical characterizations capture observed taxonomic and global-scale spatial variation in the general form of the distribution. Here, using data of a scope unprecedented in community ecology, we show that a simple maximum entropy model produces a truncated log-series distribution that can predict between 83% and 93% of the observed variation in the rank abundance of species across 15 848 globally distributed communities including birds, mammals, plants, and butterflies. This model requires knowledge of only the species richness and total abundance of the community to predict the full abundance distribution, which suggests that these factors are sufficient to understand the distribution for most purposes. Since geographic patterns in richness and abundance can often be successfully modeled, this approach should allow the distribution of commonness and rarity to be characterized, even in locations where empirical data are unavailable.
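A sketch of how a truncated log-series prediction can be computed from only S (richness) and N (total abundance), using a simple quantile read-off for rank abundances; this is an illustration under stated simplifications, not the authors' exact procedure:

```python
import numpy as np
from scipy.optimize import brentq

def logseries_sad(S, N):
    """Rank-abundance prediction of a truncated log-series SAD given S and N."""
    n = np.arange(1, N + 1)

    def mean_gap(log_x):
        # Mean abundance under P(n) ~ x^n / n, truncated at N, minus the target N/S
        x = np.exp(log_x)
        w = x ** n / n
        return (n * w).sum() / w.sum() - N / S

    x = np.exp(brentq(mean_gap, -5, -1e-9))  # solve the mean-abundance constraint
    p = x ** n / n
    p /= p.sum()
    # Read expected rank abundances off the cumulative distribution (quantiles)
    cdf = np.cumsum(p)
    ranks = (np.arange(S) + 0.5) / S
    return n[np.searchsorted(cdf, ranks)][::-1]  # most to least abundant

print(logseries_sad(S=10, N=1000))
```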
Huang, Shih-Yu; Deng, Yi; Wang, Jingfeng
2016-10-01
The maximum-entropy-production (MEP) model of surface heat fluxes, based on contemporary non-equilibrium thermodynamics, information theory, and atmospheric turbulence theory, is used to re-estimate the global surface heat fluxes. The surface fluxes predicted by the MEP model automatically balance the surface energy budgets at all time and space scales without the explicit use of near-surface temperature and moisture gradients, wind speed, or surface roughness data. The new MEP-based global annual mean fluxes over the land surface, using input data of surface radiation and temperature from the National Aeronautics and Space Administration Clouds and the Earth's Radiant Energy System (NASA CERES) supplemented by surface specific humidity data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA), agree closely with previous estimates. The new estimate of ocean evaporation, not using the MERRA reanalysis data as model inputs, is lower than previous estimates, while the new estimate of ocean sensible heat flux is higher than previously reported. The MEP model also produces the first global map of ocean surface heat flux that is not available from existing global reanalysis products.
Che Wan Jasimah bt Wan Mohamed Radzi
2016-11-01
Several factors may influence children's lifestyle. The main purpose of this study is to introduce a children's lifestyle index framework and model it based on structural equation modeling (SEM) with maximum likelihood (ML) and Bayesian predictors. This framework includes parental socioeconomic status, household food security, parental lifestyle, and children's lifestyle. The sample for this study involves 452 volunteer Chinese families with children 7-12 years old. The experimental results are compared in terms of root mean square error, coefficient of determination, mean absolute error, and mean absolute percentage error metrics. An analysis of the proposed causal model suggests there are multiple significant interconnections among the variables of interest. According to both the Bayesian and ML techniques, the proposed framework illustrates that parental socioeconomic status and parental lifestyle strongly impact children's lifestyle. The impact of household food security on children's lifestyle is rejected. However, there is a strong relationship between household food security and both parental socioeconomic status and parental lifestyle. Moreover, the outputs illustrate that the Bayesian prediction model has a good fit with the data, unlike the ML approach. The reasons for this discrepancy between ML and Bayesian prediction are debated, and potential advantages and caveats of applying the Bayesian approach in future studies are discussed.
Iden, Sascha; Peters, Andre; Durner, Wolfgang
2017-04-01
Soil hydraulic properties are required to solve the Richards equation, the most widely applied model for variably-saturated flow. While the experimental determination of the water retention curve does not pose significant challenges, the measurement of unsaturated hydraulic conductivity is time-consuming and costly. The prediction of the unsaturated hydraulic conductivity curve from the soil water retention curve by pore-bundle models is a cost-effective and widely applied technique. A well-known problem of conductivity prediction for retention functions with wide pore-size distributions is the sharp drop in conductivity close to water saturation. This problematic behavior is well known for the van Genuchten model if the shape parameter n assumes values smaller than about 1.3. So far, the workaround for this artefact has been to introduce an explicit air-entry value into the capillary saturation function. However, this correction leads to a retention function which is not continuously differentiable and thus a discontinuous water capacity function. We present an improved parametrization of the hydraulic properties which uses the original capillary saturation function and introduces a maximum pore radius only in the pore-bundle model. Closed-form equations for the hydraulic conductivity function were derived for the unimodal and multimodal retention functions of van Genuchten and have been tested by sensitivity analysis and applied in curve fitting and inverse modeling of multistep outflow experiments. The resulting hydraulic conductivity function is smooth, increases monotonically close to saturation, and eliminates the sharp drop in conductivity close to saturation. Furthermore, the new model retains the smoothness and continuous differentiability of the water retention curve. We conclude that the resulting soil hydraulic functions are physically more reasonable than the ones predicted by previous approaches, and are thus ideally suited for numerical simulations.
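For reference, the classical van Genuchten retention function and Mualem conductivity prediction whose near-saturation behaviour motivates the paper (the improved parametrization with a maximum pore radius is not reproduced here); all parameter values are illustrative:

```python
import numpy as np

def van_genuchten_se(h, alpha, n):
    """Effective saturation of the van Genuchten retention curve (h: suction, positive)."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * h) ** n) ** (-m)

def mualem_k(h, alpha, n, ks=1.0, tau=0.5):
    """Classical Mualem pore-bundle prediction of conductivity from the retention curve."""
    m = 1.0 - 1.0 / n
    se = van_genuchten_se(h, alpha, n)
    return ks * se ** tau * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

h = np.logspace(-2, 4, 7)               # suction heads (cm), illustrative
print(mualem_k(h, alpha=0.05, n=1.2))   # small n: sharp conductivity drop near saturation
```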
Assimilating host model information into a limited area model
Nils Gustafsson
2012-01-01
We propose to add an extra source of information to the data assimilation of the regional HIgh Resolution Limited Area Model (HIRLAM), constraining larger scales to the host model providing the lateral boundary conditions. An extra term, Jk, measuring the distance to the large-scale vorticity of the host model, is added to the cost function of the variational data assimilation. Vorticity is chosen because it is a good representative of the large-scale flow and because vorticity is a basic control variable of the HIRLAM variational data assimilation. Furthermore, by choosing only vorticity, the remaining model variables (divergence, temperature, surface pressure and specific humidity) will be allowed to adapt to the modified vorticity field in accordance with the internal balance constraints of the regional model. The error characteristics of the Jk term are described by the horizontal spectral densities and the vertical eigenmodes (eigenvectors and eigenvalues) of the host model vorticity forecast error fields, expressed in the regional model geometry. The vorticity field provided by the European Centre for Medium-range Weather Forecasts (ECMWF) operational model was assimilated into the HIRLAM model during an experiment period of 33 days in winter, with positive impact on forecast verification statistics for upper-air variables and mean sea level pressure.
1979-01-01
A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log-likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest-descent method when it is far from the minimum and like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time-history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.
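A toy output-error example of the same idea using SciPy's Levenberg-Marquardt solver (a one-parameter stand-in, not the 21-state rotorcraft model; under i.i.d. Gaussian noise, minimizing squared residuals coincides with minimizing the negative log-likelihood):

```python
import numpy as np
from scipy.optimize import least_squares

# Identify one "stability derivative" a in x' = a*x from noisy response data
t = np.linspace(0, 5, 100)
a_true = -0.7
rng = np.random.default_rng(3)
x_meas = np.exp(a_true * t) + rng.normal(0, 0.01, t.size)

def residuals(theta):
    # Model response minus measurements (output error)
    return np.exp(theta[0] * t) - x_meas

sol = least_squares(residuals, x0=[-0.2], method="lm")  # Levenberg-Marquardt
print("identified coefficient:", sol.x[0])
```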
Quasi-Maximum Likelihood Estimators in Generalized Linear Models with Autoregressive Processes
Hong Chang HU; Lei SONG
2014-01-01
The paper studies a generalized linear model (GLM) y_t = h(x_t^T β) + ε_t, t = 1, 2, ..., n, where ε_1 = η_1, ε_t = ρ ε_{t−1} + η_t for t = 2, 3, ..., n, h is a continuously differentiable function, and the η_t are independent and identically distributed random errors with zero mean and finite variance σ². Firstly, the quasi-maximum likelihood (QML) estimators of β, ρ and σ² are given. Secondly, under mild conditions, the asymptotic properties (including the existence, weak consistency and asymptotic distribution) of the QML estimators are investigated. Lastly, the validity of the method is illustrated by a simulation example.
Mansoureh Haj Mohammad Hosseini
2011-04-01
In this paper, a bi-objective mathematical model for the emergency services location-allocation problem on a tree network with a maximum-distance constraint is presented. The first objective function, called the centdian, is a weighted mean of a minisum and a minimax criterion, and the second is a maximal covering criterion. To solve the bi-objective optimization problem, it is split into two subproblems: the selection of the best set of locations, and a demand assignment problem to evaluate each selection of locations. We propose a heuristic algorithm to characterize the efficient location point set on the network. Finally, some numerical examples are presented to illustrate the effectiveness of the proposed algorithms.
Maximum Spanning Tree Model on Personalized Web Based Collaborative Learning in Web 3.0
Padma, S
2012-01-01
Web 3.0 is an evolving extension of the current web environment. Information in Web 3.0 can be collaborated upon and communicated when queried. The Web 3.0 architecture provides an excellent learning experience to students: Web 3.0 is 3D, media-centric and semantic. Web-based learning has been on the rise in recent years. Web 3.0 has intelligent agents as tutors to collect and disseminate answers to queries by students, and the learner's interactive queries determine the customization of the intelligent tutor. This paper analyses the attributes of the Web 3.0 learning environment, and a maximum spanning tree model for personalized web-based collaborative learning is designed.
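A minimal sketch of computing a maximum spanning tree, here by running Kruskal's algorithm on edges sorted by decreasing weight (the graph and its learning-resource interpretation are invented):

```python
def maximum_spanning_tree(n_nodes, edges):
    """edges: list of (weight, u, v); returns the edge set of a maximum spanning tree."""
    parent = list(range(n_nodes))  # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    tree = []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                             # keep edge only if it joins two components
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Nodes as learning resources, weights as (hypothetical) relevance scores
edges = [(0.9, 0, 1), (0.4, 0, 2), (0.7, 1, 2), (0.8, 2, 3)]
print(maximum_spanning_tree(4, edges))
```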
Ashford, Oliver S; Davies, Andrew J.; Jones, Daniel O. B.
2014-01-01
Xenophyophores are a group of exclusively deep-sea agglutinating rhizarian protozoans, at least some of which are foraminifera. They are an important constituent of the deep-sea megafauna that are sometimes found in sufficient abundance to act as a significant source of habitat structure for meiofaunal and macrofaunal organisms. This study utilised maximum entropy modelling (Maxent) and a high-resolution environmental database to explore the environmental factors controlling the presence of xenophyophores…
Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity.
Asti, Lorenzo; Uguzzoni, Guido; Marcatili, Paolo; Pagnani, Andrea
2016-04-01
The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high-frequency mutation rate in the genome region that codes for the antibody active site. Eventually, cells that produce antibodies with higher affinity for their cognate antigen are selected and clonally expanded. Here, we propose a new statistical approach based on maximum entropy modeling in which a scoring function related to the binding affinity of antibodies against a specific antigen is inferred from a sample of sequences of the immune repertoire of an individual. We use our inference strategy to infer a statistical model on a data set obtained by sequencing a fairly large portion of the immune repertoire of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value 10⁻⁶), outperforming other sequence- and structure-based models.
Nazeri, Mona; Jusoff, Kamaruzaman; Madani, Nima; Mahmud, Ahmad Rodzi; Bahman, Abdul Rani; Kumar, Lalit
2012-01-01
One of the available tools for mapping geographical distributions and potential suitable habitats is species distribution modelling. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan sun bear (Helarctos malayanus) in one of its main remaining habitats in Peninsular Malaysia. MaxEnt results showed that even though Malayan sun bear habitat is tied to tropical evergreen forests, the species lives within a marginal envelope of bio-climatic variables. On the other hand, current protected area networks within Peninsular Malaysia do not cover most of the sun bear's potential suitable habitats. Assuming that the predicted suitability map covers the sun bear's actual distribution, future climate change, forest degradation and illegal hunting could severely affect the sun bear population.
Vidal-García, Francisca; Serio-Silva, Juan Carlos
2011-07-01
We developed potential distribution models for the tropical rain forest primates of southern Mexico: the black howler monkey (Alouatta pigra), the mantled howler monkey (Alouatta palliata), and the spider monkey (Ateles geoffroyi). To do so, we applied the maximum entropy algorithm from the ecological niche modeling program MaxEnt. For each species, we used occurrence records from scientific collections and from published and unpublished sources, together with the 19 environmental coverage variables related to precipitation and temperature from WorldClim. The predicted distribution of A. pigra was strongly associated with the mean temperature of the warmest quarter (23.6%), whereas the potential distributions of A. palliata and A. geoffroyi were strongly associated with precipitation during the coldest quarter (52.2% and 34.3%, respectively). The potential distribution of A. geoffroyi is broader than that of the Alouatta spp. The areas with the greatest probability of presence of A. pigra and A. palliata are strongly associated with riparian vegetation, whereas the presence of A. geoffroyi is more strongly associated with the presence of rain forest. Our most significant contribution is the identification of areas with a high probability of the presence of these primate species, information that can be applied to planning future studies and establishing criteria for the creation of areas for primate conservation in Mexico.
Computational design of hepatitis C vaccines using maximum entropy models and population dynamics
Hart, Gregory; Ferguson, Andrew
Hepatitis C virus (HCV) afflicts 170 million people and kills 350,000 annually. Vaccination offers the most realistic and cost-effective hope of controlling this epidemic. Despite 20 years of research, no vaccine is available. A major obstacle is the virus' extreme genetic variability and rapid mutational escape from immune pressure. Improvements in the vaccine design process are urgently needed. Coupling data mining with spin glass models and maximum entropy inference, we have developed a computational approach to translate sequence databases into empirical fitness landscapes. These landscapes explicitly connect viral genotype to phenotypic fitness and reveal vulnerable targets that can be exploited to rationally design immunogens. Viewing these landscapes as the mutational "playing field" over which the virus is constrained to evolve, we have integrated them with agent-based models of the viral mutational and host immune response dynamics, establishing a data-driven immune simulator of HCV infection. We have employed this simulator to perform in silico screening of HCV immunogens. By systematically identifying a small number of promising vaccine candidates, these models can accelerate the search for a vaccine by massively reducing the experimental search space.
Melnikov, A. A.; Kostishin, V. G.; Alenkov, V. V.
2016-09-01
In real operating conditions, a thermoelectric cooling device works in the presence of thermal resistances between the thermoelectric material and the heat medium or the cooled object. These resistances limit the performance of the device and should be considered when modeling. Here we propose a dimensionless steady-state mathematical model that takes them into account. Analytical equations are given for dimensionless cooling capacity, voltage, and coefficient of performance (COP) as functions of dimensionless current. For improved accuracy, a device can be modeled using numerical or combined analytical-numerical methods. The modeling results are in acceptable accordance with experimental results. The case of zero temperature difference between the hot and cold heat mediums, at which the maximum cooling capacity mode appears, is considered in detail. Optimal device parameters for maximal cooling capacity, such as the fraction of thermal conductance on the cold side y and the fraction of current relative to the maximum j', are estimated to lie in the ranges 0.38-0.44 and 0.48-0.95, respectively, for dimensionless conductance K' = 5-100. A method for determining the thermal resistances of a thermoelectric cooling system is also proposed.
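The authors' dimensionless formulation is not reproduced here, but the underlying physics can be sketched with the standard single-stage thermoelectric relations, sweeping current to locate the maximum cooling capacity; all material and operating parameters below are illustrative.

```python
# Textbook single-stage thermoelectric relations (a simplified stand-in for
# the authors' dimensionless model): sweep current to locate maximum cooling
# capacity and the corresponding COP. All parameter values are illustrative.
import numpy as np

alpha = 0.05   # Seebeck coefficient, V/K (illustrative)
R = 2.0        # electrical resistance, ohm
K = 0.5        # thermal conductance, W/K
Tc, Th = 285.0, 300.0

I = np.linspace(0.0, 10.0, 1000)
Qc = alpha * I * Tc - 0.5 * I**2 * R - K * (Th - Tc)   # cooling capacity, W
P = alpha * I * (Th - Tc) + I**2 * R                   # electrical power, W
cop = np.where(P > 0, Qc / P, np.nan)

i_best = np.argmax(Qc)
print(f"max cooling capacity {Qc[i_best]:.2f} W at I = {I[i_best]:.2f} A, "
      f"COP there = {cop[i_best]:.2f}")
```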
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Climate change uncertainty for daily minimum and maximum temperatures: a model inter-comparison
Lobell, D; Bonfils, C; Duffy, P
2006-11-09
Several impacts of climate change may depend more on changes in mean daily minimum ($T_{min}$) or maximum ($T_{max}$) temperatures than daily averages. To evaluate uncertainties in these variables, we compared projections of $T_{min}$ and $T_{max}$ changes by 2046-2065 for 12 climate models under an A2 emission scenario. Average modeled changes in $T_{max}$ were slightly lower in most locations than $T_{min}$, consistent with historical trends exhibiting a reduction in diurnal temperature ranges. However, while average changes in $T_{min}$ and $T_{max}$ were similar, the inter-model variability of $T_{min}$ and $T_{max}$ projections exhibited substantial differences. For example, inter-model standard deviations of June-August $T_{max}$ changes were more than 50% greater than for $T_{min}$ throughout much of North America, Europe, and Asia. Model differences in cloud changes, which exert relatively greater influence on $T_{max}$ during summer and $T_{min}$ during winter, were identified as the main source of uncertainty disparities. These results highlight the importance of considering separately projections for $T_{max}$ and $T_{min}$ when assessing climate change impacts, even in cases where average projected changes are similar. In addition, impacts that are most sensitive to summertime $T_{min}$ or wintertime $T_{max}$ may be more predictable than suggested by analyses using only projections of daily average temperatures.
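The uncertainty comparison at its core is a computation of across-model standard deviations of projected changes, which can be sketched directly; the ensemble arrays below are synthetic stand-ins, not the 12-model data.

```python
# Minimal sketch of the uncertainty comparison: given projected changes from
# an ensemble of models at a set of grid cells, compare the across-model
# standard deviation of Tmax changes with that of Tmin changes.
import numpy as np

rng = np.random.default_rng(1)
n_models, n_cells = 12, 1000

dTmin = rng.normal(2.5, 0.4, size=(n_models, n_cells))  # K, synthetic
dTmax = rng.normal(2.4, 0.7, size=(n_models, n_cells))  # K, synthetic

sd_min = dTmin.std(axis=0, ddof=1)   # inter-model spread per cell
sd_max = dTmax.std(axis=0, ddof=1)

ratio = sd_max / sd_min
print(f"median inter-model SD ratio (Tmax/Tmin): {np.median(ratio):.2f}")
```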
Extremely Correlated Limit of the Hubbard Model
Perepelitsky, Edward
2014-01-01
In this work, we describe the simplifications to the Extremely Correlated Fermi Liquid (ECFL) theory which occur in the limit of infinite spatial dimensions. In particular, we show that the single-particle electron Green's function G(k) can be written in terms of two momentum-independent self-energies. Moreover, we elucidate the nature of the ECFL $\lambda$ expansion in the limit of infinite dimensions and carry out this expansion explicitly to $O(\lambda^2)$. Additionally, w...
Maximum Potential Score (MPS): An operating model for a successful customer-focused strategy.
Cabello González, José Manuel
2015-11-01
One of marketers' chief objectives is to achieve customer loyalty, which is a key factor for profitable growth. Therefore, they need to develop a strategy that attracts and retains customers, giving them adequate motives, both tangible (prices and promotions) and intangible (personalized service and treatment), to satisfy a customer and make him loyal to the company. Finding a way to accurately measure satisfaction and customer loyalty is very important. Among typical Relationship Marketing measures, listening to customers can help to achieve a sustainable competitive advantage. Customer satisfaction surveys are essential tools for listening to customers, and short questionnaires have gained considerable acceptance among marketers as a means of measuring customer satisfaction. Our research provides an indication of the benefits of a short questionnaire (one to three questions). We find that the number of questions in a survey is significantly related to participation in the survey (Net Promoter Score, or NPS), and we show that a three-question survey (Maximum Potential Score, or MPS) is likely to attract more participants than a traditional survey. Our main goal is to analyse one method as a potential predictor of customer loyalty. Using surveys, we attempt to empirically establish the causal factors determining customer satisfaction. This paper describes a maximum potential operating model that captures, with a three-question survey, elements important for a successful customer-focused strategy. MPS may give lower participation rates than NPS, but it yields important information that helps to convert unhappy or merely satisfied customers into loyal customers.
ZHANG SanGuo; LIAO Yuan
2008-01-01
In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation $\sum_{i=1}^n X_i(y_i - \mu(X_i'\beta)) = 0$ for the univariate generalized linear model $E(y|X) = \mu(X'\beta)$. Given uncorrelated residuals $\{e_i = y_i - \mu(X_i'\beta_0), 1 \le i \le n\}$ and other conditions, we prove that $\hat\beta_n - \beta_0 = O_p(\lambda_n^{-1/2})$ holds, where $\hat\beta_n$ is a root of the above equation, $\beta_0$ is the true value of the parameter $\beta$, and $\lambda_n$ denotes the smallest eigenvalue of the matrix $S_n = \sum_{i=1}^n X_i X_i'$. We also show that the convergence rate above is sharp, provided an independent non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of the QMLE is $S_n^{-1} \to 0$ as the sample size $n \to \infty$.
Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks
Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.
2011-01-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
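The iterative selection of the most environmentally dissimilar site can be approximated by greedy farthest-point selection in standardized environmental space, a simplification of the Maxent-based procedure; the candidate sites below are synthetic.

```python
# Greedy farthest-point sketch of the iterative step (a simplification of the
# paper's Maxent-based selection): repeatedly pick the candidate site most
# dissimilar, in standardized environmental space, from the sites chosen so far.
import numpy as np

rng = np.random.default_rng(2)
# Synthetic candidate sites x 4 environmental factors
# (temperature, precipitation, elevation, vegetation class code).
env = rng.normal(size=(500, 4))
env = (env - env.mean(axis=0)) / env.std(axis=0)   # standardize factors

chosen = [0]                                       # seed with an arbitrary site
for _ in range(7):                                 # select eight sites in total
    d = np.linalg.norm(env[:, None, :] - env[chosen][None, :, :], axis=2)
    nearest = d.min(axis=1)                        # distance to closest chosen site
    chosen.append(int(nearest.argmax()))           # most dissimilar candidate

print("selected site indices:", chosen)
```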
Haseeb A. Khan
2008-01-01
This investigation aimed to compare the inference of antelope phylogenies resulting from the 16S rRNA, cytochrome-b (cyt-b) and d-loop segments of mitochondrial DNA using three different computational models: Bayesian (BA), maximum parsimony (MP) and the unweighted pair group method with arithmetic mean (UPGMA). The respective nucleotide sequences of three Oryx species (Oryx leucoryx, Oryx dammah and Oryx gazella) and an out-group (Addax nasomaculatus) were aligned and subjected to the BA, MP and UPGMA models to compare the topologies of the respective phylogenetic trees. The 16S rRNA region possessed the highest frequency of conserved sequences (97.65%), followed by cyt-b (94.22%) and d-loop (87.29%). There were few transitions (2.35%) and no transversions in 16S rRNA, as compared to cyt-b (5.61% transitions and 0.17% transversions) and d-loop (11.57% transitions and 1.14% transversions), when comparing the four taxa. All three mitochondrial segments clearly differentiated the genus Addax from Oryx using the BA or UPGMA models. The topologies of all the gamma-corrected Bayesian trees were identical irrespective of the marker type. The UPGMA trees resulting from 16S rRNA and d-loop sequences were also identical to the Bayesian trees (Oryx dammah grouped with Oryx leucoryx), except that the UPGMA tree based on cyt-b showed a slightly different phylogeny (Oryx dammah grouped with Oryx gazella) with low bootstrap support. However, the MP model failed to differentiate the genus Addax from Oryx. These findings demonstrate the efficiency and robustness of the BA and UPGMA methods for phylogenetic analysis of antelopes using mitochondrial markers.
Quantifying uncertainty in modelled estimates of annual maximum precipitation: confidence intervals
Panagoulia, Dionysia; Economou, Polychronis; Caroni, Chrys
2016-04-01
The possible nonstationarity of the GEV distribution fitted to annual maximum precipitation under climate change is a topic of active investigation. Of particular significance is how best to construct confidence intervals for items of interest arising from stationary/nonstationary GEV models. We are usually not only interested in parameter estimates but also in quantiles of the GEV distribution, and it might be expected that estimates of extreme upper quantiles are far from being normally distributed even for moderate sample sizes. Therefore, we consider constructing confidence intervals for all quantities of interest by bootstrap methods based on resampling techniques. To this end, we examined three bootstrapping approaches to constructing confidence intervals for parameters and quantiles: random-t resampling, fixed-t resampling and the parametric bootstrap. Each approach was used in combination with the normal approximation method, percentile method, basic bootstrap method and bias-corrected method for constructing confidence intervals. We found that all the confidence intervals for the stationary model parameters have similar coverage and mean length. Confidence intervals for the more extreme quantiles tend to become very wide for all bootstrap methods. For nonstationary GEV models with linear time dependence of location or log-linear time dependence of scale, confidence interval coverage probabilities are reasonably accurate for the parameters. For the extreme percentiles, the bias-corrected and accelerated method is best overall, and the fixed-t method also has good average coverage probabilities. Reference: Panagoulia D., Economou P. and Caroni C., Stationary and non-stationary GEV modeling of extreme precipitation over a mountainous area under climate change, Environmetrics, 25 (1), 29-43, 2014.
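Of the approaches compared, the parametric bootstrap with the percentile method is the simplest to sketch for a stationary GEV; the series below is synthetic, and note that scipy's sign convention for the shape parameter differs from the usual GEV convention.

```python
# Sketch of one approach discussed above: a parametric bootstrap percentile
# confidence interval for an upper quantile of a stationary GEV fitted to
# annual maximum precipitation (synthetic data, illustrative parameters).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
data = genextreme.rvs(c=-0.1, loc=50, scale=12, size=60, random_state=rng)

c, loc, scale = genextreme.fit(data)
q100 = genextreme.ppf(0.99, c, loc, scale)         # 100-year event estimate

boot = []
for _ in range(500):
    resample = genextreme.rvs(c, loc, scale, size=len(data), random_state=rng)
    cb, lb, sb = genextreme.fit(resample)
    boot.append(genextreme.ppf(0.99, cb, lb, sb))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"100-yr estimate {q100:.1f}, 95% percentile CI ({lo:.1f}, {hi:.1f})")
```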
Advantages and Limitations of Different SDLC Models
Radhika D Amlani
2012-11-01
Software engineering is a constantly growing area, and a very interesting subject to learn, since the whole software development industry is based on it. Nowadays, there are many software development life cycle (SDLC) models available, and practitioners in the software industry select among them according to their project requirements. It is therefore necessary to understand the requirements for which each SDLC model is suited. This paper discusses the pros and cons of several models, so that users can find the model best suited to their needs.
Analytical examples, measurement models, and classical limit of quantum backflow
Yearsley, J. M.; Halliwell, J. J.; Hartshorn, R.; Whitby, A.
2012-10-01
We investigate the backflow effect in elementary quantum mechanics, the phenomenon in which a state consisting entirely of positive momenta may have negative current, so that probability flows in the opposite direction to the momentum. We compute the current and flux for states consisting of superpositions of Gaussian wave packets. These are experimentally realizable but the amount of backflow is small. Inspired by the numerical results of Penz et al. [Penz, Grübl, Kreidl, and Wagner, J. Phys. A 39, 423 (2006)], we find two nontrivial wave functions whose current at any time may be computed analytically and which have periods of significant backflow, in one case with a backward flux equal to about 70% of the maximum possible backflow, a dimensionless number $c_{bm} \approx 0.04$ discovered by Bracken and Melloy [Bracken and Melloy, J. Phys. A 27, 2197 (1994)]. This number has the unusual property of being independent of ℏ (and also of all other parameters of the model), despite corresponding to an obviously quantum-mechanical effect, and we shed some light on this surprising property by considering the classical limit of backflow. We discuss some specific measurement models in which backflow may be identified in certain measurable probabilities.
Analytic Models of Brown Dwarfs and The Substellar Mass Limit
Auddy, Sayantan; Valluri, S R
2016-01-01
We present the current status of the analytic theory of brown dwarf evolution and the lower mass limit of the hydrogen burning main sequence stars. In the spirit of a simplified analytic theory we also introduce some modifications to the existing models. We give an exact expression for the pressure of an ideal non-relativistic Fermi gas at a finite temperature, therefore allowing for non-zero values of the degeneracy parameter ($\psi = kT/\mu_F$, where $\mu_F$ is the Fermi energy). We review the derivation of surface luminosity using an entropy matching condition and the first-order phase transition between the molecular hydrogen in the outer envelope and the partially-ionized hydrogen in the inner region. We also discuss the results of modern simulations of the plasma phase transition, which illustrate the uncertainties in determining its critical temperature. Based on the existing models and with some simple modification we find the maximum mass for a brown dwarf to be in the range $0.064 M_\odot$...
Animal models of osteoporosis - necessity and limitations
Turner A. Simon
2001-06-01
There is a great need to further characterise the available animal models for postmenopausal osteoporosis, for understanding the pathogenesis of the disease, investigating new therapies (e.g., selective estrogen receptor modulators (SERMs)) and evaluating prosthetic devices in osteoporotic bone. Animal models that have been used in the past include non-human primates, dogs, cats, rodents, rabbits, guinea pigs and minipigs, all of which have advantages and disadvantages. Sheep are a promising model for various reasons: they are docile, easy to handle and house, relatively inexpensive, available in large numbers, spontaneously ovulate, and the sheep's bones are large enough to evaluate orthopaedic implants. Most animal models have used females, and osteoporosis in the male has been largely ignored. Recently, interest in the development of appropriate prosthetic devices which would stimulate osseointegration into osteoporotic appendicular, axial and mandibular bone has intensified. Augmentation of osteopenic lumbar vertebrae with bioactive ceramics (vertebroplasty) is another area that will require testing in the appropriate animal model. Using experimental animal models for the study of these different facets of osteoporosis minimizes some of the difficulties associated with studying the disease in humans, namely time and behavioral variability among test subjects. New experimental drug therapies and orthopaedic implants can potentially be tested on large numbers of animals subjected to a level of experimental control impossible in human clinical research.
Quasineutral limit of a standard drift diffusion model for semiconductors
[No author listed]
2002-01-01
The limit of vanishing Debye length (charge neutral limit) in a nonlinear bipolar drift-diffusion model for semiconductors without pn-junction (i.e., without a bipolar background charge) is studied. The quasineutral limit (zero-Debye-length limit) is performed rigorously by using the weak compactness argument and the so-called entropy functional, which yields appropriate uniform estimates.
RELAXATION TIME LIMITS PROBLEM FOR HYDRODYNAMIC MODELS IN SEMICONDUCTOR SCIENCE
[No author listed]
2007-01-01
In this article, two relaxation time limits, namely the momentum relaxation time limit and the energy relaxation time limit, are considered. Using a compactness argument, it is shown that the smooth solutions of the multidimensional non-isentropic Euler-Poisson problem converge to the solutions of an energy transport model or a drift-diffusion model, respectively, with respect to different time scales.
Using maximum entropy model to predict protein secondary structure with single sequence.
Ding, Yong-Sheng; Zhang, Tong-Liang; Gu, Quan; Zhao, Pei-Ying; Chou, Kuo-Chen
2009-01-01
Prediction of protein secondary structure is somewhat reminiscent of the efforts of many previous investigators, yet it is still worth revisiting owing to its importance in protein science. Several studies indicate that knowledge of protein structural classes can provide useful information towards the determination of protein secondary structure. In particular, the performance of recently developed prediction algorithms has improved rapidly by incorporating homologous multiple sequence alignment information. Unfortunately, this kind of information is not available for a significant number of proteins. In view of this, it is necessary to develop methods based on the query protein sequence alone, the so-called single-sequence methods. Here, we propose a novel single-sequence approach which is featured by taking various kinds of contextual information into account and by using a maximum entropy model classifier as the prediction engine. As a demonstration, cross-validation tests have been performed by the new method on datasets containing proteins from different structural classes, and the results thus obtained are quite promising, indicating that the new method may become a useful tool in protein science, or at least play a complementary role to existing protein secondary structure prediction methods.
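A maximum entropy classifier over categorical features is equivalent to multinomial logistic regression, so the single-sequence idea can be sketched with scikit-learn on one-hot encoded windows; the training sequences, labels, and window size below are toy placeholders, not the paper's datasets or feature set.

```python
# Hedged single-sequence sketch: a multinomial logistic regression (equivalent
# to a maximum entropy classifier) over a sliding window of one-hot encoded
# residues, predicting helix (H) / strand (E) / coil (C).
import numpy as np
from sklearn.linear_model import LogisticRegression

AA = "ACDEFGHIKLMNPQRSTVWY"
idx = {a: i for i, a in enumerate(AA)}
W = 2  # residues of context on each side of the target position

def window_features(seq, pos):
    # One-hot encode the (2W+1)-residue window centered at pos.
    vec = np.zeros((2 * W + 1) * len(AA))
    for k, p in enumerate(range(pos - W, pos + W + 1)):
        if 0 <= p < len(seq):
            vec[k * len(AA) + idx[seq[p]]] = 1.0
    return vec

# Toy training data: (sequence, per-residue labels).
train = [("ACDEFGHIK", "HHHHCCEEE"), ("LMNPQRSTV", "CCEEEECCC")]
X, y = [], []
for seq, labels in train:
    for pos in range(len(seq)):
        X.append(window_features(seq, pos))
        y.append(labels[pos])

clf = LogisticRegression(max_iter=2000).fit(np.array(X), y)
test = "ACDEFG"
print("".join(clf.predict([window_features(test, p) for p in range(len(test))])))
```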
Yu, Hwa-Lung; Wang, Chih-Hsin
2013-02-05
Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels, but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.
Vries, de R.Y.; Briels, W.J.; Feil, D.; Velde, te G.; Baerends, E.J.
1996-01-01
In 1990, Sakata and Sato applied the maximum entropy method (MEM) to a set of structure factors measured earlier by Saka and Kato with the Pendellösung method. They found the presence of non-nuclear attractors, i.e., maxima in the density between two bonded atoms. We applied the MEM to a limited set of
Silver, R.N.; Gubernatis, J.E.; Sivia, D.S. (Los Alamos National Lab., NM (USA)); Jarrell, M. (Ohio State Univ., Columbus, OH (USA). Dept. of Physics)
1990-01-01
In this article we describe the results of a new method for calculating the dynamical properties of the Anderson model. Quantum Monte Carlo (QMC) simulation generates data about the Matsubara Green's functions in imaginary time. To obtain dynamical properties, one must analytically continue these data to real time. This is an extremely ill-posed inverse problem, similar to the inversion of a Laplace transform from incomplete and noisy data. Our method is a general one, applicable to the calculation of dynamical properties from a wide variety of quantum simulations. We use Bayesian methods of statistical inference to determine the dynamical properties based on both the QMC data and any prior information we may have, such as sum rules, symmetry, high frequency limits, etc. This provides a natural means of combining perturbation theory and numerical simulations in order to understand dynamical many-body problems. Specifically, we use the well-established maximum entropy (ME) method for image reconstruction. We obtain the spectral density and transport coefficients over the entire range of model parameters accessible by QMC, with data having much larger statistical error than required by other proposed analytic continuation methods.
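The essence of the ME approach, trading the misfit of the data against the Shannon-Jaynes entropy relative to a default model, can be sketched on a toy Laplace-kernel inversion; the kernel, grids, noise level, regularization weight, and "data" below are all illustrative and are not the QMC setup of the article.

```python
# Schematic maximum-entropy inversion of a Laplace-type kernel (a toy analogue
# of the analytic continuation problem): minimize chi^2/2 - alpha*S over a
# non-negative spectrum, with a flat default model. Illustrative only.
import numpy as np
from scipy.optimize import minimize

omega = np.linspace(0.01, 10, 60)          # real-frequency grid
tau = np.linspace(0.05, 3, 40)             # imaginary-time grid
K = np.exp(-np.outer(tau, omega)) * (omega[1] - omega[0])

true_A = np.exp(-((omega - 3.0) ** 2))     # toy spectral density
sigma = 1e-4
rng = np.random.default_rng(4)
G = K @ true_A + rng.normal(0, sigma, len(tau))     # noisy synthetic "data"

m = np.full_like(omega, true_A.sum() / len(omega))  # flat default model
alpha = 1e-6                                        # regularization weight

def objective(A):
    chi2 = np.sum((K @ A - G) ** 2) / sigma**2
    # Shannon-Jaynes entropy relative to the default model m.
    S = np.sum(A - m - A * np.log(np.maximum(A, 1e-12) / m))
    return 0.5 * chi2 - alpha * S

res = minimize(objective, m, bounds=[(1e-12, None)] * len(omega),
               method="L-BFGS-B")
print("reconstruction peak near omega =", omega[np.argmax(res.x)])
```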
Overcoming some limitations of imprecise reliability models
Kozine, Igor; Krymsky, Victor
2011-01-01
The application of imprecise reliability models is often hindered by the rapid growth in imprecision that occurs when many components constitute a system, and by the fact that time to failure is bounded from above. The latter results in the necessity to explicitly introduce an upper bound on time to failure, which is in reality a rather arbitrary value, so the practical meaning of models of this kind is brought into question. We suggest an approach that overcomes the issue of having to impose an upper bound on time to failure and makes the calculated lower and upper reliability measures more precise. The main assumption is that the failure rate is bounded. The Lagrange method is used to solve the non-linear program. Finally, an example is provided.
Maximum Likelihood Implementation of an Isolation-with-Migration Model for Three Species.
Dalquen, Daniel A; Zhu, Tianqi; Yang, Ziheng
2017-05-01
We develop a maximum likelihood (ML) method for estimating migration rates between species using genomic sequence data. A species tree is used to accommodate the phylogenetic relationships among three species, allowing for migration between the two sister species, while the third species is used as an out-group. A Markov chain characterization of the genealogical process of coalescence and migration is used to integrate out the migration histories at each locus analytically, whereas Gaussian quadrature is used to integrate over the coalescent times on each genealogical tree numerically. This is an extension of our early implementation of the symmetrical isolation-with-migration model for three species to accommodate arbitrary loci with two or three sequences per locus and to allow asymmetrical migration rates. Our implementation can accommodate tens of thousands of loci, making it feasible to analyze genome-scale data sets to test for gene flow. We calculate the posterior probabilities of gene trees at individual loci to identify genomic regions that are likely to have been transferred between species due to gene flow. We conduct a simulation study to examine the statistical properties of the likelihood ratio test for gene flow between the two in-group species and of the ML estimates of model parameters such as the migration rate. Inclusion of data from a third out-group species is found to increase dramatically the power of the test and the precision of parameter estimation. We compiled and analyzed several genomic data sets from the Drosophila fruit flies. Our analyses suggest no migration from D. melanogaster to D. simulans, and a significant amount of gene flow from D. simulans to D. melanogaster, at the rate of ~0.02 migrant individuals per generation. We discuss the utility of the multispecies coalescent model for species tree estimation, accounting for incomplete lineage sorting and migration.
Maximum entropy modeling risk of anthrax in the Republic of Kazakhstan.
Abdrakhmanov, S K; Mukhanbetkaliyev, Y Y; Korennoy, F I; Sultanov, A A; Kadyrov, A S; Kushubaev, D B; Bakishev, T G
2017-09-01
The objective of this study was to zone the territory of the Republic of Kazakhstan (RK) into risk categories according to the probability of anthrax emergence in farm animals as stipulated by the re-activation of preserved natural foci. We used historical data on anthrax morbidity in farm animals during the period 1933 - 2014, collected by the veterinary service of the RK. The database covers the entire territory of the RK and contains 4058 anthrax outbreaks tied to 1798 unique locations. Considering the strongly pronounced natural focality of anthrax, we employed environmental niche modeling (Maxent) to reveal patterns in the outbreaks' linkages to specific combinations of environmental factors. The set of bioclimatic factors BIOCLIM, derived from remote sensing data, the altitude above sea level, the land cover type, the maximum green vegetation fraction (MGVF) and the soil type were examined as explanatory variables. The model demonstrated good predictive ability, while the MGVF, the bioclimatic variables reflecting precipitation level and humidity, and the soil type were found to contribute most significantly to the model. A continuous probability surface was obtained that reflects the suitability of the study area for the emergence of anthrax outbreaks. The surface was turned into a categorical risk map by averaging the probabilities within the administrative divisions at the 2nd level and putting them into four categories of risk, namely: low, medium, high and very high risk zones, where very high risk refers to more than 50% suitability to the disease re-emergence and low risk refers to less than 10% suitability. The map indicated increased risk of anthrax re-emergence in the districts along the northern, eastern and south-eastern borders of the country. It was recommended that the national veterinary service uses the risk map for the development of contra-epizootic measures aimed at the prevention of anthrax re-emergence in historically affected regions of
Limiting assumptions in molecular modeling: electrostatics.
Marshall, Garland R
2013-02-01
Molecular mechanics attempts to represent intermolecular interactions in terms of classical physics. Initial efforts assumed a point charge located at the atom center and coulombic interactions. It has been recognized over multiple decades that simply representing electrostatics with a charge on each atom fails to reproduce the electrostatic potential surrounding a molecule as estimated by quantum mechanics. Molecular orbitals are not spherically symmetrical, an implicit assumption of monopole electrostatics. This perspective reviews recent evidence that requires the use of multipole electrostatics and polarizability in molecular modeling.
Rama, Aarti; Kesari, Shreekant; Das, Pradeep; Kumar, Vijay
2017-07-24
Extensive application of a routine insecticide, dichlorodiphenyltrichloroethane (DDT), to control Phlebotomus argentipes (Diptera: Psychodidae), the proven vector of visceral leishmaniasis in India, has evoked the problem of resistance/tolerance against DDT, eventually nullifying DDT-dependent strategies to control this vector. Because tolerating an hour-long exposure to DDT is not challenging enough for resistant P. argentipes, estimating susceptibility by exposing sand flies to the insecticide for just an hour becomes a trivial and futile task. Therefore, this bioassay study was carried out to investigate the maximum exposure time that DDT-resistant P. argentipes can endure and survive. The mortality rate of a laboratory-reared DDT-resistant strain of P. argentipes exposed to DDT was studied at discriminating time intervals of 60 min, and it was concluded that highly resistant sand flies could withstand up to 420 min of exposure to this insecticide. Additionally, the lethal time for female P. argentipes was observed to be higher than for males, suggesting that females are more resistant to DDT's toxicity. Our results support the monitoring of tolerance limits with respect to time and hence point towards an urgent need to change the World Health Organization's protocol for susceptibility identification in resistant P. argentipes.
Technical evaluation of a total maximum daily load model for Upper Klamath and Agency Lakes, Oregon
Wood, Tamara M.; Wherry, Susan A.; Carter, James L.; Kuwabara, James S.; Simon, Nancy S.; Rounds, Stewart A.
2013-01-01
We reviewed a mass balance model developed in 2001 that guided establishment of the phosphorus total maximum daily load (TMDL) for Upper Klamath and Agency Lakes, Oregon. The purpose of the review was to evaluate the strengths and weaknesses of the model and to determine whether improvements could be made using information derived from studies since the model was first developed. The new data have contributed to the understanding of processes in the lakes, particularly internal loading of phosphorus from sediment, and include measurements of diffusive fluxes of phosphorus from the bottom sediments, groundwater advection, desorption from iron oxides at high pH in a laboratory setting, and estimates of fluxes of phosphorus bound to iron and aluminum oxides. None of these processes in isolation, however, is large enough to account for the episodically high values of whole-lake internal loading calculated from a mass balance, which can range from 10 to 20 milligrams per square meter per day for short periods. The possible role of benthic invertebrates in lake sediments in the internal loading of phosphorus in the lake has become apparent since the development of the TMDL model. Benthic invertebrates can increase diffusive fluxes several-fold through bioturbation and biodiffusion, and, if the invertebrates are bottom feeders, they can recycle phosphorus to the water column through metabolic excretion. These organisms have high densities (1,822–62,178 individuals per square meter) in Upper Klamath Lake. Conversion of the mean density of tubificid worms (Oligochaeta) and chironomid midges (Diptera), two of the dominant taxa, to an areal flux rate based on laboratory measurements of metabolic excretion of two abundant species suggested that excretion by benthic invertebrates is at least as important as any of the other identified processes for internal loading to the water column. Data from sediment cores collected around Upper Klamath Lake since the development of the
Limitations of the biopsychosocial model in psychiatry
Benning TB
2015-05-01
A commitment to an integrative, non-reductionist clinical and theoretical perspective in medicine that honors the importance of all relevant domains of knowledge, not just "the biological," is clearly evident in Engel's original writings on the biopsychosocial model. And though this model's influence on modern psychiatry (in clinical as well as educational settings) has been significant, a growing body of recent literature is critical of it, charging it with lacking philosophical coherence, insensitivity to patients' subjective experience, being unfaithful to the general systems theory that Engel claimed it to be rooted in, and engendering an undisciplined eclecticism that provides no safeguards against either the dominance or the under-representation of any one of the three domains of bio, psycho, or social. Keywords: critique of biopsychosocial psychiatry, integrative psychiatry, George Engel
D. M. Roche
2006-11-01
The Last Glacial Maximum climate is one of the classic benchmarks used both to test the ability of coupled models to simulate climates different from that of the present day and to better understand the possible range of mechanisms that could be involved in future climate change. It also bears the advantage of being one of the most well documented periods with respect to palaeoclimatic records, allowing a thorough data-model comparison. We present here an ensemble of Last Glacial Maximum climate simulations obtained with the Earth System model LOVECLIM, including coupled dynamic atmosphere, ocean and vegetation components. The climate obtained using standard parameter values is then compared to available proxy data for the surface ocean, vegetation, oceanic circulation and atmospheric conditions. Interestingly, the oceanic circulation obtained resembles that of the present day, but with increased overturning rates. As this result is in contradiction with the "classic" palaeoceanographic view, we ran a range of sensitivity experiments to explore the response of the model and the possibilities for other oceanic circulation states. After a critical review of our LGM state with respect to available proxy data, we conclude that the balance between water masses obtained is consistent with the available data, although the specific characteristics (temperature, salinity) are not in full agreement. The consistency of the simulated state is further reinforced by the fact that the mean surface climate obtained is shown to be generally in agreement with the most recent reconstructions of vegetation and sea surface temperatures, even at regional scales.
Hsia, Wei-Shen
1987-01-01
A stochastic control model of the NASA/MSFC Ground Facility for Large Space Structures (LSS) control verification, based on the Maximum Entropy (ME) principle adopted in Hyland's method, was presented. Using ORACLS, a computer program was implemented for this purpose. Four models were then tested and the results presented.
Rijgersberg, H.; Nierop Groot, M.N.; Tromp, S.O.; Franz, E.
2013-01-01
Within a microbial risk assessment framework, modeling the maximum population density (MPD) of a pathogenic microorganism is important but often not considered. This paper describes a model predicting the MPD of Salmonella on alfalfa as a function of the initial contamination level, the total count
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Haberman, Shelby J.
2004-01-01
The usefulness of joint and conditional maximum likelihood is considered for the Rasch model under realistic testing conditions in which the number of examinees is very large and the number of items is relatively large. Conditions for consistency and asymptotic normality are explored, effects of model error are investigated, measures of prediction…
MODEL PREDICTIVE CONTROL FOR PHOTOVOLTAIC STATION MAXIMUM POWER POINT TRACKING SYSTEM
I. Elzein
2015-01-01
The purpose of this paper is to present an alternative maximum power point tracking (MPPT) algorithm for a photovoltaic module (PVM) to produce the maximum power, Pmax, using the optimal duty ratio, D, for different types of converters and load matching. We present a state-based approach to the design of the maximum power point tracker for a stand-alone photovoltaic power generation system. The system under consideration consists of a solar array with nonlinear time-varying characteristics and a step-up converter with an appropriate filter. The proposed algorithm has the advantages of maximizing the efficiency of power utilization, can be integrated with other MPPT algorithms without affecting PVM performance, is excellent for real-time applications, and is a robust analytical method, unlike traditional MPPT algorithms which are based more on trial and error, or on comparisons between present and past states. The procedure to calculate the optimal duty ratio for buck, boost and buck-boost converters, to transfer the maximum power from a PVM to a load, is presented in the paper. Additionally, the existence and uniqueness of the optimal internal impedance, to transfer the maximum power from a photovoltaic module using load matching, is proved.
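The paper's own state-based algorithm is not reproduced here; for contrast, the classic perturb-and-observe MPPT loop, the trial-and-error baseline the abstract alludes to, can be sketched in a few lines with a toy power-versus-duty curve.

```python
# Sketch of a classic perturb-and-observe MPPT loop (a common baseline, not
# this paper's algorithm): adjust the converter duty ratio in the direction
# that increases PV output power. The PV power curve is illustrative.
def pv_power(duty):
    # Toy PV-plus-converter power curve with a single maximum near duty = 0.62.
    return max(0.0, 100.0 - 800.0 * (duty - 0.62) ** 2)

duty, step = 0.50, 0.01
prev_power = pv_power(duty)
for _ in range(50):
    duty += step
    power = pv_power(duty)
    if power < prev_power:      # wrong direction: reverse the perturbation
        step = -step
    prev_power = power

print(f"converged near duty = {duty:.2f}, power = {prev_power:.1f} W")
```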
Limiting Shapes for Deterministic Centrally Seeded Growth Models
Fey-den Boer, Anne; Redig, Frank
2007-01-01
We study the rotor router model and two deterministic sandpile models. For the rotor router model in ℤ d , Levine and Peres proved that the limiting shape of the growth cluster is a sphere. For the other two models, only bounds in dimension 2 are known. A unified approach for these models with a
Fixed transaction costs and modelling limited dependent variables
Hempenius, A.L.
1994-01-01
As an alternative to the Tobit model for vectors of limited dependent variables, I suggest a model that follows from explicitly including fixed transaction costs, where appropriate, in the utility function of the decision-maker.
MIN Htwe, Y. M.
2016-12-01
Myanmar has suffered many times from earthquake disasters, and four times from tsunamis, according to historical data. The purpose of this study is to estimate the tsunami arrival time and maximum tsunami wave amplitude for the Rakhine coast of Myanmar using the TUNAMI F1 model. In this study I calculate the tsunami arrival time and maximum tsunami wave amplitude for eight selected points on the Rakhine coast, based on a tsunamigenic earthquake source of moment magnitude 8.5 in the Arakan subduction zone off the west coast of Myanmar. The model result indicates that the tsunami waves would first hit Kyaukpyu on the Rakhine coast about 0.05 minutes after the onset of a magnitude 8.5 earthquake, and the maximum tsunami wave amplitude would be 2.37 meters.
Heuzé, Céline; Eriksson, Leif; Carvajal, Gisela
2017-04-01
Using sea surface temperature from satellite images to retrieve sea surface currents is not a new idea, but so far its operational near-real-time implementation has not been possible. Validation studies are too region-specific or too uncertain, owing to the errors induced by the images themselves. Moreover, the sensitivity of the most common retrieval method, the maximum cross correlation, to the three parameters that have to be set is unknown. Using model outputs instead of satellite images, biases induced by this method are assessed here for four different seas of Western Europe, and the best of nine settings and eight temporal resolutions are determined. For all regions, tracking a small 5 km pattern from the first image over a large 30 km region around its original location on a second image, separated from the first image by 6 to 9 hours, returned the most accurate results. Moreover, for all regions, the problem is not inaccurate results but missing results, where the velocity is too low to be picked up by the retrieval. The results are consistent both with limitations caused by ocean surface current dynamics and with the available satellite technology, indicating that automated sea surface current retrieval from sea surface temperature images is feasible now, for search and rescue operations, pollution confinement, or even for more energy-efficient and comfortable ship navigation.
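The maximum cross correlation method itself is simple to sketch: locate, within a search region of the second image, the patch most correlated with a small pattern from the first image; the displacement divided by the time separation gives a velocity. The SST fields and window sizes below are synthetic stand-ins.

```python
# Minimal maximum cross correlation (MCC) sketch: track a small SST pattern
# from a first image by locating the best-matching patch in a search region
# of a second image. Fields are synthetic stand-ins for satellite/model SST.
import numpy as np

rng = np.random.default_rng(5)
sst1 = rng.normal(size=(60, 60))
dy, dx = 3, 5                                   # true displacement in pixels
sst2 = np.roll(np.roll(sst1, dy, axis=0), dx, axis=1)

patch = sst1[20:30, 20:30]                      # small pattern in image 1
best, best_off = -np.inf, (0, 0)
for oy in range(-10, 11):                       # search region in image 2
    for ox in range(-10, 11):
        cand = sst2[20 + oy:30 + oy, 20 + ox:30 + ox]
        r = np.corrcoef(patch.ravel(), cand.ravel())[0, 1]
        if r > best:
            best, best_off = r, (oy, ox)

print(f"retrieved displacement {best_off}, correlation {best:.2f}")
```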
The maximum optical depth toward bulge stars from axisymmetric models of the Milky Way
Kuijken, K
1997-01-01
Recent microlensing results toward the bulge imply mass densities that are surprisingly high, given dynamical constraints on the Milky Way's mass distribution. We derive the maximum optical depth toward the bulge that may be generated by axisymmetric structures in the Milky Way,
An Actuarial Model for Assessment of Prison Violence Risk Among Maximum Security Inmates
Cunningham, Mark D.; Sorensen, Jon R.; Reidy, Thomas J.
2005-01-01
An experimental scale for the assessment of prison violence risk among maximum security inmates was developed from a logistic regression analysis involving inmates serving parole-eligible terms of varying length (n = 1,503), life-without-parole inmates (n = 960), and death-sentenced inmates who were mainstreamed into the general prison population…
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
Scheike, Thomas Harder; Juul, Anders
2004-01-01
Insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease, where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of data for extreme rainfall events are generated based on the annual maximum and partial duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the generalized extreme value and generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the maximum likelihood and L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the partial duration series with the generalized Pareto distribution and maximum likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived, and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
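Both fits in the comparison are easy to sketch with scipy: a GEV fitted to the annual maxima and a generalized Pareto fitted to excesses over a high threshold, each by maximum likelihood. The daily rainfall series below is synthetic, and the simple quantile threshold rule stands in for the adapted Hill procedure used in the study.

```python
# Sketch of the two fits compared in the study: GEV on annual maxima (AMS)
# and generalized Pareto on threshold excesses (PDS), both via scipy MLE.
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(6)
daily = rng.gamma(shape=0.6, scale=18.0, size=31 * 365)   # synthetic daily rain, mm

annual_max = daily.reshape(31, 365).max(axis=1)           # AMS: one value per year
c_gev, loc, scale = genextreme.fit(annual_max)

threshold = np.quantile(daily, 0.995)                     # illustrative threshold
excesses = daily[daily > threshold] - threshold
c_gp, _, scale_gp = genpareto.fit(excesses, floc=0.0)

print(f"GEV 50-yr AMS quantile: {genextreme.ppf(1 - 1/50, c_gev, loc, scale):.1f} mm")
rate = len(excesses) / 31                                 # mean exceedances per year
print(f"GPD 50-yr level: "
      f"{threshold + genpareto.ppf(1 - 1/(50*rate), c_gp, 0, scale_gp):.1f} mm")
```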
Izsak, F.
2006-01-01
A numerical maximum likelihood (ML) estimation procedure is developed for the constrained parameters of multinomial distributions. The main difficulty involved in computing the likelihood function is the precise and fast determination of the multinomial coefficients. For this the coefficients are
The Semiclassical Limit in the Quantum Drift-Diffusion Model
Qiang Chang JU
2009-01-01
The semiclassical limit of the solution of the isentropic quantum drift-diffusion model in semiconductor simulation is discussed. It is proved that the semiclassical limit of this solution satisfies the classical drift-diffusion model. In addition, we also prove the global existence of weak solutions.
SEMICLASSICAL LIMIT FOR BIPOLAR QUANTUM DRIFT-DIFFUSION MODEL
Ju Qiangchang; Chen Li
2009-01-01
The semiclassical limit of the solution of the transient bipolar quantum drift-diffusion model in semiconductor simulation is discussed. It is proved that the semiclassical limit of this solution satisfies the classical bipolar drift-diffusion model. In addition, the authors also prove the existence of weak solutions.
陈富坚; 黄世斌; 包惠明
2011-01-01
To solve the problems in the current deterministic method for determining a maximum speed limit for expressway operation under disastrous events, a reliability method is presented. A dynamic analysis is made for a vehicle traveling on a horizontal curve of an expressway, and the respective maximum allowable speeds are deduced for a vehicle in horizontal circular motion without sliding and for emergency stopping without hitting an obstacle within visual range. Based on reliability engineering theory, the reliability of a maximum speed limit is defined. With the safety of horizontal circular motion and of emergency stopping as constraints, the performance function of the maximum speed limit is established, and the model for calculating its reliability and reliability index is deduced. For the solution of the reliability model, the Monte Carlo method is recommended, owing to the multi-parameter, highly complex non-linear performance function. With a self-developed program, a case study was conducted to illustrate the reliability analysis of the maximum speed limit for expressway safety management under a detrimental event. The reliability method for determining a maximum speed limit of expressway operation is helpful for improving traffic safety.
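A hedged sketch of the Monte Carlo step: with an assumed performance function for the no-skid condition on a horizontal curve, g = sqrt(mu*g0*R) - v, the reliability of a candidate speed limit is estimated as P(g > 0). The distributions and parameter values below are illustrative, not the paper's.

```python
# Monte Carlo reliability sketch (illustrative performance function and
# distributions): friction and effective curve radius are random, and the
# reliability of a candidate speed limit is the fraction of samples with g > 0.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
g0 = 9.81                                    # gravity, m/s^2

mu = rng.normal(0.25, 0.05, n).clip(0.05)    # road friction under a wet event
R = rng.normal(400.0, 30.0, n).clip(50.0)    # effective curve radius, m

for v_kmh in (60, 80, 100):
    v = v_kmh / 3.6                          # candidate limit in m/s
    g = np.sqrt(mu * g0 * R) - v             # > 0 means no lateral sliding
    print(f"{v_kmh} km/h: reliability = {np.mean(g > 0):.3f}")
```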
Brigham-Grette, J.; Gualtieri, L.M.; Glushkova, O.Y.; Hamilton, T.D.; Mostoller, D.; Kotov, A.
2003-01-01
The Pekulney Mountains and adjacent Tanyurer River valley are key regions for examining the nature of glaciation across much of northeast Russia. Twelve new cosmogenic isotope ages and 14 new radiocarbon ages, in concert with morphometric analyses and terrace stratigraphy, constrain the timing of glaciation in this region of central Chukotka. The Sartan Glaciation (Last Glacial Maximum) was limited in extent in the Pekulney Mountains and dates to ~20,000 yr ago. Cosmogenic isotope ages > 30,000 yr as well as non-finite radiocarbon ages imply an estimated age no younger than the Zyryan Glaciation (early Wisconsinan) for large sets of moraines found in the central Tanyurer Valley. Slope angles on these loess-mantled ridges are less than a few degrees and crest widths are an order of magnitude greater than those found on the younger Sartan moraines. The most extensive moraines in the lower Tanyurer Valley are most subdued, implying an even older, probable middle Pleistocene age. This research provides direct field evidence against Grosswald's Beringian ice-sheet hypothesis.
Cohen, S.A.; Hosea, J.C.; Timberlake, J.R.
1984-10-19
A limiter with a specially contoured front face is provided. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution. This limiter shape accommodates the various power scrape-off distances λp, which depend on the parallel velocity, V∥, of the impacting particles.
The Elastic Continuum Limit of the Tight Binding Model
Weinan E; Jianfeng LU
2007-01-01
The authors consider the simplest quantum mechanics model of solids, the tight binding model, and prove that in the continuum limit, the energy of the tight binding model converges to that of the continuum elasticity model obtained using the Cauchy-Born rule. The technique in this paper is based mainly on spectral perturbation theory for large matrices.
Nguyen, Truong-Huy; El Outayek, Sarah; Lim, Sun Hee; Nguyen, Van-Thanh-Van
2017-10-01
Many probability distributions have been developed to model annual maximum rainfall series (AMS). However, there is no general agreement as to which distribution should be used, due to the lack of a suitable evaluation method. This paper therefore presents a general procedure for systematically assessing the performance of ten commonly used probability distributions in rainfall frequency analyses based on their descriptive as well as predictive abilities. This assessment procedure relies on an extensive set of graphical and numerical performance criteria to identify the most suitable models that could provide the most accurate and most robust extreme rainfall estimates. The proposed systematic assessment approach has been shown to be more efficient and more robust than the traditional model selection method based on only limited goodness-of-fit criteria. To test the feasibility of the proposed procedure, an illustrative application was carried out using 5-min, 1-h, and 24-h annual maximum rainfall data from a network of 21 raingages located in the Ontario region in Canada. Results indicated that the GEV, GNO, and PE3 models were the best models for describing the distribution of daily and sub-daily annual maximum rainfalls in this region. The GEV distribution, however, was preferred to the GNO and PE3 because it has a more solid theoretical basis for representing the distribution of extreme random variables.
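To make the fitting step concrete, the following minimal Python sketch fits a GEV distribution to an annual-maximum series by maximum likelihood and derives a T-year return level with scipy; the data are synthetic placeholders, not the Ontario records used in the study.

    # Fit a GEV to annual maxima and estimate a 100-year return level.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Synthetic annual-maximum rainfall series (placeholder for real AMS data).
    ams = stats.genextreme.rvs(c=-0.1, loc=40.0, scale=10.0, size=60, random_state=rng)

    shape, loc, scale = stats.genextreme.fit(ams)       # maximum likelihood fit
    T = 100.0                                           # return period in years
    level = stats.genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
    print(f"{T:.0f}-year rainfall estimate: {level:.1f} mm")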
Madsen, Henrik; Pearson, Charles P.; Rosbjerg, Dan
1997-04-01
Two regional estimation schemes, based on, respectively, partial duration series (PDS) and annual maximum series (AMS), are compared. The PDS model assumes a generalized Pareto (GP) distribution for modeling threshold exceedances, corresponding to a generalized extreme value (GEV) distribution for annual maxima. First, the accuracy of PDS/GP and AMS/GEV regional index-flood T-year event estimators is compared using Monte Carlo simulations. For estimation in typical regions assuming a realistic degree of heterogeneity, the PDS/GP index-flood model is more efficient. The regional PDS and AMS procedures are subsequently applied to flood records from 48 catchments in New Zealand. To identify homogeneous groupings of catchments, a split-sample regionalization approach based on catchment characteristics is adopted. The defined groups are more homogeneous for PDS data than for AMS data; a two-way grouping based on annual average rainfall is sufficient to attain homogeneity for PDS, whereas a further partitioning is necessary for AMS. In determining the regional parent distribution using L-moment ratio diagrams, PDS data, in contrast to AMS data, provide an unambiguous interpretation, supporting a GP distribution.
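The partial-duration-series side of this comparison can be sketched the same way: fit a generalized Pareto distribution to threshold exceedances. The sketch below uses synthetic flows and an illustrative 95th-percentile threshold, not the New Zealand records.

    # Peaks-over-threshold fit with a generalized Pareto distribution.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    flows = rng.gamma(shape=2.0, scale=50.0, size=5000)   # placeholder daily flows
    u = np.quantile(flows, 0.95)                          # threshold (illustrative)
    exceedances = flows[flows > u] - u

    # Fix loc=0 so only shape and scale are estimated, as usual for POT models.
    shape, loc, scale = stats.genpareto.fit(exceedances, floc=0.0)
    print(f"GP shape = {shape:.3f}, scale = {scale:.1f}, threshold = {u:.1f}")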
Resende Rosangela Maria Simeão; Jank Liana; Valle Cacilda Borges do; Bonato Ana Lídia Variani
2004-01-01
The objectives of this work were to estimate the genetic and phenotypic parameters and to predict the genetic and genotypic values of the selection candidates obtained from intraspecific crosses in Panicum maximum as well as the performance of the hybrid progeny of the existing and projected crosses. Seventy-nine intraspecific hybrids obtained from artificial crosses among five apomictic and three sexual autotetraploid individuals were evaluated in a clonal test with two replications and ten ...
J C Joshi; Tankeshwar Kumar; Sunita Srivastava; Divya Sachdeva
2017-02-01
Maximum and minimum temperatures are used in avalanche forecasting models for snow avalanche hazard mitigation over the Himalaya. The present work is part of the development of a Hidden Markov Model (HMM) based avalanche forecasting system for the Pir-Panjal and Great Himalayan mountain ranges of the Himalaya. In this work, HMMs have been developed for forecasting maximum and minimum temperatures for Kanzalwan in the Pir-Panjal range and Drass in the Great Himalayan range with a lead time of two days. The HMMs have been developed using meteorological variables collected from these stations during the past 20 winters, from 1992 to 2012. The meteorological variables have been used to define observations and states of the models and to compute model parameters (initial state, state transition and observation probabilities). The model parameters have been used in the Forward and the Viterbi algorithms to generate temperature forecasts. To improve the model forecasts, the model parameters have been optimised using the Baum–Welch algorithm. The models have been compared with persistence forecasts by root mean square error (RMSE) analysis using independent data from two winters (2012–13, 2013–14). The HMM for maximum temperature has shown a 4–12% and 17–19% improvement over the persistence forecast for day-1 and day-2, respectively. For minimum temperature, it has shown a 6–38% and 5–12% improvement for day-1 and day-2, respectively.
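The forecasting step can be illustrated with a toy discrete HMM: run the forward algorithm over past observations and propagate one step ahead. The matrices below are made up for illustration; they are not the Pir-Panjal or Drass model parameters.

    # One-step-ahead forecast from a discrete HMM via the forward algorithm.
    import numpy as np

    pi = np.array([0.6, 0.4])               # initial state probabilities
    A = np.array([[0.7, 0.3],               # state transition probabilities
                  [0.4, 0.6]])
    B = np.array([[0.5, 0.4, 0.1],          # P(observed temperature class | state)
                  [0.1, 0.3, 0.6]])
    obs = [0, 0, 1, 2]                      # coded past observations

    alpha = pi * B[:, obs[0]]               # forward recursion (filtering)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()                # normalize to avoid underflow

    day1 = (alpha @ A) @ B                  # predictive distribution for day-1
    print("Most probable day-1 class:", day1.argmax())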
Habitat modelling limitations - Puck Bay, Baltic Sea - a case study
Jan Marcin Węsławski
2013-02-01
The Natura 2000 sites and the Coastal Landscape Park in a shallow marine bay in the southern Baltic have been studied in detail for the distribution of benthic macroorganisms, species assemblages and seabed habitats. The relatively small Inner Puck Bay (104.8 km²) is one of the most thoroughly investigated marine areas in the Baltic: research has been carried out there continuously for over 50 years. Six physical parameters regarded as critically important for the marine benthos (depth, minimal temperature, maximum salinity, light, wave intensity and sediment type) were summarized on a GIS map showing unified patches of seabed and near-bottom water conditions. The occurrence of uniform seabed forms is weakly correlated with the distributions of individual species or multi-species assemblages. This is partly explained by the characteristics of the local macrofauna, which is dominated by highly tolerant, eurytopic species with opportunistic strategies. The history and timing of the assemblage formation also explains this weak correlation. The distribution of assemblages formed by long-living, structural species (Zostera marina and other higher plants) shows the history of recovery following earlier disturbances. In the study area, these communities are still in the stage of recovery and recolonization, and their present distribution does not as yet match the distribution of the physical environmental conditions favourable to them. Our results reveal the limitations of distribution modelling in coastal waters, where the history of anthropogenic disturbances can distort the picture of present-day environmental control of biota distributions.
Rijmen, Frank
2009-01-01
Maximum marginal likelihood estimation of multidimensional item response theory (IRT) models has been hampered by the calculation of the multidimensional integral over the ability distribution. However, the researcher often has a specific hypothesis about the conditional (in)dependence relations among the latent variables. Exploiting these…
Kelderman, Henk
1992-01-01
In this paper algorithms are described for obtaining the maximum likelihood estimates of the parameters in loglinear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts.
Bergboer, N.H; Verdult, V.; Verhaegen, M.H.G.
2002-01-01
We present a numerically efficient implementation of the nonlinear least squares and maximum likelihood identification of multivariable linear time-invariant (LTI) state-space models. This implementation is based on a local parameterization of the system and a gradient search in the resulting parameter space.
Jamil, T.; Braak, ter C.J.F.
2012-01-01
Maximum Likelihood (ML) in the linear model overfits when the number of predictors (M) exceeds the number of objects (N). One possible solution is the relevance vector machine (RVM), which is a form of automatic relevance determination and has gained popularity in the pattern recognition and machine learning literature.
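The idea can be demonstrated with scikit-learn's ARDRegression, a standard implementation of automatic relevance determination closely related to the RVM; the wider-than-tall design matrix below (M > N) is synthetic.

    # Sparse Bayesian regression when predictors outnumber observations.
    import numpy as np
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(2)
    N, M = 40, 100
    X = rng.standard_normal((N, M))
    w_true = np.zeros(M)
    w_true[:3] = [2.0, -1.0, 0.5]           # only three predictors matter
    y = X @ w_true + 0.1 * rng.standard_normal(N)

    model = ARDRegression().fit(X, y)       # plain ML/OLS would overfit here
    kept = np.flatnonzero(np.abs(model.coef_) > 0.1)
    print("Predictors kept:", kept)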
Model-experiment interaction to improve representation of phosphorus limitation in land models
Norby, R. J.; Yang, X.; Cabugao, K. G. M.; Childs, J.; Gu, L.; Haworth, I.; Mayes, M. A.; Porter, W. S.; Walker, A. P.; Weston, D. J.; Wright, S. J.
2015-12-01
Carbon-nutrient interactions play important roles in regulating terrestrial carbon cycle responses to atmospheric and climatic change. None of the CMIP5 models has included routines to represent the phosphorus (P) cycle, although P is commonly considered to be the most limiting nutrient in highly productive, lowland tropical forests. Model simulations with the Community Land Model (CLM-CNP) show that inclusion of P coupling leads to a smaller CO2 fertilization effect and warming-induced CO2 release from tropical ecosystems, but there are important uncertainties in the P model, and improvements are limited by a dearth of data. Sensitivity analysis identifies the relative importance of P cycle parameters in determining P availability and P limitation, and thereby helps to define the critical measurements to make in field campaigns and manipulative experiments. To improve estimates of P supply, parameters that describe maximum amount of labile P in soil and sorption-desorption processes are necessary for modeling the amount of P available for plant uptake. Biochemical mineralization is poorly constrained in the model and will be improved through field observations that link root traits to mycorrhizal activity, phosphatase activity, and root depth distribution. Model representation of P demand by vegetation, which currently is set by fixed stoichiometry and allometric constants, requires a different set of data. Accurate carbon cycle modeling requires accurate parameterization of the photosynthetic machinery: Vc,max and Jmax. Relationships between the photosynthesis parameters and foliar nutrient (N and P) content are being developed, and by including analysis of covariation with other plant traits (e.g., specific leaf area, wood density), we can provide a basis for more dynamic, trait-enabled modeling. With this strong guidance from model sensitivity and uncertainty analysis, field studies are underway in Puerto Rico and Panama to collect model-relevant data on P
Beretta, Gian Paolo
2014-10-01
By suitable reformulations, we cast the mathematical frameworks of several well-known different approaches to the description of nonequilibrium dynamics into a unified formulation valid in all these contexts, which extends to such frameworks the concept of steepest entropy ascent (SEA) dynamics introduced by the present author in previous works on quantum thermodynamics. Actually, the present formulation constitutes a generalization also for the quantum thermodynamics framework. The analysis emphasizes that in the SEA modeling principle a key role is played by the geometrical metric with respect to which to measure the length of a trajectory in state space. In the near-thermodynamic-equilibrium limit, the metric tensor is directly related to the Onsager's generalized resistivity tensor. Therefore, through the identification of a suitable metric field which generalizes the Onsager generalized resistance to the arbitrarily far-nonequilibrium domain, most of the existing theories of nonequilibrium thermodynamics can be cast in such a way that the state exhibits the spontaneous tendency to evolve in state space along the path of SEA compatible with the conservation constraints and the boundary conditions. The resulting unified family of SEA dynamical models is intrinsically and strongly consistent with the second law of thermodynamics. The non-negativity of the entropy production is a general and readily proved feature of SEA dynamics. In several of the different approaches to nonequilibrium description we consider here, the SEA concept has not been investigated before. We believe it defines the precise meaning and the domain of general validity of the so-called maximum entropy production principle. Therefore, it is hoped that the present unifying approach may prove useful in providing a fresh basis for effective, thermodynamically consistent, numerical models and theoretical treatments of irreversible conservative relaxation towards equilibrium from far nonequilibrium
Contraction limits of the proton-neutron symplectic model
Ganev, H. G.
2016-01-01
The algebraic approach to nuclear structure physics allows a certain microscopic collective motion algebra to be also interpreted on macroscopic level which is achieved in the limit of large representation quantum numbers. Such limits are referred to as macroscopic or hydrodynamic limits and show how a given microscopic discrete system starts to behave like a continuous fluid. In the present paper, two contraction limits of the recently introduced fully microscopic proton-neutron symplectic model (PNSM) with the Sp(12; R) dynamical symmetry algebra are considered. As a result, two simplified macroscopic models of nuclear collective motion are obtained in simple geometrical terms. The first one is the U(6)-phonon model with the semi-direct product structure [HW(21)]U(6), which is shown to be actually an alternative formulation of the original proton-neutron symplectic model in the familiar IBM-terms. The second model which appears in double contraction limit is the two-rotor model with the ROTp(3) ⊗ ROTn(3) ⊃ ROT(3) algebraic structure. The latter, in contrast to the original two-rotor model, is not restricted to the case of two coupled axial rotors. In this way, the second contraction limit of the PNSM, provides the phenomenological two-rotor model with a simple microscopic foundation.
Anonymous
2008-01-01
Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimator (MQLE) in QLNM is obtained. In an important case, this rate is O(n^(-1/2)(log log n)^(1/2)), which is exactly the rate given by the law of the iterated logarithm (LIL) for partial sums of i.i.d. variables, and thus cannot be improved.
Resende Rosangela Maria Simeão
2004-01-01
The objectives of this work were to estimate the genetic and phenotypic parameters and to predict the genetic and genotypic values of the selection candidates obtained from intraspecific crosses in Panicum maximum, as well as the performance of the hybrid progeny of the existing and projected crosses. Seventy-nine intraspecific hybrids obtained from artificial crosses among five apomictic and three sexual autotetraploid individuals were evaluated in a clonal test with two replications and ten plants per plot. Green matter yield, total and leaf dry matter yields and leaf percentage were evaluated in five cuts per year during three years. Genetic parameters were estimated and breeding and genotypic values were predicted using the restricted maximum likelihood/best linear unbiased prediction procedure (REML/BLUP). The dominant genetic variance was estimated by adjusting the effect of full-sib families. Low-magnitude individual narrow-sense heritabilities (0.02-0.05), individual broad-sense heritabilities (0.14-0.20) and repeatabilities measured on an individual basis (0.15-0.21) were obtained. Dominance effects for all evaluated characteristics indicated that breeding strategies that explore heterosis must be adopted. Less than a 5% increase in the repeatability parameter was obtained for a three-year evaluation period, which may be the criterion to determine the maximum number of years of evaluation to be adopted without compromising gain per cycle of selection. The identification of hybrid candidates for future cultivars and of those that can be incorporated into the breeding program was based on the genotypic and breeding values, respectively. The prediction of the performance of the hybrid progeny, based on the breeding values of the progenitors, permitted the identification of the best crosses and indicated the best parents to use in crosses.
Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm
Jansen, R.C.
A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical software.
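A minimal instance of the approach, under the assumption of a two-component Poisson mixture (a GLM-family mixture), can be written in a few lines of Python; the data are simulated.

    # EM for a two-component Poisson mixture.
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(3)
    y = np.concatenate([rng.poisson(2.0, 300), rng.poisson(9.0, 200)])

    w = np.array([0.5, 0.5])                # mixing weights (initial guess)
    lam = np.array([1.0, 10.0])             # component means (initial guess)
    for _ in range(200):
        r = w * poisson.pmf(y[:, None], lam)                # E-step: responsibilities
        r /= r.sum(axis=1, keepdims=True)
        w = r.mean(axis=0)                                  # M-step: weights
        lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)  # M-step: means
    print("weights:", w.round(2), "means:", lam.round(2))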
Modelling of density limit phenomena in toroidal helical plasmas
Itoh, K. [National Inst. for Fusion Science, Toki, Gifu (Japan); Itoh, S.-I. [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics; Giannone, L. [Max Planck Institut fuer Plasmaphysik, EURATOM-IPP Association, Garching (Germany)
2000-03-01
The physics of density limit phenomena in toroidal helical plasmas is discussed on the basis of an analytic point model of toroidal plasmas. The combined mechanism of the transport and radiation loss of energy is analyzed, and the achievable density is derived. A scaling law for the density limit is discussed. The dependence of the critical density on the heating power, magnetic field, plasma size and safety factor in the case of L-mode energy confinement is explained. The dynamic evolution of the plasma energy and radiation loss is discussed. Assuming a simple model of density evolution, with a sudden loss of density if the temperature falls below a critical value, a limit cycle oscillation is shown to occur. A condition that divides the limit cycle oscillation from complete radiation collapse is discussed. This model seems to explain the density limit oscillation that has been observed on the W7-AS stellarator. (author)
Modelling of density limit phenomena in toroidal helical plasmas
Itoh, Kimitaka [National Inst. for Fusion Science, Toki, Gifu (Japan); Itoh, Sanae-I. [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics; Giannone, Louis [EURATOM-IPP Association, Max Planck Institut fuer Plasmaphysik, Garching (Germany)
2001-11-01
The physics of density limit phenomena in toroidal helical plasmas is discussed on the basis of an analytic point model of toroidal plasmas. The combined mechanism of the transport and radiation loss of energy is analyzed, and the achievable density is derived. A scaling law for the density limit is discussed. The dependence of the critical density on the heating power, magnetic field, plasma size and safety factor in the case of L-mode energy confinement is explained. The dynamic evolution of the plasma energy and radiation loss is discussed. Assuming a simple model of density evolution, with a sudden loss of density if the temperature falls below a critical value, a limit cycle oscillation is shown to occur. A condition that divides the limit cycle oscillation from complete radiation collapse is discussed. This model seems to explain the density limit oscillation that has been observed on the Wendelstein 7-AS (W7-AS) stellarator. (author)
Kozlowski, Dawid; Worthington, Dave
2015-01-01
Many public healthcare systems struggle with excessive waiting lists for elective patient treatment. Different countries address this problem in different ways, and one interesting method entails a maximum waiting time guarantee. Introduced in Denmark in 2002, it entitles patients to treatment at a private hospital in Denmark or at a hospital abroad if the public healthcare system is unable to provide treatment within the stated maximum waiting time guarantee. Although clearly very attractive in some respects, many stakeholders have been very concerned about the negative consequences of the policy on the utilization of public hospital resources. This paper illustrates the use of a queue modelling approach in the analysis of elective patient treatment governed by the maximum waiting time policy. Drawing upon the combined strengths of analytic and simulation approaches, we develop both continuous-time Markov and simulation models for use by hospital planners and strategic decision makers.
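The flavour of the analytic (continuous-time Markov) side can be conveyed by a textbook M/M/s calculation: the Erlang-C probability that an arriving patient must wait. The arrival, service and capacity figures below are hypothetical.

    # Erlang-C waiting probability for an M/M/s queue.
    from math import factorial

    def erlang_c(arrival_rate, service_rate, servers):
        a = arrival_rate / service_rate          # offered load
        rho = a / servers                        # utilization, must be < 1
        tail = a**servers / (factorial(servers) * (1 - rho))
        p0_inv = sum(a**k / factorial(k) for k in range(servers)) + tail
        return tail / p0_inv

    print(f"P(patient must wait) = {erlang_c(8.0, 1.0, 10):.3f}")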
Ashford, Oliver S.; Davies, Andrew J.; Jones, Daniel O. B.
2014-12-01
Xenophyophores are a group of exclusively deep-sea agglutinating rhizarian protozoans, at least some of which are foraminifera. They are an important constituent of the deep-sea megafauna that are sometimes found in sufficient abundance to act as a significant source of habitat structure for meiofaunal and macrofaunal organisms. This study utilised maximum entropy modelling (Maxent) and a high-resolution environmental database to explore the environmental factors controlling the presence of Xenophyophorea and two frequently sampled xenophyophore species that are taxonomically stable: Syringammina fragilissima and Stannophyllum zonarium. These factors were also used to predict the global distribution of each taxon. Areas of high habitat suitability for xenophyophores were highlighted throughout the world's oceans, including in a large number of areas yet to be suitably sampled, but the Northeast and Southeast Atlantic Ocean, Gulf of Mexico and Caribbean Sea, the Red Sea and deep-water regions of the Malay Archipelago represented particular hotspots. The two species investigated showed more specific habitat requirements when compared to the model encompassing all xenophyophore records, perhaps in part due to the smaller number and relatively more clustered nature of the presence records available for modelling at present. The environmental variables depth, oxygen parameters, nitrate concentration, carbon-chemistry parameters and temperature were of greatest importance in determining xenophyophore distributions, but, somewhat surprisingly, hydrodynamic parameters were consistently shown to have low importance, possibly due to the paucity of well-resolved global hydrodynamic datasets. The results of this study (and others of a similar type) have the potential to guide further sample collection, environmental policy, and spatial planning of marine protected areas and industrial activities that impact the seafloor, particularly those that overlap with aggregations of
Solving the Maximum Weighted Clique Problem Based on Parallel Biological Computing Model
Zhaocai Wang
2015-01-01
The maximum weighted clique (MWC) problem, as a typical NP-complete problem, is difficult to solve with electronic computer algorithms. The aim of the problem is to seek a vertex clique with maximal weight sum in a given undirected graph. It is an extremely important problem in the field of optimal engineering scheme and control, with numerous practical applications. From the point of view of practice, we give a parallel biological algorithm to solve the MWC problem. For a maximum weighted clique problem with m edges and n vertices, we use fixed-length DNA strands to represent different vertices and edges, fully conduct the biochemical reactions, and find the solution to the MWC problem within a certain length range with O(n²) time complexity, compared with the exponential time of previous computer algorithms. We expand the applied scope of parallel biological computation and reduce the computational complexity of practical engineering problems. Meanwhile, we provide a meaningful reference for solving other complex problems.
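For orientation, the problem the biological algorithm parallelizes can be stated as a brute-force reference solver; this is exponential in the number of vertices and only practical for tiny graphs, with made-up weights and edges.

    # Brute-force maximum weighted clique (reference implementation).
    from itertools import combinations

    weights = {0: 4, 1: 3, 2: 5, 3: 2}                    # vertex weights
    edges = {(0, 1), (0, 2), (1, 2), (2, 3)}              # undirected edges

    def is_clique(vs):
        return all((a, b) in edges or (b, a) in edges
                   for a, b in combinations(vs, 2))

    best = max(
        (vs for r in range(1, len(weights) + 1)
         for vs in combinations(weights, r) if is_clique(vs)),
        key=lambda vs: sum(weights[v] for v in vs),
    )
    print("Maximum weighted clique:", best)   # (0, 1, 2), weight 12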
Variable selection for modeling the absolute magnitude at maximum of Type Ia supernovae
Uemura, Makoto; Kawabata, Koji S.; Ikeda, Shiro; Maeda, Keiichi
2015-06-01
We discuss what is an appropriate set of explanatory variables in order to predict the absolute magnitude at the maximum of Type Ia supernovae. In order to have a good prediction, the error for future data, which is called the "generalization error," should be small. We use cross-validation in order to control the generalization error and a LASSO-type estimator in order to choose the set of variables. This approach can be used even in the case that the number of samples is smaller than the number of candidate variables. We studied the Berkeley supernova database with our approach. Candidates for the explanatory variables include normalized spectral data, variables about lines, and previously proposed flux ratios, as well as the color and light-curve widths. As a result, we confirmed the past understanding about Type Ia supernovae: (i) The absolute magnitude at maximum depends on the color and light-curve width. (ii) The light-curve width depends on the strength of Si II. Recent studies have suggested adding more variables in order to explain the absolute magnitude. However, our analysis does not support adding any other variables in order to have a better generalization error.
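The variable-selection machinery described here corresponds to a cross-validated LASSO; the sketch below mimics the setting with more candidate variables than samples, using random placeholder features rather than the Berkeley spectra.

    # LASSO with cross-validation when candidates outnumber samples.
    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(4)
    X = rng.standard_normal((50, 200))      # 50 supernovae, 200 candidate variables
    beta = np.zeros(200)
    beta[[0, 1]] = [0.8, -0.5]              # e.g. colour and light-curve width
    y = X @ beta + 0.05 * rng.standard_normal(50)

    model = LassoCV(cv=5).fit(X, y)         # CV controls the generalization error
    print("Selected variables:", np.flatnonzero(model.coef_))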
The infinite volume limit of Ford's alpha model
Stefansson, Sigurdur Orn
2009-01-01
We prove the existence of a limit of the finite volume probability measures generated by tree growth rules in Ford's alpha model of phylogenetic trees. The limiting measure is shown to be concentrated on the set of trees consisting of exactly one infinite spine with finite, independently and identically distributed outgrowths.
Xuan Guo
2016-01-01
The theoretical formula for the maximum internal forces of a circular tunnel lining structure under underground impact loads is deduced in this paper. The internal force calculation formulas under different equivalent forms of impact pseudostatic loads are obtained. Furthermore, by comparing the theoretical solution with the measured data from a top-blasting model test of a circular tunnel, it is found that the proposed theoretical results accord well with the experimental values. The corresponding equivalent impact pseudostatic triangular load is the most realistic pattern of all the tested equivalent forms. The equivalent impact pseudostatic load model and the maximum solution of the internal force for the tunnel lining structure are thereby partially verified.
Hewett, Timothy E; Webster, Kate E; Hurd, Wendy J
2017-08-16
The evolution of clinical practice and medical technology has yielded an increasing number of clinical measures and tests to assess a patient's progression and return-to-sport readiness after injury. The plethora of available tests may be burdensome to clinicians in the absence of evidence that demonstrates the utility of a given measurement. Thus, there is a critical need to identify a discrete number of metrics to capture during clinical assessment to effectively and concisely guide patient care. The data sources included PubMed and PubMed Central articles on the topic. We present a systematic approach to injury risk analyses and how this concept may be used in algorithms for risk analyses for primary anterior cruciate ligament (ACL) injury in healthy athletes and patients after ACL reconstruction. In this article, we present the five-factor maximum model, which states that in any predictive model, a maximum of five variables will contribute in a meaningful manner to any risk factor analysis. We demonstrate how this model already exists for prevention of primary ACL injury, how it may guide development of a second ACL injury risk analysis, and how the five-factor maximum model may be applied across the injury spectrum for development of injury risk analyses.
A Maximum Entropy Fixed-Point Route Choice Model for Route Correlation
Louis de Grange
2014-06-01
In this paper we present a stochastic route choice model for transit networks that explicitly addresses route correlation due to overlapping alternatives. The model is based on a multi-objective mathematical programming problem, the optimality conditions of which generate an extension of the Multinomial Logit models. The proposed model considers a fixed-point problem for treating correlations between routes, which can be solved iteratively. We estimated the new model on the Santiago (Chile) Metro network and compared the results with other route choice models from the literature. The new model has better explanatory and predictive power than many other alternative models, correctly capturing the correlation factor. Our methodology can be extended to private transport networks.
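Only the fixed-point mechanism is sketched below: iterate logit route shares until they are self-consistent with an overlap correction. The penalty used here is a generic commonality term, not the authors' entropy formulation, and all numbers are hypothetical.

    # Fixed-point iteration for correlation-corrected logit route shares.
    import numpy as np

    utility = np.array([-1.0, -1.1, -1.3])   # route disutilities (hypothetical)
    overlap = np.array([[1.0, 0.8, 0.0],     # pairwise overlap fractions
                        [0.8, 1.0, 0.0],
                        [0.0, 0.0, 1.0]])

    p = np.ones(3) / 3
    for _ in range(100):
        penalty = np.log(overlap @ p)        # routes sharing flow are penalized
        v = utility - 0.5 * penalty
        p_new = np.exp(v) / np.exp(v).sum()
        if np.abs(p_new - p).max() < 1e-10:  # fixed point reached
            break
        p = p_new
    print("Equilibrium route shares:", p.round(3))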
Song, Xin-Yuan; Lee, Sik-Yum
2006-01-01
Structural equation models are widely appreciated in social-psychological research and other behavioral research to model relations between latent constructs and manifest variables and to control for measurement error. Most applications of SEMs are based on fully observed continuous normal data and models with a linear structural equation.…
Mehrotra, Shakti; Prakash, O; Khan, Feroz; Kukreja, A K
2013-02-01
KEY MESSAGE: An ANN-based combinatorial model is proposed and its efficiency assessed for predicting the optimal culture conditions that achieve maximum productivity in a bioprocess in terms of high biomass. A neural network approach is utilized in combination with the hidden Markov concept to assess the optimal values of different environmental factors that result in maximum biomass productivity of cultured tissues after a definite culture duration. Five hidden Markov models (HMMs) were derived for five test culture conditions, i.e. pH of the liquid growth medium, volume of medium per culture vessel, sucrose concentration (% w/v) in the growth medium, nitrate concentration (g/l) in the medium, and density of the initial inoculum (g fresh weight) per culture vessel, together with the corresponding fresh-weight biomass. The artificial neural network (ANN) model was represented as a function of these five Markov models, and the overall simulation of fresh-weight biomass was done with this combinatorial ANN-HMM. The empirical results of Rauwolfia serpentina hairy roots were taken as a model and compared with simulated results obtained from the pure ANN and ANN-HMMs. The stochastic testing and Cronbach's α-values of the pure and combinatorial models revealed more internal consistency and a more skewed character (0.4635) in the histogram of ANN-HMM compared to pure ANN (0.3804). The simulated results for optimal conditions of maximum fresh-weight production obtained from the ANN-HMM and ANN models closely resemble the experimentally optimized culture conditions under which the highest fresh weight was obtained. However, only a 2.99% deviation from the experimental values was observed in the values obtained from the combinatorial model, compared to 5.44% for the pure ANN model. This comparison showed a 45% better potential of the combinatorial model for predicting the optimal culture conditions for the best growth of hairy root cultures.
Beltrán, M C; Romero, T; Althaus, R L; Molina, M P
2013-05-01
The Charm maximum residue limit β-lactam and tetracycline test (Charm MRL BLTET; Charm Sciences Inc., Lawrence, MA) is an immunoreceptor assay utilizing Rapid One-Step Assay lateral flow technology that detects β-lactam or tetracycline drugs in raw commingled cow milk at or below European Union maximum residue levels (EU-MRL). The Charm MRL BLTET test procedure was recently modified (dilution in buffer and longer incubation) by the manufacturer for use with raw ewe and goat milk. To assess the Charm MRL BLTET test for the detection of β-lactams and tetracyclines in milk of small ruminants, an evaluation study was performed at the Instituto de Ciencia y Tecnologia Animal of the Universitat Politècnica de València (Spain). The test specificity and detection capability (CCβ) were studied following Commission Decision 2002/657/EC. Specificity results obtained in this study were optimal for individual milk free of antimicrobials from ewes (99.2% for β-lactams and 100% for tetracyclines) and goats (97.9% for β-lactams and 100% for tetracyclines) along the entire lactation period, regardless of whether the results were visually or instrumentally interpreted. Moreover, no positive results were obtained when relatively high concentrations of different substances belonging to antimicrobial families other than β-lactams and tetracyclines were present in ewe and goat milk. For both types of milk, the CCβ calculated was lower than or equal to the EU-MRL for amoxicillin (4 µg/kg), ampicillin (4 µg/kg), benzylpenicillin (≤ 2 µg/kg), dicloxacillin (30 µg/kg), oxacillin (30 µg/kg), cefacetrile (≤ 63 µg/kg), cefalonium (≤ 10 µg/kg), cefapirin (≤ 30 µg/kg), desacetylcefapirin (≤ 30 µg/kg), cefazolin (≤ 25 µg/kg), cefoperazone (≤ 25 µg/kg), cefquinome (20 µg/kg), ceftiofur (≤ 50 µg/kg), desfuroylceftiofur (≤ 50 µg/kg), and cephalexin (≤ 50 µg/kg). However, this test could detect neither cloxacillin nor nafcillin at or below the EU-MRL (CCβ > 30 µg/kg). The
Generalized linear models for categorical and continuous limited dependent variables
Smithson, Michael
2013-01-01
Introduction and Overview; The Nature of Limited Dependent Variables; Overview of GLMs; Estimation Methods and Model Evaluation; Organization of This Book; Discrete Variables; Binary Variables; Logistic Regression; The Binomial GLM; Estimation Methods and Issues; Analyses in R and Stata; Exercises; Nominal Polytomous Variables; Multinomial Logit Model; Conditional Logit and Choice Models; Multinomial Processing Tree Models; Estimation Methods and Model Evaluation; Analyses in R and Stata; Exercises; Ordinal Categorical Variables; Modeling Ordinal Variables: Common Practice versus Best Practice; Ordinal Model Alternatives; Cumulative Mod
A unified model of density limit in fusion plasmas
Zanca, P; Escande, D F; Pucella, G; Tudisco, O
2016-01-01
A limit for the edge density, ruled by radiation losses from light impurities, is established by a minimal cylindrical magneto-thermal equilibrium model. For the ohmic tokamak and the reversed field pinch, the limit scales linearly with the plasma current, like the empirical Greenwald limit. Auxiliary heating adds a further dependence, scaling with the 0.4 power, in agreement with L-mode tokamak experiments. For a purely externally heated configuration the limit takes on a Sudo-like form, depending mainly on the input power, and is compatible with recent stellarator scalings.
Xydas, N.; Kao, I.
1999-09-01
A new theory in contact mechanics for the modeling of soft fingers is proposed to define the relationship between the normal force and the radius of contact for soft fingers, considering general soft-finger materials, including linearly and nonlinearly elastic materials. The results show that the radius of contact is proportional to the normal force raised to the power of γ, which ranges from 0 to 1/3. This new theory subsumes the Hertzian contact model for linear elastic materials, where γ = 1/3. Experiments are conducted to validate the theory using artificial soft fingers made of various materials such as rubber and silicone. Results for human fingers are also compared. This theory provides a basis for numerically constructing friction limit surfaces. The numerical friction limit surface can be approximated by an ellipse, with the major and minor axes given by the maximum friction force and the maximum moment with respect to the normal axis of contact, respectively. Combining the results of the contact-mechanics model with the contact-pressure distribution, the normalized friction limit surface can be derived for anthropomorphic soft fingers. The results of the contact-mechanics model and the pressure distribution for soft fingers facilitate the construction of numerical friction limit surfaces, and will enable us to analyze and simulate contact behaviors of grasping and manipulation in robotics.
Decentralized Disturbance Accommodation with Limited Plant Model Information
Farokhi, F; Johansson, K H
2011-01-01
The design of optimal disturbance accommodation and servomechanism controllers with limited plant model information is considered in this paper. Their closed-loop performance is compared using a performance metric called the competitive ratio, which is the worst-case ratio of the cost of a given control design strategy to the cost of the optimal control design with full model information. It was recently shown that when it comes to designing optimal centralized or partially structured decentralized state-feedback controllers with limited model information, the best control design strategy in terms of the competitive ratio is a static one. This is true even though the optimal structured decentralized state-feedback controller with full model information is dynamic. In this paper, we show that, in contrast, the best limited-model-information control design strategy for the disturbance accommodation problem gives a dynamic controller. We find an explicit minimizer of the competitive ratio and we show that it is undomina...
Capacity Prediction Model Based on Limited Priority Gap-Acceptance Theory at Multilane Roundabouts
Zhaowei Qu
2014-01-01
Capacity is an important design parameter for roundabouts, and it is the premise for computing their delay and queue length. Roundabout capacity has been studied for decades, and the empirical regression model and the gap-acceptance model are the two main methods for predicting it. Based on gap-acceptance theory, and by considering the effect of limited priority, especially the relationship between the limited priority factor and the critical gap, a modified model was built to predict roundabout capacity. We then compared the results of Raff's method and the maximum likelihood estimation (MLE) method, and the MLE method was used to estimate the critical gaps. Finally, the capacities predicted by the different models were compared with the capacity observed in field surveys, which verifies the performance of the proposed model.
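As a baseline for what the limited-priority factor modifies, a classical gap-acceptance entry capacity can be computed from the circulating flow, the critical gap and the follow-up time; the parameter values below are illustrative.

    # Classical gap-acceptance entry capacity (Harders-type formula).
    import math

    def entry_capacity(qc, tc, tf):
        # qc: circulating flow (veh/s); tc: critical gap (s); tf: follow-up time (s)
        return qc * math.exp(-qc * tc) / (1.0 - math.exp(-qc * tf))

    qc = 600 / 3600.0                                  # 600 veh/h circulating
    cap = entry_capacity(qc, tc=4.1, tf=2.9) * 3600.0  # back to veh/h
    print(f"Entry capacity: {cap:.0f} veh/h")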
Hubbard, L; Ziemer, B; Sadeghi, B; Javan, H; Lipinski, J; Molloi, S [University of California, Irvine, CA (United States)
2015-06-15
Purpose: To evaluate the accuracy of dynamic CT myocardial perfusion measurement using first pass analysis (FPA) and maximum slope models. Methods: A swine animal model was prepared by percutaneous advancement of an angioplasty balloon into the proximal left anterior descending (LAD) coronary artery to induce varying degrees of stenosis. Maximal hyperaemia was achieved in the LAD with an intracoronary adenosine drip (240 µg/min). Serial microsphere and contrast (370 mg/mL iodine, 30 mL, 5 mL/s) injections were made over a range of induced stenoses, and dynamic imaging was performed using a 320-row CT scanner at 100 kVp and 200 mA. The FPA CT perfusion technique was used to make vessel-specific myocardial perfusion measurements. CT perfusion measurements using the FPA and maximum slope models were validated using colored microspheres as the reference gold standard. Results: Perfusion measurements using the FPA technique (P-FPA) showed good correlation with minimal offset when compared to perfusion measurements using microspheres (P-Micro) as the reference standard (P-FPA = 0.96 P-Micro + 0.05, R² = 0.97, RMSE = 0.19 mL/min/g). In contrast, the maximum slope model technique (P-MS) was shown to underestimate perfusion when compared to microsphere perfusion measurements (P-MS = 0.42 P-Micro − 0.48, R² = 0.94, RMSE = 3.3 mL/min/g). Conclusion: The results indicate the potential for significant improvements in the accuracy of dynamic CT myocardial perfusion measurement using the first pass analysis technique as compared with the standard maximum slope model.
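The maximum slope estimate that the study benchmarks against reduces to a simple ratio: the peak upslope of the tissue enhancement curve divided by the peak of the arterial input. The curves below are synthetic stand-ins for measured CT enhancement data.

    # Maximum-slope myocardial perfusion estimate from synthetic curves.
    import numpy as np

    t = np.linspace(0.0, 30.0, 301)                        # time (s)
    aif = 300.0 * np.exp(-0.5 * (t - 8.0) ** 2 / 4.0)      # arterial input (HU)
    tissue = 25.0 * (1.0 - np.exp(-np.clip(t - 6.0, 0.0, None) / 5.0))  # tissue (HU)

    max_slope = np.max(np.gradient(tissue, t))             # HU/s
    perfusion = max_slope / aif.max() * 60.0 * 100.0       # mL/min/100 g (approx.)
    print(f"Maximum-slope perfusion: {perfusion:.0f} mL/min/100 g")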
Kumar, Sanjay
2016-06-01
The present paper inspects the prediction capability of the latest version of the International Reference Ionosphere (IRI-2012) model in predicting the total electron content (TEC) over seven different equatorial regions across the globe during a very low solar activity phase (2009) and a high solar activity phase (2012). This has been carried out by comparing the ground-based Global Positioning System (GPS)-derived VTEC with that from the IRI-2012 model. The observed GPS-TEC shows the presence of the winter anomaly, which is prominent during the solar maximum year 2012 and disappears during the solar minimum year 2009. The monthly and seasonal means of the IRI-2012 model TEC with the IRI-NeQ topside have been compared with the GPS-TEC, and our results show that the monthly and seasonal mean values of the IRI-2012 model overestimate the observed GPS-TEC at all the equatorial stations. The discrepancy (or overestimation) in the IRI-2012 model is found to be larger during the solar maximum year 2012 than during the solar minimum year 2009. This contradicts the results recently presented by Tariku (2015) over the equatorial regions of Uganda. The discrepancy is largest during the December solstice and smallest during the March equinox. The magnitude of the discrepancy in the IRI-2012 model shows a longitudinal dependence, maximizing in the western longitude sector during both 2009 and 2012. The significant discrepancy in the IRI-2012 model observed during the solar minimum year 2009 could be attributed to a larger difference between the F10.7 flux and the EUV flux (26-34 nm) during the low solar activity period 2007-2009 than during the high solar activity period 2010-2012. This suggests that, to represent the solar activity impact in the IRI model, implementation of new solar activity indices is required for better performance.
Klein, Daniel; Zezula, Ivan
The extended growth curve model is discussed in this paper. There are two versions of the model studied in the literature, which differ in the way the column spaces of the design matrices are nested. The nesting is applied either to the between-individual or to the within-individual design matrices.
Padma, S
2012-01-01
Web 3.0 is an evolving extension of the Web 2.0 scenario. Perceptions of Web 3.0 differ from person to person. The Web 3.0 architecture supports ubiquitous connectivity, network computing, open identity, the intelligent web, distributed databases and intelligent applications. Some of the technologies that lead to the design and development of Web 3.0 applications are artificial intelligence, automated reasoning, cognitive architecture and the semantic web. An attempt is made to capture the requirements of students in line with Web 3.0, so as to bridge the gap between the design and development of Web 3.0 applications and the requirements of students. Maximum spanning tree modeling of the requirements facilitates the identification of key areas and key attributes in the design and development of software products for students in Web 3.0, using discriminant analysis. Keywords: Web 3.0, discriminant analysis, design and development, model, maximum spanning tree.
Gu, Fei; Wu, Hao
2016-09-01
The specifications of state space model for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Some derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under the normality and nonnormality conditions. In order to cope with the nonnormality conditions, the robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed at the end.
Overcoming limitations of model-based diagnostic reasoning systems
Holtzblatt, Lester J.; Marcotte, Richard A.; Piazza, Richard L.
1989-01-01
The development of a model-based diagnostic system to overcome the limitations of model-based reasoning systems is discussed. It is noted that model-based reasoning techniques can be used to analyze the failure behavior and diagnosability of system and circuit designs as part of the system process itself. One goal of current research is the development of a diagnostic algorithm which can reason efficiently about large numbers of diagnostic suspects and can handle both combinational and sequential circuits. A second goal is to address the model-creation problem by developing an approach for using design models to construct the GMODS model in an automated fashion.
Time-Limited Psychotherapy: An Interactional Stage Model.
Tracey, Terence J.
One model of successful time-limited psychotherapy characterizes the therapy as a movement through three interactional stages: the early rapport attainment stage, the middle conflict stage, and the final resolution stage. According to this model, these stages are indicated by the relative presence of communicational harmony. To examine the…
Evangelia Karagianni
2016-04-01
By utilizing meteorological data such as relative humidity, temperature, pressure, rain rate and precipitation duration at eight (8) stations in the Aegean Archipelago over six recent years (2007–2012), the effect of the weather on electromagnetic wave propagation is studied. The EM wave propagation characteristics depend on atmospheric refractivity and consequently on rain rate, which vary randomly in time and space. Therefore the statistics of radio refractivity, rain rate and related propagation effects are of main interest. This work investigates the maximum value of rain rate in monthly rainfall records for a 5-min interval, comparing it with different values of integration time as well as different percentages of time. The main goal is to determine the attenuation level for microwave links based on local rainfall data for various sites in Greece (L-zone), namely the Aegean Archipelago, with a view to improved accuracy compared with the more generic zone data available. A measurement of rain attenuation for a link in the S-band has been carried out and the data compared with predictions based on the standard ITU-R method.
Seyed Mostafa Hosseinalipour; Hadiseh Karimaei; Ehsan Movahednejad
2016-01-01
The maximum entropy principle (MEP) is one of the first methods used to predict droplet size and velocity distributions of liquid sprays. This method needs a mean droplet diameter as an input to predict the droplet size distribution. This paper presents a new sub-model, based on the deterministic aspects of the liquid atomization process and independent of experimental data, to provide the mean droplet diameter for use in the maximum entropy formulation (MEF). For this purpose, a theoretical model based on the energy conservation law, called the energy-based model (EBM), is presented. In this approach, atomization occurs due to kinetic energy loss. The prediction of the combined model (MEF/EBM) is in good agreement with the available experimental data. The energy-based model can be used as a fast and sufficiently reliable model to obtain a good estimate of the mean droplet diameter of a spray, and the combined model (MEF/EBM) can be used to predict the droplet size distribution at the primary breakup well.
Al-Amoudi, A.; Zhang, L. [University of Leeds (United Kingdom). School of Electronic and Electrical Engineering
2000-09-01
A neural-network-based approach for solar array modelling is presented. The hidden layer of the proposed network consists of a set of nonlinear radial basis functions (RBFs) which are connected directly to the input vector. The links between hidden and output units are linear. The model can be trained using a random set of data collected from a real photovoltaic (PV) plant. The training procedures are fast and the accuracy of the trained models is comparable with that of the conventional model. The principle and training procedures of RBF-network modelling applied to emulate the I/V characteristics of PV arrays are discussed. Simulation results of the trained RBF networks for modelling a PV array and predicting the maximum power points of a real PV panel are presented. (author)
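A bare-bones version of such an RBF network, with Gaussian basis functions on the input and a linear output layer solved by least squares, is sketched below on a synthetic I/V curve rather than plant measurements.

    # RBF-network regression of a (synthetic) PV panel I/V characteristic.
    import numpy as np

    rng = np.random.default_rng(5)
    V = np.linspace(0.0, 20.0, 80)                       # panel voltage (V)
    I = 3.0 * (1.0 - np.exp((V - 20.0) / 2.0)) + 0.02 * rng.standard_normal(80)

    centers = np.linspace(0.0, 20.0, 12)                 # RBF centres
    width = 2.0
    Phi = np.exp(-((V[:, None] - centers[None, :]) ** 2) / (2.0 * width**2))

    w, *_ = np.linalg.lstsq(Phi, I, rcond=None)          # linear output weights
    rmse = np.sqrt(np.mean((I - Phi @ w) ** 2))
    print(f"RMS fit error: {rmse:.3f} A")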
Geometrical Modeling of Woven Fabrics Weavability-Limit New Relationships
Dalal Mohamed
2017-03-01
The weavability limit and tightness of 2D and 3D woven fabrics are important factors and depend on many geometric parameters. Based on a comprehensive review of the literature on textile fabric construction and properties, and related research on fabric geometry, a study of the weavability-limit and tightness relationships of 2D and 3D woven fabrics was undertaken. Experiments were conducted on a representative number of polyester and cotton woven fabrics which were woven in our workshop, using three machines endowed with different insertion systems (rapier, projectile and air jet). Afterwards, these woven fabrics were analyzed in the laboratory to determine their physical and mechanical characteristics using an air permeability meter and the KES-F KAWABATA Evaluation System for Fabrics. In this study, the current Booten's weavability-limit and tightness relationships, based on Ashenhurst's, Peirce's, Love's, Russell's and Galuszynski's theory and maximum weavability, are reviewed and modified as new relationships to expand their use to general cases (2D and 3D woven fabrics, all fiber materials, all yarns, etc.). The theoretical relationships were examined and found to agree with experimental results. It was concluded that the weavability-limit and tightness relationships are useful tools for weavers in predicting whether a proposed fabric construction is weavable and also in predicting and explaining its physical and mechanical properties.
Theoretical model for forming limit diagram predictions without initial inhomogeneity
Gologanu, Mihai; Comsa, Dan Sorin; Banabic, Dorel
2013-05-01
We report on our attempts to build a theoretical model for determining forming limit diagrams (FLD) based on limit analysis that, contrary to the well-known Marciniak and Kuczynski (M-K) model, does not assume the initial existence of a region with material or geometrical inhomogeneity. We first give a new interpretation based on limit analysis for the onset of necking in the M-K model. Considering the initial thickness defect along a narrow band as postulated by the M-K model, we show that incipient necking is a transition in the plastic mechanism from one of plastic flow in both the sheet and the band to another one where the sheet becomes rigid and all plastic deformation is localized in the band. We then draw on some analogies between the onset of necking in a sheet and the onset of coalescence in a porous bulk body. In fact, the main advance in coalescence modeling has been based on a similar limit analysis with an important new ingredient: the evolution of the spatial distribution of voids, due to the plastic deformation, creating weaker regions with higher porosity surrounded by sound regions with no voids. The onset of coalescence is precisely the transition from a mechanism of plastic deformation in both regions to another one, where the sound regions are rigid. We apply this new ingredient to a necking model based on limit analysis, for the first quadrant of the FLD and a porous sheet. We use Gurson's model with some recent extensions to model the porous material. We follow both the evolution of a homogeneous sheet and the evolution of the distribution of voids. At each moment we test for a potential change of plastic mechanism, by comparing the stresses in the uniform region to those in a virtual band with a larger porosity. The main difference with the coalescence of voids in a bulk solid is that the plastic mechanism for a sheet admits a supplementary degree of freedom, namely the change in the thickness of the virtual band. For strain ratios close to
An extreme value model for maximum wave heights based on weather types
Rueda, Ana; Camus, Paula; Méndez, Fernando J.; Tomás, Antonio; Luceño, Alberto
2016-02-01
Extreme wave heights are climate-related events. Therefore, special attention should be given to the large-scale weather patterns responsible for wave generation in order to properly understand wave climate variability. We propose a classification of weather patterns to statistically downscale daily significant wave height maxima to a local area of interest. The time-dependent statistical model obtained here is based on the convolution of the stationary extreme value model associated to each weather type. The interdaily dependence is treated by a climate-related extremal index. The model's ability to reproduce different time scales (daily, seasonal, and interannual) is presented by means of its application to three locations in the North Atlantic: Mayo (Ireland), La Palma Island, and Coruña (Spain).
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories.
Mind the edge! The role of adjacency matrix degeneration in maximum entropy weighted network models
Sagarra, Oleguer; Díaz-Guilera, Albert
2015-01-01
Complex network null models based on entropy maximization are becoming a powerful tool to characterize and analyze data from real systems. However, it is not easy to extract good and unbiased information from these models: a proper understanding of the nature of the underlying events represented in them is crucial. In this paper we emphasize this fact, stressing how an accurate counting of configurations compatible with given constraints is fundamental to building good null models for networks with integer-valued adjacency matrices constructed from aggregation of one or multiple layers. We show how different assumptions about the elements from which the networks are built give rise to distinctively different statistics, even when considering the same observables to match those of real data. We illustrate our findings by applying the formalism to three datasets using an open-source software package accompanying the present work and demonstrate how such differences are clearly seen when measuring networ...
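To make the point about counting concrete, the following sketch (illustrative only, not the accompanying package, whose API is not reproduced here) draws integer-valued adjacency matrices under two maximum-entropy null models with identical expected strengths; the distinct event statistics show up directly in the sampled multiplicities.

import numpy as np

rng = np.random.default_rng(0)
s = np.array([10., 20., 30., 40.])        # hypothetical node strengths
mean = np.outer(s, s) / s.sum()           # expected edge multiplicities

# Distinguishable events: independent Poisson multiplicities
poisson_sample = rng.poisson(mean)
# Indistinguishable events: geometric (Bose-Einstein-like) multiplicities
# with the same mean m, via success probability p = 1/(1+m) on {0,1,2,...}
geom_sample = rng.geometric(1.0 / (1.0 + mean)) - 1

for name, t in [("Poisson", poisson_sample), ("Geometric", geom_sample)]:
    np.fill_diagonal(t, 0)                # no self-loops
    print(name, "max multiplicity:", t.max(),
          "fraction of zero entries:", np.mean(t == 0).round(2))

Both samples match the same expected strengths, yet their fluctuations (and hence any null-model p-value computed from them) differ, which is the degeneration effect the title alludes to.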
A maximum entropy model for predicting wild boar distribution in Spain
Jaime Bosch
2014-09-01
Wild boar (Sus scrofa) populations in many areas of the Palearctic, including the Iberian Peninsula, have grown continuously over the last century. This increase has led to numerous different types of conflicts due to the damage these mammals can cause to agriculture, the problems they create in the conservation of natural areas, and the threat they pose to animal health. In the context of both wildlife management and the design of health programs for disease control, it is essential to know how wild boar are distributed on a large spatial scale. Given that quantifying the distribution of wild species using census techniques is virtually impossible in large-scale studies, modeling techniques have to be used instead to estimate animals' distributions, densities, and abundances. In this study, the potential distribution of wild boar in Spain was predicted by integrating presence data and environmental variables into a MaxEnt approach. We built and tested models using 100 bootstrapped replicates. For each replicate or simulation, presence data were divided into two subsets that were used for model fitting (60% of the data) and cross-validation (40% of the data). The final model was found to be accurate, with an area under the receiver operating characteristic curve (AUC) value of 0.79. Six explanatory variables for predicting wild boar distribution were identified on the basis of the percentage of their contribution to the model. The model exhibited a high degree of predictive accuracy, which has been confirmed by its agreement with satellite images and field surveys.
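The replicate-and-validate protocol described above is straightforward to reproduce in outline. The sketch below uses synthetic data, repeated random 60/40 splits standing in for the bootstrapped replicates, and scikit-learn's logistic regression as a stand-in for MaxEnt (to which it is closely related for presence/background data); everything besides the 100 replicates, the 60/40 split, and the AUC scoring is assumed.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 6))                       # 6 environmental predictors
p = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))  # synthetic suitability
y = rng.binomial(1, p)                            # presence/background labels

aucs = []
for rep in range(100):                            # 100 replicates, as in the study
    Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=0.6, random_state=rep)
    model = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    aucs.append(roc_auc_score(yte, model.predict_proba(Xte)[:, 1]))

print(f"mean AUC over replicates: {np.mean(aucs):.2f}")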
Draxler, Clemens; Alexandrowicz, Rainer W
2015-12-01
This paper refers to the exponential family of probability distributions and the conditional maximum likelihood (CML) theory. It is concerned with the determination of the sample size for three groups of tests of linear hypotheses, known as the fundamental trinity of Wald, score, and likelihood ratio tests. The main practical purpose refers to the special case of tests of the class of Rasch models. The theoretical background is discussed and the formal framework for sample size calculations is provided, given a predetermined deviation from the model to be tested and the probabilities of the errors of the first and second kinds.
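The generic logic behind such sample size calculations can be sketched as follows: under a predetermined deviation from the model, the Wald, score, and likelihood ratio statistics are asymptotically noncentral chi-square with noncentrality growing linearly in n, so the required sample size is the smallest n at which the power target is met. The snippet below illustrates this logic under assumed values; it is not the paper's specific formulas for Rasch models.

from scipy.stats import chi2, ncx2

def sample_size(df, lam_per_obs, alpha=0.05, power=0.80):
    """Smallest n such that a chi-square test with df degrees of freedom,
    size alpha, reaches the target power when the noncentrality is
    n * lam_per_obs (lam_per_obs encodes the assumed model deviation)."""
    crit = chi2.ppf(1 - alpha, df)                     # error of the first kind
    n = df + 1
    while ncx2.sf(crit, df, n * lam_per_obs) < power:  # error of the second kind
        n += 1
    return n

# Hypothetical: 2 degrees of freedom, per-observation noncentrality 0.02
print(sample_size(df=2, lam_per_obs=0.02))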
Zhu, Ke (doi:10.1214/11-AOS895)
2012-01-01
This paper investigates the asymptotic theory of the quasi-maximum exponential likelihood estimators (QMELE) for ARMA-GARCH models. Under only a fractional moment condition, the strong consistency and the asymptotic normality of the global self-weighted QMELE are obtained. Based on this self-weighted QMELE, the local QMELE is shown to be asymptotically normal for the ARMA model with GARCH (finite variance) and IGARCH errors. A formal comparison of the two estimators is given for some cases. A simulation study is carried out to assess the performance of these estimators, and a real example on the world crude oil price is given.
A Maximum-Entropy Compound Distribution Model for Extreme Wave Heights of Typhoon-Affected Sea Areas
WANG Li-ping; SUN Xiao-guang; LU Ke-bo; XU De-lun
2012-01-01
A new compound distribution model for extreme wave heights of typhoon-affected sea areas is proposed on the basis of the maximum-entropy principle. The new model is formed by nesting a discrete distribution in a continuous one, having eight parameters which can be determined in terms of observed data of typhoon occurrence frequency and extreme wave heights by numerically solving two sets of equations derived in this paper. The model is examined by using it to predict the N-year return-period wave height at two hydrology stations in the Yellow Sea, and the predicted results are compared with those predicted by use of some other compound distribution models. Examinations and comparisons show that the model has some advantages for predicting the N-year return-period wave height in typhoon-affected sea areas.
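The nesting construction can be written compactly: if N is the yearly number of typhoons and F the per-event wave-height distribution, the annual maximum satisfies F_annual(h) = sum_n P(N = n) F(h)^n. The sketch below illustrates this with a Poisson frequency law and a Weibull stand-in for the maximum-entropy continuous distribution (the paper's eight-parameter model is not reproduced); all parameter values are hypothetical.

import numpy as np
from scipy.stats import poisson, weibull_min

lam = 2.5                                      # hypothetical mean typhoons per year
F_event = weibull_min(c=2.0, scale=4.0).cdf    # stand-in per-event wave-height CDF

def annual_max_cdf(h, n_max=60):
    """F_annual(h) = sum_n P(N=n) F(h)^n; the n=0 term covers typhoon-free years."""
    n = np.arange(0, n_max)
    return np.sum(poisson.pmf(n, lam) * F_event(h) ** n)

def n_year_wave_height(T):
    """N-year return-period wave height: h with annual non-exceedance 1 - 1/T."""
    hs = np.linspace(0.1, 30, 30000)
    F = np.array([annual_max_cdf(h) for h in hs])
    return hs[np.searchsorted(F, 1 - 1 / T)]

print(f"100-year wave height ~ {n_year_wave_height(100):.2f} m")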
Arriagada, Manuel; Cipagauta, Carolina; Foppiano, Alberto
2013-05-01
A simple semi-empirical model to determine the maximum electron concentration in the ionosphere (NmF2) for South American locations is used to calculate NmF2 for a northern hemisphere station in the same longitude sector. NmF2 is determined as the sum of two terms, one related to photochemical and diffusive processes and the other one to transport mechanisms. The model gives diurnal variations of NmF2 representative for winter, summer and equinox conditions, during intervals of high and low solar activity. Model NmF2 results are compared with ionosonde observations made at Toluca-México (19.3°N; 260°E). Differences between model results and observations are similar to those corresponding to comparisons with South American observations. It seems that further improvement of the model could be made by refining the latitude dependencies of coefficients used for the transport term.
Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood
Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.
2011-01-01
Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are
Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.
2010-01-01
We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power, trunca
Climatic impacts of fresh water hosing under Last Glacial Maximum conditions: a multi-model study
M. Kageyama
2012-08-01
Fresh water hosing simulations, in which a fresh water flux is imposed in the North Atlantic to force fluctuations of the Atlantic Meridional Overturning Circulation, have been routinely performed, first to study the climatic signature of different states of this circulation, then, under present or future conditions, to investigate the potential impact of a partial melting of the Greenland ice sheet. The most compelling examples of climatic changes potentially related to AMOC abrupt variations, however, are found in high resolution palaeo-records from around the globe for the last glacial period. To study those more specifically, more and more fresh water hosing experiments have been performed under glacial conditions in recent years. Here we compare an ensemble of 11 such simulations run with 6 different climate models. Each simulation follows a slightly different design, but all are sufficiently close to be compared. All study the impact of a fresh water hosing imposed in the extra-tropical North Atlantic. Common features in the model responses to hosing are the cooling over the North Atlantic, extending along the sub-tropical gyre in the tropical North Atlantic, the southward shift of the Atlantic ITCZ and the weakening of the African and Indian monsoons. On the other hand, the expression of the bipolar see-saw, i.e. warming in the Southern Hemisphere, differs from model to model, with some restricting it to the South Atlantic and specific regions of the Southern Ocean while others simulate a widespread Southern Ocean warming. The relationships between the features common to most models, i.e. climate changes over the North and tropical Atlantic, African and Asian monsoon regions, are further quantified. These suggest a tight correlation between the temperature and precipitation changes over the extra-tropical North Atlantic, but different pathways for the teleconnections between the AMOC/North Atlantic region and
Toward a mechanistic modeling of nitrogen limitation on vegetation dynamics
Xu, Chonggang [Los Alamos National Laboratory (LANL); Fisher, Rosie [National Center for Atmospheric Research (NCAR); Wullschleger, Stan D [ORNL; Wilson, Cathy [Los Alamos National Laboratory (LANL); Cai, Michael [Los Alamos National Laboratory (LANL); McDowell, Nathan [Los Alamos National Laboratory (LANL)
2012-01-01
Nitrogen is a dominant regulator of vegetation dynamics, net primary production, and terrestrial carbon cycles; however, most ecosystem models use a rather simplistic relationship between leaf nitrogen content and photosynthetic capacity. Such an approach does not consider how patterns of nitrogen allocation may change with differences in light intensity, growing-season temperature and CO2 concentration. To account for this known variability in nitrogen-photosynthesis relationships, we develop a mechanistic nitrogen allocation model based on a trade-off of nitrogen allocated between growth and storage, and an optimization of nitrogen allocated among light capture, electron transport, carboxylation, and respiration. The developed model is able to predict the acclimation of photosynthetic capacity to changes in CO2 concentration, temperature, and radiation when evaluated against published data of Vc,max (maximum carboxylation rate) and Jmax (maximum electron transport rate). A sensitivity analysis of the model for herbaceous plants, deciduous and evergreen trees implies that elevated CO2 concentrations lead to lower allocation of nitrogen to carboxylation but higher allocation to storage. Higher growing-season temperatures cause lower allocation of nitrogen to carboxylation, due to higher nitrogen requirements for light capture pigments and for storage. Lower levels of radiation have a much stronger effect on allocation of nitrogen to carboxylation for herbaceous plants than for trees, resulting from higher nitrogen requirements for light capture for herbaceous plants. As far as we know, this is the first model of complete nitrogen allocation that simultaneously considers nitrogen allocation to light capture, electron transport, carboxylation, respiration and storage, and the responses of each to altered environmental conditions. We expect this model could potentially improve our confidence in simulations of carbon-nitrogen interactions
Sullivan, T. [Brookhaven National Lab. (BNL), Upton, NY (United States)
2016-05-20
ZionSolutions is in the process of decommissioning the Zion Nuclear Power Station (ZNPS). After decommissioning is completed, the site will contain two reactor Containment Buildings, the Fuel Handling Building and Transfer Canals, Auxiliary Building, Turbine Building, Crib House/Forebay, and a Waste Water Treatment Facility that have been demolished to a depth of 3 feet below grade. Additional below ground structures remaining will include the Main Steam Tunnels and large diameter intake and discharge pipes. These additional structures are not included in the modeling described in this report but the inventory remaining (expected to be very low) will be included with one of the structures that are modeled as designated in the Zion Station Restoration Project (ZSRP) License Termination Plan (LTP). The remaining underground structures will be backfilled with clean material. The final selection of fill material has not been made.
Simulation of shape memory alloys: material modeling using the principle of maximum dissipation
2011-01-01
In this work, material models for shape memory alloys (SMA) are developed using the principle of maximum dissipation. The laws of thermodynamics are satisfied identically, and the incorporation of constraints is greatly simplified. Ideal plasticity serves as a reference. Three material models are derived, evaluated within the finite element method, and compared with experimental data. The last model is able to...
Maximum likelihood cost functions for neural network models of air quality data
Dorling, Stephen R.; Foxall, Robert J.; Mandic, Danilo P.; Cawley, Gavin C.
The prediction of episodes of poor air quality using artificial neural networks is investigated, concentrating on selection of the most appropriate cost function used in training. Different cost functions correspond to different distributional assumptions regarding the data; the appropriate choice depends on whether a forecast of absolute pollutant concentration or prediction of exceedence events is of principal importance. The cost functions investigated correspond to logistic regression, homoscedastic Gaussian (i.e. conventional sum-of-squares) regression and heteroscedastic Gaussian regression. Both linear and nonlinear neural network architectures are evaluated. While the results presented relate to a dataset describing the daily time-series of the concentration of surface level ozone (O3) in urban Berlin, the methods applied are quite general and applicable to a wide range of pollutants and locations. The heteroscedastic Gaussian regression model outperforms the other nonlinear methods investigated; however, there is little improvement resulting from the use of nonlinear rather than linear models. Of greater significance is the flexibility afforded by the nonlinear heteroscedastic Gaussian regression model for a range of potential end-users, who may all have different answers to the question: "What is more important, correctly predicting exceedences or avoiding false alarms?"
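The distributional point is easiest to see in the cost function itself. Below is a minimal numpy sketch of the heteroscedastic Gaussian negative log-likelihood, in which the model predicts an input-dependent variance alongside the mean; a linear toy model fitted by grid search stands in for the paper's neural networks, and all data are synthetic.

import numpy as np

def heteroscedastic_nll(y, mu, log_var):
    """-log N(y; mu, exp(log_var)) up to constants; with log_var held fixed
    this reduces to a scaled sum-of-squares (the homoscedastic case)."""
    return np.mean(0.5 * log_var + 0.5 * (y - mu) ** 2 / np.exp(log_var))

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 500)
y = 2 * x + rng.normal(scale=0.1 + 0.5 * x)   # noise level grows with x

# Crude grid search: linear mean a*x, linear log-variance b*x + log(0.01)
candidates = [((a, b), heteroscedastic_nll(y, a * x, b * x + np.log(0.01)))
              for a in np.linspace(1.5, 2.5, 21)
              for b in np.linspace(0.0, 8.0, 41)]
best_params, best_nll = min(candidates, key=lambda c: c[1])
print("best (mean slope, log-variance slope):", best_params,
      " nll:", round(best_nll, 3))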
pH modeling for maximum dissolved organic matter removal by enhanced coagulation
Jiankun Xie; Dongsheng Wang; John van Leeuwen; Yanmei Zhao; Linan Xing; Christopher W. K. Chow
2012-01-01
Correlations between raw water characteristics and pH after enhanced coagulation to maximize dissolved organic matter (DOM) removal using four typical coagulants (FeCl3, Al2(SO4)3, polyaluminum chloride (PACl) and high performance polyaluminum chloride (HPAC)) without pH control were investigated. These correlations were analyzed on the basis of the raw water quality and the chemical and physical fractionations of DOM of thirteen Chinese source waters over three seasons. It was found that the final pH after enhanced coagulation for each of the four coagulants was influenced by the content of removable DOM (i.e. hydrophobic, and higher apparent molecular weight (AMW) DOM), the alkalinity and the initial pH of the raw water. A set of feed-forward semi-empirical models relating the final pH after enhanced coagulation for each of the four coagulants to the raw water characteristics were developed and optimized based on correlation analysis. The established models were preliminarily validated for prediction purposes, and it was found that the deviation between the predicted and actual data was low. This result demonstrated the potential for the application of these models in the practical operation of drinking water treatment plants.
Limits on the mass of the lightest Higgs in supersymmetric models
Masip, M; Pomarol, A
1998-01-01
In supersymmetric models extended with a gauge singlet the mass of the lightest Higgs boson has contributions proportional to the dimensionless coupling λ. In minimal scenarios, the requirement that this coupling remains perturbative up to the unification scale constrains λ to be smaller than ≈ 0.7. We study the maximum value of λ consistent with a perturbative unification of the gauge couplings in models containing nonstandard fields at intermediate scales. These fields appear in scenarios with gauge mediation of supersymmetry breaking. We find that the presence of extra fields can raise the maximum value of λ by up to 19%, increasing the limit on the mass of the lightest Higgs from 135 GeV to 155 GeV.
Ward, Sophie L.; Neill, Simon P.; Scourse, James D.; Bradley, Sarah L.; Uehara, Katsuto
2016-11-01
The spatial and temporal distribution of relative sea-level change over the northwest European shelf seas has varied considerably since the Last Glacial Maximum, due to eustatic sea-level rise and a complex isostatic response to deglaciation of both near- and far-field ice sheets. Because of the complex pattern of relative sea level changes, the region is an ideal focus for modelling the impact of significant sea-level change on shelf sea tidal dynamics. Changes in tidal dynamics influence tidal range, the location of tidal mixing fronts, dissipation of tidal energy, shelf sea biogeochemistry and sediment transport pathways. Significant advancements in glacial isostatic adjustment (GIA) modelling of the region have been made in recent years, and earlier palaeotidal models of the northwest European shelf seas were developed using output from less well-constrained GIA models as input to generate palaeobathymetric grids. We use the most up-to-date and well-constrained GIA model for the region as palaeotopographic input for a new high resolution, three-dimensional tidal model (ROMS) of the northwest European shelf seas. With focus on model output for 1 ka time slices from the Last Glacial Maximum (taken as being 21 ka BP) to present day, we demonstrate that spatial and temporal changes in simulated tidal dynamics are very sensitive to relative sea-level distribution. The new high resolution palaeotidal model is considered a significant improvement on previous depth-averaged palaeotidal models, in particular where the outputs are to be used in sediment transport studies, where consideration of the near-bed stress is critical, and for constraining sea level index points.
Wassim M. Haddad
2014-07-01
Advances in neuroscience have been closely linked to mathematical modeling beginning with the integrate-and-fire model of Lapicque and proceeding through the modeling of the action potential by Hodgkin and Huxley to the current era. The fundamental building block of the central nervous system, the neuron, may be thought of as a dynamic element that is “excitable”, and can generate a pulse or spike whenever the electrochemical potential across the cell membrane of the neuron exceeds a threshold. A key application of nonlinear dynamical systems theory to the neurosciences is to study phenomena of the central nervous system that exhibit nearly discontinuous transitions between macroscopic states. A very challenging and clinically important problem exhibiting this phenomenon is the induction of general anesthesia. In any specific patient, the transition from consciousness to unconsciousness as the concentration of anesthetic drugs increases is very sharp, resembling a thermodynamic phase transition. This paper focuses on multistability theory for continuous and discontinuous dynamical systems having a set of multiple isolated equilibria and/or a continuum of equilibria. Multistability is the property whereby the solutions of a dynamical system can alternate between two or more mutually exclusive Lyapunov stable and convergent equilibrium states under asymptotically slowly changing inputs or system parameters. In this paper, we extend the theory of multistability to continuous, discontinuous, and stochastic nonlinear dynamical systems. In particular, Lyapunov-based tests for multistability and synchronization of dynamical systems with continuously differentiable and absolutely continuous flows are established. The results are then applied to excitatory and inhibitory biological neuronal networks to explain the underlying mechanism of action for anesthesia and consciousness from a multistable dynamical system perspective, thereby providing a
Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity
Asti, Lorenzo; Uguzzoni, Guido; Marcatili, Paolo
2016-01-01
The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high...... of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value 10-6), outperforming other sequence- and structure-based models....
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
YUE Li; CHEN Xiru
2004-01-01
Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification, and some other smoothness conditions, it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate at which this solution tends to the true value is determined. In an important special case, this rate is the same as specified in the law of the iterated logarithm (LIL) for iid partial sums and thus cannot be improved.
Strong consistency of maximum quasi-likelihood estimates in generalized linear models
Yin, Changming; Zhao, Lincheng
2005-01-01
In a generalized linear model with q × 1 responses, bounded and fixed p × q regressors Z_i and a general link function, under the most general assumption on the minimum eigenvalue of ∑_{i=1}^{n} Z_i Z_i', the weakest possible moment condition on the responses and other mild regularity conditions, we prove that with probability one the quasi-likelihood equation has a solution β_n for all large sample sizes n, which converges to the true regression parameter β_0. This result is an essential improvement over the relevant results in the literature.
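For orientation, the quasi-likelihood equation in question has the form ∑_i Z_i (y_i − μ(Z_i'β)) = 0, and in practice a root is located by Fisher scoring. The sketch below solves it for an assumed Poisson-type model with log link on synthetic data; the cited theorems concern when such a root exists for large n and how fast it converges to the true β.

import numpy as np

rng = np.random.default_rng(3)
n, p = 2000, 3
Z = rng.normal(size=(n, p))
beta_true = np.array([0.5, -0.3, 0.2])
y = rng.poisson(np.exp(Z @ beta_true))        # responses with mean exp(Z'beta)

beta = np.zeros(p)
for _ in range(25):                           # Fisher scoring iterations
    mu = np.exp(Z @ beta)
    score = Z.T @ (y - mu)                    # quasi-score: sum_i Z_i (y_i - mu_i)
    info = Z.T @ (Z * mu[:, None])            # Fisher information sum_i mu_i Z_i Z_i'
    step = np.linalg.solve(info, score)
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break

print("beta_hat:", beta.round(3), " beta_true:", beta_true)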
Jorge Pereira
2015-12-01
Biological invasion by exotic organisms has become a key issue, a concern associated with the deep impacts of such processes on several domains. A better understanding of the processes, the identification of the more susceptible areas, and the definition of preventive or mitigation measures are critical for reducing the associated impacts. The use of species distribution modeling might help to identify areas that are more susceptible to invasion. This paper presents preliminary results on assessing the susceptibility to invasion by the exotic species Acacia dealbata Mill. in the Ceira river basin. The results are based on the maximum entropy modeling approach, considered one of the correlative modelling techniques with better predictive performance. Models validated on independent data sets show better performance, as evaluated with the area under the ROC curve (AUC) accuracy measure.
Thien-Tong Nguyen; Doyoung Byun
2008-01-01
In the "modified quasi-steady" approach, two-dimensional (2D) aerodynamic models of flapping wing motions are analyzed with focus on different types of wing rotation and different positions of rotation axis to explain the force peak at the end of each half stroke. In this model, an additional velocity of the mid chord position due to rotation is superimposed on the translational relative velocity of air with respect to the wing. This modification produces augmented forces around the end of eachstroke. For each case of the flapping wing motions with various combination of controlled translational and rotational velocities of the wing along inclined stroke planes with thin figure-of-eight trajectory, discussions focus on lift-drag evolution during one stroke cycle and efficiency of types of wing rotation. This "modified quasi-steady" approach provides a systematic analysis of various parameters and their effects on efficiency of flapping wing mechanism. Flapping mechanism with delayed rotation around quarter-chord axis is an efficient one and can be made simple by a passive rotation mechanism so that it can be useful for robotic application.
Esfandiar, Habib; Korayem, Moharam Habibnejad [Islamic Azad University, Tehran (Iran, Islamic Republic of)]
2015-09-15
In this study, we examine the nonlinear dynamics and determine the dynamic load carrying capacity (DLCC) of flexible manipulators. Manipulator modeling is based on Timoshenko beam theory (TBT), considering the effects of shear and rotational inertia. To avoid the risk of shear locking, a new procedure is presented based on a mixed finite element formulation, in which the shear deformation is free from shear locking and independent of the number of integration points along the element axis. Dynamic modeling of the manipulators is carried out for both small and large deformation models using the extended Hamilton method. The equations of motion are obtained using the nonlinear displacement-strain relationship and the second Piola-Kirchhoff stress tensor. In addition, a comprehensive formulation is developed to calculate the DLCC of flexible manipulators along a prescribed path, considering constraints on end-effector accuracy, maximum motor torque and maximum stress in the manipulators. Simulation studies are conducted to evaluate the efficiency of the proposed method for two-link flexible, fixed-base manipulators on linear and circular paths. Experimental results are also provided to validate the theoretical model. The findings demonstrate the efficiency and appropriate performance of the proposed method.
Modelling Limit Order Execution Times from Market Data
Kim, Adlar; Farmer, Doyne; Lo, Andrew
2007-03-01
Although the term "liquidity" is widely used in the finance literature, its meaning is loosely defined and there is no quantitative measure for it. Generally, "liquidity" means the ability to quickly trade stocks without causing a significant impact on the stock price. From this definition, we identified two facets of liquidity: (1) the execution time of limit orders, and (2) the price impact of market orders. A limit order is an order to transact a prespecified number of shares at a prespecified price, which will not cause an immediate execution. A market order, on the other hand, is an order to transact a prespecified number of shares at the market price, which will cause an immediate execution but is subject to price impact. Therefore, when a stock is liquid, market participants experience quick limit order executions and small market order impacts. As a first step toward understanding market liquidity, we studied the facet of liquidity related to limit order executions: execution times. In this talk, we propose a novel approach to modeling limit order execution times and show how they are affected by the size and price of orders. We used the q-Weibull distribution, a generalized form of the Weibull distribution that can control the fatness of the tail, to model limit order execution times.
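For reference, the q-Weibull density generalizes the Weibull with a tail parameter q: q → 1 recovers the ordinary Weibull, while 1 < q < 2 produces a power-law tail, which is what makes it a candidate for heavy-tailed execution times. A minimal sketch with assumed parameter values (standard q-Weibull formulas, not the authors' fitted model):

import numpy as np
from scipy.integrate import quad

def q_exp(x, q):
    """q-exponential: [1 + (1-q)x]_+^(1/(1-q)), tending to exp(x) as q -> 1."""
    return np.power(np.maximum(1 + (1 - q) * x, 0.0), 1.0 / (1 - q))

def q_weibull_pdf(t, q, kappa, lam):
    return ((2 - q) * kappa / lam * (t / lam) ** (kappa - 1)
            * q_exp(-((t / lam) ** kappa), q))

q, kappa, lam = 1.4, 1.2, 10.0             # hypothetical parameters
total, _ = quad(q_weibull_pdf, 0, np.inf, args=(q, kappa, lam))
print(f"integrates to {total:.4f}")         # ~1.0: a proper density for 1 < q < 2

# Tail comparison at t = 500: power-law (q = 1.4) vs near-Weibull (q -> 1) tail
print(q_weibull_pdf(500, q, kappa, lam), q_weibull_pdf(500, 1.0001, kappa, lam))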
One-dimensional XY model: Ergodic properties and hydrodynamic limit
Shuhov, A. G.; Suhov, Yu. M.
1986-11-01
We prove theorems on convergence to a stationary state in the course of time for the one-dimensional XY model and its generalizations. The key point is the well-known Jordan-Wigner transformation, which maps the XY dynamics onto a group of Bogoliubov transformations on the CAR C*-algebra over Z^1. The role of stationary states for Bogoliubov transformations is played by quasifree states, and for the XY model by their inverse images with respect to the Jordan-Wigner transformation. The hydrodynamic limit for the one-dimensional XY model is also considered. By using the Jordan-Wigner transformation one reduces the problem to that of constructing the hydrodynamic limit for the group of Bogoliubov transformations. As a result, we obtain an independent motion of "normal modes," which is described by a hyperbolic linear differential equation of second order. For the XX model this equation reduces to a first-order transfer equation.
Functional State Modelling of Cultivation Processes: Dissolved Oxygen Limitation State
Olympia Roeva
2015-04-01
A new functional state, namely the dissolved oxygen limitation state, is presented for fed-batch cultivation processes of both the bacterium Escherichia coli and the yeast Saccharomyces cerevisiae. The functional state modelling approach is applied to cultivation processes in order to overcome the main disadvantages of using a global process model, namely a complex model structure and a large number of model parameters. Along with the newly introduced dissolved oxygen limitation state, a second acetate production state and a first acetate production state are recognized during the fed-batch cultivation of E. coli, while a mixed oxidative state and a first ethanol production state are recognized during the fed-batch cultivation of S. cerevisiae. For all of the functional states mentioned above, both structural and parameter identification is performed based on experimental data from E. coli and S. cerevisiae fed-batch cultivations.
Maximum Entropy Production vs. Kolmogorov-Sinai Entropy in a Constrained ASEP Model
Martin Mihelich
2014-02-01
The asymmetric simple exclusion process (ASEP) has become a paradigmatic toy model of a non-equilibrium system, and much effort has been made in the past decades to compute its statistics exactly for given dynamical rules. Here, a different approach is developed; analogously to the equilibrium situation, we consider that the dynamical rules are not exactly known. Allowing the transition rates to vary, we show that the dynamical rules that maximize the entropy production and those that maximize the rate of variation of the dynamical entropy, known as the Kolmogorov-Sinai entropy, coincide with good accuracy. We study the dependence of this agreement on the size of the system and the couplings with the reservoirs, for the original ASEP and a variant with Langmuir kinetics.
Multi-scale turbulence modeling and maximum information principle. Part 4
Tao, L
2015-01-01
We explore incompressible homogeneous isotropic turbulence within the fourth-order model of optimal control and optimization, in contrast to the classical works of Proudman and Reid (1954) and Tatsumi (1957), with the intention of specifically fixing their defect of developing negative energy spectrum values, and of examining the conventional closure schemes more generally. The isotropic forms of the general and spatially degenerated fourth-order correlations of the fluctuating velocity are obtained and the primary dynamical equations under such isotropic forms are derived. The degenerated fourth-order correlation contains four scalar functions $D_i$, $i=1,2,3,4$, whose determination is the focus of closure. We discuss the equality constraints on these functions required by the self-consistency of the definition of the degenerated correlations. Furthermore, we develop inequality constraints for the scalar functions based on the application of the Cauchy-Schwarz inequality, the non-negativity of the variance of products,...
Experimental limits from ATLAS on Standard Model Higgs production.
ATLAS Collaboration
2012-01-01
Experimental limits from ATLAS on Standard Model Higgs production in the mass range 110-600 GeV. The solid curve reflects the observed experimental limits for the production of a Higgs of each possible mass value (horizontal axis). The region for which the solid curve dips below the horizontal line at the value of 1 is excluded with a 95% confidence level (CL). The dashed curve shows the expected limit in the absence of the Higgs boson, based on simulations. The green and yellow bands correspond respectively to the 68% and 95% confidence level regions around the expected limits. Higgs masses in the narrow range 123-130 GeV are the only masses not excluded at 95% CL.
Limited Area Forecasting and Statistical Modelling for Wind Energy Scheduling
Rosgaard, Martin Haubjerg
forecast accuracy for operational wind power scheduling. Numerical weather prediction history and scales of atmospheric motion are summarised, followed by a literature review of limited area wind speed forecasting. Hereafter, the original contribution to research on the topic is outlined. The quality control of wind farm data used as forecast reference is described in detail, and a preliminary limited area forecasting study illustrates the aggravation of issues related to numerical orography representation and accurate reference coordinates at fine weather model resolutions. For the offshore and coastal sites studied, limited area forecasting is found to deteriorate wind speed prediction accuracy, while inland results exhibit a steady forecast performance increase with weather model resolution. Temporal smoothing of wind speed forecasts is shown to improve wind power forecast performance by up to almost...
Optimal vaccination policies for an SIR model with limited resources.
Zhou, Yinggao; Yang, Kuan; Zhou, Kai; Liang, Yiting
2014-06-01
The purpose of this paper is to use analytical methods and optimization tools to suggest a vaccination program intensity for a basic SIR epidemic model with limited resources for vaccination. We show that there are two different scenarios for optimal vaccination strategies, and obtain analytical solutions for the optimal control problem that minimizes the total cost of disease under the assumption of a limited daily vaccine supply. These solutions and their corresponding optimal control policies are derived explicitly in terms of initial conditions, model parameters and resources for vaccination. With sufficient resources, the optimal control strategy is the normal Bang-Bang control. However, with limited resources, the optimal control strategy requires switching to time-variant vaccination.
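A minimal simulation conveys the constrained setting (though not the derived optimal policy): an SIR model in which the vaccination rate is capped both by a maximum intensity and by a daily vaccine supply. All parameter values below are assumed, and the capped policy is a simple stand-in for the paper's Bang-Bang and time-variant solutions.

import numpy as np
from scipy.integrate import odeint

beta, gamma = 0.4, 0.1        # hypothetical transmission and recovery rates
u_max, supply = 0.05, 0.02    # max vaccination rate; daily vaccine supply cap

def sir(x, t):
    S, I, R = x
    u = min(u_max, supply / max(S, 1e-9))   # capped so that u * S <= supply
    dS = -beta * S * I - u * S
    dI = beta * S * I - gamma * I
    dR = gamma * I + u * S
    return [dS, dI, dR]

t = np.linspace(0, 200, 2001)
sol = odeint(sir, [0.99, 0.01, 0.0], t)     # fractions: S, I, R
print(f"peak infected: {sol[:, 1].max():.3f},"
      f" final susceptible: {sol[-1, 0]:.3f}")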
Usefulness and limitations of global flood risk models
Ward, Philip; Jongman, Brenden; Salamon, Peter; Simpson, Alanna; Bates, Paul; De Groeve, Tom; Muis, Sanne; Coughlan de Perez, Erin; Rudari, Roberto; Mark, Trigg; Winsemius, Hessel
2016-04-01
Global flood risk models are now a reality. Initially, their development was driven by a demand from users for first-order global assessments to identify risk hotspots. Relentless upward trends in flood damage over the last decade have enhanced interest in such assessments. The adoption of the Sendai Framework for Disaster Risk Reduction and the Warsaw International Mechanism for Loss and Damage Associated with Climate Change Impacts have made these efforts even more essential. As a result, global flood risk models are being used more and more in practice, by an increasingly large number of practitioners and decision-makers. However, they clearly have their limits compared to local models. To address these issues, a team of scientists and practitioners recently came together at the Global Flood Partnership meeting to critically assess the question 'What can('t) we do with global flood risk models?'. The results of this dialogue (Ward et al., 2013) will be presented, opening a discussion on similar broader initiatives at the science-policy interface in other natural hazards. In this contribution, examples are provided of successful applications of global flood risk models in practice (for example together with the World Bank, Red Cross, and UNISDR), and limitations and gaps between user 'wish-lists' and model capabilities are discussed. Finally, a research agenda is presented for addressing these limitations and reducing the gaps. Ward et al., 2015. Nature Climate Change, doi:10.1038/nclimate2742
A multi agent model for the limit order book dynamics
Bartolozzi, M.
2010-01-01
In the present work we introduce a novel multi-agent model with the aim of reproducing the dynamics of a double auction market at microscopic time scale through a faithful simulation of the matching mechanics in the limit order book. The agents follow a noise decision making process where the
Singular limit analysis of a model for earthquake faulting
Bossolini, Elena; Brøns, Morten; Kristiansen, Kristian Uldall
2017-01-01
In this paper we consider the one-dimensional spring-block model describing earthquake faulting. By using geometric singular perturbation theory and the blow-up method we provide a detailed description of the periodicity of the earthquake episodes. In particular, the limit cycles arise from...
Totally Asymmetric Limit for Models of Heat Conduction
De Carlo, Leonardo; Gabrielli, Davide
2017-08-01
We consider one dimensional weakly asymmetric boundary driven models of heat conduction. In the cases of a constant diffusion coefficient and of a quadratic mobility we compute the quasi-potential that is a non local functional obtained by the solution of a variational problem. This is done using the dynamic variational approach of the macroscopic fluctuation theory (Bertini et al. in Rev Mod Phys 87:593, 2015). The case of a concave mobility corresponds essentially to the exclusion model that has been discussed in Bertini et al. (J Stat Mech L11001, 2010; Pure Appl Math 64(5):649-696, 2011; Commun Math Phys 289(1):311-334, 2009) and Enaud and Derrida (J Stat Phys 114:537-562, 2004). We consider here the convex case that includes for example the Kipnis-Marchioro-Presutti (KMP) model and its dual (KMPd) (Kipnis et al. in J Stat Phys 27:65-74, 1982). This extends to the weakly asymmetric regime the computations in Bertini et al. (J Stat Phys 121(5/6):843-885, 2005). We consider then, both microscopically and macroscopically, the limit of large external fields. Microscopically we discuss some possible totally asymmetric limits of the KMP model. In one case the totally asymmetric dynamics has a product invariant measure. Another possible limit dynamics has instead a non trivial invariant measure for which we give a duality representation. Macroscopically we show that the quasi-potentials of KMP and KMPd, which are non local for any value of the external field, become local in the limit. Moreover the dependence on one of the external reservoirs disappears. For models having strictly positive quadratic mobilities we obtain instead in the limit a non local functional having a structure similar to the one of the boundary driven asymmetric exclusion process.
Ahmad Zeraatkar Moghaddam
2012-01-01
This paper presents a mathematical model for the problem of minimizing the maximum lateness on a single machine when deteriorating jobs are delivered to each customer in batches of various sizes. In practice, this situation may arise within a supply chain in which delivering goods to customers entails cost, so that holding completed jobs and delivering them in batches may reduce delivery costs. In the batch scheduling literature, minimizing the maximum lateness is known to be NP-hard; the present problem, which adds delivery costs to that objective function, therefore remains NP-hard. In order to solve the proposed model, a simulated annealing meta-heuristic is used, whose parameters are calibrated by the Taguchi approach, and the results are compared with the global optimal values generated by Lingo 10 software. Furthermore, in order to check the efficiency of the proposed method on larger problem instances, a lower bound is generated. The results are also analyzed with respect to the effective factors of the problem. A computational study validates the efficiency and accuracy of the presented model.
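The annealing component can be sketched independently of the batching details. The toy below anneals over job sequences on a single machine to minimize maximum lateness, with synthetic jobs; deterioration and delivery-batch costs are omitted, and the earliest-due-date (EDD) sequence, which is optimal for the plain single-machine problem, serves as a sanity check.

import math, random

random.seed(4)
jobs = [(random.randint(1, 10), random.randint(5, 60)) for _ in range(20)]  # (p_j, d_j)

def max_lateness(seq):
    t, worst = 0, float("-inf")
    for j in seq:
        p, d = jobs[j]
        t += p
        worst = max(worst, t - d)          # lateness = completion time - due date
    return worst

seq = list(range(len(jobs)))
best = cur = max_lateness(seq)
T = 10.0
while T > 1e-3:
    i, k = random.sample(range(len(jobs)), 2)
    seq[i], seq[k] = seq[k], seq[i]        # swap-neighbourhood move
    cand = max_lateness(seq)
    if cand <= cur or random.random() < math.exp((cur - cand) / T):
        cur = cand
        best = min(best, cur)
    else:
        seq[i], seq[k] = seq[k], seq[i]    # revert rejected move
    T *= 0.999                             # geometric cooling schedule

edd = sorted(range(len(jobs)), key=lambda j: jobs[j][1])
print("best L_max found:", best, " EDD optimum:", max_lateness(edd))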
Kotera, Jan; Šroubek, Filip
2015-02-01
Single image blind deconvolution aims to estimate the unknown blur from a single observed blurred image and recover the original sharp image. Such a task is severely ill-posed, and typical approaches involve some heuristic or other steps without clear mathematical explanation to arrive at an acceptable solution. We show that a straightforward maximum a posteriori estimation, incorporating sparse priors and a mechanism to deal with boundary artifacts, combined with an efficient numerical method, can produce results which compete with or outperform much more complicated state-of-the-art methods. Our method is naturally extended to deal with overexposure in low-light photography, where the linear blurring model is violated.
Raghavan, Ram K; Goodin, Douglas G; Hanzlicek, Gregg A; Zolnerowich, Gregory; Dryden, Michael W; Anderson, Gary A; Ganta, Roman R
2016-03-01
The potential distribution of Amblyomma americanum ticks in Kansas was modeled using maximum entropy (MaxEnt) approaches based on museum and field-collected species occurrence data. Various bioclimatic variables were used in the model as potentially influential factors affecting the A. americanum niche. Following reduction of dimensionality among predictor variables using principal components analysis, which revealed that the first two principal axes explain over 87% of the variance, the model indicated that suitable conditions for this medically important tick species cover a larger area in Kansas than currently believed. Soil moisture, temperature, and precipitation were highly correlated with the first two principal components and were influential factors in the A. americanum ecological niche. Assuming that the niche estimated in this study covers the occupied distribution, which needs to be further confirmed by systematic surveys, human exposure to this known disease vector may be considerably under-appreciated in the state.
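The dimensionality-reduction step reported (two principal axes explaining over 87% of the variance) follows the standard PCA recipe. A minimal sketch, with random correlated predictors standing in for the bioclimatic layers:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
latent = rng.normal(size=(500, 2))                        # two latent climate axes
X = latent @ rng.normal(size=(2, 8)) + 0.2 * rng.normal(size=(500, 8))

pca = PCA().fit(X)                                        # full decomposition
cum = np.cumsum(pca.explained_variance_ratio_)
print("cumulative variance explained by component:", cum.round(2))
# If, as in the study, the first two components dominate, the predictor
# space can be summarized by pca.transform(X)[:, :2] before niche modelling.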
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
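A minimal sketch of the core idea, under an assumed parameterization rather than the authors' exact one: replace the generalized partial credit model's linear predictor a*theta with a monotonic cubic m(theta) = a*theta + (b**2/3)*theta**3, whose derivative a + b**2*theta**2 is positive by construction, so the item response function stays monotone while gaining flexibility.

import numpy as np

def gpcm_probs(theta, thresholds, a=1.0, b=0.6):
    """Category probabilities of a GPCM-type item whose linear predictor is
    replaced by a monotonic cubic; b = 0 recovers the ordinary GPCM."""
    m = a * theta + (b ** 2 / 3.0) * theta ** 3          # monotonic polynomial
    # cumulative "credit" for reaching category k (category 0 contributes 0)
    steps = np.concatenate([[0.0], np.cumsum(m - np.asarray(thresholds))])
    expd = np.exp(steps - steps.max())                   # numerical stabilization
    return expd / expd.sum()

for theta in (-2.0, 0.0, 2.0):
    print(theta, gpcm_probs(theta, thresholds=[-1.0, 0.0, 1.2]).round(3))

This mirrors how the semi-parametric model nests the standard one at the lowest-order polynomial: setting b = 0 gives back the regular generalized partial credit model.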
2007-01-01
WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html
He, Yi; Scheraga, Harold A., E-mail: has5@cornell.edu [Department of Chemistry and Chemical Biology, Cornell University, Ithaca, New York 14853 (United States); Liwo, Adam [Faculty of Chemistry, University of Gdańsk, Wita Stwosza 63, 80-308 Gdańsk (Poland)
2015-12-28
Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible from the all-atom representation of the original biomolecular system, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.
Wang, J.; Parolari, A.; Huang, S. Y.
2014-12-01
The objective of this study is to formulate and test plant water stress parameterizations for the recently proposed maximum entropy production (MEP) model of evapotranspiration (ET) over vegetated surfaces. The MEP model of ET is a parsimonious alternative to existing land surface parameterizations of surface energy fluxes, requiring only net radiation, temperature, humidity, and a small number of parameters. The MEP model was previously tested for vegetated surfaces under well-watered and dry, dormant conditions, when the surface energy balance is relatively insensitive to plant physiological activity. Under water-stressed conditions, however, the plant water stress response strongly affects the surface energy balance. This effect occurs through plant physiological adjustments that reduce ET to maintain leaf turgor pressure as soil moisture is depleted during drought. To improve the MEP model's ET predictions under water stress, the model was modified to incorporate this plant-mediated feedback between soil moisture and ET. We compare MEP model predictions to observations under a range of field conditions, including bare soil, grassland, and forest. The results indicate that a water stress function combining the soil water potential in the surface soil layer with the atmospheric humidity successfully reproduces observed ET decreases during drought. In addition to its utility as a modeling tool, the calibrated water stress function also provides a means to infer ecosystem influence on the land surface state. Challenges associated with sampling the model input data (i.e., net radiation, surface temperature, and surface humidity) are also discussed.
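A possible functional form for such a stress multiplier, purely illustrative (the paper calibrates its own function from field data), combines a soil-water-potential response with atmospheric humidity; the half-stress potential, shape parameter, and the multiplicative humidity damping below are all assumptions.

import numpy as np

def water_stress(psi_soil, rh_air, psi_50=-150.0, shape=2.0):
    """Multiplier in (0, 1], 1 = unstressed. psi_soil: surface soil water
    potential in m of water (negative); rh_air: relative humidity fraction.
    Weibull-type soil response (0.5 at psi_50), damped by atmospheric dryness."""
    soil_term = np.exp(-np.log(2.0) * (psi_soil / psi_50) ** shape)
    return soil_term * rh_air

for psi in (-10, -100, -300):
    print(psi, round(water_stress(psi, rh_air=0.6), 3))

# Stressed ET would then be estimated as ET = water_stress(...) * ET_MEP,
# where ET_MEP denotes the unstressed MEP-model estimate (our notation).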
Modelling across bioreactor scales: methods, challenges and limitations
Gernaey, Krist
Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what are the limitations of different types of models? This paper provides examples of models that have been published in the literature for use across bioreactor scales, including computational fluid dynamics (CFD) and population balance models. Furthermore, the importance of good modeling practice...
Sanda Roșca
2014-06-01
Application of Soil Loss Scenarios Using the ROMSEM Model Depending on Maximum Land Use Pretability Classes: A Case Study. Practicing a modern agriculture that takes into consideration the favourability conditions and the natural resources of a territory represents one of the main national objectives. Due to the importance of the agricultural land, which prevails among the land use types of the Niraj river basin, as well as its pedological and geomorphological characteristics, different areas where soil erosion exceeds the accepted thresholds were identified by applying the ROMSEM model. To this end, a GIS database was used, regrouping quantitative information regarding soil type, land use, climate and hydrogeology, used as indicators in the model. Estimations of the potential soil erosion have been made for the entire basin as well as for its subbasins. The essential role played by the morphometrical characteristics (concavity, convexity, slope length, etc.) has also been highlighted. Taking into account the strongly agricultural character of the analysed territory, the scoring method was employed to identify crop favourability for wheat, barley, corn, sunflower, sugar beet, potato, soy and pea-bean. The results were used as input data for the C coefficient (crop/vegetation and management factor) in the ROMSEM model, which was applied for the present land use conditions as well as for four other scenarios depicting the land use types with maximum favourability. The theoretical, modelled values of soil erosion were obtained as a function of land use, while the other variables of the model were kept constant.
New limit on logotropic unified dark energy models
V.M.C. Ferreira
2017-07-01
A unification of dark matter and dark energy in terms of a logotropic perfect dark fluid has recently been proposed, in which deviations with respect to the standard ΛCDM model depend on a single parameter B. In this paper we show that the requirement that the linear growth of cosmic structures on comoving scales larger than 8 h−1 Mpc is not significantly affected with respect to the standard ΛCDM result provides the strongest limit to date on the model (B < 6×10−7), an improvement of more than three orders of magnitude over previous upper limits on the value of B. We further show that this limit rules out the logotropic Unified Dark Energy model as a possible solution to the small-scale problems of the ΛCDM model, including the cusp problem of dark matter halos and the missing satellite problem, as well as the original version of the model in which the Planck energy density was taken as one of the two parameters characterizing the logotropic dark fluid.
Snowmelt runoff modeling: Limitations and potential for mitigating water disputes
Kult, Jonathan; Choi, Woonsup; Keuser, Anke
2012-04-01
Conceptual snowmelt runoff models have proven useful for estimating discharge from remote mountain basins, including those spanning the various ranges of the Himalaya. Such models can provide water resource managers with fairly accurate predictions of water availability for operational purposes (e.g. irrigation and hydropower). However, these models have limited ability to address characteristic components of water disputes such as diversions, storage and withholding. Contemporary disputes between India and Pakistan surrounding the snowmelt-derived water resources of the Upper Indus Basin highlight the need for improved water balance accounting methods. We present a research agenda focused on providing refined hydrological contributions to water dispute mitigation efforts.
Drosophila models of Alzheimer's disease: advances, limits, and perspectives.
Bouleau, Sylvina; Tricoire, Hervé
2015-01-01
Amyloid-β protein precursor (AβPP) and the microtubule-associated protein tau (MAPT) are the two key players involved in Alzheimer's disease (AD) and are associated with amyloid plaques and neurofibrillary tangles respectively, two key hallmarks of the disease. Besides vertebrate models, Drosophila models have been widely used to understand the complex events leading to AD in relation to aging. Drosophila benefits from the low redundancy of its genome, which greatly simplifies the analysis of single gene disruption, sophisticated molecular genetic tools, and reduced cost compared to mammals. The aim of this review is to describe the recent advances in modeling AD using the fly and to emphasize some limits of these models. Genetic studies in Drosophila have revealed some key aspects of the normal function of Appl and Tau, the fly homologues of AβPP and MAPT, that may be disrupted during AD. Drosophila models have also been useful to uncover or validate several pathological pathways or susceptibility genes, and have been readily implemented in drug screening pipelines. We discuss some limitations of the current models that may arise from differences in the structure of Appl and Tau compared to their human counterparts or from missing AβPP or MAPT protein interactors in flies. The advent of new genome modification technologies should allow the development of more realistic fly models and a better understanding of the relationship between AD and aging, taking advantage of the fly's short lifespan.
Abilities and limitations in the use of regional climate models
Koeltzov, Morten Andreas Oedegaard
2012-11-01
In order to assess the effects of climate change at the regional level, regional climate models are used. These models resolve regional features that are not captured by the global climate models on which climate research is fundamentally based. Regional models can provide good and useful climate projections that add value beyond the global climate models, but they also introduce uncertainty into the calculations. How should this uncertainty affect the use of regional climate models? The most common methodology for calculating potential future climate developments is based on different scenarios of possible emissions of greenhouse gases. Starting from these scenarios, global climate models apply physical laws to calculate possible future developments. This is mathematically complex, and with limited supercomputing capacity the global models resolve only the larger scales of the climate system. Impact studies of climate change require regional detail, so regional models are run over a limited area of the climate system. These regional models are driven by data from the global models and refine and improve those data. Impact studies can then use the data from the regional models, or data further processed to provide more local detail using geo-statistical methods. In the preparation of climate projections there are at least four sources of uncertainty: uncertainty related to the emission scenarios of greenhouse gases, uncertainty related to the global climate models, uncertainty related to the regional climate models, and uncertainty from internal variability in the climate system. This thesis discusses the use of regional climate models and illustrates how a regional climate model adds value to climate projections while at the same time introducing uncertainty into the calculations. It discusses in particular the importance of the choice of
Lawrence, K E; Summers, S R; Heath, A C G; McFadden, A M J; Pulford, D J; Pomroy, W E
2016-07-15
The tick-borne haemoparasite Theileria orientalis is the most important infectious cause of anaemia in New Zealand cattle. Since 2012 a previously unrecorded type, T. orientalis type 2 (Ikeda), has been associated with disease outbreaks of anaemia, lethargy, jaundice and deaths on over 1000 New Zealand cattle farms, with most of the affected farms found in the upper North Island. The aim of this study was to model the relative environmental suitability for T. orientalis transmission throughout New Zealand, to predict the proportion of cattle farms potentially suitable for active T. orientalis infection by region, island and the whole of New Zealand, and to estimate the average relative environmental suitability per farm by region, island and the whole of New Zealand. The relative environmental suitability for T. orientalis transmission was estimated using the Maxent (maximum entropy) modelling program. The Maxent model predicted that 99% of North Island cattle farms (n=36,257), 64% of South Island cattle farms (n=15,542) and 89% of New Zealand cattle farms overall (n=51,799) could potentially be suitable for T. orientalis transmission. The average relative environmental suitability for T. orientalis transmission at the farm level was 0.34 in the North Island, 0.02 in the South Island and 0.24 overall. The study showed that the potential spatial distribution of T. orientalis environmental suitability was much greater than presumed in the early part of the Theileria-associated bovine anaemia (TABA) epidemic. Maximum entropy offers a computationally efficient method of modelling the probability of habitat suitability for an arthropod-vectored disease. This model could help estimate the boundaries of the endemically stable and endemically unstable areas for T. orientalis transmission within New Zealand and be of considerable value in informing practitioner and farmer biosecurity decisions in these respective areas.
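A minimal sketch, not the study's actual workflow, of how farm-level Maxent suitability values might be summarized into regional statistics like those quoted above; the suitability values, island labels and the 0.5 cutoff are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    suitability = rng.beta(2, 5, size=51_799)        # stand-in for Maxent output per farm
    island = rng.choice(["North", "South"], size=suitability.size)

    for name in ("North", "South"):
        s = suitability[island == name]
        print(f"{name} Island: mean suitability {s.mean():.2f}, "
              f"share of farms above cutoff {(s > 0.5).mean():.0%}")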
Probabilistic models of population evolution scaling limits, genealogies and interactions
Pardoux, Étienne
2016-01-01
This expository book presents the mathematical description of evolutionary models of populations subject to interactions (e.g. competition) within the population. The author includes both models of finite populations and limiting models as the size of the population tends to infinity. The size of the population is described as a random function of time and of the initial population (the ancestors at time 0). The genealogical tree of such a population is given. Most models imply that the population is bound to go extinct in finite time. It is explained when the interaction is strong enough that the extinction time remains finite even as the ancestral population at time 0 tends to infinity. The material could be used for teaching stochastic processes, together with their applications. Étienne Pardoux is Professor at Aix-Marseille University, working in the field of stochastic analysis, stochastic partial differential equations, and probabilistic models in evolutionary biology and population genetics. He obtai...
Microcavity-array superhydrophobic surfaces: Limits of the model
Salvadori, M. C.; Oliveira, M. R. S.; Spirin, R.; Teixeira, F. S.; Cattani, M.; Brown, I. G.
2013-11-01
Superhydrophobic surfaces formed of microcavities can be designed with specific desired advancing and receding contact angles using a new model described by us in prior work. Here, we discuss the limits of validity of the model, and explore the application of the model to surfaces fabricated with small cavities of radius 250 nm and with large cavities of radius 40 μm. The Wenzel model is discussed and used to calculate the advancing and receding contact angles for samples for which our model cannot be applied. We also consider the case of immersion of a sample containing microcavities in pressurized water. A consideration that then arises is that the air inside the cavities can be dissolved in the water, leading to complete water invasion into the cavities and compromising the superhydrophobic character of the surface. Here, we show that this effect does not destroy the surface hydrophobicity when the surface is subsequently removed from the water.
Faris, Allison T.; Seed, Raymond B.; Kayen, Robert E.; Wu, Jiaer
2006-01-01
During the 1906 San Francisco Earthquake, liquefaction-induced lateral spreading and the resulting ground displacements damaged bridges, buried utilities and lifelines, conventional structures, and other developed works. This paper presents an improved engineering tool for the prediction of maximum displacement due to liquefaction-induced lateral spreading. A semi-empirical approach is employed, combining mechanistic understanding and data from laboratory testing with data and lessons from full-scale earthquake field case histories. The principle of strain potential index, based primarily on correlation of cyclic simple shear laboratory testing results with in-situ Standard Penetration Test (SPT) results, is used as an index to characterize the deformation potential of soils after they liquefy. A Bayesian probabilistic approach is adopted for development of the final predictive model, in order to take fullest advantage of the data available and to deal with the uncertainties intrinsic to the back-analyses of field case histories. A case history from the 1906 San Francisco Earthquake is utilized to demonstrate the ability of the resulting semi-empirical model to estimate maximum horizontal displacement due to liquefaction-induced lateral spreading.
Kim, Sang M; Brannan, Kevin M; Zeckoski, Rebecca W; Benham, Brian L
2014-01-01
The objective of this study was to develop bacteria total maximum daily loads (TMDLs) for the Hardware River watershed in the Commonwealth of Virginia, USA. The TMDL program is an integrated watershed management approach required by the Clean Water Act. The TMDLs were developed to meet Virginia's water quality standard for bacteria at the time, which stated that the calendar-month geometric mean concentration of Escherichia coli should not exceed 126 cfu/100 mL, and that no single sample should exceed a concentration of 235 cfu/100 mL. The bacteria impairment TMDLs were developed using the Hydrological Simulation Program-FORTRAN (HSPF). The hydrology and water quality components of HSPF were calibrated and validated using data from the Hardware River watershed to ensure that the model adequately simulated runoff and bacteria concentrations. The calibrated and validated HSPF model was used to estimate the contributions from the various bacteria sources in the Hardware River watershed to the in-stream concentration. Bacteria loads were estimated through an extensive source characterization process. Simulation results for existing conditions indicated that the majority of the bacteria came from livestock and wildlife direct deposits and pervious lands. Different source reduction scenarios were evaluated to identify scenarios that meet both the geometric mean and single sample maximum E. coli criteria with zero violations. The resulting scenarios required extreme and impractical reductions from livestock and wildlife sources. Results from studies similar to this across Virginia partially contributed to a reconsideration of the standard's applicability to TMDL development.
Lin, M. C.; Verboncoeur, J.
2016-10-01
The maximum electron current transmitted through a planar diode gap is limited by the space charge of electrons dwelling in the gap region, the so-called space-charge-limited (SCL) emission. By introducing a counter-streaming ion flow to neutralize the electron charge density, the SCL emission can be dramatically raised, so electron current transmission is enhanced. In this work, we have developed a relativistic self-consistent model for studying the enhancement of the maximum transmission by a counter-streaming ion current. The maximum enhancement is found when the ion effect is saturated, as shown analytically. The solutions in the non-relativistic, intermediate, and ultra-relativistic regimes are obtained and verified with 1-D particle-in-cell simulations. This self-consistent model is general and can also serve as a comparison for the verification of simulation codes, as well as for extension to higher dimensions.
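For orientation, the non-relativistic, electron-only baseline that such diode models generalize is the classical Child-Langmuir current density (standard textbook background, not a result of this paper): $J_{CL} = \frac{4\epsilon_0}{9}\sqrt{2e/m_e}\,V^{3/2}/d^2$, where $V$ is the gap voltage and $d$ the gap spacing; a counter-streaming ion flow partially neutralizes the electron space charge, which is why the transmitted current can exceed this electron-only limit.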
Merkey, Brian; Lardon, Laurent; Seoane, Jose Miguel;
2011-01-01
Plasmid invasion in biofilms is often surprisingly limited in spite of the close contact of cells in a biofilm. We hypothesized that this poor plasmid spread into deeper biofilm layers is caused by a dependence of conjugation on the growth rate (relative to the maximum growth rate) of the donor. By extending an individual-based model of microbial growth and interactions to include the dynamics of plasmid carriage and transfer by individual cells, we were able to conduct in silico tests of this and other hypotheses on the dynamics of conjugal plasmid transfer in biofilms. For a generic model plasmid … and scan speed) and spatial reach (EPS yield, conjugal pilus length) are more important for successful plasmid invasion than the recipients' growth rate or the probability of segregational loss. While this study identifies one factor that can limit plasmid invasion in biofilms, the new individual…
Mousavi, Sayyed R; Khodadadi, Ilnaz; Falsafain, Hossein; Nadimi, Reza; Ghadiri, Nasser
2014-06-07
Human haplotypes include essential information about SNPs, which in turn provide valuable information for studies such as finding relationships between diseases and their potential genetic causes, e.g., in genome-wide association studies. Due to the expense of directly determining haplotypes and recent progress in high-throughput sequencing, there has been increasing motivation for haplotype assembly, the problem of finding a pair of haplotypes from a set of aligned fragments. Although the problem has been extensively studied and a number of algorithms have already been proposed, more accurate methods are still beneficial because of the high importance of haplotype information. In this paper, we first develop a probabilistic model that incorporates the Minor Allele Frequency (MAF) of SNP sites, which is missing from the existing maximum likelihood models. Then, we show that the probabilistic model reduces to the Minimum Error Correction (MEC) model when the information on MAF is omitted and some approximations are made. This result provides novel theoretical support for the MEC, despite some criticisms of it in the recent literature. Next, under the same approximations, we simplify the model to an extension of the MEC in which the MAF information is used. Finally, we extend the haplotype assembly algorithm HapSAT by developing a weighted Max-SAT formulation for the simplified model, which is evaluated empirically with positive results.
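An illustrative sketch of the Minimum Error Correction (MEC) objective that the abstract's probabilistic model reduces to: the cost of a candidate haplotype pair is, summed over fragments, the smaller Hamming distance to either haplotype. The fragments and haplotypes below are invented toy data, not from the paper.

    def mec_cost(fragments, h1, h2):
        """fragments: list of {SNP index: allele}; h1, h2: candidate haplotypes."""
        total = 0
        for frag in fragments:
            d1 = sum(frag[i] != h1[i] for i in frag)  # mismatches against haplotype 1
            d2 = sum(frag[i] != h2[i] for i in frag)  # mismatches against haplotype 2
            total += min(d1, d2)                      # assign fragment to the closer one
        return total

    h1, h2 = [0, 1, 0, 1], [1, 0, 1, 0]
    frags = [{0: 0, 1: 1}, {1: 0, 2: 1}, {2: 1, 3: 1}]
    print(mec_cost(frags, h1, h2))  # -> 1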
Maximum Likelihood Fusion Model
2014-08-09
by the DLR Institute of Robotics and Mechatronics building (dataset courtesy of the University of Bremen). In contrast to the Victoria Park dataset, a camera sensor is
George Cristian Gruia
2013-05-01
In the aviation industry, propeller engines have a lifecycle of several thousand hours of flight, and maintenance is an important part of that lifecycle. The present article considers a multi-resource, priority-based case scheduling problem applied in a Romanian manufacturing company that repairs and maintains helicopter and airplane engines at a quality level imposed by aviation standards. Given a reduced budget constraint, the management's goal is to maximize the utilization of its resources (financial, material, space, workers) while maintaining a previously known priority rule. An Off-Line Dual Maximum Resource Bin Packing model, based on a Mixed Integer Programming model, is thus presented. The obtained results show an increase of approximately 25% in the Just-in-Time shipping of engines to customers and an approximately 12.5% increase in the utilization of the working area.
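A minimal sketch, in the spirit of the model described above, of a priority-weighted, capacity-constrained acceptance MIP using PuLP; the engines, priorities, workloads and capacity are invented, and the paper's actual Off-Line Dual Maximum Resource Bin Packing formulation is richer than this.

    import pulp

    engines = {"E1": (3, 40), "E2": (1, 60), "E3": (2, 55)}  # priority, workload (hours)
    capacity = 100                                           # available workshop hours

    prob = pulp.LpProblem("engine_scheduling", pulp.LpMaximize)
    x = {e: pulp.LpVariable(f"accept_{e}", cat="Binary") for e in engines}

    # Maximize utilization, weighting jobs by priority (lower number = higher priority).
    prob += pulp.lpSum(x[e] * engines[e][1] / engines[e][0] for e in engines)
    prob += pulp.lpSum(x[e] * engines[e][1] for e in engines) <= capacity

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({e: int(x[e].value()) for e in engines})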
Kohn, Matthew J.; McKay, Moriah
2010-11-01
Oxygen isotope data provide a key test of general circulation models (GCMs) for the Last Glacial Maximum (LGM) in North America, which have otherwise proved difficult to validate. High δ18O pedogenic carbonates in central Wyoming have been interpreted to indicate increased summer precipitation sourced from the Gulf of Mexico. Here we show that tooth enamel δ18O of large mammals, which is strongly correlated with local water and precipitation δ18O, is lower during the LGM in Wyoming, not higher. Similar data from Texas, California, Florida and Arizona indicate higher δ18O values than in the Holocene, which is also predicted by GCMs. Tooth enamel data closely validate some recent models of atmospheric circulation and precipitation δ18O, including an increase in the proportion of winter precipitation for central North America, and summer precipitation in the southern US, but suggest aridity can bias pedogenic carbonate δ18O values significantly.
Baofeng Shi
2016-01-01
This paper introduces a novel decision assessment method which is suitable for customers' credit risk evaluation and credit decisions. First, the paper creates an optimal credit rating model consisting of an objective function and two constraint conditions. The first constraint condition, of strictly increasing LGDs, eliminates the unreasonable phenomenon in which the higher the credit rating, the higher the LGD (loss given default). Second, on the basis of the credit rating results, a credit decision-making assessment model based on measuring the acceptable maximum LGD of commercial banks is established. Third, empirical results using data on 2817 farmers' microfinance loans from a Chinese commercial bank suggest that the proposed approach can accurately identify the good customers among all loan applications. Moreover, our approach provides a reference for the decision assessment of customers in other commercial banks in the world.
W. M. Macek
2011-05-01
To quantify solar wind turbulence, we consider a generalized two-scale weighted Cantor set with two different scales describing the nonuniform distribution of the kinetic energy flux between cascading eddies of various sizes. We examine the generalized dimensions and the corresponding multifractal singularity spectrum depending on one probability measure parameter and two rescaling parameters. In particular, we analyse time series of velocities of the slow solar wind streams measured in situ by the Voyager 2 spacecraft in the outer heliosphere during solar maximum at various distances from the Sun: 10, 30, and 65 AU. This allows us to look at the evolution of multifractal intermittent scaling of the solar wind in the distant heliosphere. Namely, it appears that while the degree of multifractality for the solar wind during solar maximum is only weakly correlated with the heliospheric distance, the multifractal spectrum could be substantially asymmetric in the very distant heliosphere beyond the planetary orbits. Therefore, one could expect that this scaling near the frontiers of the heliosphere should be asymmetric. It is worth noting that for the model with two different scaling parameters a better agreement with the solar wind data is obtained, especially for the negative index of the generalized dimensions. Therefore we argue that there is a need to use a two-scale cascade model. Hence we propose this model as a useful tool for the analysis of intermittent turbulence in various environments, and we hope that our general asymmetric multifractal model could shed more light on the nature of turbulence.
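A sketch of how the generalized dimensions of a two-scale weighted Cantor set can be computed, following the standard multifractal formalism: $\tau(q)$ solves $p^q l_1^{-\tau} + (1-p)^q l_2^{-\tau} = 1$ and $D_q = \tau(q)/(q-1)$. The parameter values below are illustrative, not fitted to solar wind data.

    import numpy as np
    from scipy.optimize import brentq

    p, l1, l2 = 0.3, 0.25, 0.4   # one measure parameter, two rescaling parameters

    def tau(q):
        # partition-function condition for the two-scale Cantor measure
        f = lambda t: p**q * l1**(-t) + (1 - p)**q * l2**(-t) - 1.0
        return brentq(f, -50.0, 50.0)

    for q in (-5, -1, 0, 2, 5):
        print(f"D_{q} = {tau(q) / (q - 1):.3f}")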
A continuum limit for the Kronig-Penney model
Colangeli, Matteo; Ndreca, Sokol; Procacci, Aldo
2015-06-01
We investigate the transmission properties of a quantum one-dimensional periodic system of fixed length L, with N barriers of constant height V and width λ and N wells of width δ. In particular, we study the behaviour of the transmission coefficient in the limit N → ∞, with L fixed. This is achieved by letting δ and λ both scale as 1/N, in such a way that their ratio γ = λ/δ is a fixed parameter characterizing the model. In this continuum limit, the multi-barrier system behaves as if it consisted of a single barrier of constant height E0 = γV/(1 + γ). The analysis of the dispersion relation of the model shows the presence of forbidden energy bands at any finite N.
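One way to see where the limiting height comes from (a plausibility argument, not the paper's proof): each period has length $\delta + \lambda$ and carries potential $V$ over width $\lambda$, so the period-averaged potential is $V\lambda/(\delta+\lambda) = \gamma V/(1+\gamma) = E_0$; in the continuum limit the rapidly oscillating potential is effectively replaced by its mean.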
Heemstra de Groot, S.M.; Herrmann, O.E.
1990-01-01
An algorithm based on an alternative scheduling approach for iterative acyclic and cyclic DFGs (data-flow graphs) with limited resources that exploits inter- and intra-iteration parallelism is presented. The method is based on guiding the scheduling algorithm with the information supplied by a
Central limit theorem of linear regression model under right censorship
HE Shuyuan (何书元); HUANG Xiang (Heung Wong) (黄香)
2003-01-01
In this paper, the estimation of the joint distribution F(y,z) of (Y, Z) and the estimation in the linear regression model Y = b′Z + ε for complete data are extended to the case of right censored data. The regression parameter estimates of b and the variance of ε are weighted least squares estimates with random weights. The central limit theorems of the estimators are obtained under very weak conditions and the derived asymptotic variance has a very simple form.
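A generic weighted least squares computation of the kind referred to above, with arbitrary (here randomly drawn) weights standing in for the paper's random weights; the specific censoring-based weight construction is not reproduced, and the data are synthetic.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 200
    Z = rng.normal(size=(n, 2))
    Y = Z @ np.array([1.5, -0.7]) + rng.normal(scale=0.5, size=n)
    w = rng.uniform(0.1, 1.0, size=n)                  # stand-in random weights

    W = np.diag(w)
    b_hat = np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ Y)  # weighted LS estimate of b
    print(b_hat)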
Quasineutral limit of a standard drift diffusion model for semiconductors
XIAO, Ling
2002-01-01
Rodent models of diabetic nephropathy: their utility and limitations.
Kitada, Munehiro; Ogura, Yoshio; Koya, Daisuke
2016-01-01
Diabetic nephropathy is the most common cause of end-stage renal disease. Therefore, novel therapies for the suppression of diabetic nephropathy must be developed. Rodent models are useful for elucidating the pathogenesis of diseases and testing novel therapies, and many type 1 and type 2 diabetic rodent models have been established for the study of diabetes and diabetic complications. Streptozotocin (STZ)-induced diabetic animals are widely used as a model of type 1 diabetes. Akita diabetic mice that have an Ins2+/C96Y mutation and OVE26 mice that overexpress calmodulin in pancreatic β-cells serve as genetic models of type 1 diabetes. In addition, db/db mice, KK-Ay mice, Zucker diabetic fatty rats, Wistar fatty rats, Otsuka Long-Evans Tokushima Fatty rats and Goto-Kakizaki rats serve as rodent models of type 2 diabetes. An animal model of diabetic nephropathy should exhibit progressive albuminuria and a decrease in renal function, as well as the characteristic histological changes in the glomeruli and the tubulointerstitial lesions that are observed in cases of human diabetic nephropathy. A rodent model that strongly exhibits all these features of human diabetic nephropathy has not yet been developed. However, the currently available rodent models of diabetes can be useful in the study of diabetic nephropathy by increasing our understanding of the features of each diabetic rodent model. Furthermore, the genetic background and strain of each mouse model result in differences in susceptibility to diabetic nephropathy with albuminuria and the development of glomerular and tubulointerstitial lesions. Therefore, the validation of an animal model reproducing human diabetic nephropathy will significantly facilitate our understanding of the underlying genetic mechanisms that contribute to the development of diabetic nephropathy. In this review, we focus on rodent models of diabetes and discuss the utility and limitations of these models for the study of diabetic nephropathy.
Local resolution-limit-free Potts model for community detection.
Ronhovde, Peter; Nussinov, Zohar
2010-04-01
We report on an exceptionally accurate spin-glass-type Potts model for community detection. With a simple algorithm, we find that our approach is at least as accurate as the best currently available algorithms and robust to the effects of noise. It is also competitive with the best currently available algorithms in terms of speed and size of solvable systems. We find that the computational demand often exhibits superlinear scaling O(L^1.3), where L is the number of edges in the system, and we have applied the algorithm to synthetic systems as large as 40 × 10^6 nodes and over 1 × 10^9 edges. A previous stumbling block encountered by popular community detection methods is the so-called "resolution limit." Being a "local" measure of community structure, our Potts model is free from this resolution-limit effect, and it further remains a local measure on weighted and directed graphs. We also address the mitigation of resolution-limit effects for two other popular Potts models.
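A sketch of evaluating a local Potts-type community energy of the general form the abstract describes, in which internal edges are rewarded and internal non-edges penalized by a resolution parameter; the exact parameterization of the paper's Hamiltonian is not reproduced here.

    import numpy as np

    def potts_energy(A, labels, gamma=1.0):
        same = labels[:, None] == labels[None, :]       # same-community mask
        np.fill_diagonal(same, False)
        contrib = np.where(A > 0, A, -gamma * (1 - A))  # edge reward, missing-edge penalty
        return -0.5 * contrib[same].sum()               # 0.5 corrects double counting

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(potts_energy(A, np.array([0, 0, 0, 1])))      # triangle + singleton -> -3.0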
Effective action and semiclassical limit of spin foam models
Mikovic, A
2011-01-01
We define an effective action for spin foam models of quantum gravity by adapting the background field method from quantum field theory. We show that the Regge action is the leading term in the semi-classical expansion of the spin foam effective action if the vertex amplitude has the large-spin asymptotics which is proportional to an exponential function of the vertex Regge action. In the case of the known three-dimensional and four-dimensional spin foam models this amounts to modifying the vertex amplitude such that the exponential asymptotics is obtained. In particular, we show that the ELPR/FK model vertex amplitude can be modified such that the new model is finite and has the Einstein-Hilbert action as its classical limit. We also calculate the first-order and some of the second-order quantum corrections in the semi-classical expansion of the effective action.
de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L
2010-08-01
States in the USA are required to demonstrate future compliance of criteria air pollutant standards by using both air quality monitors and model outputs. In the case of ozone, the demonstration tests aim at relying heavily on measured values, due to their perceived objectivity and enforceable quality. Weight given to numerical models is diminished by integrating them in the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of nonattainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) Framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State's ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the state.
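Not the full BME framework, but a minimal illustration of its guiding idea at a single location: a "hard" monitor value is fused with "soft" model output weighted by model performance, here expressed as error variances. All numbers are invented.

    monitor_value, monitor_var = 62.0, 2.0   # ppb; precise, enforceable measurement
    model_value, model_var = 70.0, 25.0      # ppb; variance reflects model performance

    w_mon, w_mod = 1 / monitor_var, 1 / model_var
    estimate = (w_mon * monitor_value + w_mod * model_value) / (w_mon + w_mod)
    print(f"fused ozone estimate: {estimate:.1f} ppb")   # lies close to the monitor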
Bopp, L; Resplandy, L; Untersee, A; Le Mezo, P; Kageyama, M
2017-09-13
All Earth System models project a consistent decrease in the oxygen content of the oceans over the coming decades because of ocean warming, reduced ventilation and increased stratification. But large uncertainties in these future projections of ocean deoxygenation remain for the subsurface tropical oceans, where the major oxygen minimum zones are located. Here, we combine global warming projections, model-based estimates of natural short-term variability, and data and model estimates of Last Glacial Maximum (LGM) ocean oxygenation to gain insight into the major mechanisms of oxygenation change across these different time scales. We show that the primary uncertainty on future ocean deoxygenation in the subsurface tropical oceans is in fact controlled by a robust compensation between decreasing oxygen saturation (O2sat) due to warming and decreasing apparent oxygen utilization (AOU) due to increased ventilation of the corresponding water masses. Modelled short-term natural variability in subsurface oxygen levels also reveals a compensation between O2sat and AOU, controlled by the latter. Finally, using a model simulation of the LGM, reproducing data-based reconstructions of past ocean (de)oxygenation, we show that the deoxygenation trend of the subsurface ocean during deglaciation was controlled by a combination of warming-induced decreasing O2sat and increasing AOU driven by reduced ventilation of tropical subsurface waters.
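The compensation can be read directly off the standard decomposition of dissolved oxygen (a textbook identity, not specific to this paper): $\mathrm{O_2} = \mathrm{O_2^{sat}}(T,S) - \mathrm{AOU}$, hence $\Delta\mathrm{O_2} = \Delta\mathrm{O_2^{sat}} - \Delta\mathrm{AOU}$; when warming lowers $\mathrm{O_2^{sat}}$ while increased ventilation lowers AOU by a comparable amount, the net oxygen change stays small.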
Salces Judit
2011-08-01
Background: Reference genes with stable expression are required to normalize expression differences of target genes in qPCR experiments. Several procedures and companion software packages have been proposed to find the most stable genes. Model-based procedures are attractive because they provide a solid statistical framework. NormFinder, a widely used software package, uses a model-based method. The pairwise comparison procedure implemented in geNorm is a simpler procedure but one of the most extensively used. In the present work a statistical approach based on maximum likelihood estimation under mixed models was tested and compared with the NormFinder and geNorm software. Sixteen candidate genes were tested in whole blood samples from control and heat-stressed sheep. Results: A model including gene and treatment as fixed effects and sample (animal), gene-by-treatment, gene-by-sample and treatment-by-sample interactions as random effects, with heteroskedastic residual variance across gene-by-treatment levels, was selected using goodness-of-fit and predictive ability criteria from among a variety of models. The mean square error obtained under the selected model was used as an indicator of gene expression stability. The genes ranked at the top and bottom by the three approaches were similar; however, notable differences were found between the best pair of genes selected by each method and in the remaining genes of the rankings. Differences among the expression values of normalized targets for each statistical approach were also found. Conclusions: The optimal statistical properties of maximum likelihood estimation joined to the flexibility of mixed models allow for more accurate estimation of the expression stability of genes under many different situations. Accurate selection of reference genes has a direct impact on the normalized expression values of a given target gene. This may be critical when the aim of the study is to compare expression rate differences among samples under different environmental
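A hedged sketch of the general idea: fit a mixed model to quantification-cycle values and rank candidate reference genes by residual mean square. The paper's selected model (with gene-by-sample and treatment-by-sample random interactions and heteroskedastic residuals) is richer than this simplified version, and the data below are simulated.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    df = pd.DataFrame({
        "gene": np.repeat([f"g{i}" for i in range(4)], 20),
        "treatment": np.tile(np.repeat(["control", "heat"], 10), 4),
        "animal": np.tile([f"a{i}" for i in range(10)], 8),
    })
    # gene g3 is made deliberately unstable
    df["cq"] = 20 + rng.normal(0, 0.3, len(df)) + (df["gene"] == "g3") * rng.normal(0, 1.0, len(df))

    fit = smf.mixedlm("cq ~ gene + treatment", df, groups=df["animal"]).fit()
    df["resid"] = fit.resid
    print(df.groupby("gene")["resid"].apply(lambda r: (r**2).mean()).sort_values())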
Riley, Pete; Mikic, Z.; Linker, J. A.
2003-01-01
In this study we describe a series of MHD simulations covering the time period from 12 January 1999 to 19 September 2001 (Carrington Rotation 1945 to 1980). This interval coincided with: (1) the Sun's approach toward solar maximum; and (2) Ulysses' second descent to the southern polar regions, rapid latitude scan, and arrival into the northern polar regions. We focus on the evolution of several key parameters during this time, including the photospheric magnetic field, the computed coronal hole boundaries, the computed velocity profile near the Sun, and the plasma and magnetic field parameters at the location of Ulysses. The model results provide a global context for interpreting the often complex in situ measurements. We also present a heuristic explanation of stream dynamics to describe the morphology of interaction regions at solar maximum and contrast it with the picture that resulted from Ulysses' first orbit, which occurred during more quiescent solar conditions. The simulation results described here are available at: http://sun.saic.com.
The decoupling limit in the Georgi-Machacek model
Hartling, Katy; Logan, Heather E
2014-01-01
We study the most general scalar potential of the Georgi-Machacek model, which adds isospin-triplet scalars to the Standard Model (SM) in a way that preserves custodial SU(2) symmetry. We show that this model possesses a decoupling limit, in which the predominantly-triplet states become heavy and degenerate while the couplings of the remaining light neutral scalar approach those of the SM Higgs boson. We find that the SM-like Higgs boson couplings to fermion pairs and gauge boson pairs can deviate from their SM values by corrections as large as $\\mathcal{O}(v^2/M_{\\rm new}^2)$, where $v$ is the SM Higgs vacuum expectation value and $M_{\\rm new}$ is the mass scale of the predominantly-triplet states. In particular, the SM-like Higgs boson couplings to $W$ and $Z$ boson pairs can decouple much more slowly than in two Higgs doublet models, in which they deviate from their SM values like $\\mathcal{O}(v^4/M_{\\rm new}^4)$. Furthermore, near the decoupling limit the SM-like Higgs boson couplings to $W$ and $Z$ pairs...
Plan, Elodie L; Maloney, Alan; Mentré, France; Karlsson, Mats O; Bertrand, Julie
2012-09-01
Estimation methods for nonlinear mixed-effects modelling have improved considerably over the last decades. Nowadays, several algorithms implemented in different software packages are used. The present study aimed at comparing their performance for dose-response models. Eight scenarios were considered using a sigmoid Emax model, with varying sigmoidicity and residual error models. One hundred simulated datasets for each scenario were generated. One hundred individuals with observations at four doses constituted the rich design, and at two doses the sparse design. Nine parametric approaches for maximum likelihood estimation were studied: first-order conditional estimation (FOCE) in NONMEM and R, LAPLACE in NONMEM and SAS, adaptive Gaussian quadrature (AGQ) in SAS, and stochastic approximation expectation maximization (SAEM) in NONMEM and MONOLIX (both SAEM approaches with default and modified settings). All approaches started first from initial estimates set to the true values and second, from altered values. Results were examined through the relative root mean squared error (RRMSE) of the estimates. With true initial conditions, a full completion rate was obtained with all approaches except FOCE in R. Runtimes were shortest with FOCE and LAPLACE and longest with AGQ. Under the rich design, all approaches performed well except FOCE in R. When starting from altered initial conditions, AGQ, and then FOCE in NONMEM, LAPLACE in SAS, and SAEM in NONMEM and MONOLIX with tuned settings, consistently displayed lower RRMSE than the other approaches. For standard dose-response models analyzed through mixed-effects models, differences were identified in the performance of the estimation methods available in current software, giving modellers material to identify suitable approaches based on an accuracy-versus-runtime trade-off.
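For readers unfamiliar with the model family being compared, a minimal single-level sketch of fitting a sigmoid Emax dose-response curve by nonlinear least squares; the study itself compares full mixed-effects maximum likelihood estimators, which this does not attempt. Parameter values and data are simulated.

    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid_emax(dose, e0, emax, ed50, h):
        return e0 + emax * dose**h / (ed50**h + dose**h)

    rng = np.random.default_rng(7)
    dose = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
    y = sigmoid_emax(dose, 2.0, 10.0, 20.0, 2.0) + rng.normal(0, 0.3, dose.size)

    popt, _ = curve_fit(sigmoid_emax, dose, y, p0=[1.0, 8.0, 15.0, 1.0])
    print(dict(zip(["E0", "Emax", "ED50", "h"], popt.round(2))))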
Uchiyama, Takanori; Minamitani, Haruyuki; Sakata, Makoto
1990-01-01
The complex maximum entropy method (MEM) and complex autoregressive model fitting with the singular value decomposition method (SVD) were applied to free induction decay signal data obtained with a Fourier transform nuclear magnetic resonance spectrometer to estimate superresolved NMR spectra. The practical estimation of superresolved NMR spectra is demonstrated on phosphorus-31 nuclear magnetic resonance data. These methods provide sharp peaks and a high signal-to-noise ratio compared with the conventional fast Fourier transform. The SVD method was more suitable for estimating superresolved NMR spectra than the MEM because it allowed high-order estimation without spurious peaks, and it was easy to determine the order and the rank.
S.Padma
2011-09-01
Web 3.0 is an evolving extension of the Web 2.0 scenario. Perceptions of Web 3.0 differ from person to person. The Web 3.0 architecture supports ubiquitous connectivity, network computing, open identity, the intelligent web, distributed databases and intelligent applications. Some of the technologies that lead to the design and development of Web 3.0 applications are artificial intelligence, automated reasoning, cognitive architecture and the Semantic Web. An attempt is made to capture the requirements of students in line with Web 3.0 so as to bridge the gap between the design and development of Web 3.0 applications and the requirements of students. Maximum Spanning Tree modeling of the requirements facilitates the identification of key areas and key attributes in the design and development of software products for students in Web 3.0, using discriminant analysis.
Sullivan, Terry [Brookhaven National Lab. (BNL), Upton, NY (United States). Biological, Environmental and Climate Sciences Dept.]
2014-12-10
ZionSolutions is in the process of decommissioning the Zion Nuclear Power Plant in order to establish a new water treatment plant. Residual radioactive particles from the plant need to be brought down to levels such that an individual who receives water from the new treatment plant does not receive a radioactive dose in excess of 25 mrem/y, as specified in 10 CFR 20 Subpart E. The objectives of this report are: (a) to present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on radionuclide concentrations in the fill material and in the water in the interstitial spaces of the fill; and (b) to provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building, for use by ZSRP in selecting ROCs for detailed dose assessment calculations.
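The report's own screening model is not reproduced here; a common upper-bound construction of this kind (our assumption, for illustration) treats sorption as linear equilibrium partitioning: with distribution coefficient $K_d$, the sorbed concentration is $C_s = K_d C_w$, so a total activity $A$ distributed between interstitial water of volume $V_w$ and fill of mass $M_s$ gives a bounding water concentration $C_w = A/(V_w + K_d M_s)$.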
George C. Efthimiou
2015-06-01
The capability to predict short-term maximum individual exposure is very important for several applications including, for example, deliberate or accidental releases of hazardous substances, odour fluctuations or exceedance of material flammability levels. Recently, the authors proposed a simple approach relating maximum individual exposure to parameters such as the fluctuation intensity and the concentration integral time scale. In the first part of this study (Part I), the methodology was validated against field measurements, which are governed by the natural variability of atmospheric boundary conditions. In Part II of this study, an in-depth validation of the approach is performed using reference data recorded under truly stationary and well documented flow conditions. For this reason, a boundary-layer wind-tunnel experiment was used. The experimental dataset includes 196 time-resolved concentration measurements which capture the dispersion from a continuous point source within an urban model of semi-idealized complexity. The data analysis allowed the improvement of an important model parameter. The model performed very well in predicting the maximum individual exposure, with the fraction of predictions within a factor of two of the observations (FAC2) equal to 95%. For large time intervals, an exponential correction term has been introduced into the model based on the experimental observations. The new model is capable of predicting all time intervals, giving an overall FAC2 of 100%.
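The FAC2 score used above is simply the fraction of predictions within a factor of two of the observations; a sketch with made-up numbers:

    import numpy as np

    obs = np.array([1.0, 2.0, 4.0, 8.0, 3.0])
    pred = np.array([1.4, 1.1, 9.0, 6.0, 2.8])

    fac2 = np.mean((pred >= 0.5 * obs) & (pred <= 2.0 * obs))
    print(f"FAC2 = {fac2:.0%}")   # 4 of 5 predictions within a factor of two -> 80%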
Hogden, J.
1996-11-05
The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
Ureña-López, L. Arturo; Robles, Victor H.; Matos, T.
2017-08-01
Recent analysis of the rotation curves of a large sample of galaxies with very diverse stellar properties reveals a relation between the radial acceleration purely due to the baryonic matter and the one inferred directly from the observed rotation curves. Assuming the dark matter (DM) exists, this acceleration relation is tantamount to an acceleration relation between DM and baryons. This leads us to a universal maximum acceleration for all halos. Using the latter in DM profiles that predict inner cores implies that the central surface density $\mu_{\rm DM} = \rho_s r_s$ must be a universal constant, as suggested by previous studies of selected galaxies, revealing a strong correlation between the density $\rho_s$ and scale $r_s$ parameters in each profile. We then explore the consequences of the constancy of $\mu_{\rm DM}$ in the context of the ultralight scalar field dark matter model (SFDM). We find that for this model $\mu_{\rm DM} = 648\, M_\odot\,{\rm pc}^{-2}$ and that the so-called WaveDM soliton profile should be a universal feature of DM halos. Comparing with the data from the Milky Way and Andromeda satellites, we find that they are all consistent with a boson mass of the scalar field particle of the order of $10^{-21}\ {\rm eV}/c^2$, which puts the SFDM model in agreement with recent cosmological constraints.
Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J
2016-03-01
Information from various public and private data sources of extremely large sample sizes are now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an "internal" study while utilizing summary-level information, such as information on parameters for reduced models, from an "external" big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature.
Nonautonomous Food-Limited Fishery Model With Adaptive Harvesting
Idels, L V
2010-01-01
We introduce the biological motivation of the $\gamma$-food-limited model with variable parameters. New criteria are established for the existence and global stability of positive periodic solutions. To prove the existence of steady-state solutions, we use the upper-lower solution method, where the existence of at least one positive periodic solution is obtained by constructing a pair of upper and lower solutions and applying Friedrichs' theorem. Numerical simulations illustrate the effects of periodic variation in the values of the basic biological and environmental parameters and how adaptive harvesting strategies affect fishing stocks.
Slater, Hannah; Michael, Edwin
2012-01-01
Modelling the spatial distributions of human parasite species is crucial to understanding the environmental determinants of infection as well as for guiding the planning of control programmes. Here, we use ecological niche modelling to map the current potential distribution of the macroparasitic disease, lymphatic filariasis (LF), in Africa, and to estimate how future changes in climate and population could affect its spread and burden across the continent. We used 508 community-specific infection presence data collated from the published literature in conjunction with five predictive environmental/climatic and demographic variables, and a maximum entropy niche modelling method to construct the first ecological niche maps describing the potential distribution and burden of LF in Africa. We also ran the best-fit model against climate projections made by the HADCM3 and CCCMA models for 2050 under the A2a and B2a scenarios to simulate the likely distribution of LF under future climate and population changes. We predict a broad geographic distribution of LF in Africa extending from the west to the east across the middle region of the continent, with high probabilities of occurrence in Western Africa compared to large areas of medium probability interspersed with smaller areas of high probability in Central and Eastern Africa and in Madagascar. We uncovered complex relationships between predictor ecological niche variables and the probability of LF occurrence. We show for the first time that predicted climate change and population growth will expand both the range and risk of LF infection (and ultimately disease) in an endemic region. We estimate that populations at risk of LF may range between 543 and 804 million currently, and that this could rise to between 1.65 and 1.86 billion in the future depending on the climate scenario used and the thresholds applied to signify infection presence.
G. Schmiedl
2011-05-01
Nine thousand years ago, the Northern Hemisphere experienced enhanced seasonality caused by an orbital configuration with a minimum of the precession index. To assess the impact of the "Holocene Insolation Maximum" (HIM) on the Mediterranean Sea, we use a regional ocean general circulation model forced by atmospheric input derived from global simulations. A stronger seasonal cycle is simulated in the model, which shows a relatively homogeneous winter cooling and a summer warming with well-defined spatial patterns, in particular a subsurface warming in the Cretan and Western Levantine areas. The comparison between the SST simulated for the HIM and the reconstructions from planktonic foraminifera transfer functions shows a poor agreement, especially for summer, when the vertical temperature gradient is strong. However, a reinterpretation of the reconstructions is proposed, to consider the conditions throughout the upper water column. Such a depth-integrated approach accounts for the vertical range of preferred habitat depths of the foraminifera used for the reconstructions and strongly improves the agreement between the modelled and reconstructed temperature signals. The subsurface warming is recorded by both model and proxies, with a slight shift to the south in the model results. The mechanisms responsible for the peculiar subsurface pattern are found to be a combination of enhanced downwelling and wind mixing due to strengthened Etesian winds, and enhanced thermal forcing due to the stronger summer insolation in the Northern Hemisphere. Together, these processes induce a stronger heat transfer from the surface to the subsurface during late summer in the Western Levantine; this leads to an enhanced heat piracy in this region.
F. Adloff
2011-10-01
Nine thousand years ago (9 ka BP), the Northern Hemisphere experienced enhanced seasonality caused by an orbital configuration close to the minimum of the precession index. To assess the impact of this "Holocene Insolation Maximum" (HIM) on the Mediterranean Sea, we use a regional ocean general circulation model forced by atmospheric input derived from global simulations. A stronger seasonal cycle is simulated by the model, which shows a relatively homogeneous winter cooling and a summer warming with well-defined spatial patterns, in particular, a subsurface warming in the Cretan and western Levantine areas.
The comparison between the SST simulated for the HIM and a reconstruction from planktonic foraminifera transfer functions shows a poor agreement, especially for summer, when the vertical temperature gradient is strong. As a novel approach, we propose a reinterpretation of the reconstruction, to consider the conditions throughout the upper water column rather than at a single depth. We claim that such a depth-integrated approach is more adequate for surface temperature comparison purposes in a situation where the upper ocean structure in the past was different from the present-day. In this case, the depth-integrated interpretation of the proxy data strongly improves the agreement between modelled and reconstructed temperature signal with the subsurface summer warming being recorded by both model and proxies, with a small shift to the south in the model results.
The mechanisms responsible for the peculiar subsurface pattern are found to be a combination of enhanced downwelling and wind mixing due to strengthened Etesian winds, and enhanced thermal forcing due to the stronger summer insolation in the Northern Hemisphere. Together, these processes induce a stronger heat transfer from the surface to the subsurface during late summer in the western Levantine; this leads to an enhanced heat piracy in this region, a process never
Animal models of obsessive–compulsive disorder: utility and limitations
Alonso, Pino; López-Solà, Clara; Real, Eva; Segalàs, Cinto; Menchón, José Manuel
2015-01-01
Obsessive–compulsive disorder (OCD) is a disabling and common neuropsychiatric condition of poorly known etiology. Many attempts have been made in the last few years to develop animal models of OCD with the aim of clarifying the genetic, neurochemical, and neuroanatomical basis of the disorder, as well as of developing novel pharmacological and neurosurgical treatments that may help to improve the prognosis of the illness. The latter goal is particularly important given that around 40% of patients with OCD do not respond to currently available therapies. This article summarizes strengths and limitations of the leading animal models of OCD including genetic, pharmacologically induced, behavioral manipulation-based, and neurodevelopmental models according to their face, construct, and predictive validity. On the basis of this evaluation, we discuss that currently labeled “animal models of OCD” should be regarded not as models of OCD but, rather, as animal models of different psychopathological processes, such as compulsivity, stereotypy, or perseverance, that are present not only in OCD but also in other psychiatric or neurological disorders. Animal models might constitute a challenging approach to study the neural and genetic mechanism of these phenomena from a trans-diagnostic perspective. Animal models are also of particular interest as tools for developing new therapeutic options for OCD, with the greatest convergence focusing on the glutamatergic system, the role of ovarian and related hormones, and the exploration of new potential targets for deep brain stimulation. Finally, future research on neurocognitive deficits associated with OCD through the use of analogous animal tasks could also provide a genuine opportunity to disentangle the complex etiology of the disorder. PMID:26346234
Verification of precipitation forecasts by the DWD limited area model LME over Cyprus
K. Savvidou
2007-01-01
A comparison is made between the precipitation forecasts of the non-hydrostatic limited area model LME of the German Weather Service (DWD) and observations from a network of rain gauges in Cyprus. This is a first attempt to carry out a preliminary verification and evaluation of the LME precipitation forecasts over the area of Cyprus. For the verification, model forecasts and observations were used covering an eleven-month period, from 1 February 2005 to 31 December 2005. The observations were made by three Automatic Weather Observing Systems (AWOS) located at Larnaka and Paphos airports and at Athalassa synoptic station, as well as by 6, 6 and 8 rain gauges within a radius of about 30 km around these stations, respectively. The observations were compared with the model outputs, separately for each of the three forecast days. The "probability of detection" (POD) of a precipitation event and the "false alarm rate" (FAR) were calculated. For the selected cases of forecast precipitation events, the average forecast precipitation amounts in the area around the three stations were compared with the measured ones. An attempt was also made to evaluate the model's skill in predicting the spatial distribution of precipitation and, in this respect, the geographical position of the maximum forecast precipitation amount was contrasted with the position of the corresponding observed maximum. Maps with monthly precipitation totals observed by a local network of 150 rain gauges were compared with the corresponding forecast precipitation maps.
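The two contingency-table scores used in the study are standard; with hit, miss and false-alarm counts (the numbers below are invented), they are computed as:

    hits, misses, false_alarms = 42, 18, 11

    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm rate
    print(f"POD = {pod:.2f}, FAR = {far:.2f}")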
Iulian Gherghel
2009-12-01
The sand boa (Eryx jaculus) is one of the least known and rarest reptile species in Europe. In Romania, the sand boa is the rarest reptile species, with only four locality records known: at Cernavodă, Cărpiniş-Giuvegea, Cochirleni and Mahmudia (Kirițescu 1903; Fuhn & Vancea 1961; Zinke & Hielscher 1990). To estimate the predictors and the probability distribution of the target species (Eryx jaculus) we used MaxEnt 3.3. The potential distribution model of E. jaculus in Romania has a very good performance score (AUC = 0.959). The most important variables for the model are BIO13 (92.5% contribution), BIO9 (3.2% contribution), BIO17 (3% contribution) and BIO6 (1.3% contribution). A previously advanced hypothesis regarding the extinction of the sand boa from Romania holds the construction of the Danube River-Black Sea Canal to be the main responsible factor, this construction having destroyed most of the natural habitats in which the species had been recorded (Krecsak & Iftime 2006). We also support this hypothesis, as the generated model indicates a suitable niche for the species along the current canal area.
Rounds, Stewart A.; Sullivan, Annett B.
2013-01-01
Flow and water-quality models are being used to support the development of Total Maximum Daily Load (TMDL) plans for the Klamath River downstream of Upper Klamath Lake (UKL) in south-central Oregon. For riverine reaches, the RMA-2 and RMA-11 models were used, whereas the CE-QUAL-W2 model was used to simulate pooled reaches. The U.S. Geological Survey (USGS) was asked to review the most upstream of these models, from Link River Dam at the outlet of UKL downstream through the first pooled reach of the Klamath River from Lake Ewauna to Keno Dam. Previous versions of these models were reviewed in 2009 by USGS. Since that time, important revisions were made to correct several problems and address other issues. This review documents an assessment of the revised models, with emphasis on the model revisions and any remaining issues. The primary focus of this review is the 19.7-mile Lake Ewauna to Keno Dam reach of the Klamath River that was simulated with the CE-QUAL-W2 model. Water spends far more time in the Lake Ewauna to Keno Dam reach than in the 1-mile Link River reach that connects UKL to the Klamath River, and most of the critical reactions affecting water quality upstream of Keno Dam occur in that pooled reach. This model review includes assessments of years 2000 and 2002 current conditions scenarios, which were used to calibrate the model, as well as a natural conditions scenario that was used as the reference condition for the TMDL and was based on the 2000 flow conditions. The natural conditions scenario included the removal of Keno Dam, restoration of the Keno reef (a shallow spot that was removed when the dam was built), removal of all point-source inputs, and derivation of upstream boundary water-quality inputs from a previously developed UKL TMDL model. This review examined the details of the models, including model algorithms, parameter values, and boundary conditions; the review did not assess the draft Klamath River TMDL or the TMDL allocations.
The limitations of mathematical modeling in high school physics education
Forjan, Matej
The theme of this doctoral dissertation falls within the scope of the didactics of physics. A theoretical analysis is presented of the key constraints that occur in carrying the mathematical modeling of dynamical systems into physics education in secondary schools. In an effort to explore the extent to which current physics education promotes understanding of models and modeling, we analyze the curriculum and the three most commonly used textbooks for high school physics. We focus primarily on the representation of the various stages of modeling in the solved tasks in textbooks and on the presentation of certain simplifications and idealizations that are frequently used in high school physics. We show that one of the textbooks in most cases presents the simplifications fairly and reasonably, while the other two leave half of the analyzed simplifications unexplained. It also turns out that the vast majority of solved tasks in all the textbooks do not explicitly state model assumptions, from which we can conclude that in high school physics students do not sufficiently develop a sense for simplifications and idealizations, which is a key part of the conceptual phase of modeling. The students' prior knowledge is also important for introducing the modeling of dynamical systems, so we performed an empirical study of the extent to which high school students are able to understand the time evolution of some dynamical systems in the field of physics. The results show that students have a very weak understanding of the dynamics of systems in which feedbacks are present, independent of their year or final grade in physics and mathematics. When modeling dynamical systems in high school physics we also encounter limitations that result from students' lack of mathematical knowledge, since they do not know how to solve differential equations analytically. We show that when dealing with one-dimensional dynamical systems
Continuous time limits of the Utterance Selection Model
Michaud, Jérôme
2016-01-01
In this paper, we derive new continuous time limits of the Utterance Selection Model (USM) for language change (Baxter et al., Phys. Rev. E 73, 046118, 2006). This is motivated by the fact that the Fokker-Planck continuous time limit derived in the original version of the USM is only valid for a small range of parameters. We investigate the consequences of relaxing these constraints on parameters. Using the normal approximation of the multinomial distribution, we derive a new continuous time limit of the USM in the form of a weak-noise stochastic differential equation. We argue that this weak noise, not captured by the Kramers-Moyal expansion, cannot be neglected. We then propose a coarse-graining procedure, which takes the form of a stochastic version of the heterogeneous mean field approximation. This approximation groups the behaviour of nodes of the same degree, reducing the complexity of the problem. With the help of this approximation, we study in detail two simple families of networks:...
Force Limited Random Vibration Test of TESS Camera Mass Model
Karlicek, Alexandra; Hwang, James Ho-Jin; Rey, Justin J.
2015-01-01
The Transiting Exoplanet Survey Satellite (TESS) is a spaceborne instrument consisting of four wide-field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars. As part of the environmental testing campaign, force limiting was used to simulate a realistic random vibration launch environment. While the force limit vibration test method is a standard approach used at multiple institutions including the Jet Propulsion Laboratory (JPL), NASA Goddard Space Flight Center (GSFC), the European Space Research and Technology Center (ESTEC), and the Japan Aerospace Exploration Agency (JAXA), it is still difficult to find an actual implementation process in the literature. This paper describes the step-by-step process by which the force limit method was developed and applied to the TESS camera mass model. The process description includes the design of special fixtures to mount the test article for properly installing force transducers, development of the force spectral density using the semi-empirical method, estimation of the fuzzy factor (C2) based on the mass ratio between the supporting structure and the test article, subsequent validation of the C2 factor during the vibration test, and calculation of the C.G. accelerations using the root mean square (RMS) reaction force in the spectral domain and the peak reaction force in the time domain.
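The semi-empirical force limit is commonly written as S_FF(f) = C^2 M0^2 S_AA(f) below a breakpoint frequency and rolled off above it. A minimal sketch under that assumption follows; C2, the breakpoint, the roll-off exponent, and the 60 kg mass are illustrative, not TESS values.

```python
import numpy as np

def force_limit_psd(freq, accel_psd, m0, c2=2.0, f0=100.0, n=1.0):
    """Semi-empirical force-limit spectrum: S_FF = C^2 * M0^2 * S_AA
    below the breakpoint f0, rolled off as (f0/f)^(2n) above it.
    With accel PSD in g^2/Hz and m0 in kg, the result is in (kg*g)^2/Hz."""
    freq = np.asarray(freq, dtype=float)
    s_ff = c2 * m0**2 * np.asarray(accel_psd, dtype=float)
    rolloff = np.where(freq > f0, (f0 / freq) ** (2 * n), 1.0)
    return s_ff * rolloff

# hypothetical 60 kg test article on a flat 0.04 g^2/Hz random vibration input
f = np.array([20.0, 50.0, 100.0, 200.0, 400.0])
print(force_limit_psd(f, 0.04 * np.ones_like(f), m0=60.0))
```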
A multi agent model for the limit order book dynamics
Bartolozzi, M.
2010-11-01
In the present work we introduce a novel multi-agent model that aims to reproduce the dynamics of a double auction market at a microscopic time scale through a faithful simulation of the matching mechanics in the limit order book. The agents follow a noisy decision-making process in which their actions are related to a stochastic variable, the market sentiment, which we define as a mixture of public and private information. The model, despite making just a few basic assumptions about the trading strategies of the agents, is able to reproduce several empirical features of the high-frequency dynamics of the market microstructure, related not only to the price movements but also to the deposition of the orders in the book.
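To make the matching mechanics concrete, here is a toy price-time-priority limit order book. It is a minimal sketch, not the paper's model: the sentiment-driven agent behaviour is left out, and the data layout is hypothetical.

```python
import heapq

class Book:
    """Minimal price-time-priority limit order book."""
    def __init__(self):
        self.bids, self.asks = [], []   # bids: max-heap via negated price; asks: min-heap
        self.t = 0                      # arrival counter for time priority

    def limit(self, side, price, size):
        self.t += 1
        book = self.bids if side == "buy" else self.asks
        opp = self.asks if side == "buy" else self.bids
        # cross against the opposite side while prices overlap
        while size > 0 and opp:
            key, t0, p0, s0 = opp[0]
            crosses = price >= p0 if side == "buy" else price <= p0
            if not crosses:
                break
            heapq.heappop(opp)
            traded = min(size, s0)
            size -= traded
            if s0 > traded:              # partial fill: resting order keeps priority
                heapq.heappush(opp, (key, t0, p0, s0 - traded))
        if size > 0:                     # deposit the remainder in the book
            key = -price if side == "buy" else price
            heapq.heappush(book, (key, self.t, price, size))

b = Book()
b.limit("sell", 100.5, 10)
b.limit("buy", 101.0, 4)   # crosses: trades 4 at 100.5
print(b.asks)              # 6 remaining at 100.5
```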
Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang
2014-05-01
Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has been an emerging infectious disease in Taiwan, especially in the southern area, where incidence is high every year. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process, and the composite space-time effects of dengue fever have mostly been understated. This study develops a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, including weekly minimum temperature and maximum 24-hour rainfall with lags of up to 15 weeks, for explaining variation in dengue cases under conditions of uncertainty. Subsequently, the combination of nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system provides useful spatio-temporal predictions of potential outbreaks of dengue fever. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future, we suggest, PSHA modelers must either be brutally honest about the uncertainty of M estimates or find a way to decrease their influence on the estimated hazard.
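A hedged sketch of the extreme-value reasoning: assuming a Poisson process of events above a threshold magnitude with untruncated Gutenberg-Richter magnitudes, the probability of observing anything larger than a proposed maximum within a test window can be computed directly. All parameter values below are illustrative.

```python
import math

def prob_exceed(m, m0=5.0, b=1.0, rate=2.0, years=50.0):
    """P(at least one event with magnitude > m within `years`),
    assuming a Poisson process of `rate` events/yr above m0 and
    untruncated Gutenberg-Richter magnitudes with b-value b."""
    rate_above_m = rate * 10 ** (-b * (m - m0))   # annual rate of events > m
    return 1.0 - math.exp(-rate_above_m * years)

for m in (7.0, 8.0, 9.0):
    print(m, round(prob_exceed(m), 4))   # 0.6321, 0.0952, 0.01
```

The rapid decay of these probabilities with m is the heart of the testability problem: discriminating between competing maximum-magnitude estimates would require observation windows far longer than any practical forecast experiment.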
Eggers, G. L.; Lewis, K. W.; Simons, F. J.; Olhede, S.
2013-12-01
Venus does not possess a plate-tectonic system like that observed on Earth, and many surface features--such as tesserae and coronae--lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere, requiring a study of topography and gravity, and how they relate. Past studies of topography dealt with mapping and classification of visually observed features, and studies of gravity dealt with inverting the relation between topography and gravity anomalies to recover surface density and elastic thickness in either the space (correlation) or the spectral (admittance, coherence) domain. In the former case, geological features could be delineated but not classified quantitatively. In the latter case, rectangular or circular data windows were used, lacking geological definition. While the estimates of lithospheric strength on this basis were quantitative, they lacked robust error estimates. Here, we remapped the surface into 77 regions visually and qualitatively defined from a combination of Magellan topography, gravity, and radar images. We parameterize the spectral covariance of the observed topography, treating it as a Gaussian process assumed to be stationary over the mapped regions, using a three-parameter isotropic Matérn model, and perform maximum-likelihood based inversions for the parameters. We discuss the parameter distribution across the Venusian surface and across terrain types such as coronae, dorsae, tesserae, and their relation with mean elevation and latitudinal position. We find that the three-parameter model, while mathematically established and applicable to Venus topography, is overparameterized, and thus reduce the results to a two-parameter description of the peak spectral variance and the range-to-half-peak variance (as a function of wavenumber). With this reduction, the clustering of geological region types in two-parameter space becomes promising. Finally, we perform inversions for the joint spectral variance of
Network model of human aging: Frailty limits and information measures
Farrell, Spencer G.; Mitnitski, Arnold B.; Rockwood, Kenneth; Rutenberg, Andrew D.
2016-11-01
Aging is associated with the accumulation of damage throughout a person's life. Individual health can be assessed by the Frailty Index (FI). The FI is calculated simply as the proportion f of accumulated age-related deficits relative to the total, leading to a theoretical maximum of f ≤ 1. Observational studies have generally reported a much more stringent bound, with f ≤ fmax [...] computationally accelerated network model that also allows us to tune the scale-free network exponent α. The network exponent α significantly affects the growth of mortality rates with age. However, we are only able to recover fmax by also introducing a deficit sensitivity parameter 1 - q, which is equivalent to a false-negative rate q. Our value of q = 0.3 is comparable to finite sensitivities of age-related deficits with respect to mortality that are often reported in the literature. In light of nonzero q, we use mutual information I to provide a nonparametric measure of the predictive value of the FI with respect to individual mortality. We find that I is only modestly degraded by q [...] topology of aging populations.
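The FI itself is a simple proportion; the sketch below adds the deficit-sensitivity idea, with each true deficit missed with probability q (a false negative). The deficit list is invented, and q = 0.3 follows the value quoted above.

```python
import random

def frailty_index(deficits, q=0.3, rng=random.Random(42)):
    """Observed FI: the fraction of age-related deficits recorded as present,
    where each true deficit is missed (false negative) with probability q."""
    observed = [d and rng.random() > q for d in deficits]
    return sum(observed) / len(deficits)

true_deficits = [True] * 30 + [False] * 70   # true f = 0.3
print(frailty_index(true_deficits))          # biased low by roughly a factor (1 - q)
```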
Huang, Yu
Solar energy has become one of the major alternative renewable energy options because of its huge abundance and accessibility. Owing to the intermittent nature of sunlight, Maximum Power Point Tracking (MPPT) techniques are in high demand when a photovoltaic (PV) system is used to extract energy from sunlight. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at practical operating circumstances. First, a practical PV system model is studied, determining the series and shunt resistances that are neglected in some research. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the perturbed quantity, using input impedance conversion to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step-size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low insolation conditions and continuous insolation variation.
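A minimal sketch of one P&O duty-ratio update as described above. Scaling the step with the relative power change is one simple reading of "adaptive step size"; the gain and the duty-ratio clamps are hypothetical, not the thesis' tuned values.

```python
def perturb_observe(p_now, p_prev, d_now, d_prev, k=0.02, d_min=0.1, d_max=0.9):
    """One P&O step on the boost-converter duty ratio d: keep perturbing in
    the same direction if power rose, reverse otherwise; the step size
    adapts to the magnitude of the relative power change."""
    dp = p_now - p_prev
    dd = d_now - d_prev
    step = k * abs(dp) / max(p_now, 1e-9)      # adaptive step size
    direction = 1.0 if dp * dd > 0 else -1.0   # keep or reverse direction
    d_next = d_now + direction * step
    return min(d_max, max(d_min, d_next))      # clamp to converter limits

# power rose after increasing d, so keep increasing d slightly
print(perturb_observe(p_now=55.0, p_prev=52.0, d_now=0.50, d_prev=0.48))
```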
Fu, Hui; Zhong, Jiayou; Yuan, Guixiang; Guo, Chunjing; Lou, Qian; Zhang, Wei; Xu, Jun; Ni, Leyi; Xie, Ping; Cao, Te
2015-01-01
Trait-based approaches have been widely applied to investigate how community dynamics respond to environmental gradients. In this study, we applied a series of maximum entropy (maxent) models incorporating functional traits to unravel the processes governing macrophyte community structure along a water depth gradient in a freshwater lake. We sampled 42 plots and 1513 individual plants, and measured 16 functional traits and the abundance of 17 macrophyte species. The results showed that the maxent model can be highly robust (99.8%) in predicting the relative abundance of macrophyte species with observed community-weighted mean (CWM) traits as the constraints, but relatively weak (about 30%) with CWM traits fitted from the water depth gradient as the constraints. The measured traits differed notably in their importance for predicting species abundances, lowest for perennial growth form and highest for leaf dry mass content. For tubers and leaf nitrogen content, there were significant shifts in their effects on species relative abundance, from positive in shallow water to negative in deep water. This result suggests that macrophyte species with tubers and greater leaf nitrogen content become more abundant in shallow water, but less abundant in deep water. Our study highlights how functional traits distributed across gradients provide a robust path towards predictive community ecology.
Nair, Sandeep P.; Shiau, Deng-Shan; Principe, Jose C.; Iasemidis, Leonidas D.; Pardalos, Panos M.; Norman, Wendy M.; Carney, Paul R.; Sackellares, J. Chris
2009-01-01
Analysis of intracranial electroencephalographic (iEEG) recordings in patients with temporal lobe epilepsy (TLE) has revealed characteristic dynamical features that distinguish the interictal, ictal, and postictal states and inter-state transitions. Experimental investigations into the mechanisms underlying these observations require the use of an animal model. A rat TLE model was used to test for differences in iEEG dynamics between well-defined states and to test specific hypotheses: 1) the short-term maximum Lyapunov exponent (STLmax), a measure of signal order, is lowest and closest in value among cortical sites during the ictal state, and highest and most divergent during the postictal state; 2) STLmax values estimated from the stimulated hippocampus are the lowest among all cortical sites; and 3) the transition from the interictal to ictal state is associated with a convergence in STLmax values among cortical sites. iEEGs were recorded from bilateral frontal cortices and hippocampi. STLmax and the T-index (a measure of convergence/divergence of STLmax between recorded brain areas) were compared among the four different periods. Statistical tests (ANOVA and multiple comparisons) revealed that ictal STLmax was lower (p < 0.05) than in the other periods, that STLmax values corresponding to the stimulated hippocampus were lower than those estimated from other cortical regions, and that T-index values were highest during the postictal period and lowest during the ictal period. Also, the T-index values corresponding to the preictal period were lower than those during the interictal period (p < 0.05). These results indicate that a rat TLE model demonstrates several important dynamical signal characteristics similar to those found in human TLE and support future use of the model to study epileptic state transitions. PMID:19100262
Modeling multiple resource limitation in tropical dry forests
Medvigy, D.; Xu, X.; Zarakas, C.
2015-12-01
Tropical dry forests (TDFs) are characterized by a long dry season when little rain falls. At the same time, many neotropical soils are highly weathered and relatively nutrient poor. Because TDFs are often subject to both water and nutrient constraints, the question of how they will respond to environmental perturbations is both complex and highly interesting. Models, our basic tools for projecting ecosystem responses to global change, can be used to address this question. However, few models have been specifically parameterized for TDFs. Here, we present a new version of the Ecosystem Demography 2 (ED2) model that includes a new parameterization of TDFs. In particular, we focus on the model's framework for representing limitation by multiple resources (carbon, water, nitrogen, and phosphorus). Plant functional types are represented in terms of a dichotomy between "acquisitive" and "conservative" resource acquisition strategies. Depending on their resource acquisition strategy and basic stoichiometry, plants can dynamically adjust their allocation to organs (leaves, stem, roots), symbionts (e.g. N2-fixing bacteria), and mycorrhizal fungi. Several case studies are used to investigate how resource acquisition strategies affect ecosystem responses to environmental perturbations. Results are described in terms of the basic setting (e.g., rich vs. poor soils; longer vs. shorter dry season), as well as the type and magnitude of environmental perturbation (e.g., changes in precipitation or temperature; changes in nitrogen deposition). Implications for ecosystem structure and functioning are discussed.
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
European Continental Scale Hydrological Model, Limitations and Challenges
Rouholahnejad, E.; Abbaspour, K.
2014-12-01
The pressures on water resources due to increasing levels of societal demand, increasing conflicts of interest and uncertainties with regard to freshwater availability create challenges for water managers and policymakers in many parts of Europe. At the same time, climate change adds a new level of pressure and uncertainty with regard to freshwater supplies. On the other hand, the small-scale sectoral structure of water management is now reaching its limits. The integrated management of water in basins requires a new level of consideration, where water bodies are viewed in the context of the whole river system and managed as units within their basins. In this research we present the limitations and challenges of modelling the hydrology of the European continent. The challenges include: data availability at the continental scale and the use of globally available data, streamgauge data quality and its misleading impact on model calibration, calibration of a large-scale distributed model, uncertainty quantification, and computation time. We describe how to avoid over-parameterization in the calibration process and introduce a parallel processing scheme to overcome long computation times. We used the Soil and Water Assessment Tool (SWAT) program as an integrated hydrology and crop growth simulator to model the water resources of the European continent. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at the subbasin level with monthly time intervals for the period 1970-2006. The use of large-scale, high-resolution water resources models enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation and provides an overall picture of the temporal and spatial distribution of water resources across the continent. The calibrated model and results provide information support to the European Water
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
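For intuition, a minimal sketch of the log-likelihood ascent being analyzed: for a maximum entropy model the gradient is the difference between data moments and model moments. An independent-spin toy is used here so the model moments are exact; the Ising case above would replace them with Gibbs-sampled estimates, which is where the stochastic effects studied in the paper enter. Data and learning rate are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.choice([-1, 1], size=(500, 10), p=[0.3, 0.7])  # stand-in for recordings
target = data.mean(axis=0)                 # empirical moments <s_i>

h = np.zeros(10)                           # fields (model parameters)
for step in range(2000):
    model = np.tanh(h)                     # exact <s_i> for independent spins
    h += 0.1 * (target - model)            # steepest ascent on the log-likelihood

print(np.max(np.abs(np.tanh(h) - target)))  # ~0: data moments are matched
```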
T. Tharammal
2013-03-01
To understand the validity of δ18O proxy records as indicators of past temperature change, a series of experiments was conducted using an atmospheric general circulation model fitted with water isotope tracers (Community Atmosphere Model version 3.0, IsoCAM). A pre-industrial simulation was performed as the control experiment, as well as a simulation with all the boundary conditions set to Last Glacial Maximum (LGM) values. Results from the pre-industrial and LGM simulations were compared to experiments in which the individual boundary conditions (greenhouse gases, ice sheet albedo and topography, sea surface temperature (SST), and orbital parameters) were changed one at a time to assess their individual impact. The experiments were designed in order to analyze the spatial variations of the oxygen isotopic composition of precipitation (δ18Oprecip) in response to individual climate factors. The change in topography (due to the change in land ice cover) played a significant role in reducing the surface temperature and δ18Oprecip over North America. Exposed shelf areas and the ice sheet albedo reduced the Northern Hemisphere surface temperature and δ18Oprecip further. A global mean cooling of 4.1 °C was simulated with combined LGM boundary conditions compared to the control simulation, in agreement with previous experiments using the fully coupled Community Climate System Model (CCSM3). Large reductions in δ18Oprecip over the LGM ice sheets were strongly linked to the temperature decrease over them. The SST and ice sheet topography changes were responsible for most of the changes in the climate and hence the δ18Oprecip distribution among the simulations.
Wang Huai-Chun
2009-09-01
Background: The covarion hypothesis of molecular evolution holds that selective pressures on a given amino acid or nucleotide site are dependent on the identity of other sites in the molecule that change throughout time, resulting in changes of evolutionary rates of sites along the branches of a phylogenetic tree. At the sequence level, covarion-like evolution at a site manifests as conservation of nucleotide or amino acid states among some homologs where the states are not conserved in other homologs (or groups of homologs). Covarion-like evolution has been shown to relate to changes in functions at sites in different clades, and, if ignored, can adversely affect the accuracy of phylogenetic inference. Results: PROCOV (protein covarion analysis) is a software tool that implements a number of previously proposed covarion models of protein evolution for phylogenetic inference in a maximum likelihood framework. Several algorithmic and implementation improvements in this tool over previous versions make computationally expensive tree searches with covarion models more efficient and analyses of large phylogenomic data sets tractable. PROCOV can be used to identify covarion sites by comparing the site likelihoods under the covarion process to the corresponding site likelihoods under a rates-across-sites (RAS) process. Those sites with the greatest log-likelihood difference between a 'covarion' and an RAS process were found to be of functional or structural significance in a dataset of bacterial and eukaryotic elongation factors. Conclusion: Covarion models implemented in PROCOV may be especially useful for phylogenetic estimation when ancient divergences between sequences have occurred and rates of evolution at sites are likely to have changed over the tree. It can also be used to study lineage-specific functional shifts in protein families that result in changes in the patterns of site variability among subtrees.
Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert
We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution of the estimator now depends on nuisance parameters derived both from the weak dependence and heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short...
Chen Poyu
2013-01-01
Products made overseas but sold in Taiwan are very common. For the cross-border or interregional production and marketing of goods, inventory decision-makers often have to determine the amount purchased per cycle, the number of transport vehicles, the working hours of each transport vehicle, and whether delivery to sales offices is by ground or air transport, in order to minimize the total inventory cost per unit time. The model assumes that the amount purchased in each order cycle should allow all rented vehicles to be fully loaded and the number of transport runs to reach the upper limit within the time period. The main findings of this study include the optimal solution of the integer programming formulation of the model and the results of a sensitivity analysis.
J. Y. Tang
2015-08-01
We present a generic flux limiter to account for mass limitations from an arbitrary number of substrates in a biogeochemical reaction network. The flux limiter is based on the observation that substrate (e.g., nitrogen, phosphorus) limitation in biogeochemical models can be represented in a way that ensures mass-conservative and non-negative numerical solutions to the governing ordinary differential equations. Application of the flux limiter includes two steps: (1) formulate the biogeochemical processes with a matrix of stoichiometric coefficients, and (2) apply Liebig's law of the minimum using the dynamic stoichiometric relationship of the reactants. This approach contrasts with the ad hoc down-regulation approaches that are implemented in many existing models of carbon and nutrient interactions (such as CLM4.5 and the ACME (Accelerated Climate Modeling for Energy) Land Model (ALM)), which are error prone when adding new processes, even for experienced modelers. Through an example implementation with a Century-like decomposition model that includes carbon, nitrogen, and phosphorus, we show that our approach (1) produced almost identical results to those from the ad hoc down-regulation approaches under non-limiting nutrient conditions; (2) properly resolved the negative solutions under substrate-limited conditions where the simple clipping approach failed; and (3) successfully avoided the potential conceptual ambiguities that are implied by those ad hoc down-regulation approaches. We expect our approach to make future biogeochemical models easier to improve and more robust.
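A sketch of the two-step recipe above: given a stoichiometric matrix and tentative reaction rates, scale each reaction by its most limiting substrate so that no pool is driven negative over the time step. The matrix, rates, and pools are invented for illustration; the limiter in the paper is more general.

```python
import numpy as np

def limit_fluxes(S, rates, pools, dt):
    """Scale each reaction so that substrate consumption over dt cannot
    exceed the available pools (Liebig-style minimum rule).
    S[i, j] < 0 means reaction j consumes substrate i."""
    rates = np.asarray(rates, dtype=float).copy()
    consumption = np.where(S < 0, -S, 0.0) * rates * dt      # per (pool, reaction)
    demand = consumption.sum(axis=1)                          # total demand per pool
    scale_pool = np.where(demand > pools,
                          pools / np.maximum(demand, 1e-30), 1.0)
    # each reaction is limited by its most limiting substrate
    for j in range(S.shape[1]):
        subs = np.where(S[:, j] < 0)[0]
        if subs.size:
            rates[j] *= scale_pool[subs].min()
    return rates

S = np.array([[-1.0, -2.0],    # carbon consumed by both reactions
              [-0.1,  0.0]])   # nitrogen consumed by reaction 0 only
print(limit_fluxes(S, rates=[5.0, 3.0], pools=np.array([4.0, 0.2]), dt=1.0))
```

With these numbers the carbon pool is the binding constraint, so both reactions are scaled by 4/11 and the post-limiting carbon consumption exactly equals the available 4.0 units, with nitrogen left non-negative.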
Indranarain Ramlall
2012-06-01
Many countries have adopted important policies to curb the number of injuries and fatal road accidents, the most important being speed limit enforcement. In that respect, Mauritius has recently embarked on a strategy of using cameras to detect speed limit violations. However, the empirical literature on speed limit offenders is still very poor in terms of modeling. In essence, this paper constitutes the very first study that provides sound econometric modeling of speed limit offenders. Findings suggest that a vanilla GARCH model can be used to model the number of speed limit offenders. Above all, leverage effects are also noted, clearly showing the importance of the type of traffic flow of speed limit offenders, which underpins non-compliance with speed limits. Furthermore, results show the presence of strong weekend effects, as confirmed by the dummy variable. The research is expected to provide momentum for the use of GARCH models in traffic modeling, not only for Mauritius but also for other countries in the world.
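For readers unfamiliar with the model class, a sketch of the GARCH(1,1) recursion that underlies such an analysis. Parameters are illustrative, and the leverage effects reported above would call for an asymmetric variant such as GJR-GARCH rather than this symmetric form.

```python
import numpy as np

rng = np.random.default_rng(1)
omega, alpha, beta = 0.2, 0.1, 0.85       # illustrative GARCH(1,1) parameters

n = 500
eps = np.zeros(n)                          # shocks to de-meaned offender counts
sigma2 = np.zeros(n)
sigma2[0] = omega / (1 - alpha - beta)     # unconditional variance
for t in range(1, n):
    # conditional variance responds to last shock and last variance
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

print(eps.var(), omega / (1 - alpha - beta))  # sample vs. theoretical variance
```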
Continuous time limits of the utterance selection model
Michaud, Jérôme
2017-02-01
In this paper we derive alternative continuous time limits of the utterance selection model (USM) for language change [G. J. Baxter et al., Phys. Rev. E 73, 046118 (2006), 10.1103/PhysRevE.73.046118]. This is motivated by the fact that the Fokker-Planck continuous time limit derived in the original version of the USM is only valid for a small range of parameters. We investigate the consequences of relaxing these constraints on parameters. Using the normal approximation of the multinomial distribution, we derive a continuous time limit of the USM in the form of a weak-noise stochastic differential equation. We argue that this weak noise, not captured by the Kramers-Moyal expansion, cannot be neglected. We then propose a coarse-graining procedure, which takes the form of a stochastic version of the heterogeneous mean field approximation. This approximation groups the behavior of nodes of the same degree, reducing the complexity of the problem. With the help of this approximation, we study in detail two simple families of networks: the regular networks and the star-shaped networks. The analysis reveals and quantifies a finite-size effect of the dynamics. If we increase the size of the network by keeping all the other parameters constant, we transition from a state where conventions emerge to a state where no convention emerges. Furthermore, we show that the degree of a node acts as a time scale. For heterogeneous networks such as star-shaped networks, the time scale difference can become very large, leading to a noisier behavior of highly connected nodes.
A mathematical model to detect inspiratory flow limitation during sleep.
Mansour, Khaled F; Rowley, James A; Meshenish, A A; Shkoukani, Mahdi A; Badr, M Safwan
2002-09-01
The physiological significance of inspiratory flow limitation (IFL) has recently been recognized, but methods of detecting IFL can be subjective. We sought to develop a mathematical model of the upper airway pressure-flow relationship that would objectively detect flow limitation. We present a theoretical discussion that predicts that a polynomial function [F(P) = AP^3 + BP^2 + CP + D, where F(P) is flow and P is supraglottic pressure] best characterizes the pressure-flow relationship and allows for the objective detection of IFL. In protocol 1, step 1, we performed curve-fitting of the pressure-flow relationship of 20 breaths to 5 mathematical functions and found that the highest correlation coefficients (R^2) were for the quadratic (0.88 +/- 0.10) and polynomial (0.91 +/- 0.05; P [...]) functions. We then compared the quadratic and polynomial functions and found that the error of the fit was lowest for the polynomial function (3.3 +/- 0.06 vs. 21.1 +/- 19.0%; P [...] 99% for each). We conclude that a polynomial function can be used to predict the relationship between pressure and flow in the upper airway and to objectively determine the presence of IFL.
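A sketch of the curve-fitting step on synthetic data: fit F(P) = AP^3 + BP^2 + CP + D with numpy and flag a breath as flow-limited when the fitted flow peaks (dF/dP = 0) within the observed pressure range. The synthetic breath and the exact decision rule are assumptions of this sketch, not the authors' published criteria.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.linspace(0.0, -10.0, 50)                    # supraglottic pressure (cmH2O)
F = 0.4 * np.tanh(-P / 3.0) + 0.005 * rng.standard_normal(P.size)  # flow (L/s)

A, B, C, D = np.polyfit(P, F, 3)                   # F(P) = A P^3 + B P^2 + C P + D
roots = np.roots([3 * A, 2 * B, C])                # solve dF/dP = 0
inside = [r.real for r in roots
          if abs(r.imag) < 1e-9 and P.min() <= r.real <= P.max()]
print("IFL detected:", len(inside) > 0)            # flow plateaus within the breath
```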
HEAVY TRAFFIC LIMIT THEOREMS IN FLUID BUFFER MODELS
YIN Gang; ZHANG Hanqin
2004-01-01
A fluid buffer model with Markov modulated input-output rates is considered. When traffic intensity is near its critical value, the system is said to be in heavy traffic. It is shown that a suitably scaled sequence of the equilibrium buffer contents has a weak, or distributional, limit under heavy traffic conditions. This weak limit is a functional of a diffusion process determined by the Markov chain modulating the input and output rates. The first passage time of the reflected process is examined. It is shown that the mean first passage time can be obtained via the solution of a Dirichlet problem. Then the transition density of the reflected process is derived by solving the Kolmogorov forward equation with a Neumann boundary condition. Furthermore, when the fast changing part of the generator of the Markov chain is a constant matrix, the representation of the probability distribution of the reflected process is derived. Upper and lower bounds on the probability distribution are also obtained by means of asymptotic expansions of the standard normal distribution.
Applications and limitations of in silico models in drug discovery.
Sacan, Ahmet; Ekins, Sean; Kortagere, Sandhya
2012-01-01
Drug discovery in the late twentieth and early twenty-first century has witnessed a myriad of changes that were adopted to predict whether a compound is likely to be successful, or conversely enable identification of molecules with liabilities as early as possible. These changes include integration of in silico strategies for lead design and optimization that perform complementary roles to that of the traditional in vitro and in vivo approaches. The in silico models are facilitated by the availability of large datasets associated with high-throughput screening, bioinformatics algorithms to mine and annotate the data from a target perspective, and chemoinformatics methods to integrate chemistry methods into lead design process. This chapter highlights the applications of some of these methods and their limitations. We hope this serves as an introduction to in silico drug discovery.
Luo, Hong; Ma, You-xin; Liu, Wen-jun; Li, Hong-mei
2010-05-01
Using the maximum upstream flow path, a newly developed method for calculating slope length based on Arc Macro Language (AML), five groups of DEM data for different regions in Bijie Prefecture of Guizhou Province were processed to compute the slope length and topographic factors of the Prefecture. The time cost of calculating the slope length and the values of the topographic factors were analyzed and compared with those from the iterative slope length method based on AML (ISLA) and on C++ (ISLC). The results showed that the new method is feasible for calculating the slope length and topographic factors in the revised universal soil loss model, and has the same effect as the iterative slope length method. Compared with ISLA, the new method had high computing efficiency and greatly decreased time consumption, and could be applied over a large area to estimate the slope length and topographic factors based on AML. Compared with ISLC, the new method had similar computing efficiency, but its code is easier to write, modify, and debug using AML. Therefore, the new method can be more broadly used by GIS users.
Harry X. Zhang; Shaw L. Yu
2008-01-01
One of the key challenges in the total maximum daily load (TMDL) development process is how to define the critical condition for a receiving waterbody. The main concern in using a continuous simulation approach is the absence of any guarantee that the most critical condition will be captured during the selected representative hydrologic period, given the scarcity of long-term continuous data. The objectives of this paper are to clearly address the critical condition in the TMDL development process and to compare continuous and event-based approaches in defining critical condition during TMDL development for a waterbody impacted by both point and nonpoint source pollution. A practical, event-based critical flow-storm (CFS) approach was developed to explicitly address the critical condition as a combination of a low stream flow and a storm event of a selected magnitude, both having certain frequencies of occurrence. This paper illustrated the CFS concept and provided its theoretical basis using a derived analytical conceptual model. The CFS approach clearly defined a critical condition, obtained reasonable results and could be considered as an alternative method in TMDL development.
Mathematical model of diffusion-limited evolution of multiple gas bubbles in tissue.
Srinivasan, R Srini; Gerth, Wayne A; Powell, Michael R
2003-04-01
Models of gas bubble dynamics employed in probabilistic analyses of decompression sickness incidence in man must be theoretically consistent and simple, if they are to yield useful results without requiring excessive computations. They are generally formulated in terms of ordinary differential equations that describe diffusion-limited gas exchange between a gas bubble and the extravascular tissue surrounding it. In our previous model (Ann. Biomed. Eng. 30: 232-246, 2002), we showed that with appropriate representation of sink pressures to account for gas loss or gain due to heterogeneous blood perfusion in the unstirred diffusion region around the bubble, diffusion-limited bubble growth in a tissue of finite volume can be simulated without postulating a boundary layer across which gas flux is discontinuous. However, interactions between two or more bubbles caused by competition for available gas cannot be considered in this model, because the diffusion region has a fixed volume with zero gas flux at its outer boundary. The present work extends the previous model to accommodate interactions among multiple bubbles by allowing the diffusion region volume of each bubble to vary during bubble evolution. For given decompression and tissue volume, bubble growth is sustained only if the bubble number density is below a certain maximum.
Animal models of β-hemoglobinopathies: utility and limitations
McColl B
2016-11-01
The structural and functional conservation of hemoglobin throughout mammals has made the laboratory mouse an exceptionally useful organism in which to study both the protein and the individual globin genes. Early researchers looked to the globin genes as an excellent model in which to examine gene regulation – bountifully expressed and displaying a remarkably consistent pattern of developmental activation and silencing. In parallel with the growth of research into expression of the globin genes, mutations within the β-globin gene were identified as the cause of the β-hemoglobinopathies such as sickle cell disease and β-thalassemia. These lines of enquiry stimulated the development of transgenic mouse models, first carrying individual human globin genes and then substantial human genomic fragments incorporating the multigenic human β-globin locus and regulatory elements. Finally, mice were devised carrying mutant human β-globin loci on genetic backgrounds deficient in the native mouse globins, resulting in phenotypes of sickle cell disease or β-thalassemia. These years of work have generated a group of model animals that display many features of the β-hemoglobinopathies and provided enormous insight into the mechanisms of gene regulation. Substantive differences in the expression of human and mouse globins during development have also come to light, revealing the limitations of the mouse model, but also providing opportunities to further explore the mechanisms of globin gene regulation. In addition, animal models of β-hemoglobinopathies have demonstrated the feasibility of gene therapy for these conditions, now showing success in human clinical trials. Such models remain in use to dissect the molecular events of globin gene regulation and to identify novel treatments based
Almog, Assaf; Garlaschelli, Diego
2014-09-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
Guoqing Li
2014-11-01
Black locust (Robinia pseudoacacia L.) is a tree species of high economic and ecological value, but is also considered highly invasive. Understanding the global potential distribution and ecological characteristics of this species is a prerequisite for its practical exploitation as a resource. Here, maximum entropy modeling (MaxEnt) was used to simulate the potential distribution of this species around the world, and the dominant climatic factors affecting its distribution were selected using a jackknife test and the regularized gain change during each iteration of the training algorithm. The results show that the MaxEnt model performs better than random, with an average test AUC value of 0.9165 (±0.0088). The coldness index, annual mean temperature and warmth index were the most important climatic factors affecting the species distribution, explaining 65.79% of the variability in the geographical distribution. Species response curves showed unimodal relationships with the annual mean temperature and warmth index, whereas there was a linear relationship with the coldness index. The dominant climatic conditions in the core of the black locust distribution are a coldness index of −9.8 °C–0 °C, an annual mean temperature of 5.8 °C–14.5 °C, a warmth index of 66 °C–168 °C and an annual precipitation of 508–1867 mm. The potential distribution of black locust is located mainly in the United States, the United Kingdom, Germany, France, the Netherlands, Belgium, Italy, Switzerland, Australia, New Zealand, China, Japan, South Korea, South Africa, Chile and Argentina. The predictive map of black locust, the climatic thresholds and the species response curves can provide globally applicable guidelines and valuable information for policymakers and planners involved in the introduction, planting and invasion control of this species around the world.
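Not the authors' pipeline, but a common shorthand for MaxEnt-style presence-only modeling: discriminate presences from random background points with a regularized logistic model over climate features. The synthetic data, the feature choices, and the use of scikit-learn are all assumptions of this sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# synthetic climate features: [annual mean temperature (C), coldness index (C)]
background = rng.uniform([-5.0, -30.0], [25.0, 0.0], size=(1000, 2))   # random sites
presence = rng.normal([10.0, -5.0], [3.0, 3.0], size=(200, 2))         # suitable core

X = np.vstack([presence, background])
y = np.hstack([np.ones(len(presence)), np.zeros(len(background))])

model = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)   # regularized fit
suitability = model.predict_proba([[12.0, -4.0], [2.0, -25.0]])[:, 1]
print(suitability)   # relative suitability of two hypothetical sites
```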
J. G. Fyke
2010-08-01
The need to better understand long-term climate/ice sheet feedback loops is motivating efforts to couple ice sheet models into Earth System models which are capable of long-timescale simulations. In this paper we describe a coupled model that consists of the University of Victoria Earth System Climate Model (UVic ESCM) and the Pennsylvania State University Ice model (PSUI). The climate model generates a surface mass balance (SMB) field via a sub-gridded surface energy/moisture balance model that resolves narrow ice sheet ablation zones. The ice model returns revised elevation, surface albedo and ice area fields, plus coastal fluxes of heat and moisture. An arbitrary number of ice sheets can be simulated, each on their own high-resolution grid and each capable of synchronous or asynchronous coupling with the overlying climate model. The model is designed to conserve global heat and moisture. In the process of improving model performance we developed a procedure to account for modelled surface air temperature (SAT) biases within the energy/moisture balance surface model and improved the UVic ESCM snow surface scheme through the addition of variable albedos and refreezing over the ice sheet.
A number of simulations for late Holocene, Last Glacial Maximum (LGM) and Eemian climate boundary conditions were carried out to explore the sensitivity of the coupled model and identify model configurations that best represented these climate states. The modelled SAT bias was found to play a significant role in long-term ice sheet evolution, as was the effect of refreezing meltwater and surface albedo. The bias-corrected model was able to reasonably capture important aspects of the Antarctic and Greenland ice sheets, including modern SMB and ice distribution. The simulated northern Greenland ice sheet was found to be prone to ice margin retreat at radiative forcings corresponding closely to those of the Eemian or the present day.
Examination of 1D Solar Cell Model Limitations Using 3D SPICE Modeling: Preprint
McMahon, W. E.; Olson, J. M.; Geisz, J. F.; Friedman, D. J.
2012-06-01
To examine the limitations of one-dimensional (1D) solar cell modeling, 3D SPICE-based modeling is used to examine in detail the validity of the 1D assumptions as a function of sheet resistance for a model cell. The internal voltages and current densities produced by this modeling give additional insight into the differences between the 1D and 3D models.
Elsa W Birch
Viral replication relies on host metabolic machinery and precursors to produce large numbers of progeny - often very rapidly. A fundamental example is the infection of Escherichia coli by bacteriophage T7. The resource draw imposed by viral replication represents a significant and complex perturbation to the extensive and interconnected network of host metabolic pathways. To better understand this system, we have integrated a set of structured ordinary differential equations quantifying T7 replication and an E. coli flux balance analysis metabolic model. Further, we present here an integrated simulation algorithm enforcing mutual constraint by the models across the entire duration of phage replication. This method enables quantitative dynamic prediction of virion production given only specification of host nutritional environment, and predictions compare favorably to experimental measurements of phage replication in multiple environments. The level of detail of our computational predictions facilitates exploration of the dynamic changes in host metabolic fluxes that result from viral resource consumption, as well as analysis of the limiting processes dictating maximum viral progeny production. For example, although it is commonly assumed that viral infection dynamics are predominantly limited by the amount of protein synthesis machinery in the host, our results suggest that in many cases metabolic limitation is at least as strict. Taken together, these results emphasize the importance of considering viral infections in the context of host metabolism.
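A toy of the flux-balance side of such a coupling (not the paper's integrated model): maximize a "biomass" flux subject to steady-state mass balance S v = 0 and flux bounds, using scipy's linear programming. The three-reaction network is invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# toy network: uptake -> A; A -> B; B -> biomass  (internal metabolites: A, B)
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
bounds = [(0.0, 10.0), (0.0, None), (0.0, None)]   # uptake capped at 10 units
c = np.array([0.0, 0.0, -1.0])                     # linprog minimizes, so negate biomass

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)   # all fluxes run at the uptake limit: [10, 10, 10]
```

In the integrated scheme described above, the analogous bounds would be updated at each time step from the phage ODE state, which is how the two models constrain one another.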
Carozza, D. A.; Mysak, L. A.
2012-04-01
The Paleocene-Eocene thermal maximum (PETM), approximately 55 million years ago, was a period of intense climate and environmental change that was associated with the release of unprecedented amounts of light carbon to the ocean-atmosphere system. This event is documented by large negative carbon isotope excursions (CIEs) in oceanic and terrestrial environments, by an abrupt shoaling of the lysocline and calcite compensation depth, and by significant increases in average global temperature. Due to its 13C-depleted isotopic composition and strong atmospheric radiative forcing, methane is thought to have played a pivotal role during the PETM. Recent high-resolution geochemical records indicate that the PETM has a more complex structure than was apparent in earlier records. In particular, ocean sediment cores indicate that the PETM CIE was composed of three notable excursions separated by two 20-ky periods of negligible δ13C change. Moreover, a 3-ky warming period that occurred prior to the PETM CIE indicates that the carbon release that caused the initial CIE may not have produced the initial warming, as was previously postulated and modeled. In this study, we couple an atmospheric methane box model to a box model of the global carbon cycle, tuned to the background state of the PETM, in order to constrain the carbon emission and assess the role of methane. The initial 4 ky of the PETM are modeled as two separate stages involving: 1) a gradual warming with little or no lysocline shoaling or CIE, and 2) an abrupt warming, lysocline shoaling, and a CIE. For each stage, a range of atmospheric and oceanic emission scenarios representing different amounts, rates, and isotopic contents of emitted carbon are simulated and then compared to the sedimentary record. The sensitivity of the results to changes in climate sensitivity, global temperature change, lysocline shoaling, CIE, and background carbon dioxide concentration, among other variables, is tested.
Wang, L.; Kerr, L. A.; Bridger, E.
2016-12-01
Changes in species distributions have been widely associated with climate change. Understanding how ocean conditions influence marine fish distributions is critical for elucidating the role of climate in ecosystem change and forecasting how fish may be distributed in the future. Species distribution models (SDMs) can enable estimation of the likelihood of encountering species in space or time as a function of environmental conditions. Traditional SDMs are applied to scientific-survey data that include both presences and absences. Maximum entropy (MaxEnt) models are promising tools, as they can be applied to presence-only data, such as those collected from fisheries or citizen science programs. We used MaxEnt to relate the occurrence records of marine fish species (e.g. Atlantic herring, Atlantic mackerel, and butterfish) from the NOAA Northeast Fisheries Observer Program to environmental conditions. Environmental variables from earth system data, such as sea surface temperature (SST), sea bottom temperature (SBT), chlorophyll-a, bathymetry, the North Atlantic Oscillation (NAO), and the Atlantic Multidecadal Oscillation (AMO), were matched with species occurrences for MaxEnt modeling of the fish distributions in the Northeast Shelf area. We developed habitat suitability maps for these species and assessed the relative influence of environmental factors on their distributions. Overall, SST and chlorophyll-a had the greatest influence on the monthly distributions, with bathymetry and SBT having moderate influence and the climate indices (NAO and AMO) having little influence. Across months, the Atlantic herring distribution was most related to the SST 10th percentile, and the Atlantic mackerel and butterfish distributions were most related to the previous month's SST. The fish distributions were most affected by the previous month's chlorophyll-a in the summer months, which may indirectly indicate the cumulative impact of primary productivity. Results highlighted the importance of spatial and temporal scales when using
Mente, Scot; Doran, Angela; Wager, Travis T
2012-06-14
The objective of this work was to establish that unbound maximum concentrations may be reasonably predicted from a combination of computed molecular properties, assuming subcutaneous (SQ) dosing. Additionally, we show that the maximum unbound plasma and brain concentrations may be projected from a mixture of experimental in vitro absorption, distribution, metabolism, and excretion (ADME) parameters in combination with computed properties (volume of distribution, fraction unbound in microsomes). Finally, we demonstrate the utility of the underlying equations by showing that the maximum total plasma concentrations can be projected from the experimental parameters for a set of compounds with data collected in clinical research.
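The flavor of such projections can be illustrated with textbook one-compartment relations; the equations and parameter values below are generic assumptions, not the authors' regression models:

```python
# Rough unbound-Cmax projection assuming fast SQ absorption relative to
# elimination, so total Cmax ~ absorbed dose / volume of distribution.
# All numbers are illustrative placeholders.

def unbound_cmax_plasma(dose_mg, f_abs, vd_l_per_kg, weight_kg, fu_plasma):
    """Unbound plasma Cmax (mg/L): fraction unbound times total Cmax."""
    c_total = (dose_mg * f_abs) / (vd_l_per_kg * weight_kg)
    return fu_plasma * c_total

def unbound_cmax_brain(cu_plasma, kp_uu):
    """Unbound brain Cmax from the unbound brain-to-plasma ratio Kp,uu."""
    return cu_plasma * kp_uu

cu_p = unbound_cmax_plasma(dose_mg=10, f_abs=0.9, vd_l_per_kg=2.0,
                           weight_kg=70, fu_plasma=0.1)
print(f"plasma Cu,max ~ {cu_p:.4f} mg/L, brain Cu,max ~ {unbound_cmax_brain(cu_p, 0.5):.4f} mg/L")
```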
Huang, S.-Y.; Wang, J.
2016-07-01
A coupled force-restore model of surface soil temperature and moisture (FRMEP) is formulated by incorporating the maximum entropy production model of surface heat fluxes and including the gravitational drainage term. The FRMEP model, driven by surface net radiation and precipitation, is independent of near-surface atmospheric variables and has reduced sensitivity to uncertainties in model inputs and parameters compared to the classical force-restore models (FRMs). The FRMEP model was evaluated using observations from two field experiments with contrasting soil moisture conditions. The errors of the FRMEP-predicted surface temperature and soil moisture are lower than those of the classical FRMs forced by observed or bulk-formula-based surface heat fluxes (bias 1-2°C versus ~4°C; 0.02 m3 m-3 versus 0.05 m3 m-3). The diurnal variations of surface temperature, soil moisture, and surface heat fluxes are well captured by the FRMEP model, as measured by the high correlations between the model predictions and observations (r ≥ 0.84). Our analysis suggests that the drainage term cannot be neglected under wet soil conditions. A 1-year simulation indicates that the FRMEP model captures the seasonal variation of surface temperature and soil moisture with biases less than 2°C and 0.01 m3 m-3 and correlation coefficients with observations of 0.93 and 0.9, respectively.
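For orientation, a minimal sketch of the classical two-layer force-restore temperature pair (in the style of Noilhan and Planton's ISBA scheme) is given below; the FRMEP model replaces the surface-flux forcing with the maximum-entropy-production partition, which is not reproduced here, and the coefficient values are assumptions:

```python
# Classical force-restore: the surface temperature is "forced" by the ground
# heat input and "restored" toward a slowly varying deep temperature.

import math

TAU = 86400.0   # restore time scale: one day (s)
C_T = 1e-5      # surface thermal coefficient, K m^2 J^-1 (assumed magnitude)
DT  = 300.0     # time step (s)

def step(ts, t2, g_force):
    """One Euler step; g_force is the ground heat forcing in W m^-2."""
    ts_new = ts + DT * (C_T * g_force - (2.0 * math.pi / TAU) * (ts - t2))
    t2_new = t2 + DT * (ts - t2) / TAU
    return ts_new, t2_new

ts, t2 = 290.0, 288.0
for n in range(288):                                  # integrate one day
    hour = n * DT / 3600.0
    g = 120.0 * max(math.sin(2 * math.pi * (hour - 6) / 24), 0.0)  # idealized daytime forcing
    ts, t2 = step(ts, t2, g)
print(f"Ts = {ts:.2f} K, T2 = {t2:.2f} K after one day")
```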
Low-energy limit of the extended Linear Sigma Model
Divotgey, Florian; Giacosa, Francesco; Rischke, Dirk H
2016-01-01
The extended Linear Sigma Model (eLSM) is an effective hadronic model based on the linear realization of chiral symmetry $SU(N_f)_L \times SU(N_f)_R$, with (pseudo)scalar and (axial-)vector mesons as degrees of freedom. In this paper, we study the low-energy limit of the eLSM for $N_f=2$ flavors by integrating out all fields except for the pions, the (pseudo-)Nambu-Goldstone bosons of chiral symmetry breaking. We keep only terms entering at tree level and up to fourth order in powers of derivatives of the pion fields. Up to this order, there are four low-energy coupling constants in the resulting low-energy effective action. We show that the latter is formally identical to Chiral Perturbation Theory (ChPT), after choosing a representative for the coset space generated by chiral symmetry breaking and expanding up to fourth order in powers of derivatives of the pion fields. Two of the low-energy coupling constants of the eLSM are uniquely determined by a fit to hadron masses and decay widths.
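For reference, the derivative terms of the two-flavor ChPT action against which such a low-energy limit is matched take the standard Gasser-Leutwyler form (shown here only for the purely derivative couplings $l_1$, $l_2$; this display is textbook ChPT, not a result of the paper):

$$\mathcal{L}_2=\frac{F^2}{4}\,\mathrm{Tr}\!\left(\partial_\mu U\,\partial^\mu U^\dagger\right),\qquad \mathcal{L}_4\supset\frac{l_1}{4}\left[\mathrm{Tr}\!\left(\partial_\mu U\,\partial^\mu U^\dagger\right)\right]^2+\frac{l_2}{4}\,\mathrm{Tr}\!\left(\partial_\mu U\,\partial_\nu U^\dagger\right)\mathrm{Tr}\!\left(\partial^\mu U\,\partial^\nu U^\dagger\right),$$

where $U$ is the pion matrix field and $F$ the pion decay constant in the chiral limit.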
A unified model of density limit in fusion plasmas
Zanca, P.; Sattin, F.; Escande, D. F.; Pucella, G.; Tudisco, O.
2017-05-01
In this work we identify, by analytical and numerical means, the conditions for the existence of a magnetic and thermal equilibrium of a cylindrical plasma in the presence of Ohmic and/or additional power sources, heat conduction, and radiation losses by light impurities. The boundary of the solution space with realistic temperature profiles and small edge values takes, mathematically, the form of a density limit (DL). Compared to previous similar analyses, the present work benefits from dealing with a more accurate set of equations. This refinement is elementary but decisive, since it discloses a weak dependence of the DL on thermal transport for configurations with an applied electric field. Thanks to this property, the DL scaling law is recovered almost identically for two largely different devices such as the ohmic tokamak and the reversed-field pinch. In particular, they have in common a Greenwald scaling, depending linearly on the plasma current, quantitatively consistent with experimental results. In the tokamak case the DL dependence on any additional heating approximately follows a 0.5 power law, which is compatible with L-mode experiments. For a purely externally heated configuration, taken as a cylindrical approximation of the stellarator, the DL dependence on transport is found to be stronger. By adopting suitable transport models, the DL takes on a Sudo-like form, in fair agreement with LHD experiments. Overall, the model provides a good zeroth-order quantitative description of the DL, applicable to widely different configurations.
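For reference, the empirical scalings the model recovers have the conventional literature forms (these are the standard definitions, not the paper's derived coefficients):

$$n_G\,[10^{20}\,\mathrm{m^{-3}}]=\frac{I_p\,[\mathrm{MA}]}{\pi a^2\,[\mathrm{m^2}]},\qquad n_{\mathrm{Sudo}}\,[10^{20}\,\mathrm{m^{-3}}]=0.25\left(\frac{P\,[\mathrm{MW}]\,B\,[\mathrm{T}]}{a^2 R\,[\mathrm{m^3}]}\right)^{1/2},$$

with plasma current $I_p$, minor radius $a$, major radius $R$, absorbed heating power $P$, and magnetic field $B$.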
Abbasnia, Mohsen; Tavousi, Taghi; Khosravi, Mahmood
2016-08-01
Identifying and assessing climate change over the coming decades is essential for environmental planning aimed at adaptation and mitigation. In this study, changes in the maximum temperature of Iran were comparatively examined for two future periods (2041-2070 and 2071-2099), based on the outputs of two general circulation models (CGCM3 and HADCM3) under the existing emission scenarios (A2, A1B, B1 and B2). For this purpose, after examining the ability of the SDSM statistical downscaling method to simulate the observational period (1981-2010), the daily maximum temperature of future decades was downscaled, with uncertainty taken into account, at seven synoptic stations representative of the climate of Iran. In the uncertainty analysis of the model-scenario combinations, the CGCM3 model under scenario B1 performed best in simulating future maximum temperature. The findings also showed that the maximum temperature at the study stations will increase by between 1°C and 2°C by the middle and the end of the 21st century. These changes are more severe in the HADCM3 model than in the CGCM3 model.
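The core of a statistical downscaling method such as SDSM is a regression transfer function between large-scale predictors and the local predictand; the sketch below shows only that step (SDSM also couples it to a stochastic weather generator, omitted here), and all file and column names are hypothetical:

```python
# Fit a transfer function on observed daily Tmax and reanalysis predictors,
# then apply it to GCM output for a future period.

import pandas as pd
from sklearn.linear_model import LinearRegression

obs = pd.read_csv("station_tmax_1981_2010.csv")     # hypothetical daily Tmax series
pred = pd.read_csv("reanalysis_predictors.csv")      # hypothetical large-scale predictors

cols = ["mslp", "t850", "rh500", "u500"]             # hypothetical predictor set
model = LinearRegression().fit(pred[cols], obs["tmax"])

gcm_future = pd.read_csv("cgcm3_b1_2071_2099.csv")   # hypothetical GCM predictors
tmax_future = model.predict(gcm_future[cols])        # downscaled daily Tmax
```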
NSGIC State | GIS Inventory — Environmental Modeling dataset current as of 1999. Florida Adopted TMDLs. What is a TMDL (Total Maximum Daily Load)? A scientific determination of the maximum amount...
Izumi, Kenji; Bartlein, Patrick J.
2016-10-01
The inverse modeling through iterative forward modeling (IMIFM) approach was used to reconstruct Last Glacial Maximum (LGM) climates from North American fossil pollen data. The approach was validated using modern pollen data and observed climate data. While the large-scale LGM temperature reconstructions from IMIFM are similar to those calculated using conventional statistical approaches, the reconstructions of moisture variables differ between the two approaches. We used two vegetation models, BIOME4 and BIOME5-beta, with the IMIFM approach to evaluate the effects on the LGM climate reconstruction of differences in water use efficiency, carbon use efficiency, and atmospheric CO2 concentrations. Although lower atmospheric CO2 concentrations influence pollen-based LGM moisture reconstructions, they do not significantly affect temperature reconstructions over most of North America. This study implies that the LGM climate was very cold but not much drier than present over North America, which is inconsistent with previous studies.
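Schematically, inverse modeling through iterative forward modeling scans candidate climates, runs a vegetation model forward for each, and keeps the climate whose simulated output best matches the pollen-derived target. The sketch below illustrates that search loop with a placeholder forward model; it does not reproduce BIOME4/BIOME5-beta, and all variable ranges and target scores are invented for illustration:

```python
import itertools
import numpy as np

def forward_model(mtco, gdd5, alpha, co2):
    """Placeholder for a vegetation model returning a biome-score vector.
    A real model would also respond to the co2 argument."""
    return np.array([mtco / 10.0, gdd5 / 1000.0, alpha])   # illustrative only

target = np.array([-2.1, 1.4, 0.55])   # hypothetical pollen-derived scores

best = min(
    itertools.product(np.arange(-40, 5, 1.0),       # mean temp. of coldest month (degC)
                      np.arange(0, 4000, 100.0),    # growing degree days above 5 degC
                      np.arange(0.1, 1.0, 0.05)),   # moisture index alpha
    key=lambda c: np.sum((forward_model(*c, co2=185.0) - target) ** 2),
)
print("best-fit (MTCO, GDD5, alpha):", best)
```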
Establishing the limits of efficiency of perovskite solar cells from first principles modeling
Grånäs, Oscar; Vinichenko, Dmitry; Kaxiras, Efthimios
2016-11-01
The recent surge in research on metal-halide-perovskite solar cells has led to a seven-fold increase in efficiency, from ~3% in early devices to over 22% in research prototypes. Oft-cited reasons for this increase are: (i) a carrier diffusion length reaching hundreds of microns; (ii) a low exciton binding energy; and (iii) a high optical absorption coefficient. These hybrid organic-inorganic materials span a large chemical space with the perovskite structure. Here, using first-principles calculations and thermodynamic modelling, we establish that, given the range of band gaps of the metal-halide perovskites, the theoretical maximum efficiency limit is in the range of ~25-27%. Our conclusions are based on the effect of level alignment between the perovskite absorber layer and the carrier-transporting materials on the performance of the solar cell as a whole. Our results provide a useful framework for experimental searches toward more efficient devices.
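The generic starting point for such limits is a detailed-balance (Shockley-Queisser-type) estimate; the hedged sketch below computes the radiative efficiency limit versus band gap with the sun modeled as a 5800 K black body, and does not reproduce the paper's first-principles level-alignment analysis:

```python
# Detailed-balance efficiency limit: absorbed solar photon flux above the gap
# sets the photocurrent, the cell's own black-body emission sets the radiative
# recombination current, and power is maximized over voltage.

import numpy as np

K = 8.617333262e-5       # Boltzmann constant, eV/K
T_SUN, T_CELL = 5800.0, 300.0
F_SUN = 2.16e-5          # geometric dilution of solar black-body radiation

E = np.linspace(0.31, 4.5, 4000)       # photon energy grid, eV

def trap(y, x):
    """Trapezoid rule (kept explicit for NumPy-version portability)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def bb(T):
    # black-body photon flux per unit energy; the common prefactor is
    # omitted because it cancels in the efficiency ratio
    return E**2 / np.expm1(E / (K * T))

phi_sun, phi_cell = F_SUN * bb(T_SUN), bb(T_CELL)
p_in = trap(E * phi_sun, E)            # incident power (same omitted prefactor)

def efficiency(eg):
    m = E >= eg
    j_sc = trap(phi_sun[m], E[m])      # photogenerated current (per unit charge)
    j0 = trap(phi_cell[m], E[m])       # radiative recombination current
    v = np.linspace(0.0, eg, 2000)
    p_out = v * (j_sc - j0 * np.expm1(v / (K * T_CELL)))
    return p_out.max() / p_in

for eg in (1.1, 1.3, 1.6, 2.3):
    print(f"Eg = {eg:.2f} eV -> radiative-limit efficiency ~ {efficiency(eg):.1%}")
```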
Rezende, L. C.; Arenque, B.; von Randow, C.; Moura, M. S.; Aidar, S. D.; Buckeridge, M. S.; Menezes, R.; Souza, L. S.; Ometto, J. P.
2013-12-01
The Caatinga biome in the semi-arid region of northeastern Brazil is extremely important due to its biodiversity and endemism. This biome, which is under high anthropogenic influence, presents high levels of environmental degradation, with land use among the main causes. Simulations of land cover and vegetation dynamics under different climate scenarios are important for predicting environmental risks and determining sustainable pathways for the planet in the future. Modeling of the vegetation can be performed with dynamic global vegetation models (DGVMs). DGVMs simulate surface processes (e.g., transfer of energy, water, CO2, and momentum); plant physiology (e.g., photosynthesis, stomatal conductance); phenology; gross and net primary productivity and respiration; plant species classified by functional traits; competition for light, water, and nutrients; and soil characteristics and processes (e.g., nutrients, heterotrophic respiration). Currently, most of the parameters used in DGVMs are static pre-defined values, and the lack of observational information to aid in choosing the most adequate values for these parameters is particularly critical for the semi-arid regions of the world. Using historical meteorological data and measurements of carbon assimilation, we aim to calibrate the maximum carboxylation velocity (Vcmax) for the native species Poincianella microphylla, which is abundant in the Caatinga region. The field data (collected at Lat: 9°2' S, Lon: 40°19' W) covered two contrasting meteorological conditions, with precipitation of 16 mm and 104 mm prior to the sampling campaigns (April 9-13, 2012 and February 4-8, 2013, respectively). Calibration (obtaining values of Vcmax more suitable for the Caatinga vegetation) was performed with a pattern-recognition algorithm, Classification And Regression Tree (CART), using the calculated vapor pressure deficit (VPD) as the attribute for discrimination.
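A minimal sketch of the CART calibration idea: a regression tree that partitions gas-exchange observations by VPD to yield condition-specific Vcmax values. The data file, column names, and tree settings below are assumptions, not the study's setup:

```python
# Regression tree on VPD: the fitted thresholds split the observations into
# meteorological regimes, and each leaf's mean gives a regime-specific Vcmax.

import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

obs = pd.read_csv("poincianella_gas_exchange.csv")   # hypothetical field data

X = obs[["vpd"]]      # vapor pressure deficit as the splitting attribute (kPa)
y = obs["vcmax"]      # leaf-level Vcmax estimates (umol m-2 s-1)

tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=5).fit(X, y)
print(export_text(tree, feature_names=["vpd"]))      # inspect the VPD thresholds
```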