Search results for "PROB"

Showing 10 of 8859 documents.

Reduced Order Models for Pricing American Options under Stochastic Volatility and Jump-diffusion Models

2016

American options can be priced by solving linear complementarity problems (LCPs) with parabolic partial(-integro) differential operators under stochastic volatility and jump-diffusion models such as the Heston, Merton, and Bates models. These operators are discretized using finite difference methods, leading to a so-called full order model (FOM). Here, reduced order models (ROMs) are derived employing proper orthogonal decomposition (POD) and non-negative matrix factorization (NNMF) in order to make pricing much faster within a given model parameter variation range. The numerical experiments demonstrate orders-of-magnitude faster pricing with ROMs.
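The POD step described above can be sketched in a few lines of NumPy: snapshot solutions (stored as columns) are compressed into an orthonormal basis of leading left singular vectors, and the FOM operator is Galerkin-projected onto that basis. This is a hypothetical minimal example; the random `snapshots`, the stand-in operator `A`, and the sizes are illustrative, not the paper's pricing data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 200, 30, 5                # grid size, number of snapshots, reduced rank
snapshots = rng.standard_normal((n, m))   # stand-in for FOM price snapshots

# POD basis: leading r left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
U_r = U[:, :r]

A = rng.standard_normal((n, n))     # stand-in for the discretized FOM operator
A_r = U_r.T @ A @ U_r               # Galerkin-projected r x r reduced operator
```

Solving the r x r reduced system instead of the n x n full system is where the orders-of-magnitude speed-up comes from.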

Keywords: Mathematical optimization; Stochastic volatility; Discretization; Computer science; Jump diffusion; Finite difference method; Numerical & computational mathematics; Natural sciences; Non-negative matrix factorization; Applied mathematics; Valuation of options; Linear complementarity problem; Range (statistics); General Earth and Planetary Sciences; Reduced order model; Finite difference methods for option pricing; Mathematics; American option; Option pricing; General Environmental Science. Published in: Procedia Computer Science
researchProduct

Iterative Methods for Pricing American Options under the Bates Model

2013

We consider the numerical pricing of American options under the Bates model which adds log-normally distributed jumps for the asset value to the Heston stochastic volatility model. A linear complementarity problem (LCP) is formulated where partial derivatives are discretized using finite differences and the integral resulting from the jumps is evaluated using simple quadrature. A rapidly converging fixed point iteration is described for the LCP, where each iterate requires the solution of an LCP. These are easily solved using a projected algebraic multigrid (PAMG) method. The numerical experiments demonstrate the efficiency of the proposed approach. Furthermore, they show that the PAMG meth…
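A core ingredient of solvers like the one above is a projected iteration for the LCP. The sketch below uses projected Gauss-Seidel, the kind of projected smoother a PAMG method builds on; it is a simplified stand-in for, not a reproduction of, the paper's solver, and the tiny system is made up for illustration.

```python
import numpy as np

def projected_gauss_seidel(A, b, psi, iters=200):
    """Solve the LCP  x >= psi,  A x >= b,  (x - psi)^T (A x - b) = 0
    by Gauss-Seidel sweeps projected onto the constraint x >= psi."""
    x = psi.astype(float).copy()
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            # Gauss-Seidel update using the latest values of the other unknowns.
            resid = b[i] - A[i] @ x + A[i, i] * x[i]
            x[i] = max(psi[i], resid / A[i, i])   # project onto the obstacle
    return x

# Tiny diagonally dominant system with one active constraint.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 0.0, 1.0])
psi = np.array([0.2, 0.2, 0.2])
x = projected_gauss_seidel(A, b, psi)
```

In the American-option setting, `psi` plays the role of the payoff (early-exercise obstacle) and `A` the discretized operator.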

Keywords: Mathematical optimization; Stochastic volatility; Discretization; Iterative method; Computer science; Finite difference method; Linear complementarity problem; Quadrature (mathematics); Multigrid method; Fixed-point iteration; Bates model; General Earth and Planetary Sciences; Partial derivative; American option; General Environmental Science. Published in: Procedia Computer Science

Do videowikis on the web support better (constructivist) learning in the basics of information systems science?

2012

This paper describes the combination of a wiki and screen-capture videos as a complementary addition to conventional lectures in an information management and information systems development course. Our basis was collaborative problem-based learning, with the problems defined by students. The idea was that students were expected to find concepts or issues from four lecture themes that were not well defined or clear to them. The students worked in small groups of two or three, or completed the coursework individually. First, the students selected the theme that was most unclear to them. Second, the students selected the problematic issues from this area and created the pre…

Keywords: Multimedia; Computer science; Collaborative learning; Connectivism; Jigsaw; Constructivist teaching methods; Problem-based learning; Constructivism (philosophy of education); Coursework; Computers and education; Mathematics education; Information system

IMEX schemes for pricing options under jump–diffusion models

2014

We propose families of IMEX time discretization schemes for the partial integro-differential equation derived for the pricing of options under a jump-diffusion process. The schemes include the families of IMEX-midpoint, IMEX-CNAB and IMEX-BDF2 schemes. Each family is defined by a convex combination parameter in [0, 1], which divides the zeroth-order term due to the jumps between the implicit and explicit parts in the time discretization. These IMEX schemes lead to tridiagonal systems, which can be solved extremely efficiently. The schemes are studied through Fourier stability analysis and numerical experiments. It is found that, under suitable assumptions and time step restric…
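The efficiency claim above rests on the implicit part producing a tridiagonal system, solvable in O(n) by the Thomas algorithm. The sketch below takes one hypothetical IMEX-Euler-type step for a toy problem u_t = u_xx + J(u), treating the stiff diffusion implicitly and lagging a stand-in jump term explicitly; the discretization details are illustrative, not the paper's schemes.

```python
import numpy as np

def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system in O(n) (forward elimination, back substitution)."""
    n = len(diag)
    b = diag.astype(float).copy()
    c = upper.astype(float).copy()
    d = rhs.astype(float).copy()
    for i in range(1, n):
        w = lower[i - 1] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# One IMEX step: (I - dt*D_xx) u_new = u + dt * jump_term(u).
n = 50
dx = 1.0 / (n + 1)
dt = 1e-3
u = np.sin(np.pi * np.linspace(dx, 1 - dx, n))
jump = 0.1 * u                      # stand-in for the explicit jump integral
r = dt / dx ** 2
lower = np.full(n - 1, -r)
diag = np.full(n, 1 + 2 * r)
upper = np.full(n - 1, -r)
u_new = thomas(lower, diag, upper, u + dt * jump)
```

Because only the local diffusion operator is implicit, the dense matrix arising from the nonlocal jump integral never has to be inverted.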

Keywords: Numerical analysis; Mathematical optimization; Tridiagonal matrix; Discretization; Applied mathematics; Jump diffusion; Stability (probability); Term (time); Computational mathematics; Valuation of options; Convex combination; Linear multistep method; Mathematics. Published in: Applied Numerical Mathematics

Evaluating the performance of artificial neural networks for the classification of freshwater benthic macroinvertebrates

2014

Macroinvertebrates form an important functional component of aquatic ecosystems. Their ability to indicate various types of anthropogenic stressors is widely recognized which has made them an integral component of freshwater biomonitoring. The use of macroinvertebrates in biomonitoring is dependent on manual taxa identification which is currently a time-consuming and cost-intensive process conducted by highly trained taxonomical experts. Automated taxa identification of macroinvertebrates is a relatively recent research development. Previous studies have displayed great potential for solutions to this demanding data mining application. In this research we have a collection of 1350 …
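One of the classifier families compared in such studies, the multilayer perceptron trained by backpropagation, can be sketched compactly in NumPy. This is a toy stand-in: the layer sizes, synthetic data, and training loop are illustrative and have nothing to do with the paper's image collection.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(float)   # toy two-class labels

W1 = 0.5 * rng.standard_normal((2, 8))
b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()        # predicted class probability
    g_out = ((p - y) / len(y))[:, None]     # d(cross-entropy)/d(output logit)
    g_h = (g_out @ W2.T) * (1 - h ** 2)     # backpropagate through tanh
    W2 -= lr * (h.T @ g_out)
    b2 -= lr * g_out.sum(0)
    W1 -= lr * (X.T @ g_h)
    b1 -= lr * g_h.sum(0)

accuracy = float(((p > 0.5) == y).mean())
```

The other architectures mentioned in the keywords (radial basis function and probabilistic neural networks) differ mainly in the hidden-layer activation and training procedure.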

Keywords: Radial basis function network; Ecology; Artificial neural network; Computer science; Applied mathematics; Ecological modeling; Perceptron; Machine learning; Backpropagation; Computer science applications; Probabilistic neural network; Identification (information); Computational theory and mathematics; Modeling and simulation; Multilayer perceptron; Conjugate gradient method; Artificial intelligence; Ecology, evolution, behavior and systematics. Published in: Ecological Informatics

Real-time recognition of personal routes using instance-based learning

2011

Predicting routes is a critical enabler for many new location-based applications and services, such as warning drivers about areas prone to congestion or accidents. Hybrid vehicles can also utilize route prediction to optimize their charging and discharging phases. In this paper, a new lightweight route recognition approach using instance-based learning is introduced. In this approach, the current route is compared in real time against the route instances observed in the past, and the most similar route is selected. In order to assess the similarity between the routes, a similarity measure based on the longest common subsequence (LCSS) is employed, and an algorithm for incrementally evaluat…
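The LCSS-based similarity described above can be sketched with the standard dynamic program, matching two trace points when they lie within a distance threshold. This is a hypothetical minimal version (grid coordinates, `eps`, and the normalization are illustrative, not the paper's exact measure):

```python
def lcss(route_a, route_b, eps=1.0):
    """Length of the longest common subsequence of two point traces,
    matching points whose coordinates differ by at most eps."""
    n, m = len(route_a), len(route_b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            (xa, ya), (xb, yb) = route_a[i - 1], route_b[j - 1]
            if abs(xa - xb) <= eps and abs(ya - yb) <= eps:
                dp[i][j] = dp[i - 1][j - 1] + 1     # points match: extend
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def similarity(a, b, eps=1.0):
    """Normalized LCSS similarity in [0, 1]."""
    return lcss(a, b, eps) / min(len(a), len(b))

past_route = [(0, 0), (1, 0), (2, 1), (3, 2)]
current = [(0, 0.2), (1, 0.1), (2, 0.9)]
score = similarity(past_route, current)
```

Route recognition then amounts to taking the argmax of `similarity` over the stored route instances; the incremental evaluation the abstract mentions avoids recomputing the full table as new GPS points arrive.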

Keywords: Similarity (geometry); Computer science; Similarity measure; Machine learning; Longest common subsequence problem; Global Positioning System; Route recognition; Instance-based learning; Artificial intelligence. Published in: 2011 IEEE Intelligent Vehicles Symposium (IV)

Omission of Causal Indicators: Consequences and Implications for Measurement – A Rejoinder

2016

Keywords: Statistics and probability; Applied mathematics; Social sciences; Social sciences methods; Research needs; Education; Sociology; Economics and business; Causal indicators; Measurement; Positive economics; Psychology; Social psychology; Business & management; Causal model. Published in: Measurement: Interdisciplinary Research and Perspectives

Context–content systems of random variables : The Contextuality-by-Default theory

2016

This paper provides a systematic yet accessible presentation of the Contextuality-by-Default theory. The consideration is confined to finite systems of categorical random variables, which allows us to focus on the basics of the theory without using full-scale measure-theoretic language. Contextuality-by-Default is a theory of random variables identified by their contents and their contexts, so that two variables have a joint distribution if and only if they share a context. Intuitively, the content of a random variable is the entity the random variable measures or responds to, while the context is formed by the conditions under which these measurements or responses are obtained. A …
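The content/context bookkeeping above can be made concrete with a toy check of consistent connectedness: a content q1 measured in two different contexts should have the same marginal distribution in each. This is a hypothetical illustration with made-up numbers, not an example from the paper.

```python
# Joint distribution of (q1, q2) in context 1 and of (q1, q3) in context 2;
# the variables take values in {+1, -1}.
c1 = {(+1, +1): 0.4, (+1, -1): 0.1, (-1, +1): 0.1, (-1, -1): 0.4}
c2 = {(+1, +1): 0.3, (+1, -1): 0.2, (-1, +1): 0.2, (-1, -1): 0.3}

def marginal_q1(joint):
    """Marginal distribution of the first coordinate (content q1)."""
    return {v: sum(p for (a, _), p in joint.items() if a == v) for v in (+1, -1)}

m1, m2 = marginal_q1(c1), marginal_q1(c2)

# Consistent connectedness: q1 is distributed identically in every
# context containing it (here both marginals are {+1: 0.5, -1: 0.5}).
consistently_connected = all(abs(m1[v] - m2[v]) < 1e-12 for v in (+1, -1))
```

Note that only variables within the same context (e.g. q1 and q2 in context 1) get a joint distribution; q2 and q3 never do, exactly as the theory prescribes.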

Keywords: Theoretical computer science; Computer science; Applied mathematics; Couplings; Social sciences; Probabilistic logic; Context (language use); Natural sciences; Measure (mathematics); Experimental psychology; Connectedness; Kochen–Specker theorem; Random variables; Joint probability distribution; Physical sciences; Psychology and cognitive sciences; Contextuality; Negative number; General physics; Categorical variable; General psychology. Published in: Journal of Mathematical Psychology

Listwise Collaborative Filtering

2015

Recently, ranking-oriented collaborative filtering (CF) algorithms have achieved great success in recommender systems. They obtain state-of-the-art performance by estimating a preference ranking of items for each user rather than estimating the absolute ratings on unrated items (as conventional rating-oriented CF algorithms do). In this paper, we propose a new ranking-oriented CF algorithm, called ListCF. Following the memory-based CF framework, ListCF directly predicts a total order of items for each user based on similar users' probability distributions over permutations of the items, and thus differs from previous ranking-oriented memory-based CF algorithms that focus on predicting th…
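The listwise idea above can be given a flavor in a few lines: turn each user's ratings into a probability distribution over items (top-one/softmax probabilities) and compare users via those distributions rather than raw ratings. This is a hypothetical simplification; the actual ListCF works with distributions over permutations, and the ratings matrix here is made up.

```python
import numpy as np

ratings = np.array([[5.0, 3.0, 1.0],    # user 0
                    [4.5, 2.5, 1.0],    # user 1: same preference order
                    [1.0, 3.0, 5.0]])   # user 2: reversed preference

def top_one_probs(r):
    """Softmax of the ratings: probability each item is ranked first."""
    e = np.exp(r - r.max())
    return e / e.sum()

P = np.array([top_one_probs(r) for r in ratings])

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_01 = cosine(P[0], P[1])   # high: users 0 and 1 agree on the ordering
sim_02 = cosine(P[0], P[2])   # low: user 2 has the reverse preference
```

Neighbors found this way agree on *orderings* even when their absolute rating scales differ, which is the advantage the abstract claims over rating-oriented memory-based CF.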

Keywords: Computer science; Recommender system; Machine learning; Ranking; Collaborative filtering; Benchmark (computing); Probability distribution; Pairwise comparison; Data mining; Artificial intelligence; Focus (optics); Ranking-oriented collaborative filtering

Towards Computer-based Exams in CS1

2017

Even though IDEs are often a central tool when learning to program in CS1, many teachers still lean on paper-based exams. In this study, we examine the “test mode effect” in CS1 exams using the Rainfall problem. The test mode was two-phased. Half of the participants started working on the problem with pen and paper, while the other half had access to an IDE. After submitting their solution, all students could rework their solution on an IDE. The experiment was repeated twice during subsequent course instances. The results were mixed. From the marking perspective, there was no statistically significant difference resulting from the mode. However, the students starting with the paper-based pa…

Keywords: Examinations (education); Computer-assisted teaching; Multimedia; Computer science; Rainfall problem; Computer-based exams; Exams; Beginners; Programming; Computers and education; CS1