
AUTHOR

Janis Zuters


Near Real-Time Data Warehousing with Multi-stage Trickle and Flip

2011

A data warehouse is typically a collection of historical data designed for decision support, so it is updated from the sources periodically, mostly on a daily basis. Today's business, however, demands fresher data. Real-time warehousing is one of the trends towards accomplishing this, but there are a number of challenges on the way to true real-time operation. This paper proposes the 'Multi-stage Trickle & Flip' methodology for data warehouse refreshment. It is based on the 'Trickle & Flip' principle and extends it to further insulate loading and querying activities, thus enabling both of them to be more efficient.
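As a rough illustration of the base 'Trickle & Flip' principle the paper extends (the class and table layout below are hypothetical, and the multi-stage variant itself is not shown): new rows trickle into a staging copy while queries read a stable snapshot, and a periodic flip swaps the copies.

```python
class TrickleAndFlip:
    """Minimal sketch of the 'Trickle & Flip' idea: loading and querying
    touch different copies of a table, so neither blocks the other."""

    def __init__(self, rows=None):
        self.live = list(rows or [])     # stable snapshot that queries read
        self.staging = list(self.live)   # copy that absorbs trickle loads

    def trickle(self, row):
        # Loading touches only the staging copy, so queries stay fast.
        self.staging.append(row)

    def query(self):
        # Queries see the last flipped snapshot, unaffected by loading.
        return list(self.live)

    def flip(self):
        # Periodically make the staging copy the queryable one.
        self.live = list(self.staging)


dw = TrickleAndFlip([("2011-01-01", 100)])
dw.trickle(("2011-01-02", 120))
assert dw.query() == [("2011-01-01", 100)]  # fresh row not yet visible
dw.flip()
assert dw.query() == [("2011-01-01", 100), ("2011-01-02", 120)]
```

In a real warehouse the "copies" would be tables swapped by renaming, and the multi-stage extension would interpose further staging levels between loading and the queryable snapshot.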

Keywords: Multi-stage; Decision support system; Database; Computer science; Order (business); Trickle; Data warehouse

Modelling of Adequate Costs of Utilities Services

2016

The paper proposes a methodology for benchmark modelling of adequate costs of utilities services, based on data analysis of factual cases (with key performance indicators of utilities as the predictors). The proposed methodology was tested by modelling Latvian water utilities with three tools: (1) a classical multi-layer perceptron with the error back-propagation training algorithm, sharpened with task-specific monotony tests; (2) the fitting of a generalized additive model in the programming language R, which made it possible to evaluate the statistical significance and confidence bands of predictors; (3) the sequential iterative nonlinear regression proce…
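A minimal sketch of the monotony-test idea mentioned in point (1), under stated assumptions: the model here is a tiny gradient-descent linear fit standing in for the paper's multi-layer perceptron, and the KPI and cost figures are invented for illustration. The test probes the fitted model over a grid and checks that predicted cost never decreases as the KPI grows.

```python
def fit_linear(xs, ys, lr=0.01, epochs=2000):
    """Tiny gradient-descent fit of y = w*x + b; a hypothetical stand-in
    for the back-propagation-trained perceptron used in the paper."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return lambda x: w * x + b


def is_monotone_increasing(model, grid):
    """Task-specific monotony test: predictions must not decrease
    as the predictor (a utility KPI) grows."""
    preds = [model(x) for x in grid]
    return all(a <= b for a, b in zip(preds, preds[1:]))


# Invented example: a single KPI (say, network length) vs. adequate cost.
kpi = [1.0, 2.0, 3.0, 4.0, 5.0]
cost = [2.1, 3.9, 6.2, 8.1, 9.8]
model = fit_linear(kpi, cost)
assert is_monotone_increasing(model, [0.5 * i for i in range(1, 12)])
```

For a nonlinear model such as an MLP the same grid probe applies, which is precisely where a monotony test adds value: unlike a linear fit, an overfitted network can violate the expected KPI-cost monotony between training points.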

Keywords: Correlation; Mean squared error; Computer science; Multilayer perceptron; Generalized additive model; Statistics; Deviance (statistics); Performance indicator; Perceptron; Nonlinear regression

Realizing Undelayed N-step TD prediction with neural networks

2010

There exist various techniques for extending reinforcement learning algorithms, e.g., eligibility traces and planning. This paper proposes an approach that combines several such extension techniques: eligibility-like traces, approximators as value functions, and exploitation of a model of the environment. The resulting method, 'Undelayed n-step TD prediction' (TD-P), has produced competitive results under the conditions of a not fully observable environment.
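For orientation, a tabular sketch of the n-step TD prediction core that TD-P builds on (the paper's method additionally uses neural approximators and a model of the environment, none of which is shown here; the chain task at the end is invented):

```python
def n_step_td(episodes, n=3, alpha=0.1, gamma=0.9, n_states=5):
    """Tabular n-step TD prediction: the target for V(s_t) is the sum of
    the next n discounted rewards plus a bootstrapped value n steps ahead.
    Each episode is a list of (state, reward-received-after-state) pairs."""
    V = [0.0] * n_states
    for episode in episodes:
        states = [s for s, _ in episode]
        rewards = [r for _, r in episode]
        T = len(episode)
        for t in range(T):
            horizon = min(t + n, T)
            # Discounted rewards up to the horizon...
            G = sum(gamma ** (k - t) * rewards[k] for k in range(t, horizon))
            # ...plus a bootstrap from V if the horizon is not terminal.
            if horizon < T:
                G += gamma ** n * V[states[horizon]]
            V[states[t]] += alpha * (G - V[states[t]])
    return V


# Invented 5-state chain: reward 1 only on leaving the last state.
ep = [(0, 0.0), (1, 0.0), (2, 0.0), (3, 0.0), (4, 1.0)]
V = n_step_td([ep] * 200, n=3)
```

Values grow towards the rewarding end of the chain, approaching gamma-discounted returns; setting n=1 recovers ordinary TD(0), and larger n propagates reward information backwards in fewer episodes.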

Keywords: Dynamic programming; Artificial neural network; Computer science; Value (computer science); Reinforcement learning; Observable; Extension (predicate logic); Artificial intelligence
Venue: Melecon 2010 - 2010 15th IEEE Mediterranean Electrotechnical Conference

CN2-R: Faster CN2 with randomly generated complexes

2011

Among rule induction algorithms, the classic CN2 is still one of the most popular; the great number of enhancements and improvements proposed for it testifies to this. Despite the growth in computing capacity since the algorithm was proposed, one of its main issues remains resource demand. The proposed modification, CN2-R, substitutes the star concept of the original algorithm with a technique of randomly generated complexes in order to substantially improve running times without significant loss in accuracy.
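A hedged sketch of the random-complex idea, not the paper's algorithm: instead of maintaining and specializing a star (beam) of candidate complexes as in CN2, simply sample random conjunctions of attribute tests and keep the best one. The toy weather data and helper names are invented for illustration.

```python
import random

def random_complex(attributes, rng, max_tests=2):
    """Draw a random conjunction of attribute=value tests; this replaces
    CN2's beam ('star') search over specializations in the sketch."""
    chosen = rng.sample(sorted(attributes), rng.randint(1, max_tests))
    return {a: rng.choice(sorted(attributes[a])) for a in chosen}

def covers(complex_, example):
    return all(example[a] == v for a, v in complex_.items())

def best_random_rule(data, target, attributes, trials=200, seed=0):
    """Evaluate many random complexes and keep the most accurate one,
    breaking ties by coverage."""
    rng = random.Random(seed)
    best, best_key = None, (-1.0, -1)
    for _ in range(trials):
        c = random_complex(attributes, rng)
        covered = [ex for ex in data if covers(c, ex)]
        if not covered:
            continue
        acc = sum(ex["class"] == target for ex in covered) / len(covered)
        if (acc, len(covered)) > best_key:
            best, best_key = c, (acc, len(covered))
    return best, best_key


# Invented toy data: the class is 'play' exactly when the outlook is sunny.
data = [
    {"outlook": "sunny", "windy": "no", "class": "play"},
    {"outlook": "sunny", "windy": "yes", "class": "play"},
    {"outlook": "rain", "windy": "no", "class": "stay"},
    {"outlook": "rain", "windy": "yes", "class": "stay"},
]
attrs = {"outlook": {"sunny", "rain"}, "windy": {"yes", "no"}}
rule, (acc, cov) = best_random_rule(data, "play", attrs)
```

Sampling trades the cost of systematic specialization for a number of cheap evaluations, which is the source of the running-time improvement the abstract claims; accuracy depends on the number of trials.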

Keywords: Weighted majority algorithm; Theoretical computer science; Rule induction; Computer science; Population-based incremental learning; Stability (learning theory); Online machine learning; Probabilistic analysis of algorithms; Algorithm design; Star (graph theory); Algorithm
Venue: 2011 16th International Conference on Methods & Models in Automation & Robotics

Sequence Q-learning: A memory-based method towards solving POMDP

2015

A partially observable Markov decision process (POMDP) models a control problem in which states are only partially observable by the agent. The two main approaches to solving such tasks are value-function methods and direct search in policy space. This paper introduces the Sequence Q-learning method, which extends the well-known Q-learning algorithm towards the ability to solve POMDPs by adding a special sequence management framework: advancing from action values to "sequence" values and including the "sequence continuity principle".
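To make the memory-based angle concrete, here is a simplified sketch under stated assumptions: the Q-table is keyed on the last k observations rather than the hidden state, which illustrates only the memory idea, not the paper's sequence values or the sequence continuity principle. The TMaze environment and all names are hypothetical.

```python
import random
from collections import defaultdict, deque

def history_q_learning(env_step, reset, actions, episodes=2000, k=2,
                       alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Q-learning over a sliding window of the last k observations,
    a common memory-based workaround for partial observability."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        hist = deque([reset()], maxlen=k)
        done = False
        while not done:
            key = tuple(hist)
            if rng.random() < eps:                      # epsilon-greedy
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(key, act)])
            obs2, r, done = env_step(a)
            hist2 = deque(hist, maxlen=k)
            hist2.append(obs2)
            key2 = tuple(hist2)
            target = r if done else r + gamma * max(Q[(key2, a2)] for a2 in actions)
            Q[(key, a)] += alpha * (target - Q[(key, a)])
            hist = hist2
    return Q


class TMaze:
    """Hypothetical two-step T-maze POMDP: the cue seen at the start tells
    which way to turn at a junction whose observation is aliased."""
    def __init__(self, seed=1):
        self.rng = random.Random(seed)
    def reset(self):
        self.cue = self.rng.choice(["L", "R"])
        self.t = 0
        return self.cue
    def step(self, action):
        if self.t == 0:
            self.t = 1
            return "junction", 0.0, False  # same observation for both cues
        return "end", (1.0 if action == self.cue else 0.0), True


env = TMaze()
Q = history_q_learning(env.step, env.reset, ["L", "R"])
```

With k=1 the two junction situations collapse into one Q-table entry and no fixed policy can beat chance; with k=2 the history (cue, "junction") disambiguates them, so the learned values favour turning towards the cue.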

Keywords: Sequence; Computer science; Q-learning; Partially observable Markov decision process; Markov process; Context (language use); Markov model; Bellman equation; Artificial intelligence; Markov decision process
Venue: 2015 20th International Conference on Methods and Models in Automation and Robotics (MMAR)