# IMPACT: Parallel Algorithms and Financial Models

## Available subjects

**Summary of subproject "Parallel Algorithms and Financial Models"**

In the subproject "Parallel Algorithms and Financial Models" of the IMPACT project, the focus is on parallel models for relation detection (data mining, in particular dependency modeling) and for prediction in various financial markets (stock markets, international indices, etc.). "Parallel models" means that the models developed run both on a distributed network of workstations (in our case UNIX) and on a parallel computer (in our case a 128-node nCUBE2 parallel computer).

With respect to the kind of models, the emphasis is on GMDH models, where GMDH stands for Group Method of Data Handling. GMDH takes a neural-network-like approach in which layers of neurons with linear or quadratic transfer functions are built up, trained, tested and trimmed layer by layer.

An object-oriented implementation with a high level of object abstraction has been developed and realized; the parallel implementation is based on the so-called Divide & Conquer principle. As a consequence of object orientation, object abstraction and parallel execution, the resulting program (GMDH version 4) is very flexible with respect to experimentation: adding, changing and deleting objects is easy, and the response and training of the system are fast due to parallel execution. The parallel performance was found to be very good, with an efficiency of 80-90%.

This implementation is a sound basis for experimenting with different configurations of GMDH models (among others, various selection criteria and various variants of the basic GMDH, such as the powerful so-called heuristic-free GMDH) in order to find the optimal configuration for a given application. It is now also easy to mix or embed GMDH models with or in other models in order to investigate alternatives.

The GMDH models are currently used for relation detection (data mining, in particular dependency modeling): out of a (large) set of possible input variables, the variables primarily responsible for the measured output are to be detected on the basis of time-series data. Prediction of the future behavior of the output variables can then be based on the detected relation.
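The layer-by-layer scheme described above (build candidate neurons, train them, test them against an external criterion, trim the layer, repeat) can be sketched in Python. This is a minimal illustrative version, not the project's GMDH version 4 code: the function names (`fit_quadratic`, `gmdh`), the `width` and `max_layers` parameters, and the use of plain validation MSE as the external criterion are all assumptions made for the sketch.

```python
# Minimal GMDH (Group Method of Data Handling) sketch.
# Each candidate neuron is a quadratic "partial description" of two inputs:
#   y = a0 + a1*u + a2*v + a3*u^2 + a4*v^2 + a5*u*v
# Layers are built up, trained on the training set, tested on a held-out
# validation set (the external criterion), and trimmed to the best `width`
# neurons. In the parallel version described in the text, the candidate
# fits within a layer are independent and could be divided over workers
# (Divide & Conquer); here everything runs sequentially.
import itertools

def solve(A, b):
    """Gaussian elimination with partial pivoting (for the normal equations)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def features(u, v):
    return [1.0, u, v, u * u, v * v, u * v]

def fit_quadratic(X, y):
    """Least-squares fit of the 6 quadratic coefficients via normal equations."""
    k = 6
    A = [[sum(features(*r)[i] * features(*r)[j] for r in X) for j in range(k)]
         for i in range(k)]
    b = [sum(features(*r)[i] * t for r, t in zip(X, y)) for i in range(k)]
    return solve(A, b)

def predict(coef, u, v):
    return sum(c * f for c, f in zip(coef, features(u, v)))

def mse(coef, X, y):
    return sum((predict(coef, u, v) - t) ** 2 for (u, v), t in zip(X, y)) / len(y)

def gmdh(train_X, train_y, val_X, val_y, width=4, max_layers=5):
    """Build, test and trim layers of quadratic neurons; stop when the
    external (validation) criterion no longer improves."""
    tr, va = train_X, val_X          # columns = current candidate inputs
    best_err, best_model, layers = float("inf"), None, []
    for _ in range(max_layers):
        if len(tr[0]) < 2:
            break
        cands = []
        for i, j in itertools.combinations(range(len(tr[0])), 2):
            coef = fit_quadratic([(r[i], r[j]) for r in tr], train_y)
            err = mse(coef, [(r[i], r[j]) for r in va], val_y)
            cands.append((err, i, j, coef))
        cands.sort(key=lambda c: c[0])
        kept = cands[:width]         # trim: keep only the best `width` neurons
        if kept[0][0] >= best_err:
            break                    # validation error stopped improving
        best_err = kept[0][0]
        layers.append([(i, j, coef) for _, i, j, coef in kept])
        # the kept neurons' outputs become the next layer's inputs
        tr = [[predict(c, r[i], r[j]) for _, i, j, c in kept] for r in tr]
        va = [[predict(c, r[i], r[j]) for _, i, j, c in kept] for r in va]
        best_model = list(layers)
    return best_model, best_err
```

Because each kept neuron records which two input columns it combines, the surviving input variables of the first layer are exactly the "actually relevant" variables the text mentions, which is how the same model serves both dependency detection and prediction.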

**People**

**Links**

Paul Water
Last modified: Mon Nov 15 15:49:07 MET 1999