
AUTHOR

I.Z. Mihu

showing 3 related works from this author

Statistical analysis of multilayer perceptrons performances

2002

The paper is based on a series of studies of the learning capabilities of multilayer perceptrons (MLPs). The complexity of these nonlinear systems can be varied, for instance by changing the number of hidden units, but this confronts us with a choice dilemma concerning the optimal complexity of the system for a given problem. By means of statistical methods, we have found that the effective number of hidden units is smaller than the potential size; some units have a "binary" activation level or an activation that is constant over time. We also show that initializing the weights to small values is recommended and reduces the effective size of the hidden layer.
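The kind of activation analysis described above can be sketched in a few lines. This is a minimal illustration, assuming tanh hidden units; the function names and thresholds are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_activations(X, W, b):
    """Hidden-layer outputs of a one-hidden-layer MLP with tanh units."""
    return np.tanh(X @ W + b)

def effective_hidden_units(H, sat_thresh=0.95, var_thresh=1e-3):
    """Count units that are neither saturated ("binary") nor constant.

    A unit is treated as "binary" when its activation is near +/-1 for
    most inputs, and as constant when its activation variance across
    inputs is negligible. Thresholds here are illustrative.
    """
    saturated = (np.abs(H) > sat_thresh).mean(axis=0) > 0.9
    constant = H.var(axis=0) < var_thresh
    return int((~saturated & ~constant).sum())

# 200 random inputs through a 10-unit hidden layer, with two weight
# scales; the counts depend on the weight scale used at initialization.
X = rng.normal(size=(200, 4))
W_small = rng.normal(scale=0.1, size=(4, 10))
W_large = rng.normal(scale=10.0, size=(4, 10))
b = np.zeros(10)

n_small = effective_hidden_units(hidden_activations(X, W_small, b))
n_large = effective_hidden_units(hidden_activations(X, W_large, b))
print(n_small, n_large)
```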

Keywords: Nonlinear system, Series (mathematics), Computer science, Initialization, Statistical analysis, Artificial intelligence, Perceptron, Algorithm
Published in: IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)

Improving Karhunen-Loeve based transform coding by using square isometries

2002

We propose, for an image compression system based on the Karhunen-Loeve transform implemented by neural networks, taking into consideration the 8 square isometries of an image block. The proper isometry, applied before the image block is fed to the neural network architecture, puts the 8*8 square image block into a standard position. The standard position is defined by the variances of its four 4*4 sub-blocks (a quad partition): it brings the sub-block with the greatest variance into a specific corner and the sub-block with the second-greatest variance into a specific adjoining corner (if this is not possible, the third-greatest is considered). The use of this "preprocessing" phase was e…
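The standardization step lends itself to a short sketch. Assuming an illustrative corner convention (highest-variance quadrant top-left, then highest possible variance top-right; the paper fixes its own specific corners), the proper isometry can be found by exhaustive search over the 8 candidates:

```python
import numpy as np

def isometries(block):
    """The 8 square isometries: 4 rotations, each optionally transposed."""
    out = []
    for k in range(4):
        r = np.rot90(block, k)
        out.append(r)
        out.append(r.T)
    return out

def subblock_variances(block):
    """Variances of the four 4x4 quadrants, in order: TL, TR, BL, BR."""
    return np.array([
        block[:4, :4].var(), block[:4, 4:].var(),
        block[4:, :4].var(), block[4:, 4:].var(),
    ])

def standardize(block):
    """Pick the isometry maximizing (TL variance, TR variance)
    lexicographically, so the highest-variance quadrant lands top-left
    and the adjoining top-right corner gets the highest variance still
    reachable. The corner choice here is an illustrative convention."""
    best, best_key = None, None
    for cand in isometries(block):
        v = subblock_variances(cand)
        key = (v[0], v[1])
        if best_key is None or key > best_key:
            best, best_key = cand, key
    return best

rng = np.random.default_rng(1)
blk = rng.normal(size=(8, 8))
std = standardize(blk)
```

Since the isometries only permute the four quadrants, the standardized block always carries its maximum-variance quadrant in the chosen corner.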

Keywords: Karhunen–Loève theorem, Theoretical computer science, Artificial neural network, Transform coding, Data compression, Image compression, Square (algebra), Block (data storage), Algorithm, Mathematics

Architectural improvements and FPGA implementation of a multimodel neuroprocessor

2003

Since neural networks (NNs) require an enormous amount of learning time, various kinds of dedicated parallel computers have been developed. In the paper, a 2-D systolic array (SA) of dedicated processing elements (PEs), also called systolic cells (SCs), is presented as the heart of a multimodel neural-network accelerator. The instruction set of the SA allows the implementation of several neural algorithms, including error back-propagation and a self-organizing feature map algorithm. Several special architectural facilities are presented in the paper in order to improve the 2-D SA's performance. A swapping mechanism for the weight matrix allows the implementation of NNs larger than the 2-D SA. A systo…
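The weight-swapping idea can be illustrated with a tile-by-tile matrix-vector product; the tile size, scheduling, and function names below are assumptions for the sketch, not the accelerator's actual design:

```python
import numpy as np

def tiled_matvec(W, x, pe_rows=4, pe_cols=4):
    """Matrix-vector product computed tile by tile, mimicking how a small
    processing-element array handles a weight matrix larger than itself:
    each (pe_rows x pe_cols) tile of W is "swapped in", multiplied
    against the matching slice of x, and the partial result accumulated.
    """
    n_out, n_in = W.shape
    y = np.zeros(n_out)
    for i in range(0, n_out, pe_rows):
        for j in range(0, n_in, pe_cols):
            tile = W[i:i + pe_rows, j:j + pe_cols]  # weights loaded into PEs
            y[i:i + pe_rows] += tile @ x[j:j + pe_cols]
    return y

rng = np.random.default_rng(2)
W = rng.normal(size=(10, 7))  # larger than the 4x4 array in both dimensions
x = rng.normal(size=7)
y = tiled_matvec(W, x)
```

The tiled result matches the full product `W @ x`, showing that swapping weight tiles through a fixed-size array changes only the schedule, not the computation.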

Keywords: Instruction set, Artificial neural network, Computer architecture, Computer science, Feature (machine learning), Systolic array, Parallel computing, Difference-map algorithm, Field-programmable gate array, Backpropagation, Word (computer architecture)
Published in: Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.