Search results
Keyword: multi-core
Results found: 3
EN
A variety of thermal models have been proposed to predict the temperatures inside modern processors. In this paper, we describe and compare two such approaches: a detailed FEM-based simulation and a simpler architectural compact model. Both models are shown to provide comparable results when predicting the maximal temperature; however, there are non-negligible differences when estimating thermal gradients within a chip. Furthermore, transient simulation results show some differences in the temperature profile during processor heating.
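The compact-model approach mentioned in the abstract can be illustrated with a minimal lumped-RC sketch: one chip block modeled as a single thermal resistance and capacitance, integrated with explicit Euler. All parameter values below are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch of a compact (lumped-RC) thermal model for one chip
# block; parameter values are illustrative assumptions only.

def simulate_block_temp(power_w, r_th, c_th, t_amb=45.0, dt=1e-3, steps=5000):
    """Explicit-Euler transient of dT/dt = (P - (T - T_amb)/R_th) / C_th."""
    temp = t_amb
    for _ in range(steps):
        temp += dt * (power_w - (temp - t_amb) / r_th) / c_th
    return temp

# The steady state approaches T_amb + P * R_th once the transient dies out:
# here 45 + 20 * 0.8 = 61 C.
print(round(simulate_block_temp(power_w=20.0, r_th=0.8, c_th=0.05), 2))
```

A detailed FEM simulation would instead discretize the die into many such elements and resolve lateral heat flow, which is where the thermal-gradient differences noted above come from.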
EN
Multi-core processors have already become the dominant design for general-purpose CPUs. Incarnations of this technology are present in solutions dedicated to areas such as computer graphics, signal processing, and computer networking. Since the key functionality of network core components is fast packet servicing, multi-core technology, with its multitasking ability, seems well suited to supporting packet processing. Dedicated network processors offer very good performance but at high cost. General-purpose CPUs achieve impressive performance, thanks to task distribution across several available cores, at relatively low cost. The idea analyzed in this paper is to use a general-purpose CPU to provide network core functionality. For this purpose a parameterized system model has been created, which represents general core-networking needs. The model analyzes the influence of system parameters on system performance.
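The kind of parameterized throughput model the abstract describes can be sketched in a few lines. The Amdahl-style contention term and all parameter names below are illustrative assumptions, not the paper's actual model.

```python
# Toy parameterized model of packet throughput on a general-purpose
# multi-core CPU; the serial-fraction contention term is an assumption.

def packet_throughput(n_cores, per_core_pps, serial_fraction=0.05):
    """Aggregate packets/s when a fraction of per-packet work is
    serialized (e.g. a shared NIC queue), following Amdahl's law."""
    speedup = 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)
    return per_core_pps * speedup

# Throughput scales sub-linearly as cores are added.
for cores in (1, 2, 4, 8):
    print(cores, round(packet_throughput(cores, 1_000_000)))
```

Varying `serial_fraction` and `per_core_pps` is one way such a model can expose how individual system parameters influence overall performance.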
EN
In this paper, the implementation of a Parallel Genetic Algorithm (PGA) for the training stage and the optimization of monolithic and modular neural networks for pattern recognition are presented. The optimization consists in obtaining the best architecture in layers and neurons per layer, achieving the lowest training error in a shorter time. The implementation was performed on a multi-core architecture, using parallel programming techniques to exploit its resources. We present the results obtained in terms of performance by comparing the training stage for sequential and parallel implementations.
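A PGA searching over layer and neuron counts, with fitness evaluations dispatched in parallel, can be sketched as follows. The fitness function is a stand-in for real network training (its optimum of two 16-neuron layers is purely hypothetical), and all GA parameters are illustrative assumptions; a `ProcessPoolExecutor` would replace the thread pool to exploit multiple cores on a genuinely CPU-bound training workload.

```python
# Sketch of a parallel genetic algorithm over layer/neuron counts;
# fitness is a stand-in for training error, parameters are assumptions.
import random
from concurrent.futures import ThreadPoolExecutor

random.seed(0)

def fitness(arch):
    # Stand-in for training error: pretends the best network has two
    # hidden layers of 16 neurons each (a hypothetical optimum).
    return sum((n - 16) ** 2 for n in arch) + 10 * abs(len(arch) - 2)

def mutate(arch):
    arch = list(arch)
    i = random.randrange(len(arch))
    arch[i] = max(1, arch[i] + random.choice((-4, -1, 1, 4)))
    return tuple(arch)

def evolve(pop_size=20, generations=30, workers=4):
    pop = [tuple(random.randint(1, 64) for _ in range(random.randint(1, 3)))
           for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            scores = list(pool.map(fitness, pop))  # parallel evaluation
            ranked = [a for _, a in sorted(zip(scores, pop))]
            parents = ranked[:pop_size // 2]       # keep the best half
            pop = parents + [mutate(random.choice(parents))
                             for _ in range(pop_size - len(parents))]
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Parallelizing only the fitness evaluations, as above, mirrors the common PGA design in which the expensive training runs dominate and the selection/mutation bookkeeping stays sequential.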