Group for Computer Architecture and Design
Department of Computer and Information Science (IDI)
Norwegian University of Science and Technology (NTNU)


Research plan


Table of contents

  1. Status
  2. Scientific profile
  3. Areas of research (Activities and interests)
    1. Field Programmable Gate Arrays (FPGA) and Dynamically Reconfigurable Hardware (continued)
    2. Modelling Communication in Message-Passing MIMD Computers (continued)
    3. Neural networks/RENNS (continued/reduced)
    4. HDL simulation and synthesis (continued/new)
    5. BSPlab - a laboratory for experimenting with BSP-programs on different parallel architectures (continued/new)
    6. Application Oriented Architecture Design on a Flexible Implementation Platform (continued/new/"glue activity")
    7. Relations between the research themes
  4. Goals (1997--2002)
  5. Actions (1997--2002)
  6. Expectations about the future
  7. Publications (since 1992)

1. Status

Faculty
Professor emeritus and scientific adviser Olav Landsverk
Professor in Computer Design N N
Professor in Computer Architecture Lasse Natvig
Professor-II in Computer Design N N
Associate Professor (førsteamanuensis) Olav B. Brusdal
Associate Professor (førsteamanuensis) Pauline Haddow
Assistant Professor (amanuensis) Jan A. Mathisen

Courses given


40031 Laboratory (for faculty 43)
78001 Laboratory (for faculty 9)
78014 Computers, basic course
SIE4005 Digital design and computer fundamentals
78060 Computer systems
78062 Computer design
78064 Computer architecture
78076 Computer projects
78912 Computer architecture 2

Dr. students
Jarle Greipsland, Neural networks, RENNS multiprocessor (expected to finish 1998)
Knut Førland, HDL Based Architectural Synthesis (from 1 June 1997, expected to finish 2001)
Kjell Magne Sæterbø, Topic not decided yet (from 1 September 1998, expected to finish 2002)

Other facts:
Phone: +47 7359 3440
Fax: +47 7359 4466
Our homepage

2. Scientific profile

The scope of the group is computer systems technology, comprising computer architecture and design. The research forms a base for educating siv.ing. and dr.ing. candidates who are in high demand and who have a broad, up-to-date and practical knowledge of both hardware and system software. The focus is on the interaction and partitioning between hardware and software, and in particular on reconfigurable hardware as a compromise between the two.

3. Areas of research (Activities and interests)

The group faces a tradeoff between the broad knowledge needed to master the whole design process of a computer system and the need to focus on smaller research subtopics in order to obtain good research results in the international arena. Based on expectations about future technical developments (see Section 6) and on existing and new competence in the group, we will focus on six research themes over the next 5 years. At the end of this section we describe how these activities constitute an integrated research effort.

3.1 Field Programmable Gate Arrays (FPGA) and Dynamically Reconfigurable Hardware (continued)
Reconfigurable hardware makes it possible to combine the advantages of a computer architecture adapted to a specific task with the flexibility of a programmed solution. This facilitates hardware/software codesign and contributes to a comprehensive view of the design of data processing systems. A pure hardware solution, e.g. an application-specific integrated circuit, has the advantages of speed and compactness. A pure software solution, on the other hand, has its advantages in flexibility and in its ability to adapt to specific applications. Dynamically reconfigurable hardware is to a certain extent a compromise between these two extremes, and will in many situations be a valuable addition to the tools available in the design of computer systems. Since the group is concentrating on the intersection between hardware and software, and has experience within this field, we will continue the research in this area. (The dr.ing. research of Jan Anders Mathisen is in this area.)

Responsible: Jan Anders Mathisen

3.2 Modelling Communication in Message-Passing MIMD Computers (continued)
The communication bottleneck in parallel computers is an acknowledged limitation on reaching performance goals. Current research aimed at reducing this limitation faces two problems: 1) proposed solutions that increase performance often also increase cost, and 2) performance models are hard to compare because of simplifying or restricting assumptions. The major goal of this work is to establish a flexible model structure providing cost and performance analysis. To this end, a hierarchical, modular structure is being developed, parameterised by design issues at the top level and including synthesised HDL modules at the bottom level.
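
As an illustration of what "parameterised by design issues at the top level" could mean in practice, the sketch below gives a first-order, textbook-style latency estimate for a 2D mesh. It is not the model under development; all names, parameters and formulas here are illustrative assumptions only, and the real model replaces such closed-form estimates with synthesised HDL modules at the bottom level.

    // Illustrative sketch only -- not the group's model. A first-order latency
    // estimate parameterised by a few top-level design issues (mesh size,
    // routing scheme, link speed).
    #include <cstdio>

    struct MeshParams {
        int    dim;              // mesh is dim x dim nodes
        double t_startup;        // per-message startup cost (s)
        double t_hop;            // per-hop header latency (s)
        double bandwidth;        // link bandwidth (bytes/s)
        bool   wormhole;         // wormhole vs. store-and-forward routing
    };

    // Estimated latency for one message of `bytes` bytes travelling `hops` hops.
    double estimate_latency(const MeshParams& p, int hops, int bytes) {
        double transfer = bytes / p.bandwidth;
        if (p.wormhole)          // pipelined: per-hop cost is paid by the header only
            return p.t_startup + hops * p.t_hop + transfer;
        else                     // store-and-forward: the whole message is forwarded per hop
            return p.t_startup + hops * (p.t_hop + transfer);
    }

    int main() {
        MeshParams m{16, 1e-6, 50e-9, 200e6, true};
        int hops = 2 * (m.dim - 1);  // worst-case corner-to-corner distance
        std::printf("worst-case latency for 1 KB: %g s\n",
                    estimate_latency(m, hops, 1024));
        return 0;
    }
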
Responsible: Pauline Haddow

3.3 Neural networks/RENNS (continued/reduced)
Nearly ten years ago, neural networks were selected to link the research activities within the group to an advanced application of computer technology. Neural networks lend themselves to parallel processing, as they are inherently massively parallel, and to reconfigurability, as there is a variety of neural network architectures. The group has built a dynamically reconfigurable multiprocessor called RENNS (Reconfigurable Neural Network Server). RENNS is a research environment for experimentation with neural networks. The implementation is based on a fairly general emulator using both a parallel architecture and dynamically reconfigurable hardware. Its general form is thus a dynamically reconfigurable parallel processing system that may be adapted to various applications, in particular different neural network architectures. It consists of a set of modules comprising processor, memory and communication. These modules may be combined freely into a number of configurations. Reconfiguration is performed dynamically by loading a configuration file. The project has resulted in four dr.ing. theses and numerous publications (see Section 7). The group has one dr.ing. student finishing his work in the project.

Responsible: Olav Landsverk and Jarle Greipsland

3.4 HDL simulation and synthesis (continued/new)
Synthesis is a general term for the process of converting a high-level design description to a lower, more detailed level. The goal is to arrive at an implementation, or to come closer to what can be implemented automatically by further synthesis steps. CAD tools and libraries play crucial roles in the process, and their use by highly skilled designers is necessary to reduce design time and cost. The description is most often written in a hardware description language (HDL) such as VHDL or Verilog.

The group has experience within system-level simulation, which is strongly linked to this area. The group wants to improve its knowledge of developing system descriptions in synthesisable HDL, of the various synthesis steps and of the relevant tools. This is to speed up the process of bringing architectural concepts and system designs to implementations in hardware and in combined HW/SW systems.

There are three levels of synthesis: 1) architectural (or algorithmic, or behavioral) synthesis, 2) RTL (register transfer level) synthesis and 3) logic-level (or gate-level) synthesis. At the logic level there are CAD tools doing most of the job. The group should be able to use such tools, but research in this area belongs to the Department of Physical Electronics, Faculty of Electrical Engineering and Telecommunication. RTL synthesis is a relatively mature area, but also very important in current and future HW design projects; it should be covered more thoroughly both in our teaching and in our research. Architectural synthesis is still in its infancy and will be a major research challenge for the group in the coming 5-10 years. (We will apply for dr.ing. scholarships in this area.)

Industrial companies close to NTNU, such as Nordic VLSI and Atmel, have excellent competence in RTL and logic-level synthesis. We will establish cooperation with these companies in this area. We will learn from them at the two lowest levels, and we hope to bring important research results to them from the highest level of synthesis. A new dr.ing. student, Knut Førland, will do research in this area.

Responsible: Knut Førland/Lasse Natvig/Torstein Heggebø (Nordic VLSI).

3.5 BSPlab - a laboratory for experimenting with BSP-programs on different parallel architectures (continued/new)
The diploma students Uthus and Dybdahl have developed a software laboratory for experimenting with real BSP programs on different parallel computer architectures. (BSP = Bulk Synchronous Parallel, a model proposed by Leslie Valiant.) The programs are written in C++ using the BSPlib standard. The main benefits of BSP are easier programming (since the BSPlib interface is very simple, and due to the semisynchronous nature of BSP computations) and the possibility of developing parallel programs that are both efficient and portable (a consequence of the BSP concept). The user may choose among several base architectures and may specify their central characteristics. Experiments running applications on a range of "computer instances" are easily conducted. In the long term, studies of BSP computations and architectures may give a theoretical and experimental base for the design of specialised parallel computers within the group, not unlike the role of neural networks for the RENNS project.
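
As a concrete example of the kind of program BSPlab is built to run, the minimal sketch below computes a parallel sum in supersteps. The core calls (bsp_begin, bsp_push_reg, bsp_put, bsp_sync) follow the published BSPlib interface; the header name and the surrounding set-up are assumptions and may differ in the BSPlab distribution.

    // A minimal BSP program in C++ against the BSPlib standard.
    #include <cstdio>
    #include "bsp.h"

    static void spmd_sum() {
        bsp_begin(bsp_nprocs());                // start the SPMD part on all processors
        int p   = bsp_nprocs();                 // number of processors
        int pid = bsp_pid();                    // this processor's id

        int  local   = pid + 1;                 // some local work (trivial here)
        int* partial = new int[p];              // one slot per processor on every node
        bsp_push_reg(partial, p * (int)sizeof(int)); // make the array remotely writable
        bsp_sync();                             // superstep ends: registration takes effect

        // Every processor writes its partial result into slot `pid` on processor 0.
        bsp_put(0, &local, partial, pid * (int)sizeof(int), (int)sizeof(int));
        bsp_sync();                             // superstep ends: communication completes

        if (pid == 0) {
            int total = 0;
            for (int i = 0; i < p; ++i) total += partial[i];
            std::printf("sum over %d processors = %d\n", p, total);
        }
        bsp_pop_reg(partial);
        delete[] partial;
        bsp_end();
    }

    int main(int argc, char** argv) {
        bsp_init(spmd_sum, argc, argv);         // must precede bsp_begin in SPMD programs
        spmd_sum();
        return 0;
    }
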

BSPlab was made available on the Internet in January 1998.

Responsible: Lasse Natvig + diploma students.

3.6 Application Oriented Architecture Design on a Flexible Implementation Platform (continued / new / "glue activity" )
This topic binds together the research efforts of our group. In recent years, much experience has been gained in this area through the RENNS project. In addition, numerous diploma projects have resulted in prototypes of application-specific computing systems realised in FPGAs.

In designing a computer system for a particular application, one basic approach is to adapt the structure of the system to the structure of the application. Development of basic knowledge in this area is part of the activities in the group. Included is the exploration of the conditions necessary to successfully adapt a parallel processing structure to an application. Methods and tools for such adaptations are also central issues. Application-specific computer architectures implemented using dynamically reconfigurable hardware offer very interesting possibilities for making such adaptations.

Our work continues in this field but is not limited to special-purpose designs. General-purpose architectures must also be designed with consideration given to the demands that applications will put on such machines. The use of simulation, synthesis and dynamically reconfigurable technology allows us to experiment with ideas at all levels of the design process.


Responsible: Pauline Haddow, Jan Anders Mathisen and Lasse Natvig.

3.7 Relations between the research themes

The following figure shows some of the relations between the research themes outlined above.

(figure omitted in this version)

The red arrows indicate how we aim to have projects that bring computational needs from applications down to the realisation of computer systems through the necessary levels of technology. This is central to research area 3.6. Linked to the red arrows is feedback from simulations and implementations done at the lower levels. The TRCsim project is a Verilog simulation model developed by Natvig during his sabbatical at Nordic VLSI. It links the BSPlab activity to the router work by Haddow. Message traces from BSP applications run on BSPlab, configured to a 2D mesh structure, give input to TRCsim. The TRCsim simulation may give performance results that are useful for "calibrating" the performance models used in BSPlab for this kind of message-passing architecture. HDL synthesis may give implementations of (parts of) these projects.
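
The exact trace format passed from BSPlab to TRCsim is not described here; purely as an illustration of the kind of information such a trace could carry, a record might look like the sketch below. All field names are hypothetical.

    // Hypothetical illustration only: a message-trace record of the kind a
    // BSPlab run on a 2D mesh could hand over to a router simulator such as
    // TRCsim. The real BSPlab/TRCsim trace format may differ.
    #include <cstdio>

    struct TraceRecord {
        long   superstep;        // superstep in which the message was issued
        double issue_time;       // issue time within the superstep
        int    src_x, src_y;     // source node coordinates in the 2D mesh
        int    dst_x, dst_y;     // destination node coordinates in the 2D mesh
        int    size_bytes;       // message payload size
    };

    // Write one record as a plain-text line that a simulator front end can parse.
    void write_record(std::FILE* f, const TraceRecord& r) {
        std::fprintf(f, "%ld %g %d %d %d %d %d\n",
                     r.superstep, r.issue_time,
                     r.src_x, r.src_y, r.dst_x, r.dst_y, r.size_bytes);
    }
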

4. Goals (1997--2002)

The group will: (Action numbers in parentheses)

5. Actions (1997--2002)


  1. Start activity within HDL synthesis. Keywords are dr.ing. student Knut Førland, getting CAD tools up and running (Intergraph/VeriBest), projects by diploma students, and collaboration with Nordic VLSI and/or Atmel, Trondheim.
  2. We will approach the Norwegian Research Council and a selection of relevant Norwegian industrial companies within our field of research with the aim of forming a joint application for 2-3 dr.ing. scholarships starting in 1998. An additional effect of visits to the selected companies will be feedback on our research and teaching. Existing contacts will be given first priority, among them Tandberg Telecom.
  3. Internal colloquia to increase our competence in new and emerging topics. In the autumn semester of 1997 we will carry out a colloquium on methods and technology for automated synthesis of hardware from HDL descriptions.
  4. We expect a development in computer architecture and design towards an increased use of formal methods and automated design. The group will strengthen its activities within these areas. The group will work towards getting a permanent "amanuensis" position with CAD of computer systems as its main responsibility. It is necessary for the group to obtain a broad and up-to-date knowledge of CAD for computer systems. The person should assist teaching and research within the group, besides being responsible for the CAD part of our research projects. As long as this position is lacking, the faculty of the group must take time from teaching and research to keep the software laboratory up and running. We also hope that new persons might strengthen our competence within formal methods.
  5. Improve the presentation of the group on the Internet (WWW). Try to attract more students.
  6. Improve visibility by giving high priority to publications, presentations and external activity. We will contact other Norwegian research groups within academia and industry that cover the field of computer architecture and design. At the international level, we will improve our cooperation with academic groups that focus on the same research themes as we do.
  7. We will try to get other researchers interested in some of the challenges that we face in our research. Examples are analysis for architecture tailoring and hardware compilation from HDL descriptions. On the other hand, we will look for well-defined application areas and computing needs among other IDI and NTNU projects. The first candidate is the CAGIS project.
  8. In addition, the faculty of the group will have to maintain a basic research activity in order to keep informed about the long-term development within its area of responsibility, the hardware/software interface and interaction. This includes areas such as hardware/software codesign, evolvable hardware, adaptable computer architectures, etc.


6. Expectations about the future


Horizon (2000-2015)

Computer systems will become increasingly complex with respect to both software and hardware. This, combined with the demand for shorter development periods, will strengthen the trend towards integration of ever more complex modules, together with a continuing increase in the use of automated design. Central keywords are synthesis and high-level specification languages.

Formal methods will be used for the design of systems comprising both hardware and software. In addition to the functional requirements, other requirements such as speed of operation, performance, reliability, power consumption, etc., will determine the partitioning between hardware and software. The design environment will evaluate the cost implications of these requirements and provide an interactive interface for the design decisions.

The group will put more effort into formal methods, techniques and tools that we expect will be used in industry 5-15 years from now. We will not focus on highly theoretical methods that we do not expect to be of practical use within this time span. Our focus will be on CAD of computer systems as a whole, and on the hardware/software interface. Other parts of IDI and NTNU cover CAD of hardware (electronics) and CAD of software.

A large part of the Norwegian computer and electronics industry develops small and specialized solutions for typical niche markets. These solutions will often demand small pieces of special-purpose hardware and integrating logic, and they will seldom be realizable solely with standard components. Current FPGA technology, and the related technologies that follow it, will be crucial in this design arena.

The border between hardware and software will continue to become more and more diffuse. The group sees this area as a key responsibility. In addition to continuously updating a broad and general competence in this field, the group will focus its research on advanced applications of some of the technologies that are judged to be central to Norwegian industry. At present, Field Programmable Gate Arrays are one such technology.

Society's total dependence on computer systems will make fault tolerance more important. System redundancy through parallel processing and dynamic reconfigurability may come to play important roles in achieving the needed level of reliability and serviceability. The fundamental limitations of metallic interconnections will continue to motivate putting a larger part of the whole system on a single chip or wafer.

As the prices of powerful processor chips continue to decrease and approach the handling costs, the use of parallel processing will become cost effective also in small systems such as workstations and PCs. Parallelism in hardware and software will become more important. The "speed-of-light argument" will force hardware to be parallel in order to offer the desired performance. The main challenge within parallel software will be to provide good standards for easy parallel programming without losing too much performance.

Advanced applications such as CSCW (Computer Supported Cooperative Work) and VR (Virtual Reality) will continue to increase the requirements for high-performance computing and communication. New technologies will continue to emerge, and the group will seek to utilize them effectively to meet the performance requirements.

Vision (2016-2025)

For the same reasons as outlined above, parallel processing will become even more dominant in this period. As system efficiency will be low unless most processors are active most of the time, automatic allocation of resources according to a task's demand will be mandatory. The architecture of such systems may therefore be dynamically reconfigurable parallel processor structures, combined with optimizing compilers and run-time systems that analyze and parallelize a program and optimize the parallel processor structure for running it.

In this context, the BSP model may, through its semisynchronous operation, offer a cost-effective solution for resource usage. A BSP computer processes bulks of computation asynchronously on each processor node, interleaved with synchronization operations. The synchronization is crucial for the overall performance, but poses very different requirements on the hardware compared to the local computations. An ideal solution would be a system where the hardware dynamically and autonomously alternates between a computing and a synchronizing role.
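
For reference, one commonly used formulation of the BSP cost of a superstep (following Valiant and McColl) is

    T = max_i(w_i) + g * h + l

where w_i is the local computation on processor i, h is the largest number of words sent or received by any one processor, g is the communication throughput parameter and l is the cost of the barrier synchronization. The l term makes explicit why cheap, possibly hardware-supported synchronization is decisive for overall performance.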

Learning systems will be more extensively used. Neural networks are one technology in this field. Such systems are able to learn to perform complex tasks without extensive programming. This makes it possible to replace software with hardware, and as hardware costs are decreasing while software costs are increasing, this may prove cost effective.

The learning capability may also be used in adaptable computer architectures in order to adapt the architecture to the prevailing workload. Throughout its life a system will be able to adapt itself to changes in its workload and environment and, as a part of this, continuously optimize the partitioning between hardware and software by use of dynamically reconfigurable hardware. The advantages of optical computing will become more dominant, and many new technologies and solutions will emerge.

7. Publications (since 1992)

Most of these papers are available in postscript format at the following URLs:
http://www.idi.ntnu.no/IDT/grupper/DM-grp/papers.html
http://www.idi.ntnu.no/~pauline/homepg/pub.html
http://www.idi.ntnu.no/~lasse/publications.html

The list is not complete. Project and diploma work is in general not included.

1998

1997
1996

1995

1994

1993

1992


Please send comments to Lasse Natvig
Last modified: Sat Aug 22 21:46:05 MET DST 1998