Notes on "Raising the Level of Abstraction:
A Signal Processing System Design Course"
Prof. Brian L. Evans
- Title slide: This graduate class has been taught three times at
UT Austin. It has roots in the Specification and Modeling of Reactive
Real-Time Systems graduate class, which was taught once at UC Berkeley
in Fall 1996 by Prof. Edward Lee.
(Seals should have been in color.)
- Introduction: The first three points address the challenges in
teaching signal processing systems: a great deal of heterogeneity
and complexity.
- Moore's law: exponential increase in functionality on a single chip.
What was a board-level design 6-8 years ago can now be placed on a
single chip. However, integrating a board of components onto a single
chip is difficult. (This is called a system-on-a-chip.)
- The number of processors has been increasing in desktop workstations
and next-generation systems. The second-generation cell phone digital
subsystem initially required 1-2 DSPs and a microcontroller but is now
done on a single DSP. Third-generation digital subsystems will need a
power-efficient general-purpose processor (such as the ARM) and a DSP
(such as the TI C6x).
- A traditional signal processing course focuses on one type of
algorithm (e.g., speech or image processing) and one type of
implementation (e.g., Matlab or a DSP).
- This new course raises the level of abstraction to cover multiple
styles of algorithms and implementation technologies by decoupling
system specification (algorithm-level) from implementation.
- Embedded Signal Processing Systems: Familiar graphical description
of the software and hardware technologies in embedded systems. This
diagram could represent a third-generation cell phone if you replace
the microcontroller with a low-power general-purpose processor such as
the ARM; the DSP could be a TI TMS320C6x, which is targeted at
third-generation wireless systems.
- Heterogeneity in a System-Level Design Flow: This demonstrates
the decoupling of system specification and implementation. The choice
of specification model greatly impacts the candidate implementations.
An imperative model restricts you to software (e.g., C) that has to
be compiled. A discrete-event model of combinational logic leads
to a hardware implementation. However, finite state machines and
dataflow models are amenable to both software and hardware, as the
sketch below illustrates.
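A minimal Python sketch of why an FSM specification is
implementation-neutral (my own illustration; the parity-detector
table below is hypothetical, not from the slides). The transition
table is pure data: in software it drives the interpreter shown here,
while in hardware the same table maps directly onto a state register
plus combinational next-state logic.

    TRANSITIONS = {                  # (state, input) -> next state
        ("s0", 0): "s0",
        ("s0", 1): "s1",             # a 1 toggles the state
        ("s1", 0): "s1",
        ("s1", 1): "s0",
    }

    def run_fsm(inputs, state="s0"):
        """Software path: interpret the table one input at a time."""
        for symbol in inputs:
            state = TRANSITIONS[(state, symbol)]
        return state

    print(run_fsm([1, 1, 1]))        # "s1": odd number of 1s seen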
- System Modeling:
In this slide, I connect subsystems familiar to the audience with
various models of computation. Composing models of computation yields
complex systems.
- Specification Using Hierarchical Block Diagrams: An example of
composing models of computation. This kind of composition could
model a second-generation cell phone. The discrete-event model
models events (such as call request) in continuous time. The FSM
models the protocol when a call has been accepted (setup and
speech transmission/reception modes). During reception, the
compressed speech stream is decoded, a task well suited to a
dataflow model; a sketch of this composition appears below.
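A minimal Python sketch of the FSM layer of such a composition (the
state names, events, and decode hook are my own hypothetical
illustration, not from the slides). Discrete-event activity such as a
call request arrives as events; in the speech state, each frame event
hands a compressed speech frame to a dataflow-style decoder.

    class CallFSM:
        def __init__(self, decode_frame):
            self.state = "idle"
            self.decode_frame = decode_frame  # stands in for the dataflow subsystem

        def on_event(self, event, payload=None):
            if self.state == "idle" and event == "call_request":
                self.state = "setup"          # raised by the discrete-event domain
            elif self.state == "setup" and event == "connected":
                self.state = "speech"
            elif self.state == "speech" and event == "frame":
                self.decode_frame(payload)    # delegate to the dataflow model
            elif event == "hangup":
                self.state = "idle"

    fsm = CallFSM(decode_frame=lambda frame: print("decoded", frame))
    fsm.on_event("call_request")
    fsm.on_event("connected")
    fsm.on_event("frame", b"\x01\x02")        # prints: decoded b'\x01\x02'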
- Simulation and Synthesis: The key here is that models with finite
state are preferred over models with infinite state in terms
of simulation time and optimal scheduling time. The optimal
scheduling result for SDF is for scheduling onto a single processor.
The problem is in general NP-hard (no polynomial-time algorithm is
known); a worked example of the SDF balance equations follows.
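A minimal Python sketch of the first step in SDF scheduling, solving
the balance equations for the repetitions vector (my own illustration;
the two-actor graph is hypothetical, not from the slides). On each
edge, a firing of the source produces a fixed number of tokens and a
firing of the sink consumes a fixed number, so a periodic schedule
must satisfy q[src]*produced == q[dst]*consumed on every edge.

    from collections import defaultdict, deque
    from fractions import Fraction
    from math import lcm

    def repetitions_vector(edges):
        """Solve q[src]*prod == q[dst]*cons over a connected SDF graph
        given as (src, dst, prod, cons) tuples. Returns the smallest
        positive integer solution, or None if the rates are
        inconsistent (no periodic schedule exists)."""
        adj = defaultdict(list)
        for src, dst, prod, cons in edges:
            adj[src].append((dst, Fraction(prod, cons)))
            adj[dst].append((src, Fraction(cons, prod)))
        q = {edges[0][0]: Fraction(1)}
        queue = deque([edges[0][0]])
        while queue:
            node = queue.popleft()
            for nbr, ratio in adj[node]:
                want = q[node] * ratio
                if nbr in q:
                    if q[nbr] != want:        # sample-rate inconsistency
                        return None
                else:
                    q[nbr] = want
                    queue.append(nbr)
        scale = lcm(*(f.denominator for f in q.values()))
        return {node: int(f * scale) for node, f in q.items()}

    # A produces 2 tokens per firing; B consumes 3 per firing.
    # 2*q[A] == 3*q[B] gives q = {A: 3, B: 2}, so one period of a
    # single-processor schedule is, e.g., A A B A B.
    print(repetitions_vector([("A", "B", 2, 3)]))  # {'A': 3, 'B': 2}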
- Educational Objectives: Self-explanatory. Slide is mostly text.
- Conclusion: Self-explanatory. Slide is mostly text.
Last updated 06/18/99.