Other Project Ideas
Domain Specific Processing: Recently there has been a renewed interest in instruction set design, specifically in instruction set extensions for particular application domains. Examples include multimedia (MMX, 3DNow!) and encryption (CryptoManiac). In this project, choose an application domain and isolate a set of instructions and datapaths that are common to that domain (not just to a single application within it). Implement these instructions and architecture modifications in a simulator and port a few applications.
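Below is a minimal sketch of how the semantics of one hypothetical domain-specific instruction might be modeled in a functional simulator. The example is a packed, unsigned-saturating byte add in the spirit of MMX's PADDUSB; the function name and the 64-bit register representation are illustrative assumptions, not taken from any particular simulator.

```c
/* Sketch: semantics of a hypothetical packed unsigned-saturating byte add,
 * as one might add to a functional simulator. Names are illustrative. */
#include <stdint.h>
#include <stdio.h>

/* Treat a 64-bit "register" as eight unsigned bytes; add lane-wise,
 * saturating at 255 instead of wrapping around. */
static uint64_t padd_usb(uint64_t ra, uint64_t rb)
{
    uint64_t result = 0;
    for (int lane = 0; lane < 8; lane++) {
        uint32_t a = (ra >> (8 * lane)) & 0xff;
        uint32_t b = (rb >> (8 * lane)) & 0xff;
        uint32_t sum = a + b;
        if (sum > 0xff)
            sum = 0xff;                       /* saturate rather than wrap */
        result |= (uint64_t)sum << (8 * lane);
    }
    return result;
}

int main(void)
{
    uint64_t ra = 0x00FF7F8001020304ULL;
    uint64_t rb = 0x0001FF7F01020304ULL;
    printf("%016llx\n", (unsigned long long)padd_usb(ra, rb));
    return 0;
}
```

The semantics are the easy part; the interesting work in this project is identifying which operations recur across the whole domain and what datapath changes they require.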
Statistical Methods for Architecture Evaluation: Evaluating new computer architectures is hampered by the slow speed of cycle-by-cycle simulation. Recent work on trace sampling and statistical simulation has shown that there are other viable methods, though they involve trade-offs beyond speed (accuracy, for example). There are many projects to be done here. You could design a new simulator that combines statistical or other models with traditional cycle-by-cycle simulation. Alternatively, you could explore speeding up cycle-by-cycle simulation by reducing the accuracy of certain components (such as the caches). Finally, you might explore validating both statistical and traditional simulators against real hardware (see the sim-alpha work from UT Austin).
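As one concrete illustration of the sampling approach, here is a minimal sketch of systematic trace sampling that alternates between a cheap fast-forward mode and a detailed timing mode, then extrapolates CPI from the sampled windows. The function names (fast_forward, simulate_detailed) and the interval sizes are placeholders, not part of any real simulator's API.

```c
/* Sketch: systematic sampling, assuming a simulator with separate fast
 * functional and detailed cycle-accurate modes. All names are placeholders. */
#include <stdio.h>

#define SAMPLE_INSNS 10000UL      /* detailed instructions per sample      */
#define SKIP_INSNS   990000UL     /* fast-forwarded instructions between   */
#define NUM_SAMPLES  100

/* Stubs standing in for the two simulation modes. A real implementation
 * would advance architectural state; here we just invent a cycle count. */
static void fast_forward(unsigned long insns) { (void)insns; }
static unsigned long simulate_detailed(unsigned long insns)
{
    return insns * 2;             /* pretend the machine runs at 0.5 IPC   */
}

int main(void)
{
    unsigned long sampled_insns = 0, sampled_cycles = 0;

    for (int i = 0; i < NUM_SAMPLES; i++) {
        fast_forward(SKIP_INSNS);                          /* cheap, untimed */
        sampled_cycles += simulate_detailed(SAMPLE_INSNS); /* expensive      */
        sampled_insns  += SAMPLE_INSNS;
    }

    /* Extrapolate whole-program performance from the sampled windows. */
    double cpi = (double)sampled_cycles / (double)sampled_insns;
    unsigned long total_insns = NUM_SAMPLES * (SAMPLE_INSNS + SKIP_INSNS);
    printf("estimated CPI %.2f, estimated total cycles %.0f\n",
           cpi, cpi * (double)total_insns);
    return 0;
}
```

The accuracy/speed trade-off shows up directly in the choice of SAMPLE_INSNS and SKIP_INSNS, and in how microarchitectural state (caches, predictors) is warmed up before each detailed window.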
Load/Store Ordering Structures: If only correctness weren't a requirement for program execution, we could build really fast machines. One of the major obstacles to extracting significant amounts of instruction-level parallelism is the requirement to maintain program ordering among loads and stores. There are structures that maintain this ordering, but they are far from scalable. Think about and design a scalable structure for supporting speculative loads.
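To make the scalability problem concrete, here is a minimal sketch of the associative check a conventional load queue performs when a store resolves its address: every younger load that already executed speculatively is scanned for an address match. The sizes and field names are illustrative, and a real LSQ would also handle partial overlaps, store-to-load forwarding, and memory consistency ordering.

```c
/* Sketch: load-queue ordering check on store address resolution. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LQ_SIZE 32

struct load_entry {
    bool     valid;
    bool     executed;   /* load already got its value speculatively */
    uint64_t addr;
    unsigned seq;        /* program order; larger = younger           */
};

static struct load_entry load_queue[LQ_SIZE];

/* Called when a store with sequence number 'store_seq' computes its address.
 * Returns the sequence number of the oldest mis-speculated load (to squash
 * from), or 0 if no ordering violation was found. */
static unsigned store_resolve(uint64_t store_addr, unsigned store_seq)
{
    unsigned squash_from = 0;
    for (int i = 0; i < LQ_SIZE; i++) {
        struct load_entry *ld = &load_queue[i];
        if (ld->valid && ld->executed &&
            ld->seq > store_seq && ld->addr == store_addr) {
            if (squash_from == 0 || ld->seq < squash_from)
                squash_from = ld->seq;       /* violation detected */
        }
    }
    return squash_from;
}

int main(void)
{
    /* Load #7 speculatively read address 0x1000 before store #5 resolved. */
    load_queue[0] = (struct load_entry){ true, true, 0x1000, 7 };
    printf("squash from sequence number %u\n", store_resolve(0x1000, 5));
    return 0;
}
```

That full scan over the queue is exactly the CAM-style operation that limits scalability; the project is to find a structure that avoids it.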
New Metrics: Computer architects use poorly designed metrics. Instructions per cycle and its corollary, execution time, are dubious averages of program performance. Are there more meaningful metrics of an application's performance on an architecture than its execution time? Are there metrics of parallelism and locality that can be measured for an application independently of any architecture? Can we measure something about the code produced by compiler X compared to compiler Y in order to make a statement about the compilers' quality? What are these new metrics, and how meaningful are they compared to the old ones? In this project, modify a simulator to observe your new metric and provide evidence that it is more meaningful than the old one. You might do this by implementing some architectural change and demonstrating how the old metric masks the effects of the change while the new metric does not.
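As one example of an architecture-independent metric, here is a minimal sketch that computes reuse distance (the number of distinct addresses touched between two references to the same address) over a small, made-up address trace using an LRU stack. The quadratic stack walk is for clarity only; production tools use tree-based algorithms.

```c
/* Sketch: reuse-distance measurement over an address trace. */
#include <stdint.h>
#include <stdio.h>

#define STACK_MAX 1024

static uint64_t lru_stack[STACK_MAX];
static int      stack_depth = 0;

/* Returns the reuse distance of 'addr' (-1 for a cold reference) and
 * moves it to the top of the LRU stack. */
static int reference(uint64_t addr)
{
    int pos = -1;
    for (int i = 0; i < stack_depth; i++)
        if (lru_stack[i] == addr) { pos = i; break; }

    if (pos < 0) {
        /* Cold reference: grow the stack (bounded here for simplicity). */
        if (stack_depth < STACK_MAX)
            stack_depth++;
        for (int i = stack_depth - 1; i > 0; i--)
            lru_stack[i] = lru_stack[i - 1];
        lru_stack[0] = addr;
        return -1;                       /* "infinite" reuse distance */
    }

    /* Reuse: the distance is the stack depth of the previous reference. */
    for (int i = pos; i > 0; i--)
        lru_stack[i] = lru_stack[i - 1];
    lru_stack[0] = addr;
    return pos;
}

int main(void)
{
    uint64_t trace[] = { 0x10, 0x20, 0x30, 0x10, 0x20, 0x10 };
    for (int i = 0; i < 6; i++)
        printf("addr %#llx reuse distance %d\n",
               (unsigned long long)trace[i], reference(trace[i]));
    return 0;
}
```

A distribution of reuse distances says something about an application's locality without reference to any particular cache, which is the flavor of metric this project asks for.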
Non-performance Related Architecture Enhancements: There is a growing sentiment that computers are ``fast enough'' and that energy (and dollars!) should be spent on making computers ``better'', not just faster. What is a ``better'' microprocessor? Nominally this has been interpreted as a more reliable one (wouldn't it be nice if we never saw the blue screen of death?), but better may also mean lower power or more programmable. Think about non-performance-related changes you might make to a microprocessor or system, implement them in a simulation infrastructure, and evaluate them.
Secure Architectures: Computers are currently completely insecure. There are (at least!) three aspects of security in computer architecture you might explore. First, are there changes we can make to the architecture to stop the spread of viruses and hackers? Second, are there architectures (that will work in the long haul) that build in mechanisms for supporting societal concepts such as copyright protection? Third, are there architecture changes that would allow code to run securely on an untrusted third party (think about oblivious transforms and obfuscation methods)? These are really three separate projects, but for each one, design the architecture and implement a simulator for it.
Fault Tolerant Architectures: As lithographic technology scales, the reliability of devices decreases. This has led to a renewed interest in fault-tolerant architectures. Two schemes are currently being examined. The first is a very old method of computing in a coded space, with the most basic code being TMR (triple modular redundancy). The idea is simple: replicate every wire and gate three times and periodically ``vote'' on the outcome within the architecture. The second idea is to build a checker that re-computes the outcome of a sequence of instructions. Are there other methods? There is a lot of work to be done in this area. You could explore fault models for future silicon processes, investigate architectures to handle faults, or explore software computing models that tolerate faults.
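For reference, here is a minimal sketch of the voting step in TMR: three replicas compute the same value, and a bitwise majority vote masks a single faulty copy. The single-bit fault injected in main() is there only to exercise the voter.

```c
/* Sketch: bitwise majority voter for triple modular redundancy. */
#include <stdint.h>
#include <stdio.h>

/* Majority of three words, bit by bit: a result bit is 1 exactly when at
 * least two of the three inputs have that bit set. */
static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
{
    return (a & b) | (a & c) | (b & c);
}

int main(void)
{
    uint32_t correct = 0xDEADBEEF;
    uint32_t copy_a  = correct;
    uint32_t copy_b  = correct ^ (1u << 7);   /* single-bit fault injected */
    uint32_t copy_c  = correct;

    uint32_t voted = tmr_vote(copy_a, copy_b, copy_c);
    printf("voted %#x, %s\n", (unsigned)voted,
           voted == correct ? "fault masked" : "fault escaped");
    return 0;
}
```

The triplication cost is what makes the second scheme (a small checker that re-computes a sequence of instructions) attractive, and why other methods are worth exploring.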