1. Parnas I.

When modularizing a system, Parnas proposes that one should begin by listing the "difficult design decisions or design decisions that are likely to change." I fully agree with this, as I have personally developed systems that I modularized using the "flowchart" method. Even though this worked well on the first go-round, once the system needed revision, due to changes in the input format or the need to include additional input data, I had to go back and rewrite almost the entire package. This resulted not only in additional development time, which could have been minimized had the system been developed using Parnas's criteria from the start, but also in additional resource consumption further down the product development chain (testing, training documentation, etc.).
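
For instance, hiding the input format behind a narrow interface keeps a change of format local to one module. A minimal sketch in Java, with hypothetical names of my own choosing:

    // The input format is a decision likely to change, so it is hidden
    // behind a small interface; callers never see the concrete format.
    interface RecordSource {
        String nextRecord();  // null when input is exhausted
    }

    // Only this class knows the records are comma-separated lines; a new
    // input format means replacing this one module, not the whole package.
    class CsvRecordSource implements RecordSource {
        private final java.util.Iterator<String> lines;

        CsvRecordSource(java.util.List<String> rawLines) {
            this.lines = rawLines.iterator();
        }

        public String nextRecord() {
            return lines.hasNext() ? lines.next() : null;
        }
    }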

Another interesting point that Parnas makes in this paper concerns conceptual modularization versus the actual sequence of execution at run time: the two do not have to correspond to one another. This makes very good sense, because the optimization criteria in the two cases should be different. At the design, or conceptual, level, we need to modularize according to a method that will result in the least cost over the product's entire life. At the machine-code level, however, we have a much narrower aim: to make the whole system perform as well as possible (including speed, of course). Therefore, we should not impose our conceptual model on the actual execution, nor vice versa.
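
To illustrate with my own toy example (not one from the paper): a design may describe "scale, then clamp" as two separate modules, while the executable version fuses both into a single pass for speed, so the run-time sequence does not mirror the module structure.

    // Two conceptual modules in the design document...
    final class Scale { static double apply(double x) { return x * 2.0; } }
    final class Clamp { static double apply(double x) { return Math.min(x, 100.0); } }

    class Pipeline {
        // ...but one fused loop at run time, chosen purely for performance.
        // The execution sequence need not correspond to the modularization.
        static void run(double[] data) {
            for (int i = 0; i < data.length; i++) {
                data[i] = Clamp.apply(Scale.apply(data[i]));
            }
        }
    }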

********

2. Parnas II.

This publication can be seen as an extension of the first paper. The eight points that Parnas mentions in the conclusion are all very relevant to software system design. From my experience as an amateur software engineer and as a professional software user, points 1, 3, 4, 5, 7, and 8 draw special attention.

I think that most software designers do not give enough consideration to identifying the minimal subset of a system. This results in a larger skeleton of legacy systems than there otherwise would be. Along a similar path, I think software engineers tend not to emphasize generality and flexibility: most systems are both too narrow and too inflexible. As far as duplication is concerned, this is perhaps a smaller issue today than ever before, thanks to cheaper and better memory; that, in turn, makes us lapse when it comes to designing space-efficient software systems. I think the choice between extension at SYSGEN and extension at run time should be made when deciding how to efficiently implement a run-time "version" of the designed system. However, I don't think we should totally ignore it during the design phase, since that is when we should be making sure the designed system will be optimal for the entire development process, and the development process certainly includes run-time considerations. The last point, on the value of a model, is almost trivial; the specific model that Parnas observes, however, is very enlightening.
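
As a toy illustration of a minimal subset that leaves the extension question open (my own sketch, not Parnas's): a small core dispatcher whose commands can be supplied either when the system is generated or while it is running.

    interface Command { void execute(String arg); }

    class MinimalCore {
        private final java.util.Map<String, Command> commands =
            new java.util.HashMap<>();

        // Extension point: commands may be registered at "SYSGEN" time
        // (during startup/configuration) or later, at run time.
        void register(String name, Command c) { commands.put(name, c); }

        void dispatch(String name, String arg) {
            Command c = commands.get(name);
            if (c != null) c.execute(arg);
        }
    }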

********

3. Sullivan and Notkin.

Mediators are presented as a tool that could help us achieve environment integration while increasing component independence. In short, mediators are the components that result from the "componentization" of relationships. I think this is a very important tool that could aid in system modularization. If a system is designed with mediators, the benefits will be realized during the evolution of the system, including changes in the requirements.
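
A minimal Java sketch of the idea (hypothetical names; the paper's examples are richer): the relationship "the viewer shows the document's current text" is packaged as its own component, so Document and Viewer need not know about each other.

    import java.util.ArrayList;
    import java.util.List;

    class Document {
        private final List<Runnable> changedListeners = new ArrayList<>();
        private String text = "";

        void addChangedListener(Runnable l) { changedListeners.add(l); }
        String getText() { return text; }
        void setText(String t) {
            text = t;
            changedListeners.forEach(Runnable::run);  // announce the change
        }
    }

    class Viewer {
        void show(String s) { System.out.println("viewer: " + s); }
    }

    // The mediator is the "componentized" relationship: only it knows both.
    class DocViewMediator {
        DocViewMediator(Document d, Viewer v) {
            d.addChangedListener(() -> v.show(d.getText()));
        }
    }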

I think it would be interesting to see how a software system like NetMeeting could benefit by making use of mediators.

********

4. Kiczales.

Open implementation is a great concept. It allows clients to make the appropriate choices, especially through meta-interfaces, that will result in the most efficient systems. By using meta-interfaces, clients can choose which method of accessing a subsystem is best for them. This effectively increases the number of subsystems at the software designer's disposal.
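
A minimal sketch of the meta-interface idea (my own toy example, not from the paper): the base interface says what the module does, while a separate meta-level operation lets the client influence how it is done.

    import java.util.Collection;
    import java.util.HashSet;
    import java.util.TreeSet;

    class TunableSet {
        enum Strategy { FAST_LOOKUP, SORTED_ITERATION }  // meta-level choices

        private Collection<String> impl = new HashSet<>();

        // Base interface: what the abstraction does.
        void add(String s) { impl.add(s); }
        boolean contains(String s) { return impl.contains(s); }

        // Meta-interface: the client hints at how it should be implemented.
        void tune(Strategy s) {
            impl = (s == Strategy.SORTED_ITERATION)
                ? new TreeSet<>(impl)
                : new HashSet<>(impl);
        }
    }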

Among the many challenges listed by Kiczales, the first three require cooperation across the software community and can only be implemented universally through standards bodies. This, of course, has the disadvantage that different parties will push for different open implementation standards depending on their existing platforms. Another disadvantage is that any time we standardize anything, especially software, there is a chance that creativity will be sacrificed. And along with that creativity, we might miss solutions that are actually better than the "standard" solution.

********

5. Garlan and Shaw.

The software architectures described by Garlan and Shaw are a good starting point for software engineers. I think we should all be familiar with the "common" architectural styles. However, as we (including Garlan and Shaw) know, no practical system is likely to be accurately characterized by any one of these common styles. The last section of the common-styles part of the paper, which touches on this point, was the most valuable for me.

When designing a system, a good understanding of the major architectural styles would greatly aid in modularizing the system according to the criteria proposed by Parnas. It would be interesting to take an example system, modularize it using various design optimization goals, attempt to implement each of these designs using the common architectural styles, and, finally, implement the "best" design using a heterogeneous architecture.
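
For example, the pipe-and-filter style from the paper can be suggested in a few lines of Java (a toy rendering of my own, with each filter as a function and the pipe as composition):

    import java.util.List;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    class Pipes {
        // Filters: independent transformations on a stream of lines.
        static final Function<List<String>, List<String>> trim =
            lines -> lines.stream().map(String::trim).collect(Collectors.toList());
        static final Function<List<String>, List<String>> dropEmpty =
            lines -> lines.stream().filter(l -> !l.isEmpty()).collect(Collectors.toList());

        public static void main(String[] args) {
            // The pipe is just composition of filters.
            List<String> out = trim.andThen(dropEmpty).apply(List.of("  a  ", "   ", "b"));
            System.out.println(out);  // prints [a, b]
        }
    }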

********

6. Gamma, Helm, Johnson, and Vlissides.

Design reuse should be given due consideration, especially when large projects are undertaken, where the benefits of reuse are likely to be very significant. Design patterns are a way for us to pursue it in a more organized fashion. The classification scheme that Gamma et al. propose is a very helpful tool for adding new design patterns to the catalog they have already compiled. One thing the authors warn us against, though, is employing design patterns indiscriminately.
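
To make this concrete, here is a minimal Java sketch of one pattern from their catalog, Strategy: the ordering policy is an object the client supplies, so it can vary independently of the code that uses it.

    import java.util.Comparator;
    import java.util.List;

    class Report {
        // Strategy: the ordering policy is injected rather than hard-coded.
        static void print(List<String> lines, Comparator<String> order) {
            lines.stream().sorted(order).forEach(System.out::println);
        }
    }

    // Usage: the same client code works with interchangeable strategies.
    //   Report.print(lines, Comparator.naturalOrder());
    //   Report.print(lines, Comparator.comparingInt(String::length));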

I think one of the reasons that software engineers in industry do not take advantage of design patterns, and other such productivity-enhancing insights from academia, is that there are not many good communication channels between the two groups. Another reason is that academic researchers often make no real effort to apply newly developed or discovered techniques to "real world" problems. The requirement in Gamma et al.'s catalog and classification that a design pattern must have been used in two or more application domains is a very good one and works toward bridging the gap between "theory" and "practice." Gamma et al. also do a good job of choosing to identify and classify design patterns for expressing object-oriented design. Since industry, at present, is very much infatuated (for better or worse) with OO, their research is quite relevant.

********

7. Johnson.

Frameworks are a tool that industry uses quite a bit. However, as Johnson points out, not many researchers have discussed this very useful topic. Johnson's approach of examining frameworks by comparing them against other reuse techniques is especially helpful, as it aids us in choosing the optimal method for reusing object-oriented designs.

One thing Johnson does that is very valuable from a commercial software engineer's perspective is provide some elementary insight into how to use, develop, and learn frameworks. This is critical, because techniques with great potential often go unused for lack of training. If I had to pick one reason for the divide between the design techniques employed by practicing software engineers and the techniques touted as the "best" by researchers, it would be that practitioners are simply not even aware of the best practices. Johnson addresses this by not only presenting frameworks but also explaining how to use, develop, and learn them.

In this case, however, academia (Johnson) has done a good job of examining a technique that is widely used in industry and has made observations that give us a more organized approach to frameworks. This is an instance of academia learning from industry!

Frameworks provide a medium for expressing design reuse in a notation that even regular "programmers" are familiar with, namely code. I think the average software engineer is not only much more comfortable with "code" than with "some esoteric (academic) design technique," but also much more willing to make use of it. Perhaps this is why frameworks are used so much in industry!
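
A framework in miniature (my own toy sketch, not Johnson's example): the framework class owns the control flow and calls the application, not the other way around; an application reuses the design by filling in the hooks.

    abstract class ReportFramework {
        // Fixed skeleton: the framework calls the application code.
        final void run() {
            String data = load();
            System.out.println(format(data));
        }

        // Hooks: what each application customizes.
        protected abstract String load();
        protected String format(String data) { return "REPORT: " + data; }
    }

    class SalesReport extends ReportFramework {
        protected String load() { return "42 units sold"; }
    }

    // Usage: new SalesReport().run();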

********

8. Sullivan and Knight.

When it comes to design reuse, large-scale projects are where the payoffs are likely to be biggest. Sullivan and Knight's attempt to use the Microsoft OLE component integration architecture is a step in the right direction. We see from their example, though, that there is still a way to go before we have user-friendly, and more or less universal, component reuse in large-scale, or indeed any, software.
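
The flavor of that architecture can be suggested with a heavily simplified toy sketch in Java (this is not real OLE/COM code, and all names are hypothetical): a component is asked at run time whether it supports a given interface, and integration happens only through the interfaces it admits to.

    interface Printable { void print(); }

    interface Component {
        // Simplified interface negotiation, in the spirit of COM's
        // QueryInterface: ask whether a component supports an interface.
        Object queryInterface(String interfaceName);
    }

    class SpreadsheetComponent implements Component {
        public Object queryInterface(String name) {
            if (name.equals("Printable")) {
                return (Printable) () -> System.out.println("printing sheet");
            }
            return null;  // interface not supported
        }
    }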

One of the main issues in achieving widespread design-component reuse is that there has to be widespread agreement on the interfaces and other interaction points, as well as on implementation. It is interesting to note that component reuse would be more likely if open implementation were more of a reality; interestingly, both require the cooperation of the software community at large in specifying the standard. This, for good and bad, is very difficult in software. That is not surprising, because the software development community is one of the most creative industries of all, and we all know that it is very difficult to get really creative (and greedy) people to agree on anything. (Software engineers, truly enough, not only act like the folks stereotypically known as creative, the artists; we even dress like them!)