16.1
Advantages:
- convenience: A system with a transparent network is much easier to
use and has a simpler UI (user interface).
- saves time: Transparency saves users time by making remote login and remote file
transfer unnecessary.
- simplicity: the system is easier for the user to understand.
- easier to use remote resources: you don't need to know anything
special to use them.
- fault tolerance: a client can hide faults from users by switching
over to another server on the network
- mobility is easier: if the network is really transparent for laptop
users, this is a *tremendous* convenience, since anything on the
Internet can be used from anywhere in the world.
Disadvantages:
In general, transparency means you lose *control* and
*information*...
- Unpredictable, unexpected variations in performance, since some
actions will use the local machine, some will use nearby machines, and
some will use distant machines.
- Power users will be frustrated at their inability to control the
performance of their applications.
- Users will face frustrating and mysterious failures as invisible
machines fail.
- It's much harder to implement security and understand the security
implications of actions (for example, if users don't know where their
private files are).
- It's hard to allocate resources fairly; harder to get dedicated
resources (for example, other people might be using your machine).
- Transparency is very hard for the OS and application designers to
implement, and hard for the sysadmins to support, because of:
- handling communication between machines
- keeping data in synch
- coping with failures
- heterogeneous platforms
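The "fault tolerance" advantage above can be sketched in a few lines: a
client that tries each replica in turn and hides the failure from its
caller. This is a minimal illustration, not a real RPC API; the server
names and the fetch() stub are invented.

```python
# Hypothetical failover client: the caller never learns which replica
# answered, or that some replicas were down -- that's the transparency.

def fetch(server: str, request: str) -> str:
    """Stand-in for a real RPC; pretend servers named 'down-*' are dead."""
    if server.startswith("down"):
        raise ConnectionError(f"{server} unreachable")
    return f"{server}:{request}"

def transparent_fetch(servers: list[str], request: str) -> str:
    """Try each replica in order; surface an error only if all fail."""
    last_err = None
    for server in servers:
        try:
            return fetch(server, request)
        except ConnectionError as err:
            last_err = err  # hide the fault from the user, move on
    raise RuntimeError("all replicas failed") from last_err
```

For example, `transparent_fetch(["down-a", "up-b"], "read")` quietly
skips the dead replica and returns `"up-b:read"`.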
16.3
- If the system call interface is really exactly the same, you could
move the complete process image around. If the same OS but a different
architecture is involved, then it may require interpretation or
recompilation (from some intermediate, non-machine-specific code
representation).
- If both the operating system and the architecture are different, your
best bet is probably to use a virtual machine that translates (via
interpretation or compilation) both the instructions *and* the system
calls of the application so it will run on its new host. It's not too
hard to imagine doing process migration of Java applications using
this method.
Basically, it means putting everything into a generic, canonical form
that can be interpreted in any environment.
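The canonical-form idea can be sketched as a checkpoint/resume pair:
snapshot only machine-independent state in a neutral encoding (JSON
here), so any host can reinterpret it and finish the job. The trivial
"count to a limit" task and its field names are invented for
illustration.

```python
# Hypothetical migration-by-canonical-form: the snapshot contains no
# machine code, registers, or addresses -- only abstract state, so the
# target host's OS and architecture don't matter.
import json

def checkpoint(counter: int, limit: int) -> str:
    """Serialize a trivial computation's state in a neutral encoding."""
    return json.dumps({"op": "count", "counter": counter, "limit": limit})

def resume(snapshot: str) -> int:
    """On any host: reinterpret the canonical state and run to completion."""
    state = json.loads(snapshot)
    assert state["op"] == "count"
    counter = state["counter"]
    while counter < state["limit"]:
        counter += 1
    return counter
```

A process checkpointed mid-count on one machine, e.g.
`resume(checkpoint(3, 10))`, finishes on another and returns `10`.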
16.5
No. Many status-gathering programs work from the assumption that packets
may not be received by the destination system. These programs generally
broadcast a packet and assume that at least some other systems on their
network will receive the information. For instance, a daemon on each
system might broadcast the system's load average and number of users;
this information might be used when selecting a target for process
migration. Another example is a program that determines whether a remote
site is both running and accessible over the network: if it sends a
query and gets no reply, it knows the system currently cannot be
reached.
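The load-average daemon described above can be sketched with an
unreliable UDP broadcast: fire-and-forget, no acks, and a lost packet
just means slightly stale data at the listeners. The wire format (one
float plus one unsigned short) and the port number are assumptions for
illustration.

```python
# Hypothetical status daemon: broadcast load average + user count in a
# single UDP datagram. Reliability is deliberately not required.
import socket
import struct

FMT = "!fH"  # network byte order: float load average, unsigned short user count

def make_packet(load_avg: float, n_users: int) -> bytes:
    """Encode one status report for the wire."""
    return struct.pack(FMT, load_avg, n_users)

def parse_packet(data: bytes) -> tuple[float, int]:
    """Decode a status report received from some peer."""
    load_avg, n_users = struct.unpack(FMT, data)
    return load_avg, n_users

def broadcast_status(load_avg: float, n_users: int, port: int = 9999) -> None:
    """Fire-and-forget: no retries, no acks; losses are simply tolerated."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_packet(load_avg, n_users), ("<broadcast>", port))
```

A migration-target selector would just listen on the same port, decode
each datagram with `parse_packet`, and ignore hosts it hasn't heard from
recently.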
16.7
I think it is, in general, impossible for A to tell the difference
between (a) and (b), unless there are other routes to B, or the link
is shared (like ethernet).
Case (c) can typically be distinguished from (a) and (b) in practice by
using a reliable protocol such as TCP, in which case B's kernel will
do its best to acknowledge receipt of packets, even if the application
takes a long time to respond. Even if not, you could distinguish (c)
from the others by waiting sufficiently long. The trouble is that you
don't know *how* long to wait!!
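The ambiguity can be made concrete with a tiny probe: a TCP connect with
a timeout tells you only "no answer within T seconds", never *why*. This
is a sketch, not a monitoring tool; the timeout value is an arbitrary
assumption.

```python
# Hypothetical liveness probe: a crashed host, a dead link, and a
# merely overloaded host all produce the same observation at A.
import socket

def probe(host: str, port: int, timeout: float = 2.0) -> str:
    """Return 'up' if B accepts a connection within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "up"
    except OSError:
        # B crashed, the link failed, or B is just too slow: from A's
        # side these cases are indistinguishable.
        return "down-or-slow-or-unreachable"
```

The catch-all except clause is the whole point: no timeout value is
provably long enough, so "down" is always a guess.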
In other words, it's difficult to distinguish a host that has failed
from one that is merely unreachable or overloaded. Consequently:
- It will be hard to figure out what to do to fix the broken host,
since you don't know what's wrong.
- You can write the failed host off and assume it's dead, and use
another host instead (sometimes).
- But you have to keep in mind that "failed" hosts may just be
unreachable or overloaded, and thus may "reenter" the system later at
any time.
- Different entities in the network will have different ideas about
which hosts are up and which are down, and what steps need to be taken
to recover.