Complex binary data formats are sometimes unavoidable by any reasonable means. But before writing a GUI, it's wise to ask if the tricky interactive parts of your program can be segregated into one piece and the workhorse algorithms into another, with a simple command stream or application protocol connecting the two. Before devising a tricky binary format to pass data around, it's worth experimenting to see if you can make a simple textual format work and accept a little parsing overhead in return for being able to hack the data stream with general-purpose tools.
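To make the tradeoff concrete, here is a minimal sketch of the kind of simple textual format the paragraph recommends. The record layout (one "key value" pair per line, blank line terminating the record) is invented for illustration; the point is that such a stream can be inspected and patched with grep, sed, or awk, which a binary format cannot.

```python
def parse_record(text):
    """Parse a line-oriented textual record into a dict.

    A hypothetical format: one "key value" pair per line, a blank
    line ends the record. The parsing overhead is trivial, and the
    data stream stays hackable with general-purpose text tools.
    """
    record = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            break  # blank line ends the record
        key, _, value = line.partition(" ")
        record[key] = value
    return record

sample = "name fred\nuid 1001\nshell /bin/sh\n\n"
print(parse_record(sample))
```

The same data in a packed binary layout would need a format document and a custom dump tool before anyone could even look at it.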
When a serialized, protocol-like interface is not natural for the application, proper Unix design is to at least organize as many of the application primitives as possible into a library with a well-defined API. This opens up the possibility that the application can be called by linkage, or that multiple interfaces can be glued onto it for different tasks. We discuss these issues in detail in Chapter 7. Recall the rule of separating policy from mechanism: we justified it by pointing out that policy and mechanism tend to mutate on different timescales, with policy changing much faster than mechanism. Fashions in the look and feel of GUI toolkits may come and go, but raster operations and compositing are forever.
Thus, hardwiring policy and mechanism together has two bad effects: It makes policy rigid and harder to change in response to user requirements, and it means that trying to change policy has a strong tendency to destabilize the mechanisms. On the other hand, by separating the two we make it possible to experiment with new policy without breaking mechanisms. We also make it much easier to write good tests for the mechanism (policy, because it ages so quickly, often does not justify the investment). This design rule has wide application outside the GUI context.
In general, it implies that we should look for ways to separate interfaces from engines. One way to effect that separation is, for example, to write your application as a library of C service routines that are driven by an embedded scripting language, with the application flow of control written in the scripting language rather than C. A classic example of this pattern is the Emacs editor, which uses an embedded Lisp interpreter to control editing primitives written in C.
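The Emacs pattern can be sketched in miniature: primitives in one layer, flow of control in a tiny embedded command language that drives them. The command names and the buffer model below are invented for illustration; in Emacs the primitives are C and the control language is Lisp.

```python
# The editing "mechanism": state plus primitives that manipulate it.
# In a real editor these would be fast compiled routines.
buffer = []

def insert(text):
    buffer.append(text)

def delete_last():
    if buffer:
        buffer.pop()

# The dispatch table binds command names to primitives.
PRIMITIVES = {"insert": insert, "delete-last": delete_last}

def run_script(script):
    """Interpret a trivial line-oriented script driving the primitives.

    The flow of control lives here, in 'script' form, not in the
    primitives -- so policy can change without touching mechanism.
    """
    for line in script.splitlines():
        if not line.strip():
            continue
        op, _, arg = line.partition(" ")
        if arg:
            PRIMITIVES[op](arg)
        else:
            PRIMITIVES[op]()

run_script("insert hello\ninsert world\ndelete-last\n")
print(buffer)  # ['hello']
```

Swapping in a different script changes the application's behavior without recompiling or even re-reading the primitive layer.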
We return to this style of design in a later chapter. Another way is to separate your application into cooperating front-end and back-end processes communicating through a specialized application protocol over sockets; we discuss this kind of design in Chapter 5 and Chapter 7. The front end implements policy; the back end, mechanism. The global complexity of the pair will often be far lower than that of a single-process monolith implementing the same functions, reducing your vulnerability to bugs and lowering life-cycle costs. Many pressures tend to make programs more complicated and therefore more expensive and buggy.
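The front-end/back-end split described above can be sketched minimally. The one-line request/reply protocol ("UPPER <text>") is invented for illustration, and a thread over a socketpair stands in for what would be a separate back-end process in a real design.

```python
import socket
import threading

def backend(sock):
    """Mechanism: serve one-line requests until the peer closes."""
    with sock, sock.makefile("rw") as f:
        for line in f:
            op, _, arg = line.rstrip("\n").partition(" ")
            reply = arg.upper() if op == "UPPER" else "ERROR unknown op"
            f.write(reply + "\n")
            f.flush()

def frontend(sock, text):
    """Policy: format a request, return the back end's reply."""
    with sock, sock.makefile("rw") as f:
        f.write("UPPER " + text + "\n")
        f.flush()
        return f.readline().rstrip("\n")

a, b = socket.socketpair()
threading.Thread(target=backend, args=(b,), daemon=True).start()
print(frontend(a, "silence is golden"))  # SILENCE IS GOLDEN
```

Because the protocol is a readable text stream, either half can be tested, replaced, or driven by hand with a tool like netcat once it listens on a real socket.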
One such pressure is technical machismo. Programmers are bright people who are often justly proud of their ability to handle complexity and juggle abstractions.
Often they compete with their peers to see who can build the most intricate and beautiful complexities. Just as often, their ability to design outstrips their ability to implement and debug, and the result is expensive failure. Even more often (at least in the commercial software world) excessive complexity comes from project requirements that are based on the marketing fad of the month rather than the reality of what customers want or software can actually deliver.
And a vicious circle operates; the competition thinks it has to compete with chrome by adding more chrome. Pretty soon, massive bloat is the industry standard and everyone is using huge, buggy programs not even their developers can love.
The Art of Unix Programming
The only way to avoid these traps is to encourage a software culture that knows that small is beautiful, that actively resists bloat and complexity: an engineering tradition that puts a high value on simple solutions, that looks for ways to break program systems up into small cooperating pieces, and that reflexively fights attempts to gussy up programs with a lot of chrome or, even worse, to design programs around the chrome.
Allowing programs to get large hurts maintainability. Because people are reluctant to throw away the visible product of lots of work, large programs invite overinvestment in approaches that are failed or suboptimal. We'll examine the issue of the right size of software in more detail in a later chapter. Because debugging often occupies three-quarters or more of development time, work done early to ease debugging can be a very good investment.
A particularly effective way to ease debugging is to design for transparency and discoverability. A software system is transparent when you can look at it and immediately understand what it is doing and how. It is discoverable when it has facilities for monitoring and display of internal state so that your program not only functions well but can be seen to function well.
Designing for these qualities will have implications throughout a project. At minimum, it implies that debugging options should not be minimal afterthoughts. Rather, they should be designed in from the beginning — from the point of view that the program should be able to both demonstrate its own correctness and communicate to future developers the original developer's mental model of the problem it solves. For a program to demonstrate its own correctness, it needs to be using input and output formats sufficiently simple so that the proper relationship between valid input and correct output is easy to check.
The objective of designing for transparency and discoverability should also encourage simple interfaces that can easily be manipulated by other programs — in particular, test and monitoring harnesses and debugging scripts. Software is said to be robust when it performs well under unexpected conditions which stress the designer's assumptions, as well as under normal conditions. Most software is fragile and buggy because most programs are too complicated for a human brain to understand all at once. When you can't reason correctly about the guts of a program, you can't be sure it's correct, and you can't fix it if it's broken.
It follows that the way to make robust programs is to make their internals easy for human beings to reason about. There are two main ways to do that: transparency and simplicity. For robustness, designing in tolerance for unusual or extremely bulky inputs is also important. Bearing in mind the Rule of Composition helps; input generated by other programs is notorious for stress-testing software, because the forms involved often seem useless or bizarre to humans. One very important tactic for being robust under odd inputs is to avoid having special cases in your code. Bugs often lurk in the code for handling special cases, and in the interactions among parts of the code intended to handle different special cases.
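A classic illustration of special-case elimination: inserting into a sorted singly linked list. A dummy head node (a sentinel) makes "insert at the front of an empty list" and "insert in the middle" the same code path, so there is no empty-list special case for bugs to lurk in. The example is a sketch, not drawn from the text.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_sorted(head, value):
    """Insert value into a sorted list; return the (possibly new) head.

    The sentinel guarantees every insertion point has a real
    predecessor, so one loop and one splice handle all cases.
    """
    dummy = Node(None, head)
    prev = dummy
    while prev.next is not None and prev.next.value < value:
        prev = prev.next
    prev.next = Node(value, prev.next)
    return dummy.next

head = None
for v in [3, 1, 2]:
    head = insert_sorted(head, v)

values = []
while head:
    values.append(head.value)
    head = head.next
print(values)  # [1, 2, 3]
```

Without the sentinel, the function would need a separate branch for "new smallest element", and that branch is exactly where an off-by-one or a dropped head pointer would hide.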
We observed above that software is transparent when you can look at it and immediately see what is going on.
It is simple when what is going on is uncomplicated enough for a human brain to reason about all the potential cases without strain. The more your programs have both of these qualities, the more robust they will be. Modularity (simple parts, clean interfaces) is a way to organize programs to make them simpler. There are other ways to fight for simplicity.
Here's another one. Even the simplest procedural logic is hard for humans to verify, but quite complex data structures are fairly easy to model and reason about. To see this, compare the expressiveness and explanatory power of a diagram of, say, a fifty-node pointer tree with a flowchart of a fifty-line program.
Or, compare an array initializer expressing a conversion table with an equivalent switch statement. The difference in transparency and clarity is dramatic. See Rob Pike's Rule 5.
Data is more tractable than program logic. It follows that where you see a choice between complexity in data structures and complexity in code, choose the former. More: in evolving a design, you should actively seek ways to shift complexity from code to data.
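The contrast the text draws, in miniature: a conversion table expressed as data versus the equivalent chain of cases in code. The escape set below is illustrative, but the shape of the tradeoff is the real one — with the table, the whole policy is visible at a glance and extending it is a one-line edit rather than a new branch.

```python
ESCAPES = {      # the entire conversion policy, as inspectable data
    "n": "\n",
    "t": "\t",
    "\\": "\\",
    '"': '"',
}

def unescape(s):
    """Expand backslash escapes using the table.

    The code is a single generic loop; everything specific lives
    in ESCAPES. The switch-statement version would repeat the
    splice logic once per case.
    """
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\" and i + 1 < len(s):
            out.append(ESCAPES.get(s[i + 1], s[i + 1]))
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

print(unescape(r"a\tb"))
```

Shifting complexity from code to data this way also makes the policy testable by simply reading the table against the specification.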
The Unix community did not originate this insight, but a lot of Unix code displays its influence. The C language's facility at manipulating pointers, in particular, has encouraged the use of dynamically-modified reference structures at all levels of coding from the kernel upward. Simple pointer chases in such structures frequently do duties that implementations in other languages would instead have to embody in more elaborate procedures.
We also cover these techniques in Chapter 9. The easiest programs to use are those that demand the least new learning from the user — or, to put it another way, the easiest programs to use are those that most effectively connect to the user's pre-existing knowledge. Therefore, avoid gratuitous novelty and excessive cleverness in interface design.
When designing an interface, model it on the interfaces of functionally similar or analogous programs with which your users are likely to be familiar. Pay attention to your expected audience. They may be end users, they may be other programmers, or they may be system administrators. What is least surprising can differ among these groups.
Pay attention to tradition. The Unix world has rather well-developed conventions about things like the format of configuration and run-control files, command-line switches, and the like. These traditions exist for a good reason: to tame the learning curve. Learn and use them. We'll cover many of these traditions in Chapter 5 and later chapters. The flip side of the Rule of Least Surprise is to avoid making things superficially similar but really a little bit different.
This is extremely treacherous because the seeming familiarity raises false expectations. It's often better to make things distinctly different than to make them almost the same. One of Unix's oldest and most persistent design rules is that when a program has nothing interesting or surprising to say, it should shut up.
Well-behaved Unix programs do their jobs unobtrusively, with a minimum of fuss and bother. Silence is golden. On the slow printing terminals of Unix's early days, each line of unnecessary output was a serious drain on the user's time. That constraint is gone, but excellent reasons for terseness remain. I think that the terseness of Unix programs is a central feature of the style. When your program's output becomes another program's input, it should be easy to pick out the needed bits. And for people it is a human-factors necessity: important information should not be mixed in with verbosity about internal program behavior.
If all displayed information is important, important information is easy to find. Well-designed programs treat the user's attention and concentration as a precious and limited resource, only to be claimed when necessary. We'll discuss the Rule of Silence and the reasons for it in more detail later. Software should be transparent in the way that it fails, as well as in normal operation.
It's best when software can cope with unexpected conditions by adapting to them, but the worst kinds of bugs are those in which the repair doesn't succeed and the problem quietly causes corruption that doesn't show up until much later. Therefore, write your software to cope with incorrect inputs and its own execution errors as gracefully as possible. But when it cannot, make it fail in a way that makes diagnosis of the problem as easy as possible. Consider Jon Postel's prescription: be liberal in what you accept, and conservative in what you send. Postel was speaking of network service programs, but the underlying idea is more general.
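The "graceful where possible, noisy where not" policy can be sketched with a trivial parser. The "key=value" setting format is hypothetical; the point is that the code tolerates harmless variation (whitespace) but refuses to guess at genuinely malformed input, and when it refuses, it says exactly what was wrong and where.

```python
def parse_setting(line):
    """Parse a hypothetical 'key = value' configuration line.

    Tolerant of surrounding whitespace (graceful coping), but a
    line with no '=' is rejected loudly rather than silently
    misread -- failing in the most diagnosable way available.
    """
    line = line.strip()
    if "=" not in line:
        raise ValueError(f"malformed setting (expected key=value): {line!r}")
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

print(parse_setting("  color = blue "))  # ('color', 'blue')
```

The alternative — quietly returning a default on bad input — is exactly the kind of "repair" that hides corruption until much later.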