I'm reading the McCarthys' "Software for Your Head". It contains various "Protocols" for people working together - often on software development teams, but applicable to knowledge work in general. I thought I would try to write a new protocol, suitable to be added to the Core.
As part of the move that starts this protocol, we have a problem - and it is probably a good idea to write the problem down. The state is a partial solution to the problem, expressed as a row of index cards with propositions on them, all visible.
An internal concept is often conveyed via a triple of different, and partially redundant, expressions of that concept.
One kind of expression is the informal description. This is running text presenting qualities of the concept, using natural language imprecisely, waving hands around the idea. One benefit of this kind of expression is its lightweight requirements - you can get fairly near to any idea using only the tools that every reader can be presumed to have. However, using natural language to convey very abstract ideas can be treacherous. The impression that the reader takes away can be different from the impression that the writer intended. For example, think of philosophers arguing about what a long-dead philosopher "really meant". In that case, the text is still available, but the concept has not been successfully conveyed to every reader.
Another kind of expression is a list of concrete examples and counter-examples. These serve as anchors for the center of the concept, fenceposts outlining its edges, or intuition pumps to get past difficult abstractions. However, without some other kind of expression, relying purely on examples can lead to impressions that are not susceptible to introspection: "I know it when I see it." That said, combining an informal description with concrete examples is one of the most precise ways ideas can be conveyed using just writing.
The third kind of expression is the formal definition. This is text that is intended to be read by both humans and computing machines. That means it is even more formal than the definitions in legal texts or most mathematics texts. The benefit of this kind of expression is its extreme precision, once you are already very near to the idea. However, if you are not already near to the idea, the formality can make it opaque.
In programming, the informal description is in the design document or the comments, or (arguably) the symbolic names in the code. Sometimes the informal description is created on-the-fly at a whiteboard, and there is little evidence of its ever having existed. The concrete examples and counter-examples are in unit tests, or sometimes comments. The formal definition is the source code itself.
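As a toy illustration of the three expressions coexisting in code, here is a hypothetical Python `median` function (not from the text): the docstring is the informal description, the assertions play the role of unit tests holding concrete examples, and the function body is the formal definition.

```python
def median(values):
    """Informal description: the middle value of a non-empty list;
    for an even count, the mean of the two middle values."""
    # Formal definition: the code itself.
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Concrete examples, in the role of unit tests.
assert median([3, 1, 2]) == 2
assert median([4, 1, 2, 3]) == 2.5
```

Each expression is partially redundant with the others, and a reader who bounces off one of them can approach the concept through another.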
A lot of popular software development writing is about software reusability - about the units of reuse, whether they are functions, methods, objects, libraries, components, processes, or services. Someone who has read these books and articles might be surprised that the design virtue of reusability is not actually reuse.
Aside: This naturally leads to a separate question: "why, if the design virtue of reusability is not reuse, is there buzz about it?". I am not going to try to address that question here; the point is that reusability is a familiar concern.
The actual point of aiming for reusability is correct decoupling. The problem with the word "decoupling" is that it indicates a direction (less coupling) and we actually want a destination (correct size of units). It is rare, but possible, to be on the wrong side of that destination. For example, when programmers first learn techniques for decoupling such as polymorphism or service-oriented architecture, they overuse those techniques, cutting their code into fragments so small that nobody would want to reuse that fragment - after the costs of integrating with it, it wouldn't be worth it.
The right size of units is conveyed by a test: Hypothetically, can you imagine reusing this unit in a substantially different (but still realistic) context? Would it be valuable in that context? Could you cut it free from its current context in order to reuse it with little difficulty?
The usual error is too much coupling. Suppose you take a codebase and make a user-visible alteration to it and make the change in the most efficient way (the smallest possible change that achieves the desired effect). You probably have increased the coupling.
One way to understand this phenomenon is by analogy to needle felting. In needle felting, you start with a clump of fibers. Then you stab the clump with a barbed needle. The barbs catch the fibers and tangle them together, selectively along the path that the needle followed. This lets you control the shape of the final object. If you were making a teddy bear, you might stab the neck many times in order to compress that part tightly.
In this analogy, the codebase corresponds to the clump of fibers that we want to shape into a teddy bear. The user-visible alteration corresponds to tightening the surface of the clump where it is a bit too loose. The programmer making the smallest possible change corresponds to the barbed needle. The codebase inevitably tangles as it is stabbed over and over, because the different changes are uncoordinated with the internals of the software.
Back to the main point: without efforts to the contrary, codebases drift toward more and more coupling. Reusability is a design virtue, which has little or nothing to do with the actual practice of reuse. The point is to guide programmers in deciding which additional "not strictly necessary for the near-term goal" changes to make. After making the "smallest change", programmers should refactor until the code that they touched is reusable. This may mean introducing, splitting or joining reusable components.
A tickler file is a drawer of date-labeled folders. The dates are days, weeks or months in the future, and each folder holds memos to your future self. It acts as an external paper memory, allowing busy people to temporarily forget some of their worries. For example, if a project that will take several weeks has been delegated to a subordinate, then you might put "follow up with so-and-so about such-and-such" into the two-weeks-from-now folder.
Computers use priority queues for the same purpose that humans use tickler files: to postpone considering tasks.
Operating systems usually contain a priority queue for their applications. A music player application might load a short span of music into the audio hardware's buffer, and then put a memo to wake the music player into the priority queue indexed under "things to do one tenth of a second from now".
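That scheduling pattern can be sketched with Python's standard-library `heapq` module; the `post`/`due` helpers and the memo strings below are made up for illustration, not taken from any real operating system.

```python
import heapq

# A toy scheduler: memos indexed by wake-up time, earliest first.
schedule = []  # heap of (wake_time_seconds, memo)

def post(wake_time, memo):
    heapq.heappush(schedule, (wake_time, memo))

def due(now):
    """Pop and return all memos whose wake time has arrived."""
    ready = []
    while schedule and schedule[0][0] <= now:
        ready.append(heapq.heappop(schedule)[1])
    return ready

post(0.1, "refill audio buffer")
post(5.0, "check for new playlist")
print(due(0.1))  # ['refill audio buffer']
```

Like the tickler file, the heap lets the system forget about the music player entirely until a tenth of a second has elapsed.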
Problem-solving algorithms keep partial solutions in priority queues. In this case the partial solutions are indexed not by time but by their estimated likelihood of leading to a solution. The algorithm can then take the best-looking partial solution off the front of the queue, extend it towards a complete solution in one or more ways, and put the new partial solutions back into the appropriate places in the queue. This postpones considering initially-bad-looking moves until all the better-looking moves have been exhausted.
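A minimal best-first search along these lines, again using `heapq` (the `neighbors` and `estimate` callables, and the toy arithmetic puzzle in the usage line, are hypothetical). Since `heapq` pops the smallest key first, the queue is indexed by estimated distance to the goal rather than probability of success - lower means more promising.

```python
import heapq

def best_first(start, goal, neighbors, estimate):
    """Greedy best-first search: always extend the most promising partial solution."""
    frontier = [(estimate(start), [start])]  # heap of (score, partial path)
    seen = {start}
    while frontier:
        _, path = heapq.heappop(frontier)  # best-looking partial solution
        node = path[-1]
        if node == goal:
            return path
        for nxt in neighbors(node):  # extend it in one or more ways
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (estimate(nxt), path + [nxt]))
    return None

# Toy puzzle: reach 10 from 1 using +1 and *2 moves, guided by distance to the goal.
path = best_first(1, 10, lambda n: [n + 1, n * 2], lambda n: abs(10 - n))
```

The bad-looking branches (like doubling past the goal) sit in the queue untouched unless every better-looking branch fails first.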
Priority queues support two primary operations - you need to be able to insert new things at a particular priority, and you need to be able to get out the frontmost item stored in the queue. Different variants may support other operations, such as deleting items, or merging two priority queues.
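In Python, the standard-library `heapq` module provides exactly these two primary operations over a plain list; one of the variant operations, merging, can be done by concatenating and re-heapifying (the task strings are invented for the example).

```python
import heapq

tasks = []
heapq.heappush(tasks, (2, "write report"))   # insert at a priority
heapq.heappush(tasks, (1, "fix the build"))
assert heapq.heappop(tasks) == (1, "fix the build")  # frontmost item out first

# Merging two priority queues: concatenate, then restore the heap property.
other = [(3, "tidy desk")]
tasks = tasks + other
heapq.heapify(tasks)
```

Deletion of arbitrary items is awkward in this representation, which is one reason richer variants (such as pairing or Fibonacci heaps) exist.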