(comment-out "Stephen Compall's blog")

I'll add commenting when I get around to it. In the meantime, send all comments to scompall, then an "at" sign, then nocandysw.com.

Long-delayed project update

Thu, 20 Sep 2007 06:00:04 +0000 by Stephen Compall

I haven't updated in a while; I even let CLISP (which I use to build this page) stay broken for a while, because I'm not using it in any current projects.

One such project is something for GNU Smalltalk called Backstage. To be brief, the idea is to bring together Lisp-style image-based development and web applications, by way of a variant of the concept of literate programming, which I will call “narrative programming” for now. Of that project, I have so far released only Presource, a package for abstracting out common patterns in Smalltalk code. If you have Smalltalk 2.95c or later, try it out.

I have for now pushed Backstage aside for another project, ¤help. I needed an excuse to build an application using Java Servlets, JSP, and Struts, so I came up with a distributive task-management tool for project teams that uses currency as an incentive. The README file explains things from an economist's perspective, and is quite extensive. The manual, which is partly written, has a better explanation for non-economists, but I have not released any portion of ¤help other than the README.

I am certain that my ¤help experience will aid me in choosing the proper thought model to support with the web development layer in Backstage.

Finally, today I spent some time reading the Texinfo source, and found a tiny memory leak.

The rigor of computer theorem-proving

Thu, 14 Jan 2007 17:41:00 +0000 by Stephen Compall

The WSJ's page B1, column 1 is a weekly source of entertainment and misunderstanding of technology. Lee Gomes, author of the Portals column on Wednesdays, is a particularly egregious offender, entertaining real techies with his "unorthodox" ideas on what computers are really all about.

It was the Science Journal that caught my attention yesterday, however.

There is nothing of a higher order in relying on computer programs for mathematical proofs, nothing that would require the computer-generated "proofs" to be checked by hand before they can be fully trusted. Computer theorem-proving is merely a way to rely on the theory of deterministic computation in constructing a manmade proof, as a mathematician might rely on any other formal theory in a proof.

With the theory of computation in hand, the mathematician creates a "program", which in this field means an algorithm together with a set of input data. These are often expressed in a symbolic language readable by both human and machine. Now, because all algorithmic questions are reducible to the question of whether a program will complete, the problem is recast as two proofs about completion:

  1. Prove that this program will complete if and only if the theorem is correct.
  2. Prove that this program will complete.

The first is a matter of explanation in the usual language of a proof. It cannot generally be done by a computer; if it is, then the program responsible must in turn be proved to generate a correct theorem-prover, and so on. This is the handwritten content of the four-color theorem proof, for example.
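
As a toy illustration of the two steps (my own example, nothing to do with the four-color work), consider checking Goldbach's conjecture up to a bound. Step 1 is the handwritten argument that the sketch below completes if and only if every even number in range is a sum of two primes; step 2 is proving that it completes.

(defun primep (n)
  (and (> n 1)
       (loop for d from 2 to (isqrt n)
             never (zerop (mod n d)))))

(defun goldbach-pair-p (n)
  "True if even N >= 4 is the sum of two primes."
  (loop for p from 2 to (floor n 2)
        thereis (and (primep p) (primep (- n p)))))

(defun check-goldbach-up-to (limit)
  ;; Completes iff the bounded conjecture holds: on a counterexample
  ;; the inner loop spins forever instead of returning.
  (loop for n from 4 to limit by 2
        do (loop until (goldbach-pair-p n)))
  t)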

The second is a far more difficult matter in many cases, and this difficulty is usually the justification for relying on machine-readable algorithms. Theoretically speaking, running the program is unimportant; a 10,000-page machine-generated "proof" is meaningless in itself, because it does not constitute part of the theorem's proof; it is merely cited as evidence that the program completed when run on a real machine.

If she proves that the program will complete without relying on a machine, the resulting proof of the theorem is as rigorous as any. I stress again that neither running the program nor the output "proof" it gives is necessary if she has proved that the program will complete. The real reason, and the only general reason, to distrust this proof is not the long, winding, human-unreadable output "proof", but the hardware and software platforms on which the program runs.

While the mathematician has proved the correctness of the program, she has not proved the correctness of the symbolic-language translator used to transform the program into machine-runnable code, nor of the hardware on which the program is ultimately executed to show that it completes. These systems may be widely used in mission-critical applications, subject to rigorous testing, and conjectured to be correct, but they are not, so to speak, proven.

Rob Levin is dead

Sat, 16 Sep 2006 23:45:45 +0000 by Stephen Compall

Richard Stallman may be more famous, and he certainly inspires strong feelings in both his supporters and detractors, but those feelings are not quite as strong as those inspired by Rob "lilo" Levin, founder of the Freenode IRC network.

IRC is traditionally only useful for silly conversation; the worthwhile is rarely said. Freenode (originally Open Projects Net) is different. Lilo encouraged the Free Software community to form discussion channels on Freenode, and it became a different kind of IRC experience, one that is actually worthwhile.

This is impressive given the strong, idiosyncratic culture attached to IRC on the Internet. Lilo wanted Freenode discussions to have relevance in the real world; he suppressed the drama that comes with the traditional server-donation structure of IRC, and asked users to care about the network as a single, cohesive entity.

As part of this process, he asked for monetary donations to his non-profit company, PDPC, to help keep the network running, and to pay him a $16,000 salary to work full-time on the network. This would be entirely reasonable in other contexts, but it apparently horribly offended some people with roots in the IRC structure found on other networks. Several people founded "forks" of Freenode with the traditional structure; only one survives with much popularity.

I never really understood the objections to Lilo's methods; perhaps this is because Freenode was my first exposure to IRC. Perhaps because it is the only place on the Internet I have found, other than on project mailing lists, on which I can have serious discussions about coding issues. Maybe it's the time I spent talking with the Freematrix Radio showrunners, many of whom are Freenode staff. On the other hand, it might be that on the few occasions on which I spoke to Lilo, he always seemed kind, earnest, and a true believer in projects to help others.

Legislation is a blunt instrument

Thu, 25 May 2006 17:00:15 +0000 by Stephen Compall

Allow me to elaborate a little more on my refutation of Congress's planned "price gouging" legislation.

When solving a problem through law, we rarely consider the effect of the law in practice; we consider only whether the language of the bill matches what we ourselves would write if we set out to solve the supposed problem legislatively.

We rarely consider the unintended consequences, and even more importantly, the cumulative effect of all those unintended consequences, which mostly seem to push in one direction, reifying an awkward behemoth with no way of seeing what it tramples in its charge ever-forward by command of the state.

Perhaps the most dangerous conceit is the assumption that government is a perfectly efficient and effective means with which to accomplish any goal we would like to label as "public". This is the chief conceit of the social democrats of Europe, the supporters of bigger government in both major parties in the U.S., and all political creeds throughout history devoted to the expansion of the state in supposedly beneficent ways.

So perhaps it is a useful example to return to grounds on which the behemoth has played before; we can still see the hoofprints in the dirt. Let us recall the rule of supply and demand in creating prices, in series of interactions so complex and chaotic (in the scientific meaning of the term) that it would be computationally infeasible to model them. In 1973, OPEC imposed an oil embargo against the First World in response to its support for Israel during the war with Egypt and Syria. The First World, especially the United States, had developed a comfortable dependence upon the cheap energy supplied by the Middle East's vast supplies of crude. As such, it was certainly a shock to be denied this source, and the resulting price increases were a shock as well.

The natural effect of price changes is to better match supply and demand. In the recent mini-crisis, these prices did their job: they moderated usage, preventing a shortage, and gave refiners an incentive to bring capacity back online quickly. Indeed, the speed with which capacity returned was remarkable, considering the damage, and would be unthinkable in a state-controlled firm. Yet, far from respecting this heroically capitalist activity, we blamed Big Energy for everything.

During the oil embargo, President Nixon authorized full government control of prices, which was used to artificially depress them. The result, as I have previously explained, would be a shortage; therefore, usage was also restricted. Drivers were divided into two groups that were only allowed to purchase petrol on "their days". This was less than effective, and shortages abounded.

Government controls exacerbated the scarcity that had prompted their emplacement. How did the U.S. government neglect economic theory that had already been proven empirically, that entered the mainstream just as the nation was born, and that is the foundation of its unparalleled success?

I do not doubt the good intentions of Senator Maria Cantwell, nor those of the legislature in the time of the oil embargo. I doubt their competence.

Therefore, in considering new regulation, let us try to understand all of the consequences of regulation, not least the destruction of the price-signal system in the market under consideration. In each new law, we must consider both positive and negative effects; not only the content of the bill, but its likely impact.

What would be the positive consequences of enacting this "price gouging" legislation? Well, we could prevent some price rises for which there may not be economic justification. On the other hand, there may be justification, in which case we risk creating a shortage. We also signal once again our willingness to interfere with the balance of our economic system in order to buy votes, thereby focusing even more lobbying attention from all sides of any issue on a Congress that clearly cannot handle it.

To me, the negative consequences far outweigh the positive.

Gas-price gouging said to be scant after Katrina hit

Wed, 24 May 2006 02:26:47 +0000 by Stephen Compall

See article for more information.

As is commonly the case when a voice of reason calls for a nuanced understanding of the facts behind a politically charged issue, each side is declaring victory following the release of the FTC's report on the causes of last year's petrol price spike, an issue I discussed earlier in my 5 October 2005 entry.

No one is immune; the original Wall Street Journal title of the article from which I take my information glossed over the nuance, which was that there was indeed a little gouging, at least by the FTC's definition of it. Even before we have a chance to declare Mr Conkey (the article's author) guilty of oversimplification, we have the National Petrochemical & Refiners Association saying this "appears to vindicate" refiners; on the other hand, we have Sen Maria Cantwell (D, WA) and others among the foul-criers declaring the evidence of price-gouging outlined in the report a reason to pass a federal anti-gouging law.

Let us consider once more exactly what the FTC is saying. Yes, the price jumps were "approximately" (from the report) due to market forces. Yes, there were 15 instances (among refiners, dealers, and stations) in which price increases were not completely due to market forces. Unfortunately, this complete understanding is inconvenient to both sides of this particular argument, so it is ignored.

First, the NP&RA et al should acknowledge the few instances of gouging cited in the report, and emphasize instead that these sporadic instances could hardly be called an indictment by any reasonable person, unless she has other designs on the industry. Second, those who grilled oil execs in Congressional testimony should admit that their accusations overshot the mark by an incredible degree, and press their case instead within this new framework of understanding.

So it is here. I was right for the most part, unless you'd say that the FTC is merely covering up the industry's misdeeds, in which case you might as well proceed to declare me in the same category in true ad hominem fashion. The exceptions here prove the rule; were gouging by the official definition widespread, it would be difficult to see where the fair price-rises were anyway. Now let us consider Sen Cantwell's case for her, because she is not likely to do it herself.

The rationale for a law making price-gouging a federal crime must now be justified on what remains of the formerly vast conspiracy to "fill [Big Energy] bank tanks", as Harry Reid (D, NV), my favorite Sophist in the Senate, put it last month. There doesn't seem to be a pattern of abuse; the cases were isolated, and all but one at least partly attributable to particular circumstances of the seller in question outside her control.

The two Congressional proposals for a federal anti-gouging law both intend to make it a federal crime; therefore, the goal must presumably be to discourage the few instances that do happen. For this to work, however, there must be a clear definition—"don't cross this line".

Let us consider the two proposed definitions. The House bill would like the FTC to define gouging. That is fine, but is the FTC up to the task? They already have a definition, and some of the gouging cases mentioned in the report were actually due to geography. Are we then to press criminal charges on companies because they were operating in a particular location? More importantly, perhaps, why was geography not considered among the FTC's not-gouging justifications? I speculate that it is because they cannot draw the line without leaving some middle ground where it is not clear whether gouging is taking place, an uncertainty that would remain even with all possible information about the case. With such a large gray area, we can expect neither to discourage whatever it is we are trying to discourage before the fact, nor to prosecute it with evidence "beyond a reasonable doubt" afterward.

Perhaps Sen Cantwell can clean up this mess for us. She says that gouging is "unconscionably excessive" pricing. This presents obvious prosecutorial problems. In the interest of completeness, let us anyway examine a hypothetical case of "unconscionably excessive" (henceforth UE) pricing.

As Sen Cantwell must surely now admit, large price increases in commodities are usually due to changing supply/demand factors. However, her language does not exclude those, and as they are the most common kind of large, even "unconscionable" price increases, it is highly likely that a randomly selected UE pricing case is just such a one. So it is with this case. A natural disaster has spiked the price of food by an average of 100%. Low-income folk spend on average greater proportions of their income on food than the other sort, so we appear to be starving the paycheck-to-paycheck crowd. Unconscionable, even UE! However, as I said in my earlier entry, demand puts a floor on this price. If we were to reduce the price, the chief result would be a food shortage. Hmm, we appear to be starving someone here, and as the non-low-income sort can stockpile faster (and transfer supply to the black market faster, as it will be), it appears to be my favorite rhetorical device besides children, the poor, again. Children of the poor is even better: we are starving the children of the poor with unconscionably low (henceforth UL) prices!

Unfortunately, Sen Cantwell has chosen not to provide us with a legislative tool with which to prevent UL pricing. Besides, it is a thin line; the natural price is UE, and anything lower is UL. It appears that our unfortunate food sellers are now criminals however they would like to operate; they will end up either selling to the black market or selling on it, and perhaps both.

Now perhaps the good Senator would like to show us her great legislative plan for eliminating black markets. Also, we could have some kind of stamp system, whereby you're only allowed to purchase so much of a given good. Oh, but stamps can be bought and sold, so I suppose it would have to be coupled with a standard citizen authentication system. Once she's solved all this, perhaps we can work on allowing such an incredibly vague metric to enter the U.S. Code in the first place. I think very few of us would be keen on allowing the government to determine what is "unconscionable" and what is not, case by case, nor do I think such a great power even counts as regulation of interstate commerce.

Belief, or lack thereof

Mon, 22 May 2006 04:54:15 +0000 by Stephen Compall

A little tip for those who are worrying about their seeming—dare I say it—skepticism about the religious beliefs under which their parents brought them up: stop being such a wimp.

Let us take any of the Abrahamic religions for example, be it Christianity, Islam, or Judaism. Would you agree that "belief in god" is a necessary prerequisite for being a member of one of these religions? Would it therefore follow that, lacking such a belief, you would not be such a member?

Now, let us take the word "belief". In the sense of these religions, belief is strong; it is like knowing something so well that you would reject any evidence against it, and excuse the lack of evidence for it. My audience for this short movement (henceforth "you") is those who say they are not atheists because they "don't know". What do you mean, you don't know whether there is a god? Clearly, you lack a belief in a god.

Then you say that you "don't know whether you believe". That doesn't mean anything! Belief is completely described as a mental state; you can't "not know", because belief is whatever you know you believe. If you don't know whether you believe something, you clearly lack the belief in question, unless you choose otherwise.

I don't want to hear any more of this agnosticism middle-way nonsense. Be a religionist or not. Your time to choose is running out.

When a definite yes-or-no won't do

Mon, 13 Mar 2006 07:19:38 +0000 by Stephen Compall

With some advice in hand, I decided to update the SBCL patch previously described in "Chain reaction".

There are three places in fd-stream.lisp in which similar calls to select(2) are constructed. The call is always read-time conditionalized with #-win32, and the #+win32 equivalent always does something similar, but slightly different.

So, in the spirit of symbolic documentation I have tried to adopt in my code, I decided to move this rather messy duplicated code into a single function. I selected a name that would reflect the very nature of select(2) that had caused the mistake in the first place: sysread-would-block-p. It should be clear now that the particular select(2) acrobatics in use answer whether a file-descriptor would block, not whether any characters were available.

I don't really know anything about Win32 programming. Nevertheless, I provided a #+win32 version of sysread-would-block-p because refill-buffer/fd used the particular implementation I selected in the exact same semantic way.

This became a crisis, however. I studied it for a couple of minutes and discovered that the Win32 version would, in fact, answer whether characters were immediately available. The difference, which is at the heart of my original complaint, is that at end-of-file, select(2) would answer "won't block", while sb!win32:fd-listen would answer "no characters available", in direct contradiction of their relative semantics in all other cases.
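
The distinction can be seen with nothing but standard stream functions, no SBCL internals required: at end of file, a read would not block, and yet no characters are available.

;; Portable illustration only; the string stream stands in for a file descriptor.
(with-input-from-string (s "")
  (values (listen s)                        ; => NIL: no characters available
          (read-char-no-hang s nil :eof)))  ; => :EOF, returned without blocking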

So, I considered what refill-buffer/fd really wanted from its slightly inconsistent, but nevertheless adequate, calls to select(2) and fd-listen. I determined that the best way was to waffle about the new function's meaning and call it sysread-may-block-p. All that rb/f really wanted to know was whether it could continue with the "sysread" operation, and I was thus able to introduce a net code-reducing refactoring patch, without confusing any more issues than are already confused.

On another note, Spring Break Common Lisp was a great success. I got sick towards the end and was then unable to think hard enough to do anything worthwhile, but that's all over now. Spring Break is now over, but I'm sure it's Spring Break somewhere.

The comments are in German

Thu, 09 Mar 2006 04:56:31 +0000 by Stephen Compall

After reading Gary King's entries (latest) on cl:delete not taking advantage of the adjustability of some arrays on CLISP, LispWorks, and Allegro, I decided to look into providing this enhancement in CLISP, as a first foray into modifying the CLISP sources; I have already made such a foray into SBCL's sources. (See my previous blog entry "Chain reaction".)

Reading CLISP is terrifying at first, because much of the copious commentary is in German. However, I found that once I got past the anxiety presented by the rather dense code, and by comments that only halfway make sense once passed through Google Translate, it is not that difficult to see what is going on.

The code itself is very low-level, being based on explicit stack manipulation. Deeply-nested expressions of the sort I like in Lisp code are very rare. Therefore, once you understand the stack operations, you can [relatively] easily follow what is going on.

Anyway, about the change: I determined that adjust-array always allocates a new block, even for adjustable arrays. At that point, it's just as well to allocate a whole new vector, as the GC will do just about as well in either case. For those interested anyway, however: around line 2515 in sequence.d is the start of a long special case for vectors with fill pointers. Almost all of that code should be refactored into a function, so that an almost identical special case for adjustable arrays can be added; the only change that needs to be made is near line 2561, replacing the call that sets the fill pointer with a call to adjust-array, modulo the appropriate stack operations to set up the call.
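
For a user-level view of what is at stake, here is a hypothetical example of my own, not taken from the CLISP sources. Whether the vector's storage is reused in place is exactly the implementation choice under discussion.

;; With a fill pointer, DELETE can work in place by lowering the fill
;; pointer; whether it actually does, and whether W is EQ to V, is
;; implementation-dependent.
(let ((v (make-array 5 :adjustable t :fill-pointer 5
                       :initial-contents '(1 2 3 2 1))))
  (let ((w (delete 2 v)))
    (values w (eq w v))))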

To break this psychological block and satisfy my curiosity, I sacrificed what little time I had to work on nocandy-web today. So no new coding adventures. I think the CLISP sources make for a more interesting tale than the "fcgiapp.c" sources, anyway.

One, two, you're done

Wed, 08 Mar 2006 04:18:02 +0000 by Stephen Compall

Though I take care to avoid the performance obsessions of the many programmers still stuck on C++, who won't even use OO to work around the language's shortcomings "because virtual method calls are slow", I still get stuck in the performance rut occasionally. Today I took another careful step.

(defun gtkname=? (gtkname other)
  ;; EVERY stops at the end of the shorter sequence, so the length
  ;; check is still needed to rule out one name being a prefix of
  ;; the other.
  (and (every #'eq gtkname other)
       ;;this is evil but whatever
       (= (length gtkname) (length other))))

So you know the constraints: I don't care about the sequence types (they may, and will, be different; in fact one will usually be a vector and the other a list); I only care that every element is equal and that the sequences are the same length.

Of course, for the list this may unnecessarily chase cdrs, which is why you won't see this in performance-tuned code. But this particular function will very rarely be used after the FASL containing it is loaded into the image, so who cares?
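
For instance (the names are hypothetical, just to show the mixed sequence types):

(gtkname=? #(|gtk| |window| |new|) '(|gtk| |window| |new|))  ; => T
(gtkname=? #(|gtk| |window|) '(|gtk| |window| |new|))        ; => NIL, lengths differ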

Maybe there's hope for me yet.

The Visitor pattern is silly

Tue, 07 Mar 2006 04:09:02 +0000 by Stephen Compall

I wrote the parser and AST-builder for my compiler for a toy C-like language called "C-" a couple weeks ago by first writing the parser-only form and making sure it worked reasonably well, then writing a wrapping macro that would generate appropriate class definitions and augment the LALR(1) productions with actions that create instances to act as the parse-tree cells. [1]

The resultant code was longer than I thought it would be, being a ~100SLOC macro and ~80SLOC function (just for processing a single rule!). It also had the ugliest use of cl:loop I have ever seen. So I spent this afternoon refactoring it. It is almost the same length, but the loop is gone, mostly replaced with uses of nocandy-util:rlet where convenient. This also replaced the more egregious uses of cllib:with-collect.

With little time left to code, I turned to the semantic analyzer. In a message-based OO language, you would implement the Visitor pattern. Visitor is effectively the process of manually hacking multiple dispatch into a message-passing style. That is, every time you want to descend into a cell, you end up with something like this:

!SemanticAnalyzer methodsFor: 'visiting cells'!
visitExpressionList: cell
    cell expressions do: [:each| each visiting: self].
    "probably some other things to do"
!
visitAssignmentExpression: cell
    "do some stuff with the assignment"
! !

!ASTAssignmentExpression methodsFor: 'semantic analysis'!
visiting: visitor
    visitor visitAssignmentExpression: self.
! !

The reason you go through this nonsense, of course, is so that you don't have to type-case every single tree walker you write (of which there will likely be more than one in a compiler). This still isn't good enough to ensure sanity, however; see Rhys's description.

Now, I wrote this whole code generator to avoid writing 60 defclass forms (with requisite slots) by hand, and I'm certainly not going to write all those #visiting: methods. Fortunately, CL isn't a pure message-passing language; we can easily write methods that are selected based on the types of more than one argument.

After deciding on this, I realized that, after all, CL isn't a pure message-passing language, and it doesn't hold you to ideas like classes "owning" methods. Furthermore, the semantic analyzer (SemanticAnalyzer) is going to be the same for a cohesive set of visits, deeply nested though they may be. So why not stick the analyzer object in a dynamic variable, select the tree-walker based on the particular "visitor" being used and nothing more, and forgo the "self" argument?

This I did, complete with a macro giving a single name to descending into a tree cell for any visitor state object, and convenient symbolic access to the cell's slots (and probably the visitor state's slots, too, later). Thus I may next code semantic analysis rather than fill pattern templates to fulfill my obligations to an algorithmic seesaw.
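
A minimal sketch of the dynamic-variable approach, without the convenience macro, and with names that are mine rather than the actual nocandy code:

(defvar *analyzer* nil
  "The visitor-state object for the current tree walk.")

(defclass semantic-analyzer ()
  ((symbol-table :initform (make-hash-table) :reader symbol-table)))

(defclass expression-list ()
  ((expressions :initarg :expressions :reader expressions)))

(defclass assignment-expression ()
  ((target :initarg :target :reader target)
   (value :initarg :value :reader value)))

(defgeneric analyze (cell)
  (:documentation "Walk one AST cell under *ANALYZER*, dispatching on the cell alone."))

(defmethod analyze ((cell t))
  ;; Leaves (literals, identifiers) need no work in this sketch.
  nil)

(defmethod analyze ((cell expression-list))
  (mapc #'analyze (expressions cell)))

(defmethod analyze ((cell assignment-expression))
  ;; "Do some stuff with the assignment": record the target, then descend.
  (setf (gethash (target cell) (symbol-table *analyzer*)) t)
  (analyze (value cell)))

(defun run-semantic-analysis (ast)
  (let ((*analyzer* (make-instance 'semantic-analyzer)))
    (analyze ast)
    *analyzer*))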

[1] I don't use the term "node"; it is too confusing, because it can mean just about anything. When I say "cell" I know exactly what I'm talking about, even if you don't.

Pattern matching for the underprivileged

Mon, 06 Mar 2006 04:24:50 +0000 by Stephen Compall

It seems like some days, I'm just reducing everything to a matter of language parsing. Everything else seems trivial.

In particular, I've defined a Lisp syntax to convert an n-ary tree at compile-time into a closure structure that takes a generator and walks down the tree as far as one may go. Specifically, I'm using it to implement longest-begins-with-case, which finds the longest literal prefix for a given list.

The interesting part of this will be giving the elements of the specifiable prefixes a normal evaluation rule, as opposed to the unevaluated symbols and things of cl:case and such.
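
As a flat, runtime-only sketch of the walk itself (the real thing compiles the tree into nested closures, and the names here are mine):

;; TREE is a nested alist keyed on elements; INPUT is a list.  Returns
;; the longest prefix of INPUT that is a path in TREE.
(defun longest-prefix (tree input)
  (loop with matched = '()
        for element in input
        for node = (assoc element tree :test #'eql)
        while node
        do (push element matched)
           (setf tree (cdr node))
        finally (return (nreverse matched))))

;; (longest-prefix '((a . ((b) (c . ((d)))))) '(a c d e)) => (A C D)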

Why not use a real parser for this sort of thing? It's one of those "reinvent-the-wheel" things. It started when I realized that, while popping off a single element from a list in the ad-hoc parser for gtksrv.lisp would work, there were a few special cases in which it wouldn't. Then, I decided that I should have something more general, so these special cases would become part of the general algorithm. I can only hope that the code is sufficiently general so that I am never tempted to do something so foolish again.

Separately, nocandy-util 0.3 (released a few days ago) includes something neat in memory.lisp that can be used to improve many ad-hoc search algorithms one might write, including a couple in LC3r, the virtual machine for learning, to be initially released at the end of this month. I call them "weak obarrays"; they are effectively hash tables plus a little bit of functional sugar to provide the "intern whatever you want" feature at the heart of this particular optimization.

The neat Lisp-abusing part of weak obarrays is that they can use load-time-value for most interesting kinds of search macros you might define; however, the thing you're interning must have a normal evaluation rule! Oh, for a world in which make-load-form would let you hack in anything.
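
Stripped of the weakness and the load-time-value trickery, the core idea looks something like this sketch of mine (not the memory.lisp code; standard CL has no portable weak hash tables, so this version holds its entries forever):

(defun make-obarray (&key (test #'equal))
  (make-hash-table :test test))

(defun intern-in (obarray key &optional (constructor #'identity))
  "Return the canonical object for KEY in OBARRAY, creating it on first use."
  (multiple-value-bind (value presentp) (gethash key obarray)
    (if presentp
        value
        (setf (gethash key obarray) (funcall constructor key)))))

;; Two EQUAL keys intern to the same (EQ) object:
;; (let ((names (make-obarray)))
;;   (eq (intern-in names (list "gtk" "window"))
;;       (intern-in names (list "gtk" "window"))))  => T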

Chain reaction

Sun, 05 Mar 2006 20:33:48 +0000 by Stephen Compall

Two days ago, I placed the last domino in a short but interesting sequence of events. I have yet to see it collapse, but the results will be very pleasing. To explain:

In nocandy-util there are a couple of patches for SLIME. One implements the :fd-handler communication style in CLISP. The other fixes a bug in process-available-input that exists on all platforms: the code relied on incorrect behavior in SBCL's cl:listen, and it broke my implementation of the fd-handler-related backend functions.

As the bugs in SBCL and SLIME effectively hid each other, the SLIME maintainers reported being unable to reproduce the behavior described in my report. Fair enough, especially since no official backend implementation depends on correct listen behavior in the SLIME porting layer.

So, I turned to SBCL. The problem, as I discovered through a little bit of testing, was that listen was saying that there was data available at end-of-file. Though I couldn't see the entire picture, not being an SBCL aficionado, it's not entirely clear to me why listen seems to work right most of the time on regular files. I'd guess it's because a regular file can be buffered all the way to end-of-file immediately, whereas with sockets there is usually a delay between effective EOF and close-triggered EOF.

Regardless, it was pretty clear that the particular part of listen I narrowed the cases down to was doing the wrong thing, so I fixed that.
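
For reference, here is what the standard asks of listen, with a string stream standing in for the socket:

;; LISTEN must be true while a character is immediately available and
;; false at end of file; the bug was answering true in the second case.
(with-input-from-string (s "x")
  (list (listen s)       ; => T, one character waiting
        (read-char s)    ; => #\x
        (listen s)))     ; => NIL, at end of file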

So here is what I am looking for now: someone from SBCL will look at my patch, hopefully between now and the next release at the end of this month, maybe make a few modifications for efficiency (and correctness on platforms with weird select(2) semantics in libc), and commit it.

At some point after that, SLIME developers and users will download SBCL with my fix included.

At some point after that, someone will change the communication style in SBCL/SLIME to :fd-handler, perhaps because he or she didn't build it with threads (or doesn't like threads, like me), and will find that SBCL goes into an infinite loop when Emacs disconnects from it, just as CLISP once did.

Then, if need be, I will post a link back to the original SLIME fix, a three-line modification I'll no longer have to include in the nocandy-util tarball.

My observations of the workings of software projects tell me that the world of software that everyone uses spins around little half-technical, half-social processes like the one described above. Though it might frustrate at times, at least I can take comfort that in a world without free software, I would be stuck forever with the broken behavior.

Also this week: Spring Break Common Lisp! is a great new expansion of the semi-obscure SBCL. Its meaning is pretty clear.

A nightmare about SBCL

Tue, 13 Dec 2005 22:48:01 +0000 by Stephen Compall

Background: SBCL is Steel Bank Common Lisp, a popular native-compiling Lisp programming system. As the manual explains, a compiler doing everything that a "sufficiently smart" compiler would do, according to the CL specification, would be far more advanced than the most advanced compilers available today.

Many bloggers, particularly the LiveJournal variety, like to write about stuff that means everything to them and nothing to everyone else. Dreams are a great example.

I have this specific memory, and from the lack of a context I must infer that it comes from a dream, or a nightmare, to be more precise.

In this memory, I am reading a posted public announcement, stating that I have been hired to write a new "sufficiently smart" compiler for SBCL, for $100,000, due in 8 months. I don't recall agreeing to this; after all, I am unfamiliar with both SBCL and compilers in general.

The prospect of money didn't help. It only created a sense of obligation, and therefore panic.

How to extract a blocking read/write into FD-HANDLER

Sun, 20 Nov 2005 07:10:19 +0000 by Stephen Compall

FD-HANDLER is an as-yet-unwritten package from No Candy Software encapsulating the ideal solution for multiplexing in a single-threaded program, for CLISP. The idea is that you collect all the file descriptors that could block your application and pass them to a system call, select(2) or poll(2), which returns when any of the file descriptors is ready for the next non-blocking read. For example, you could multiplex a GUI network operation by polling on the network socket and your X server connection at the same time.

The problem is that many libraries abstract to the degree that you can't get at the file descriptor you need for this. Sometimes, that's because it does all I/O solely by blocking deep in the call stack. This is easy to hop out of if the program is functional and you have continuations like in Scheme, but not so much in Common Lisp.

Let's take connections to Swank, which is what I'm trying to multiplex with a GTK-server connection at the moment. You invoke create-swank-server in CLISP, which calls setup-server. Then, it calls serve-connection, which calls accept-authenticated-connection, which calls accept-connection, which blocks. This is easy enough to abstract; in fact, they've already done it for systems with fd-handler implementations.

Here's where things go haywire, though: to authenticate, a-a-c calls decode-message, which calls decode-message-length, which calls read-char, which blocks. Fortunately, it seems that all reads go through decode-message.

What decode-message really needs to do is exit nonlocally, passing out its continuation. Setting aside the details of all the reading done in decode-message, what is that continuation?

  1. Parse the message with read-from-string.
  2. Log the event (this may entail another write, but we're ignoring writes for now under the correct assumption that write blocks aren't nearly as important as read blocks).
  3. Exit the handler case (!) and pass the form back to "the caller". Setting aside the question "who is that?", let's assume it's a-a-c.
  4. Ensure the secret matches the other secret, and pass the socket back to serve-connection.
  5. Run the new connection hook, add this to the list of connections, and enter simple-serve-requests (in the nil handler case, as it is in CLISP).

I will add at this point that there is already a reasonably good dispatcher from this point on for implementations with an FD handler. The only oddity is the debugger hook, but it of course wouldn't make sense to allow Lisp to continue while in the debugger hook, so we'll chalk that up as okay, with the caveat that a wrapper might be needed first to "finish things that ought to be finished before hanging".

Fortunately, we can use the above example to see what needs to be done to really fix it up. As it is, we start deep in the stack in I/O, move back up close to the entry point, then down again to a blocking read in simple-serve-requests/handle-request. I stopped here because, in a dispatching system, we've returned back up to the dispatcher by this point. So, what we need is an "inverted" flow of control. This isn't as hard as it sounds. Let's look at the closure necessary, including the state it saves:

  1. Check whether we are waiting on authentication; if not, continue with the usual Slime FD handler.
  2. Add as many characters as you can, non-blocking, to a buffer.
  3. Compare them to the secret; if they match, set the state to not-waiting-on-auth, and pretend the rest of the buffer is ready for non-blocking read.

Looking at it another way, consider a blocking program as a loop: wait for input, process it when you have enough. If you drag the reads out of the stack, then convert the implicit state in that stack to explicit state, then encapsulate "process it when you have enough" as a single-run procedure or whatnot, you have an inverted stack. yojnE.
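
A generic sketch of that inversion (mine, not Swank's actual code): the implicit state of a blocking read loop becomes explicit state captured in a closure, which the event loop calls whenever select(2) or poll(2) says the descriptor is readable.

(defun make-line-handler (stream process-line)
  "Return a no-argument handler to call when STREAM is readable.
Characters are buffered without blocking; each complete line is handed
to PROCESS-LINE.  (End of file is ignored to keep the sketch short.)"
  (let ((buffer (make-array 0 :element-type 'character
                              :adjustable t :fill-pointer 0)))
    (lambda ()
      (loop for char = (read-char-no-hang stream nil nil)
            while char
            do (if (char= char #\Newline)
                   (progn
                     (funcall process-line (copy-seq buffer))
                     (setf (fill-pointer buffer) 0))
                   (vector-push-extend char buffer))))))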

Price "gouging": learn to love it

Wed, 05 Oct 2005 22:48:06 +0000 by Stephen Compall

Just 3 days after the landfall of Hurricane Rita, and just 3 weeks after that of Hurricane Katrina, 90% of U.S. crude production capacity was down, as well as 25% of refining capacity. Yet Exxon Mobil's market capitalization was about $407 billion, recently pushed up about 10% by the hurricanes' arrival and past General Electric, making it the most valuable company in the U.S.

The reason, of course, is that expectations of supply limitations, now and in the months to come, have pushed petrol prices up by about 50¢/gal, making for rosy profit outlooks at oil companies. This knowledge has led, typically, to accusations of "price gouging".

The reasoning goes something like this: petrol stations have, allowing for regional differences, uniformly increased prices by a great degree over a short period, and their profits have gone up, so they simply must be ripping us off!

This is due to the mistaken impression that prices are a function of the costs of production plus a "reasonable" profit. Those factors affect prices only indirectly, through their effect on supply.

Fair prices are a function of supply and demand. The lower the supply, or the higher the demand, the higher the price. The market price, and the quantity sold at it, is called the "equilibrium". To see why prices tend toward equilibrium in a market of competition among both consumers and producers, as is the case in oil and petrol, consider attempts by either side to push prices in "their" favor.

First, there is the case of producers—vilified "Big Oil", in this case—pushing prices above the market. Any oil company could grab all the sales by slightly reducing the price, which would force all others to reduce price, and so on, until the price is held up by factors beyond Big Oil's control, i.e. consumers.

Then, there is the case of consumers, the reasonable and the self-important that beg this response. There is only so much oil to go around. Let us say that everyone says they will only pay a below-market price. In this case, anyone who actually wants to get the oil he or she needs will have to pay more, and so the price is forced upwards until held down by the aforementioned producer factors.

Unfortunately, we get some weird ideas in our heads when supply and demand shift. We say things like "the market has failed" and get government to force the price down. This is the source of "price gouging" legislation.

What happens when we artificially limit the price to below the market rate? Since the price is no longer tempering demand, and supply is reduced by the reduced incentive to provide it, demand far exceeds supply. The outcome is a shortage. This is further exacerbated by the market shift to the illegal economy: people can make money off the spread between legal and market rate, by buying up inventory and selling on the black market.

Would you rather have everyone get the gasoline they need at a higher price, or "roll the dice" in the wake of a shortage for your needed supply of a needed commodity? Considering that the current price rises are perfectly reasonable, given the reduced production capacity and ever-increasing demand, I suggest that you learn to appreciate price rises for what they are: fair.

A meaningless sacrifice for equality

Mon, 19 Sep 2005 21:11:46 +0000 by Stephen Compall

There is a word often applied to classical liberal (in the terminology of Milton Friedman; “libertarian” often today) policies of regulation of economic activity: austerity. In Webster's Revised Unabridged Dictionary (1913), the applicable meaning in our context seems to be “severity of manners or life; extreme rigor or strictness; harsh discipline”. It does not seem to pass judgement on the practice, but only to insist that it will be difficult to pursue, as it is with abstaining from many of the various popular vices of today's society.

Because of austerity's unpopularity, an issue to be explored in a moment, a worthwhile degree of spending and regulatory restraint by government requires a strong mandate from a public tired of the economic stagnation brought on by its absence. In essence, it requires voters to have some nerve and abandon their usual desire for a government security blanket.

Had the parliamentary election in Germany taken place a week or two before it did on Sunday, Angela Merkel's CDU might have received that mandate. However, an admittedly brilliant turn in campaigning by incumbent Chancellor Gerhard Schroeder, painting Merkel's proposals for economic reform as “socially unfair”, had Germans losing their nerve and voting his SPD to within a percentage point of the CDU. Now both leaders claim victory, since in the German parliament the ability to form a majority through coalitions determines the chancellorship, but this is clearly a crushing defeat for austerity.

The Wall Street Journal reports that “Many voters found Mr. Schroeder's economic record dismal but also objected to Ms. Merkel's proposals as socially unfair.” For the consequences of maintaining the status quo, consider Germany's average economic growth of about 1.5% and its unemployment rate of over 11%.

Since, given the outcome of the election, we must consider the public's valuation of Merkel's and Schroeder's positions as nearly equal, and since before the birth of the aforementioned objections Merkel was considered worth far more, we must assume that the loss of value was due entirely to the affixing of this “socially unfair” label to the CDU. Clearly, this is an exceptionally powerful concept, and my purpose is to explore its meaning.

Angela Merkel seeks to reduce regulation and the welfare state; this is what many voters object to, so socially unfair must mean “unequal”. Unequal, in the context of the welfare state, refers to inequality of outcome, not inequality of opportunity. That is, everyone is always expected to play by the rules of the game; however, an unequal outcome refers to the fact that some do better than others. It should of course be remembered that, while indeed a game, this is a very important one.

Is inequality of outcome, absent a system of redistribution such as that provided by a welfare state, really necessary? Easily, considering the extreme improbability of the alternative. By luck and skill, we that inhabit this planet are a widely varying collection of cooperative sorts, each seeking to create wealth (in its most general sense) for ourselves and others, the latter only in satisfaction of the former. Barring sentiment, it is not reasonably probable that we should all be able to create the same amount of wealth.

This is precisely what a rational person ought to do: bar sentiment. Recognize that the contributions of some are worth more, in some cases far more, than the contributions of others. (No programmer should be able to object to this on any but sentimental grounds.)

In most cases, the promise of reaping the benefits of additional wealth creation for ourselves motivates us to create that wealth; we must recognize that, as we are the ones trading currency for that wealth! You surely recognize that promise; would you work harder if it meant being paid double? Barring changes of circumstances, would you work harder anyway?

The effect of a welfare state on economic growth, or the creation of wealth in the above discussion, is usually to stunt it. When we remove the incentive from those who are able to create a great deal of wealth—as we do when we “tax the rich” and enact other such populist measures—we, ah, remove the incentive, hence the wealth is not created, or certainly not to the degree it would have been. This is not some abstraction; on net, we lose wealth when it is never created. The fatal mistake of populism is failing to consider or take responsibility for its consequences.

Taking into account these consequences, I will finally consider an interesting psychological phenomenon. Here, the subject is presented with a choice, implemented through some manner of exchange: would you rather be absolutely rich but relatively poor, or absolutely poor but relatively rich? The surprise, or obvious conclusion given the above, is that most would choose the latter. The difference is that, absolutely (!) speaking, the former is better off, but the latter is better off only when compared to his or her peers. Therefore, it must be more important that everyone else have less than that we have more.

This is an intense sort of vindictiveness I am sad to see in a seemingly enlightened society. The Germans will pay for their jealousy and vindictiveness; I will hope for our sake that we can get over these serious flaws in our characters.

The tangential issue at hand

Fri, 12 Aug 2005 05:25:06 +0000 by Stephen Compall

I wrote some words about web accessibility for FC.o in response to a proposal by the Copyright Office to require Internet Explorer for some new preregistration lollie. In it, I argued that requiring IE "imposes tangential burdens on the public", and that they had no justification for doing such a thing in general, and therefore not in this specific case.

However, to successfully argue this issue, itself tangential, I had to avoid the real issue, which is that there is preregistration of copyright at all. Preregistration was itself a compromise with the movie industry, which had gone to Congress once again, hopping mad about our failure to diminish ourselves and our future to prop up an archaic business model.

That doesn't mean it's not important to say. We ought always to say the things that are right, in the hope that those in the wrong may be corrected. Stallman tells us not to overestimate the "small puddle of freedom" the Free Software movement has built. As with Stallman, maybe I can't support all the right causes, but this is one to which I think I can truly contribute.

To reward authors…for what?

Fri, 05 Aug 2005 19:48:16 +0000 by Stephen Compall

Recently, I have twice found myself correcting a mistaken impression of the operation of copyright law. For a general treatise on the subject, please see Misinterpreting Copyright by RMS. I will here explain the issue in full, in hopes that the mistaken can be led to understanding with less live discussion.

First, for those unfamiliar with the U.S. Constitution, it gives the power to Congress to grant limited monopolies, in the form of copyrights and patents, solely "to promote the progress of science and the useful arts". This is the only legal justification for granting copyright, as the 10th Amendment limits Congress's powers to those explicitly granted in the Constitution.

Why is this supposedly effective? The other parties in the discussions I've had correctly state that it is because the system rewards authors. However—and here is the confusing part—the reward we give is the promise of limited monopoly. The monopoly itself is not the reward; it is the pollution caused by keeping that promise. This is an important distinction, as I will explain.

Capitalism's function is to reduce scarcity; i.e., to give people what they want. It is the most effective way known to do this, as proven by the success of the Industrial Revolution. We make voluntary trades because they are beneficial, and all parties of a voluntary trade are enriched by its making. This is only necessary when a wanted good or service is scarce; if it is abundant, we may share as we please. This process increases the abundance of the good or service within society, as the purchasers receive more value in its terms than they give. Another way of looking at this is saying that, in the free market, a proper business cannibalizes itself.

By the above definition, no system that increases scarcity, rather than abundance, can be considered an appropriate business. Copyright, in this sense, is wholly inappropriate: copies, which are so cheap to make as to be considered abundant, are made scarce by restricting the legal right to copy to one person or entity.

Copyright, however, aims to increase abundance of one service within the whole system: creativity, or "the progress of science and the useful arts". Unfortunately, in the process, it makes scarce the natural abundance of copying.

Why can't we simply reward creativity, then? The problem is that some creativity is worth more than other creativity. I won't point any fingers. The best measurement of worth we have is popularity, a crude measurement always, but easily translated into a market force through copyright.

Encouraging creativity and its distribution is worth our while. Some, however, believe that copyright also serves to help artists "protect" authored works, by which they mean "control", and that it serves to help artists "make a living".

These aims have questionable value, and more importantly no basis in the tradition of a free market. I borrow an example from Milton Friedman to illustrate. Suppose there is an opera singer in a community of music lovers. Every Friday night, the opera singer performs for a large audience. Suppose that then a jazz singer takes up performing in the community, though in a different venue, at the same time. The music aficionados find that they prefer jazz music, and the opera singer's audience drops as the jazz singer's audience grows.

Apologists for the "help artists" view of copyright often refer to copyright infringement as "harm", and call it "theft". This is so common that lawyers for copyright holders have been asked by judges to stop referring to it as theft. It is not theft because it no more harms artists than the jazz singer harms the opera singer. The opera singer has no legal or moral defense against the jazz singer, and when seen this way, one wonders how anyone could think otherwise.

As a result, the burden is on copyright's defenders to show that it encourages creativity, because its attribute of giving legal control to one person or entity is a drawback, not a feature.

The only reasonable way to demonstrate this is to say that, had the prospect of having exclusive control not been in place, creativity would have been lost, and its value would have been more than that of the freedom to copy we would have had otherwise. (One must always deduct the value of lost freedom when considering the worth of copyright protection.) Therefore, the promise is the important thing. We only follow through with the promise of absolute control because the promise made for future works would only be believable if it had been kept for past works.

Returning to Lisp

Wed, 03 Aug 2005 04:22:00 +0000 by Stephen Compall

A few months ago, I returned to the Lisp programming language. I first discovered Lisp 3 years ago; at that time, I realized the Lisp family of languages was the best ever created, and that "modern" languages have not yet caught up. Unfortunately, I was concerned about its relative unpopularity. So I continued programming in other languages that are middling in popularity but still far better than those to which most programmers subject themselves.

When next I looked, I discovered lively communities indeed hiding from the general distaste for Lisp's superior syntax, or lack thereof. I am looking forward to Common Lisp/Scheme world domination.

Meanwhile, I wrote my own blog software in Common Lisp. It just does blog entry decoding and HTML and RSS generation, but it's only about 50 lines long. I'll take it over big buggy PHP packages any day.
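
A toy sketch in the same spirit (my own, not the actual nocandy code): turning a list of entries into a minimal RSS feed is mostly a matter of format strings.

(defun entries->rss (channel-title entries &optional (stream *standard-output*))
  "ENTRIES is a list of (title date-string body-html) lists.
No escaping is done; this is only a sketch."
  (format stream "<?xml version=\"1.0\"?>~%<rss version=\"2.0\"><channel>~%")
  (format stream "<title>~A</title>~%" channel-title)
  (dolist (entry entries)
    (destructuring-bind (title date body) entry
      (format stream "<item><title>~A</title><pubDate>~A</pubDate><description>~A</description></item>~%"
              title date body)))
  (format stream "</channel></rss>~%"))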