Tuesday, April 29, 2014

Code challenges

A nice article recommending solving code challenges against an autojudge as a way to improve programming skill.

Saturday, April 26, 2014

Decl and top-heaviness

Man, reading through all the stuff the v0.11 Decl::Node object supports, it's really no wonder I bogged down. It was just doing too much. I really hope that splitting things out into syntactic and semantic poles will make a difference.  (Or really, even more than just the two, given the declarative extraction phase in the middle.)

So yeah, I suppose a post on that is in order.  The new regime is finally getting underway, given that the last update to Decl was in 2011 and it's 2014 now.  I've started coding Decl::Syntax, which is the handling of syntactic nodes.

Note that a syntactic node is used to derive two different sets of semantics: the first is the machine semantics, the second the human semantics. This is equivalent to the concept of literate programming, except that literate programming also parses the code chunks for indexing, which (initially, and maybe permanently) we will not be doing.

So the surface structure is the indented stuff. To derive the machine semantics, we go through three more phases.

The first is markup. During markup, a Markdown ruleset is used to convert all Markdown nodes into X-expressions. The ruleset can be specified in the input, or can be one of a few named ones.
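I haven't pinned down a concrete X-expression format yet, so purely as an illustrative sketch (the `xexpr` helper and the tag names here are mine, not anything in Decl), an X-expression in Perl can be as simple as a nested arrayref of tag, attribute hash, and children:

```perl
use strict;
use warnings;

# Hypothetical sketch: an X-expression as plain Perl data -
# [tag, {attributes}, children...] - the kind of thing the markup
# phase could emit for a Markdown node.

sub xexpr {
    my ($tag, $attrs, @children) = @_;
    return [ $tag, $attrs || {}, @children ];
}

# "Some *emphasized* text" might come out as:
my $para = xexpr('para', {},
    'Some ',
    xexpr('em', {}, 'emphasized'),
    ' text');

print $para->[0], "\n";       # para
print $para->[3][2], "\n";    # emphasized
```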

After markup comes declarative extraction. Here, we extract a tree of declarative nodes from the syntactic structure. These contain only the "true children" of each tag. X-expressions are converted to tag structures during this phase, and transclusions are resolved. Annotations are inserted into structured parameter values. Macros might be expressed here as well; I won't know until I try expressing some things with prototypes.

The result of declarative extraction is a thinner tag structure that contains only machine-meaningful information. Anything explanatory is discarded, although obviously it's still available for examination if there's a need.

After extraction comes semantic mapping. Here, a set of vocabularies map declarative structure onto data structures. A default vocabulary might just map everything into vanilla Perl structures or objects, but more interesting vocabularies will build more interesting objects.

Finally, execution does whatever action is encoded by the semantic structure. This runs code, builds documents, activates the GUI, or whatever.

Keeping these phases strictly separate makes it possible to build all that detailed functionality into this system without losing sight of what's where. Or so I fervently hope.
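To make the separation concrete - with completely made-up names and stub bodies, since none of this is real Decl code yet - the intended shape is a strict chain of transforms, each phase consuming only the previous phase's output:

```perl
use strict;
use warnings;

# Hypothetical sketch of the phase pipeline. The sub names are mine,
# not Decl's actual API; the bodies are stubs that just mark each
# phase as having run.

sub parse_syntax  { my ($text) = @_; return { syntax => $text } }  # indented surface structure
sub markup        { my ($t) = @_; $t->{marked} = 1; $t }           # Markdown -> X-expressions
sub extract       { my ($t) = @_; $t->{decl}   = 1; $t }           # "true children" only
sub map_semantics { my ($t) = @_; $t->{mapped} = 1; $t }           # vocabulary -> objects
sub execute       { my ($t) = @_; return "ran: $t->{syntax}" }     # act on the result

sub process {
    my ($text) = @_;
    return execute(map_semantics(extract(markup(parse_syntax($text)))));
}

print process("some declarative source"), "\n";  # ran: some declarative source
```

The point of the stub shape is that no phase reaches back into an earlier one's internals, which is exactly the discipline v0.11 lacked.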

Friday, April 25, 2014

Reactive programming again

Two little reactive JS links: ripple.js, a framework that aims to be tiny, and "React.js and Bacon", a look at another way to do reactive stuff.

Monday, April 21, 2014

Article: KeePass through SSL with Perl

New article at the Vivtek site on accessing KeePass from Perl through the KeePassRest plugin. I ran through progressively more elegant prototypes before coming up with a nice wrapper, and released the whole shebang as a CPAN module, WWW::KeePassRest, which uses a new JSON API wrapper that is ... minimalistic in its design.
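To give a feel for what "minimalistic" can mean here - and to be clear, this is a sketch of the general shape, not WWW::KeePassRest's actual code, and the URL scheme is invented - the whole idea of such a wrapper is: build a URL from the call's parts, fetch it, decode the JSON.

```perl
use strict;
use warnings;

package TinyJsonApi;
use HTTP::Tiny;
use JSON::PP qw(decode_json);

# Hypothetical minimalistic JSON API wrapper (illustration only).

sub new {
    my ($class, $base) = @_;
    return bless { base => $base, http => HTTP::Tiny->new }, $class;
}

# Build the request URL for a call like ('get', 'entry', 'gmail').
sub url_for {
    my ($self, @parts) = @_;
    return join '/', $self->{base}, @parts;
}

sub call {
    my ($self, @parts) = @_;
    my $res = $self->{http}->get($self->url_for(@parts));
    die "API error: $res->{status}" unless $res->{success};
    return decode_json($res->{content});
}

package main;

my $api = TinyJsonApi->new('https://localhost:12984/keepass');
print $api->url_for('get', 'entry'), "\n";
```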

It'd be nice to be principled about API wrappers on something like this basis, but that's definitely way down the priority list.

Friday, April 18, 2014

Code reading (and by extension, code presentation)

Here's an article by Peter Seibel I missed in January: Code is not Literature. Instead of reading code the way a literature seminar would, we should present code more the way naturalists do: "Look at the antenna on this monster! They look incredibly ungainly but the male of the species can use these to kill small frogs in whose carcass the females lay their eggs."

This really resonates with me, as it's more or less what I've got in mind with exegesis: a list of articles focusing on sections of the code, highlighting interesting techniques and extracting the knowledge embedded in it (and as the technology matures, also extracting some of that knowledge in a reusable form of some kind).

James Hague then weighs in today with "You don't read code, you explore it," saying essentially the same thing, and adding that only by interacting with the code does he feel as though he achieves true understanding (and mentioning Ken Iverson's interactive J presentations, which sound pretty interesting as well).

So there you go. What people are thinking about writing about code.

As practice, I've written two articles on vivtek.com in the past week and am well into a third: one on TRADOS 2007 and its language codes, so far presenting only a prototype script and a list of the codes in a convenient format, plus a little about discoveries I made along the way; and one on Windows UAC and how to use it from Perl, backed up by publishing the module Win32::RunAsAdmin to CPAN.

If I can keep up something like this pace, I'll have fifty articles in a year. That's a lot of writing - and honestly, I have a lot of things to write about. I just wish there were more examples of writing about code for me to emulate. I'm still looking for source material.


GlobalSight is a translation management system that was closed-source until 2008 (I believe). After its acquisition it was open-sourced by replacing a few dependencies with open-source equivalents, which is pretty excellent.

At any rate, this is an open-source target I'd like to put a little effort into, given my actual income structure.

Thursday, April 17, 2014

ScraperWiki closed?

Huh. The open ScraperWiki forum structure seems to have been closed up. That's a shame. I wonder where people interested in scraping congregate now.  (Well, now it's Big Data and monetized, I guess. Maybe there is no such general-interest forum now that it's getting ramified like that.)

Wednesday, April 16, 2014


Who the gods would destroy, they first give real-time analytics. (Ha.) Because not waiting for a reliable sample is bad, bad statistics.

That said, I do want real-time reporting on incoming links and searches, and Google Analytics is abysmal on that front, as I've mentioned in the past. Now that I've moved the static content at Vivtek.com over onto Github ... well. I did that a year and a half ago, but now that I'm writing again and care about incoming interest, and given that I don't have my raw traffic logs any more because that's not something Github does, I need something better than Google stats.

The answer is a system I've noted in passing before: Piwik. It not only includes the JS bug to phone home, it also provides full reporting in a dashboard you configure on your own host. As soon as I get two minutes, I'm going to go ahead and convert Vivtek.com to Piwik, and then I can actually know what people want to read about.

Monday, April 14, 2014

Code read through Plack

I'm studying ways to write about code, and here is a short article series about Plack.


An open-source agent framework written in Ruby. Looks sweet.


Oh, here's a cool little thing Mark Dootson did to manipulate executable files on Windows: Win32::Exe.

Sunday, April 13, 2014

Article: TRADOS 2007 and its language codes

I wrote a technical article for the Vivtek site today for the first time since 2009. I had to rewrite the publication system for the whole site to make that work, too. Very instructive!

Anyway, it's the saga of building a useful tool for my technical translation business. It's just a prototype; eventually I'll wrap it all up into a nice module and write another article on that.

Saturday, April 12, 2014

Thursday, April 10, 2014


Wow. Log::Log4perl implements the perfect in-code logging system for all your Perl coding needs. It is a thing of sheer beauty.

Saturday, April 5, 2014

Programming the Lego RCX and NXT

The RCX and NXT are little embeddable processors for robot control. There's a lot of RCX/NXT hacking information out there: a great RCX page here, and two languages, NBC and NQC.

Autocommitting under git

Since I use Github to serve my site, a git autocommit has to be part of my publishing process.  Here are ways to do that, at StackExchange.
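In my case the autocommit step belongs in the publishing script itself. A sketch (the push target and the message format are just my choices) that builds the command list first, so it can be inspected before anything actually runs:

```perl
use strict;
use warnings;

# Sketch of an autocommit step for a publishing script. Returns the
# git commands to run, as a list of arrayrefs, so they can be
# printed or tested without touching the repository.

sub autocommit_commands {
    my ($message) = @_;
    $message ||= 'autocommit: ' . scalar localtime;
    return (
        [ 'git', 'add', '-A' ],
        [ 'git', 'commit', '-m', $message ],
        [ 'git', 'push', 'origin', 'master' ],
    );
}

for my $cmd (autocommit_commands('publish site')) {
    print "would run: @$cmd\n";
    # system(@$cmd) == 0 or die "failed: @$cmd";  # uncomment to actually run
}
```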

Directory Monitor

Here's a useful little tool for Windows automation: Directory Monitor. The same guy has a good command-line mailer, too (which can handle attachments, a real problem with command-line mailing under Windows).

Friday, April 4, 2014

Exegesis as normal publishing tool

I've been kicking around the notion of a "code exegesis" for a little while: the attempt to take some software project (in the simplest case, a single file) and "back-comment" it - that is, to explain the author's intent and strategies in the development of the code as well as possible, and to focus on different aspects of it in a series of separate articles (or chapters, if the whole work is considered a book).

This is exegesis as classically understood - detailed commentary on the ideas and history behind a given work, often scripture but also e.g. Homer. I call this "interpretive exegesis" to distinguish it from literate programming, which is essentially the same thing except that it independently *generates* the code; that variant I call "generative exegesis".

With me so far?

All the publishing I want to do at this point is code-based. So far I had considered doing a Markdown-enabled Perl weaver that is essentially Jekyll in Perl, so I called it Heckle. It was entirely vapor, fortunately - because I'm renaming it "Exegete" instead. I'm going to use exegesis as the basis for all my publishing, because I'm going to be quoting from things all the time anyway. The same document organization tools could be used for anything, not just exegesis, but honestly, it's still a great name.

There are a couple more ideas here.

First is the realization that the same explanatory exegetical structure would be doubly appropriate for binaries, for disassembly and reverse-engineering. Here, instead of a dynamic picture like a conventional disassembly tool (which can be seen as a kind of explorer/browser), we'd explicitly be supporting the writing of articles about a given binary structure, but overall the same principles as IDA or Radare would apply: the identification of small extents that express a given set of actions and ideas.

And then there's the notion of a "derivative work" - a kind of hybrid of interpretation and generation that transforms the original into a new work with changes. This is not going to be a very normal mode for most purposes, because it's not the same as normal maintenance, which is typically done in a more evolutionary fashion. It's definitely intended for those punctuational cases like porting, or reimplementation of archeological finds from the '70s or something. A good term for this would be "transformational exegesis".

And of course it would be perfect for patching binaries or similar reverse-engineering tasks.

So that's kind of where my thinking is at. Since all this involves the writing of text, probably extensive text, that includes references to and quotations of code objects, it's pretty much ideal for the kind of tech writing I want to do anyway.

Wednesday, April 2, 2014

Attempto controlled English

A controlled/minimal grammar for pseudo-English that can be used for expressing specifications and so forth. Neat project, and parsable without leaving the Slow Zone.


A little article about Keybase that I'm too tired to understand right now. I'll get back to it.

Doge grammar

Cool - a linguist writes about Doge. Also, Dogescript. So grammar. Much script. Wow.

Tuesday, April 1, 2014

Bootstrapping a compiler from ... the editor

This is a fun little thing: an article from 2001 about bootstrapping a compiler for a simple language, starting from just a text editor to enter hex files.

Source open news

Remember Source, the open news tool consortium/group/whatever? They've got a code directory. Good target.