Tuesday, March 6, 2012
A Survey of Web API client code on CPAN
Why a survey? And how do you start?
For the past few years, I've organized most of my thinking on Blogger – I first got into it while keeping various friends posted on my efforts with house renovation, and it just kind of stuck. Now I tend to start a new blog for every project I undertake. At some point (actually, on December 17, 2011) I had the bright idea that I should be able to do my task management right in Blogger as well, perhaps by the simple expedient of typing a title like "Task: do XXX" right into a blog post.
Earlier that day, I had realized that Blogger has an API, and suddenly, it was obvious how to proceed with this plan. I needed to write a Web API client to build my task indexer.
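Stripped of the API plumbing, the indexing step itself is nearly trivial. A sketch of just that step — in real use the titles would come from the Blogger API's Atom feed, and the function name and title convention handling here are mine:

```perl
use strict;
use warnings;

# Pick the tasks out of a list of post titles. In real use the titles
# would come from the Blogger API's feed; here they're just strings.
sub tasks_from_titles {
    return map { /^Task:\s*(.+)/ ? $1 : () } @_;
}
```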
But like nearly everything I do, I was beset by the sudden fear that I might do it wrong. Maybe I'd be making assumptions I'd regret. Maybe other people were doing it better. (Note to self: this is why you never get anything done.)
I've got very little time to work on side projects – two teenagers, a full-time freelance translation business, and the aforementioned house renovation project make sure of that – so essentially everything technical goes on the back burner, and this project stayed there too while I chewed on my fear. Occasionally in an off-moment I'd hit CPAN and look for modules that implemented other API clients, and I'd wonder what sorts of functionality might be nice in a more general Web API client support module. Finally, I just started scanning down the list of modules a search returned for "RESTful API", with the vague idea of doing a more or less comprehensive survey. Then I saw the WebService namespace and realized it contains over thirteen hundred modules. Good God. Not something I could actually survey in any meaningful way.
Clearly I needed to search CPAN in a more specifically useful manner. And just as clearly, I needed to do that locally. Which led me to CPAN::Mini. Randal Schwartz wrote the original minicpan in 2002, when a colleague asked him for a CD with CPAN burned onto it and he realized that "the CPAN" (when did we drop the "the"? Or is it just me?) was far too large – but a "mini-CPAN" with just the latest version of each module would be about 200 MB and fit easily on a CD.
As of this writing, of course, even a mini-CPAN won't fit on a CD, being 1.84 GB in over 30,000 files. But I downloaded it anyway. I have a CPAN.
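Setting one up is almost no work: the minicpan script that CPAN::Mini installs reads a two-line rc file. The local path and mirror URL below are my choices, not requirements:

```
# ~/.minicpanrc
local:  ~/minicpan
remote: http://www.cpan.org/
```

After that, running minicpan (and re-running it periodically, perhaps from cron) builds and refreshes the mirror.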
What I'm going to do first is just to find all the dependencies on LWP, WWW::Curl, Net::Curl, HTTP::Client, HTTP::Client::Parallel, HTTP::Tiny, and HTTP::Lite. If I run across any other basic HTTP clients, I'll include them in the seed list as well.
No, wait, I guess what I'm going to do first is to try to come up with a more or less complete list of HTTP clients on CPAN, while whistling past the infinite-regress graveyard. (Note: this is a TODO in the article.)
Anyway, the modules we find that way will break down into three categories: (1) modules that implement an API client, (2) support modules that provide an API client framework, and (3) modules that merely use HTTP to retrieve something for other purposes, which we'll ignore. Then I'll repeat the step for the modules found in (2) to find indirect dependencies. Obviously, the tool I want is something that can take an input module name and return a list of all modules that depend on it, so I'll build that in the next section.
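Here's roughly the shape I imagine that tool taking — a sketch, not the finished scanner. It scrapes the requires and build_requires blocks out of each distribution's META.yml with a regex rather than a real YAML parser, assumes the usual minicpan layout under authors/id, and uses only core modules (File::Find, Archive::Tar); the seed list, function names, and data layout are all mine:

```perl
use strict;
use warnings;
use File::Find;
use Archive::Tar;

# Pull required-module names out of a distribution's META.yml.
# A crude scrape, not a real YAML parser: grab the indented block under
# "requires:" or "build_requires:" and read the "Module::Name: version"
# lines inside it. Good enough for a first survey pass.
sub requires_from_meta {
    my ($yaml) = @_;
    my @deps;
    while ($yaml =~ /^(?:build_)?requires:\s*\n((?:[ \t]+\S.*\n?)+)/mg) {
        my $block = $1;
        push @deps, $block =~ /^[ \t]+([\w:']+):/mg;
    }
    return @deps;
}

# The HTTP-client seed list from above. Distributions declare LWP
# dependencies under various names, so both LWP and LWP::UserAgent
# are included.
my %seed = map { $_ => 1 } qw(
    LWP LWP::UserAgent WWW::Curl Net::Curl HTTP::Client
    HTTP::Client::Parallel HTTP::Tiny HTTP::Lite
);

# Walk a minicpan mirror and record which distributions depend on each
# seed module. Reads every tarball, so it's slow but simple.
sub scan_minicpan {
    my ($root) = @_;
    my %dependents;    # seed module => [ dists that require it ]
    find(sub {
        return unless /\.tar\.gz$/;
        my $tar = Archive::Tar->new($_) or return;  # find() has chdir'ed here
        my ($meta) = grep { m{(?:^|/)META\.ya?ml$} } $tar->list_files;
        return unless $meta;
        for my $dep (requires_from_meta($tar->get_content($meta))) {
            push @{ $dependents{$dep} }, $File::Find::name if $seed{$dep};
        }
    }, "$root/authors/id");
    return \%dependents;
}
```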
It might be instructive to get a list of all the URLs used in these APIs. But my ultimate goal here is to see how people are doing things, and see how many of these implementations might be useful in coming up with best Perl practices for writing a Web API client.
Monday, March 5, 2012
- Perl walker to scan a list of Web comic sites for each user. (Obviously the sites are shared.) This spider checks for updates on, say, an hourly basis. If the site has a feed, I'll use that. If the site pushes an email notification, I'll use that. One way or another, though, I'll figure out what changes and when.
- For each list of toons, then, we can present a list of updates since the user last checked in and read. That list will show ads, but only that list will show ads. My ads will never appear on the screen at the same time as any comic. That's pretty thin monetization, but it will have to do.
- The reader consists of a very thin frame at the top with forward and back buttons and a title. No ads on the frame. No ads on the frame. No ads on the frame. The bottom frame is then the entire target URL, with the cartoonist's own ads.
- A comic counts as read when you've gone to the next page (in case you get called away, lose your connection, whatever). So we have a bookmark for each and every comic we read.
- With multiple users, we'll be able to start forming a similarity metric for recommendations.
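The "updates since the user last checked in" list reduces to a simple comparison once the spider and the per-comic bookmarks both store timestamps. A sketch of just that comparison — the data layout and names are mine, not a design decision:

```perl
use strict;
use warnings;

# Given the spider's record of when each site last updated and one
# user's per-comic bookmarks (both hashrefs of site => epoch seconds,
# bookmark absent or 0 if never read), return the comics with
# something unread -- the list the reader page would show.
sub unread_comics {
    my ($last_update, $bookmarks) = @_;
    return grep {
        ($bookmarks->{$_} || 0) < $last_update->{$_}
    } sort keys %$last_update;
}
```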
- http://search.cpan.org/~jwied/BZ-Client-1.04/lib/BZ/Client.pm (very complex)
- Individual specific APIs and
- API support modules.
- Find (what I believe to be) a complete list of all web API modules on CPAN, with authors and place in the naming hierarchy. List any support modules they use.
- Find any promising support modules that aren't already in use by existing API clients on CPAN.
- Provide an initial statistical analysis of some sort.
- Compare code and techniques between all these modules.
- Derive a descriptive language for the client side of an API and a mapping between this language and the modules in existence. Or something. Mostly I just want to do the comparison.
I have this goal to record each and every expense in the household and categorize them for budgeting. Unfortunately, for the past four years I've failed to meet that goal. The problem is that it's so difficult to keep up with entering the paper receipts - it involves a great deal of context switching between paper and screen to find the date, amount, and destination of each expense.
- Delete mis-scans (if the receipt doesn't quite engage, sometimes there's a little blurb that isn't actually anything). This I can do manually after each scanning session.
- Shrink the files - I don't actually need 300 dpi quality for these, and at about 400 kB a pop, my 80's self is offended by the size of the data.
- Merge any two-scan receipts - the scanner gives up after about eight inches, deciding that's not a plausible length and assuming your photo has jammed. For long receipts, like grocery shopping at Meijer's, I'll scan the receipt in two sections. Using physical scissors. Then I want to group them as a single receipt.
- Ideally, straighten the scan up. The receipts are too narrow for the scanner to detect them if they're against the guide rail of the bed, so I scan down the middle of the bed - the result is that they're all slightly slanted. Some move a little during the scan, so they're also bent. Not much to do about that.
- Ideally, OCR them.
- Using a combination of OCR and a viewer application (this would be a simple GUI with a viewer for the graphic and a record entry for the data), verify any OCR'd data or enter the data if OCR can't get it.
- Index everything into a SQLite database, along with non-receipt expenses such as checks or online payments. Categorize and report using something analogous to the Access database I built in the 90's.
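The categorizing in that last step could start as nothing fancier than a rule table over the merchant names OCR turns up. A sketch — the schema, rules, and category names here are all invented for illustration, not a design I'm committing to:

```perl
use strict;
use warnings;

# Table for the SQLite index: one row per expense, receipt or not.
# Column choices are illustrative.
my $schema = <<'SQL';
CREATE TABLE expenses (
    id       INTEGER PRIMARY KEY,
    date     TEXT,    -- ISO 8601
    amount   REAL,
    merchant TEXT,
    category TEXT,
    image    TEXT     -- path to the receipt scan, if any
);
SQL

# First-pass categorizer: a rule table mapping merchant strings to
# budget categories. Rules and categories are made up.
my @rules = (
    [ qr/meijer|kroger/i   => 'groceries'  ],
    [ qr/shell|marathon/i  => 'fuel'       ],
    [ qr/menards|lowe'?s/i => 'renovation' ],
);

sub categorize {
    my ($merchant) = @_;
    for my $rule (@rules) {
        return $rule->[1] if $merchant =~ $rule->[0];
    }
    return 'uncategorized';
}
```

Anything the rules miss falls through to 'uncategorized', which is exactly the set the viewer application would ask about.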