Documenting JavaScript – jGrouseDoc

Over the years, I’ve spent a lot of time searching for (and even creating) tools that do automatic documentation of JavaScript. As anyone familiar with this issue can attest, it’s not an easy problem to solve, and the tools available seem to yield mediocre results at best.

There is, however, a new entry into the field that I’ve been using for a month or so now, and that I find myself pleasantly surprised by. The tool is jGrouseDoc (jGD), by Denis Riabtchik. Unlike its predecessors, jGD starts with the premise that attempting to parse JavaScript source code is a futile endeavor. The language is not designed to express the high-level concepts and patterns that engineers strive to emulate (and document) in their design.

Instead, the tool looks exclusively at the information in the comments you author, and provides a rich set of tags for annotating your comments with information about the logical structure. This requires a bit more typing, but it frees both the author and the tool from the shackles of having to figure out how to bolt documentation into the source in awkward ways.

If you’re familiar with JavaDoc, you’ll find the syntax easy to pick up. Here’s what it looks like:

/**
 * Similar to Prototype's Template object, except the template is a DOM
 * element instead of a string.
 *
 * @class DomTemplate
 */

/** Hash of template id to DomTemplate objects
 * @variable {static Object} _byId
 */

/**
 * Get a dom template by name.  The returned template is cached, so
 * subsequent calls (for the same template) are efficient.
 *
 * @function {static DomTemplate} getTemplate
 * @param {String} id element id
 */

/**
 * Get a DOM element to use for doing DOM manipulations
 *
 * @function {private Element} _getHelper
 */

/**
 * Use DomTemplate.getTemplate()
 *
 * @constructor {private} initialize
 */

/**
 * Similar to evaluate(), except returns a DOM element. Note that for bulk
 * operations, it's more efficient to use evaluate() to create the HTML
 * string and then apply that using innerHTML.
 *
 * @function {Element} evaluateElement
 * @param {Object} data key-value pairs for token replacement
 */
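
To give a sense of how these comments actually sit in the source, here’s a sketch of what the getTemplate implementation might look like with its jGD comment attached. The function body (the _byId cache lookup) is my own guess at the implementation, not the actual DomTemplate code:

/**
 * Get a dom template by name.  The returned template is cached, so
 * subsequent calls (for the same template) are efficient.
 *
 * @function {static DomTemplate} getTemplate
 * @param {String} id element id
 */
DomTemplate.getTemplate = function(id) {
  // Lazily create and cache a template for this element id
  if (!DomTemplate._byId[id]) {
    DomTemplate._byId[id] = new DomTemplate(id);
  }
  return DomTemplate._byId[id];
};

Note that jGD doesn’t care where this code lives or how it’s structured; it reads only the comment block, so the same annotations work whether the function is declared this way, inside a Prototype Class.create() call, or anywhere else.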

MoinMoin Wiki Syntax Highlighting for VI (VIM)

Regular readers of this blog can probably just stop right here. I’m using this entry as a way to get my new MoinMoin wiki syntax highlighting tool into Google. What follows is only relevant to the cross-section of people who use both Trac and VI.

For those of you who do use Trac, you’re probably aware of how difficult it is to edit large wiki pages. The reasons for this are manifold, but they start with the fact that you are constantly scrolling back and forth between the read-only, HTML-rendered version of the document and the editable, wiki-text version.

My first-pass solution to this was to simply copy/paste the wiki text into my editor of choice (vi, a.k.a. vim, a.k.a. gvim). But this doesn’t address the problem, since the lack of formatting and syntax highlighting reduces larger documents to a morass of mostly homogeneous text. Finding the sections you want to work on is problematic.

To solve this, I put together a vi syntax file for highlighting MoinMoin formatted wiki text. You can get it here. For instructions on how to install it, do “:help new-filetype” and follow the steps in section C (“If your file type can be detected by the file name…”)
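
For example (this assumes the syntax file is named moin.vim and that you want highlighting to kick in for files with a .moin extension; adjust to taste), your ~/.vim/filetype.vim would contain something along these lines:

" Detect MoinMoin wiki text by file extension and hand it to syntax/moin.vim
augroup filetypedetect
  au BufRead,BufNewFile *.moin setfiletype moin
augroup END

Then drop the syntax file itself into ~/.vim/syntax/ and you’re set.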

UI App Rants


Disclaimer: This is your typical rant – it’s purely selfish in nature, there is no underlying purpose, no hidden social or political commentary, and no wisdom to be gained. It’s just me, griping about minutia that really doesn’t matter in the grand scheme of things.

After a long hiatus, I’m back to doing UI mockups. Which means I need a good drawing application that’s well suited to this kind of work. The requirements are not too outrageous:

  • Decent vector art tools
  • Decent bitmap art tools
  • Good coordinate system tools (gridlines, guides, “snap to” capabilities)
  • Make repetitive tasks easy (copy/paste graphic styles)
  • Good support for rendering text

So why is it that the apps that are available, and widely acclaimed, are so ill-suited to this kind of work? One can only conclude that every UI designer out there is hideously overpaid, because 90% of what they do is not UI design. No, they spend the vast majority of their day tearing their hair out dealing with these embarrassing apps:

Adobe Photoshop: Why do people use this thing, why! Easily the most overrated, overpriced piece of Shinola I’ve ever seen. Adobe must be making a ton of money off NASA, because Photoshop is about as expensive as your average space mission, and you have to be a frickin’ rocket scientist to use it. Want to do an N-dimensional Fast Fourier Transform? No problem, there’s a nice big button for it. But want to draw a dashed line? “Hmm… there must be a control for that somewhere … no, that’s not it… hmm, maybe this? Nope, not that either … ah, here’s a tutorial … whups, that only does horizontal and vertical lines…” And the entire application is like that!

Adobe Illustrator: The perfect tool if you’re twisted enough to understand how Photoshop works… and want everything to look like a Nagel print.

Macromedia Fireworks: Oh so tantalizingly close to being an awesome tool. Decent bitmap and vector art tools, and a relatively straightforward UI. Too bad it grinds to a halt if you ask it to draw more than a paragraph of text anywhere. :-(

Okay, I’m done venting now. Guess I’ll go back to good ol’ DHTML. All you need is a decent browser (Firefox) and a good text-editor (VI)… oh, and maybe Photoshop to help you tweak the artwork for buttons. *sigh*

Broofa’s Readability Analyzer

Okay, this one goes out to all you hard-core geeks out there! I’d like to introduce Broofa’s Readability Analyzer.

For the Impatient

What is it?
A tool for determining web page readability.
What’s it do?
At the bottom of every web page, you’ll see how “readable” the page is, based on a combination of computed scores.
How do I use it?
Make sure you are using the Firefox browser w/ the Greasemonkey extension. Then install the Readability Analyzer. (Either right-click that link, or navigate to the script and click the “Install” button that Greasemonkey should show you.)

Background
There are several methods for rating the “Readability” [wikipedia.org] of a document. All of them work in more or less the same way: count the number of syllables, words, and sentences in a representative sample of text, plug them into a formula, and out comes a result that is (usually) the grade level required for a reader to understand the text.
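
As a concrete example, the Flesch-Kincaid Grade Level (one of the better-known formulas, and representative of how the others work) boils down to a one-liner over those three counts. This isn’t necessarily the exact code the analyzer uses, just the idea:

// Flesch-Kincaid Grade Level: roughly the U.S. school grade needed to
// understand the text, computed from simple counts over a text sample.
function fleschKincaidGrade(words, sentences, syllables) {
  return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59;
}

// e.g. a sample with 350 words, 20 sentences, and 500 syllables
fleschKincaidGrade(350, 20, 500); // ~8.1, i.e. about 8th grade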

Readability calculations are not definitive by any means. But they are useful, especially for writers. They provide a way to gauge the complexity of one’s writing and adjust it accordingly. For example, web documents should usually be targeted at the 6th-9th grade level, by using slightly shorter sentences and a less sophisticated vocabulary. Technical documentation should of course be slightly higher-brow (longer sentences, bigger words).

Readability Systems
One thing that is abundantly clear from the Readability Analyzer is that the various systems referenced under the Wikipedia entry above are not consistent. Different systems can differ by as much as 4 or 5 years in the reading level they report for the same text. Thus, the overall readability grade the analyzer reports is actually an average of all five (count ’em, five!) algorithms.

Also, the “easy/difficult” rating it reports assumes your “average” reader has the skills of an 8th or 9th grader.

Understanding Web Documents
One of the challenges in rating web page readability is determining what qualifies as “readable” content. Web pages contain a lot of extraneous information (navigation bars, tables of contents, etc.). In theory, one could just find all the paragraph blocks and pull the text out of those. Unfortunately, page authors often use different elements to hold text.

To work around this, the analyzer looks for common text-containing elements (DIV, DT, DL, LI, PRE, etc …) and uses those. It also tries to be smart about throwing away paragraphs that appear anomalous in length (too short) or content (too many non-word characters or too much markup). The result is that it does a “reasonable” job of pulling the most interesting text out of a page.
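
A rough sketch of that approach is below. The tag list, the 10-word minimum, and the punctuation threshold are illustrative guesses at the kind of heuristics involved, not the analyzer’s actual values (and a real implementation would also need to avoid double-counting text in nested containers):

// Collect candidate text blocks from a page and throw away anything that
// looks more like navigation or markup than prose.
function getReadableText(doc) {
  var tags = ['P', 'DIV', 'DT', 'DL', 'LI', 'PRE'];
  var blocks = [];
  for (var i = 0; i < tags.length; i++) {
    var els = doc.getElementsByTagName(tags[i]);
    for (var j = 0; j < els.length; j++) {
      var text = els[j].textContent || '';
      var words = text.split(/\s+/).length;
      var nonWord = (text.match(/[^\w\s]/g) || []).length;
      // Skip blocks that are too short, or too "markup-y", to be prose
      if (words >= 10 && nonWord / text.length < 0.15) {
        blocks.push(text);
      }
    }
  }
  return blocks.join(' ');
}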

Counting Syllables
Several of the algorithms used rely on the notion of “complex words” – words of 3 or more syllables (reported in the Readability Analyzer as “hardwords”). Others simply use the total syllable count. Either way, this presents a problem for any automated tool, because English is a very bizarre language. There are lots of exceptions to every rule, especially where pronunciation is concerned. To avoid an elaborate and time-consuming syllable-counting implementation, I instead “borrowed” the fairly simple algorithm used by the WordPress Statistics plugin. It merely counts consonant-vowel pairings, with a couple of extra rules for common-sense exceptions. After adding my own fudge factor to the mix, the result is a syllable count that is not exact, but probably “good enough” for most purposes.
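
Here’s roughly what that kind of counting looks like. This is my own approximation of the idea, not the WordPress plugin’s exact code:

// Approximate syllable count: count runs of consecutive vowels, then apply
// a couple of common-sense corrections (e.g. trailing silent 'e').
function countSyllables(word) {
  word = word.toLowerCase().replace(/[^a-z]/g, '');
  if (word.length <= 3) return 1;
  var count = (word.match(/[aeiouy]+/g) || []).length;
  // A trailing 'e' (as in "make") is usually silent, unless preceded by 'l'
  if (/e$/.test(word) && !/le$/.test(word) && count > 1) count--;
  return Math.max(count, 1);
}

countSyllables('readability'); // 5
countSyllables('syllable');    // 3

Words like “created” (three syllables, counted as two) will still fool it, which is where the fudge factor comes in.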

In Conclusion …
Automated calculation of readability is an approximation at best. And the Readability Analyzer definitely reflects that. The rating it provides should be treated as a “guesstimate”. It doesn’t tell you the quality of the writing, and it certainly doesn’t tell you the quality of the mind behind the writing, but it does give you a feel for who the target audience might be.

For this post: “Readability is fairly difficult (~ grade 11)” :-)