The best part was when the woman made a funny sound when she realized all five orders of mango pillows were for our table. That and the free tea hook-up. Thanks Daniel!
Lightning
While the aforementioned ice cream sandwiches were being made, this is what was going on outside:
Click the thumbnails for the full images. I managed to get some nice pictures with a tripod, one second exposures, and some persistence. As the storm passed and the sun set, this happened:
Edit 07/26: Katkam (no relation) pwned me due to vantage point: sunset, fireworks.
I scream. You scream.
What I’ve been doing lately
Mobile photo blogging
I’ve been a little lax in blogging lately, and I realized it was because I always feel like I don’t have enough to say to merit a blog post. Then in a fit of procrastination I signed up for Twitter. It seems as though the short character limit and the ability to quickly post a picture are right in line with the amount of info I feel like sharing on a day-to-day basis. However, because a lot of people who read this blog seem to be allergic to social networking (you know who you are), my tweets aren’t reaching very many people.
So to remedy this situation I found an app that will enable me to mobile blog and quickly add a picture, and I did it all by myself! (Greg is now probably rolling his eyes.) Anyways, here is my first mobile blog. If you’re reading this then that means it worked! I’m attaching a picture just to see whether that works too.
Custom classes in Docbook to HTML conversion
Maybe I should have a tag for “boring technical notes that I’m writing so others can Google them later”.
Anyway… if you’re converting a Docbook document to HTML and want customized classes on elements (so you can hit them with CSS), first create a custom XSL style for the document (and use it with xmlto -m).
Then suppose you have <code language="html"> in the Docbook and want that element to have classes html and xml in the resulting HTML. Add this:
<xsl:template match="code[@language = 'html']" mode="class.value">
html xml
</xsl:template>
The match can be any XSL matching pattern. The contents can also be an <xsl:value-of> if you want to do something more advanced.
Maybe it’s because I’m an XSL newb, but I haven’t seen this explained nicely anywhere else.
CMPT 383, or “Why I Hate Ted”
As many of you know, one of the goals for my study leave has been to prepare to teach CMPT 383, Comparative Programming Languages. The calendar says this course is:
Various concepts and principles underlying the design and use of modern programming languages are considered in the context of procedural, object-oriented, functional and logic programming languages. Topics include data and control structuring constructs, facilities for modularity and data abstraction, polymorphism, syntax, and formal semantics.
I took a similar course in my undergrad, and I think it was really useful in helping me see the broader picture of what programming is.
I have been thinking about the course off-and-on for more than a year. I had been forming a pretty solid picture of what the course I teach would look like and things were going well, despite never having devoted any specific time to it or really writing anything down.
Then I talked to Ted. Ted has taught the course before, and has thought a lot about it. His thoughts on the course differed from mine. In particular, he opined that “logic programming is dead, so why teach it?” (Okay, maybe that’s not a direct quote, but that’s what I heard.) So that leaves functional programming as the only new paradigm worth talking about.
He also convinced me that covering too many languages in the single course puts students into a situation of too many trees, not enough forest. (That is, they get lost in syntax and don’t appreciate the core differences between languages.)
Basically Ted did the most annoying thing in the world: he disagreed with me and he was right.
But there is a lot of stuff I hadn’t considered before that might be worth talking about:
- Type systems: static/dynamic, strong/weak, built-in data types, OO (or not), type inference, etc.
- Execution/compilation environment: native code generation, JIT compilers, virtual machines, language translation (e.g. Alice → Java → execution), etc.
So, what the hell do I do with all of that? Any ideas how to put all of that together into a coherent course that students can actually enjoy?
Wikipedia-based Machine Translation
I have been pondering this for a while and thought I might as well throw it in a blog entry…
Wikipedia is, of course, a massive collection of mostly-correct information. The information there isn’t fundamentally designed to be machine readable (unlike the semantic web stuff), but there are some structures that allow data to be extracted automatically. My favourite example is the {{coord}} template allowing the Wikipedia layer in Google Earth.
The part of Wikipedia pages that recently caught my eye is the “other languages” section on the left of every page. I’d be willing to bet that these interwiki links form the largest translation database that exists anywhere.
Take the entry for “Lithography” as a moderately-obscure example. On the left of that page, we can read off that the German word for lithography is “Lithografie”, the Urdu word is “سنگی طباعت”, and 34 others. Sure, some of the words might literally be “lithograph” or “photolithography”, but that’s not the worst thing ever. All of this can be mechanically discovered by parsing the wikitext.
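To give a sense of how mechanical that discovery is, here is a sketch in Python (hypothetical code, assuming you already have the raw wikitext in hand; a real parser would also need to skip comments and <nowiki> blocks, and handle the full list of language-code variants):

```python
import re

# In raw wikitext, interwiki language links look like [[de:Lithografie]]:
# a short lowercase language code, a colon, then the article title.
INTERWIKI = re.compile(r"\[\[([a-z]{2,3}(?:-[a-z]+)?):([^\]|]+)\]\]")

def interwiki_titles(wikitext):
    """Return a dict mapping language code -> article title."""
    return {lang: title for lang, title in INTERWIKI.findall(wikitext)}

sample = "...article text...\n[[de:Lithografie]]\n[[fr:Lithographie]]\n"
print(interwiki_titles(sample))
# {'de': 'Lithografie', 'fr': 'Lithographie'}
```

Run that over every page and you have the language-code-to-title table for the whole encyclopedia.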
Should it not be possible to do some good machine translation starting with that huge dictionary? Admittedly, I know approximately nothing about machine translation. I know there are still gobs of problems when it comes to grammar and ambiguity, but a good dictionary of word and phrase translations has to count for something. The “disambiguation” pages could probably help with the ambiguity problem too.
I’d guess that even this would produce a readable translation: (1) chunk source text into the longest possible page titles (e.g. look at “spontaneous combustion”, not “spontaneous” and “combustion” separately), (2) apply whatever grammar-translation rules you have lying around, (3) literally translate each chunk with the Wikipedia “other language” article titles, and (4) if there’s no “other language” title, fall back to any other algorithm.
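Steps (1), (3), and (4) above can be sketched as a greedy longest-match pass over the source text (hypothetical Python; the tiny title dictionary stands in for the real interwiki-link database, and the grammar rules of step (2) are left out entirely):

```python
def chunk_translate(words, titles, max_len=4, fallback=None):
    """Greedily match the longest run of words that is a known page
    title and translate it via the title dictionary; unmatched words
    fall through to a fallback translator (step 4), or pass unchanged."""
    fallback = fallback or (lambda w: w)  # placeholder for any other MT algorithm
    out, i = [], 0
    while i < len(words):
        # Try the longest candidate phrase first, shrinking to one word.
        for n in range(min(max_len, len(words) - i), 0, -1):
            phrase = " ".join(words[i:i + n]).lower()
            if phrase in titles:
                out.append(titles[phrase])
                i += n
                break
        else:  # no phrase of any length matched
            out.append(fallback(words[i]))
            i += 1
    return out

# Toy English -> German "interwiki" dictionary:
titles = {"spontaneous combustion": "Selbstentzündung",
          "lithography": "Lithografie"}
print(chunk_translate("spontaneous combustion and lithography".split(), titles))
# ['Selbstentzündung', 'and', 'Lithografie']
```

Note that “spontaneous combustion” is matched as one chunk before either word is tried on its own, which is exactly the longest-possible-page-title behaviour from step (1).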
I can’t believe this is a new idea, but a half-hearted search in Google and CiteSeer turned up nothing. Now it’s off my chest. Anybody who knows anything about machine translation should feel free to tell me why I’m wrong.