keep the noise down

At OOPSLA 2008 I participated in a workshop on Photographing Conferences organized by Richard Gabriel and Kevin Sullivan (photo on the right).

The announcement described the workshop as follows:

Photographing a conference is not a matter of point and shoot. And it's not just about pix to share with friends and family - the time is ripe for both serious photojournalism to capture our community's leaders, its activities, and its human face and for the use of artistry to tell stories and get people thinking. In this workshop you will learn the basic technical and aesthetic techniques for good conference photography, and you will practice these techniques during OOPSLA. Work will be critiqued using a writers' workshop process to enable you to continue learning and improving after the workshop. Participants will be expected to attend a full-day of lectures and interactive learning activities as well as photograph Monday, Tuesday, and Wednesday with short, early morning writers' workshops on Tuesday, Wednesday, and Thursday.

And so we did. On the first day (Sunday) we sat in a dark room reviewing Photoshop techniques. During the rest of the week we went around making photos of the event. Every morning we had photo review sessions starting at 6:30 AM! I was thoroughly exhausted at the end of the week.

During the week I took about 800 photos. Today I finally got around to finishing the postprocessing. Only 39 made it to my flickr account, which you can see in my OOPSLA 2008 set.

steven kelly

A funny visual detail at OOPSLA 2008 was the white badge dispenser cords that all participants wore. In many of my shots they form clear white triangles with quite an impact on the composition. The white of the cords also turned out to be very useful for adjusting the white balance during postprocessing.

Despite my intention to practice making photos of scenes with multiple people, my best shots, and therefore most of what I've published on flickr, are close-up portraits. So my photos probably do not give an impression of the event as a whole, but they should be useful in putting faces to some of the names in the community.

For me the workshop was a success; that is, it has improved my photography. I probably learned the most from preparing for the workshop. The workshop program required participants to read the book Understanding Exposure by Bryan F. Peterson. As a long-time photographer I had the illusion that I knew everything there was to know, but I decided to get the book anyway and even read it. It turned out I had a lot to learn about making good exposures, in particular about making more conscious decisions about aperture and shutter speed depending on the story you're trying to tell with a shot. Thus, shunning the fully automatic modes of my camera, I've been shooting in manual or semi-automatic (aperture priority) mode since the summer of 2008, with practice runs at ICMT 2008 in Zurich and MODELS/SLE 2008 in Toulouse.

Shooting in manual mode is no panacea. It can produce better results, since in automatic modes slightly shifting the camera may cause it to make a different decision about the lighting. However, manual mode also requires continuous awareness and adjustment, which has made me miss quite a few shots as well.

william cook

The other preparation for the workshop consisted of getting (partially as a birthday present) a telephoto lens for better candid photography. Since the summer of 2007 I had been shooting mainly with a 50mm prime lens. This lens provided much better optics than the 17-85mm zoom lens I originally got with my EOS 30D. With its f/1.8 maximum aperture (later I got the f/1.4) it can deal with poor light conditions and produces a nice background blur in portraits.

However, a 50mm lens is not ideal for conferences. Speakers and audience don't take it too well if you are in their face. So I needed a tele-zoom lens, and one that would do well under the typically poor light conditions of conferences. After a lot of agonizing about the price, and with a little help from my friends, I finally couldn't resist the Canon EF 70-200mm 1:2.8 L IS USM, a monstrous lens of 1.5 kg, but with a maximum aperture of f/2.8 over the whole range and image stabilization, which should be good for about three stops. I got the lens just a few days before I left for Nashville, and it took some getting used to. But I can't do without it anymore.

danny groenewegen

Most importantly, the workshop legitimizes taking photos at conferences, which I had already been doing a bit before, but never quite seriously. With the OOPSLA experience I've overcome most of my hesitations. So I'll definitely take my camera along to future events.

jan heering

These days I am writing a book on ‘domain-specific language engineering’ for use in a master’s course at Delft University. The book is about the design and implementation of domain-specific languages, i.e. the definition of their syntax, static semantics, and code generators. But it also contains a dose of linguistic reflection, by studying the phenomenon of defining languages, and of defining languages for defining languages (which is called meta-modelling these days).

Writing chapters on syntax definition and the modeling of languages takes me back to my days as a PhD student at the University of Amsterdam. Our quarters were in the university building at the Watergraafsmeer, which was connected to the CWI building via a bridge. Since the ASF+SDF group of Paul Klint was divided over the two locations, meetings required a walk to the other end of the building. So I would regularly wander to the CWI part of the building to chat.

[While the third application we learned to use in our Unix course in 1989 was talk, with which one could synchronously talk with someone else on the internet (the first application was probably csh and the second email), face-to-face meetings were still the primary mode of communication, as opposed to using IRC to talk to one’s officemate.]

Often I would look into Jan Heering’s office to say hi, and more often than not would end up spending the rest of the afternoon discussing research and meta-research.

One of the recurring topics in these conversations was the importance of examples. Jan was fascinated by the notion of ‘programming by example’, i.e. deriving a program from a bunch of examples of its expected behaviour, instead of a rigorous and complete definition for all cases. But the other use of examples was for validation, a word I didn’t learn until long after writing my thesis.

The culture of the day (and probably the location?) was heavily influenced by mathematics and theoretical computer science. The game was the definition, preferably algebraic, of the artifacts of interest, and then, possibly, proving interesting properties. The application-minded would actually implement stuff. As a language engineer I was mostly interested in making languages with cool features. The motivation for these features was often highly abstract. The main test example driving much of the work on the ASF+SDF MetaEnvironment was creating an interactive environment for the Pico language (While with variable declarations). The idea was that once an environment for Pico was realized, creating one for a more realistic language would be a matter of scaling up the Pico definition (mere engineering). Actually making a language (implementation) and using that to write programs would be a real test. To be fair, specifications of larger languages were undertaken, such as ones of (mini-)ML [8] and Pascal [6]. As a student I had developed a specification of the syntax and static semantics of the object-oriented programming language Eiffel [16], but that was so big it was not usable on the Sun workstations we had at that time.
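To give an idea of the scale of that driving example: Pico is tiny, and a syntax definition for it in SDF fits on a single screen. The sketch below is reconstructed from memory, not the actual Meta-Environment definition, so sort names and details may well differ:

  %% A sketch of a Pico syntax definition in SDF (reconstructed
  %% from memory). Pico: While with variable declarations.
  module Pico
  exports
    sorts Program Decls Decl Type Stat Exp Id Nat
    context-free syntax
      "begin" Decls {Stat ";"}* "end"     -> Program
      "declare" {Decl ","}* ";"           -> Decls
      Id ":" Type                         -> Decl
      "natural"                           -> Type
      "string"                            -> Type
      Id ":=" Exp                         -> Stat
      "if" Exp "then" {Stat ";"}* "else" {Stat ";"}* "fi" -> Stat
      "while" Exp "do" {Stat ";"}* "od"   -> Stat
      Id                                  -> Exp
      Nat                                 -> Exp
      Exp "+" Exp                         -> Exp {left}
    lexical syntax
      [a-z][a-z0-9]*                      -> Id
      [0-9]+                              -> Nat
      [\ \t\n]                            -> LAYOUT

Scaling up from something of this size to a real language is exactly where the 'mere engineering' assumption got tested.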

Time and again, Jan Heering would stress the importance of real examples to show the relevance of a technique and/or to discover the requirements for a design. While I thought it was a cool idea, I didn’t have examples. At least not to sell the design and implementation of SDF2, the syntax definition formalism that turned out to be the main contribution of my PhD thesis [20].


language engineers

If you're doing research into domain-specific languages, model-driven engineering, or program generation, your agenda for the coming months is set. In early October the three main conferences on these topics are co-located in Denver. The deadlines are somewhat spread out, so you should be able to submit a paper to each conference:

May 10: Model Driven Engineering Languages and Systems (MODELS'09)

May 18: Generative Programming and Component Engineering (GPCE'09)

July 10: Software Language Engineering (SLE 2009)

I'm looking forward to your submission, and to meeting you in Denver.

2009

Research challenge for 2009: trust.

As mentioned before, we've been doing some real parsing research to better support parsers for extensible languages. Parse table composition provides separate compilation for syntax components, such that syntax extensions can be provided as plugins to a compiler for a base language. Due to various distractions last summer I seem to have forgotten to blog about the paper that Martin Bravenboer and I got accepted at the First International Conference on Software Language Engineering (which Martin was looking forward to).

M. Bravenboer and E. Visser. Parse Table Composition: Separate Compilation and Binary Extensibility of Grammars. In D. Gasevic and E. van Wyk, editors, First International Conference on Software Language Engineering (SLE 2008), Lecture Notes in Computer Science. Springer, Heidelberg, 2009. To appear. [pdf]

submitted

Abstract: Module systems, separate compilation, deployment of binary components, and dynamic linking have enjoyed wide acceptance in programming languages and systems. In contrast, the syntax of languages is usually defined in a non-modular way, cannot be compiled separately, cannot easily be combined with the syntax of other languages, and cannot be deployed as a component for later composition. Grammar formalisms that do support modules use whole program compilation.

Current extensible compilers focus on source-level extensibility, which requires users to compile the compiler with a specific configuration of extensions. A compound parser needs to be generated for every combination of extensions. The generation of parse tables is expensive, which is a particular problem when the composition configuration is not fixed to enable users to choose language extensions.

In this paper we introduce an algorithm for parse table composition to support separate compilation of grammars to parse table components. Parse table components can be composed (linked) efficiently at runtime, i.e. just before parsing. While the worst-case time complexity of parse table composition is exponential (like the complexity of parse table generation itself), for realistic language combination scenarios involving grammars for real languages, our parse table composition algorithm is an order of magnitude faster than computation of the parse table for the combined grammars.

The experimental parser generator is available online.
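To give a flavour of what grammar-level composition looks like from the user's perspective, here is a hypothetical SDF extension module for the Pico grammar sketched earlier (the module name and production are mine, purely for illustration). With parse table composition, base grammar and extension are each compiled once to a parse table component, and a compiler links the components just before parsing, instead of generating a fresh table for every combination of extensions:

  %% Hypothetical syntax extension, deployable as a plugin:
  %% adds a repeat statement to the Pico base grammar.
  %% Base and extension are compiled separately to parse table
  %% components that are composed (linked) just before parsing.
  module Pico-Repeat
  imports Pico
  exports
    context-free syntax
      "repeat" {Stat ";"}* "until" Exp -> Stat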