Publications by Year

  • PACMPL 4(OOPSLA) 2020 [pdf, doi, bib, researchr]
    There is a large gap between the specification of type systems and the implementation of their type checkers, which impedes reasoning about the soundness of the type checker with respect to the specification. A vision to close this gap is to automatically obtain type checkers from declarative programming language specifications. This moves the burden of proving correctness from a case-by-case basis for concrete languages to a single correctness proof for the specification language. This vision is obstructed by an aspect common to all programming languages: name resolution. Naming and scoping are pervasive and complex aspects of the static semantics of programming languages. Implementations of type checkers for languages with name binding features such as modules, imports, classes, and inheritance interleave collection of binding information (i.e., declarations, scoping structure, and imports) and querying that information. This requires scheduling those two aspects in such a way that query answers are stable—i.e., they are computed only after all relevant binding structure has been collected. Type checkers for concrete languages accomplish stability using language-specific knowledge about the type system. In this paper we give a language-independent characterization of necessary and sufficient conditions to guarantee stability of name and type queries during type checking in terms of critical edges in an incomplete scope graph. We use critical edges to give a formal small-step operational semantics to a declarative specification language for type systems, that achieves soundness by delaying queries that may depend on missing information. This yields type checkers for the specified languages that are sound by construction—i.e., they schedule queries so that the answers are stable, and only accept programs that are name- and type-correct according to the declarative language specification. We implement this approach, and evaluate it against specifications of a small module and record language, as well as subsets of Java and Scala.
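    A minimal Scala sketch of the stability condition above, assuming a worklist of pending name queries and a set of scopes whose binding structure is still being collected; the worklist shape and names are illustrative, and the paper's actual characterization in terms of critical edges is more precise.

      object QueryScheduling {
        final case class Query(scope: Int, name: String)

        // A query is delayed while any scope it could traverse is still
        // incomplete (may yet receive edges or declarations); otherwise its
        // answer is stable and it may be resolved now.
        def schedule(pending: List[Query], incomplete: Set[Int],
                     reachable: Int => Set[Int]): (List[Query], List[Query]) =
          pending.partition(q => reachable(q.scope).exists(incomplete))
          // ._1: delayed queries, ._2: queries that are safe to answer
      }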
  • Programming 4(3) 2020 [pdf, doi, bib, researchr]
    Context: Compilation time is an important factor in the adaptability of a software project. Fast recompilation enables cheap experimentation with changes to a project, as those changes can be tested quickly. Separate and incremental compilation has been a topic of interest for a long time to facilitate fast recompilation. Inquiry: Despite the benefits of an incremental compiler, such compilers are usually not the default. This is because incrementalization requires cross-cutting, complicated, and error-prone techniques such as dependency tracking, caching, cache invalidation, and change detection. Especially in compilers for languages with cross-module definitions and integration, correctly and efficiently implementing an incremental compiler can be a challenge. Retrofitting incrementality into a compiler is even harder. We address this problem by developing a compiler design approach that reuses parts of an existing non-incremental compiler to lower the cost of building an incremental compiler. It also gives an intuition into compiling difficult-to-incrementalize language features through staging. Approach: We use the compiler design approach presented in this paper to develop an incremental compiler for the Stratego term-rewriting language. This language has a set of features that at first glance look incompatible with incremental compilation. Therefore, we treat Stratego as our critical case to demonstrate the approach on. We show how this approach decomposes the original compiler and has a solution to compile Stratego incrementally. The key idea on which we build our incremental compiler is to internally use an incremental build system to wire together the components we extract from the original compiler. Knowledge: The resulting compiler is already in use as a replacement of the original whole-program compiler. We find that the incremental build system inside the compiler is a crucial component of our approach. This allows a compiler writer to think in multiple steps of compilation, and combine that into an incremental compiler almost effortlessly. Normally, separate compilation à la C is facilitated by an external build system, where the programmer is responsible for managing dependencies between files. We reuse an existing sound and optimal incremental build system, and integrate its dependency tracking into the compiler. Grounding: The incremental compiler for Stratego is available as an artefact along with this article. We evaluate it on a large Stratego project to test its performance. The benchmark replays edits to the Stratego project from version control. These benchmarks are part of the artefact, packaged as a virtual machine image for easy reproducibility. Importance: Although we demonstrate our design approach on the Stratego programming language, we also describe it generally throughout this paper. Many currently used programming languages have a compiler that is much slower than necessary. Our design provides an approach to change this, by reusing an existing compiler and making it incremental within a reasonable amount of time.
  • JCL (JVLC) 57 2020 [pdf, doi, bib, researchr]
    Data-flow analysis is the static analysis of programs to estimate their approximate run-time behavior or approximate intermediate run-time values. It is an integral part of modern language specifications and compilers. In the specification of static semantics of programming languages, the concept of data-flow allows the description of well-formedness such as definite assignment of a local variable before its first use. In the implementation of compiler back-ends, data-flow analyses inform optimizations. Data-flow analysis has an established theoretical foundation. What lags behind is implementations of data-flow analysis in compilers, which are usually ad-hoc. This makes such implementations difficult to extend and maintain. In previous work researchers have proposed higher-level formalisms suitable for whole-program analysis in a separate tool, incremental analysis within editors, or bound to a specific intermediate representation. In this paper, we present FlowSpec, an executable formalism for specification of data-flow analysis. FlowSpec is a domain-specific language that enables direct and concise specification of data-flow analysis for programming languages, designed to express flow-sensitive, intra-procedural analyses. We define the formal semantics of FlowSpec in terms of monotone frameworks. We describe the design of FlowSpec using examples of standard analyses. We also include a description of our implementation of FlowSpec. In a case study we evaluate FlowSpec with the static analyses for Green-Marl, a domain-specific programming language for graph analytics.
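    For reference, the monotone-framework recipe that FlowSpec builds on can be sketched in a few lines of Scala: a lattice of facts, a transfer function per statement, and a fixpoint over the control-flow graph. The toy statement type and CFG encoding below are assumptions for illustration, not FlowSpec notation.

      object LiveVariables {
        sealed trait Stmt
        final case class Assign(x: String, uses: Set[String]) extends Stmt
        final case class Cond(uses: Set[String]) extends Stmt

        // Transfer function for liveness: (out minus kill) plus gen.
        def transfer(s: Stmt, out: Set[String]): Set[String] = s match {
          case Assign(x, uses) => (out - x) ++ uses
          case Cond(uses)      => out ++ uses
        }

        // Backward fixpoint over a CFG given as successor edges.
        def analyze(stmts: Map[Int, Stmt], succ: Map[Int, Set[Int]]): Map[Int, Set[String]] = {
          var in = stmts.keys.map(_ -> Set.empty[String]).toMap
          var changed = true
          while (changed) {
            changed = false
            for ((n, s) <- stmts) {
              val out = succ.getOrElse(n, Set.empty[Int]).flatMap(in)
              val newIn = transfer(s, out)
              if (newIn != in(n)) { in += n -> newIn; changed = true }
            }
          }
          in
        }
      }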
  • Programming 2020 [pdf, doi, bib, researchr]
    Web applications are ideal for implementing information systems; they can organize and persist the data in a database, do not require installation on client machines, and can be instantly updated everywhere. However, web programming is complex due to its heterogeneous nature, causing web frameworks to suffer from insufficient or leaky abstraction, weak static consistency checking, and security features that are not enforced. We developed the WebDSL web programming language, which supports direct expression of intent, strong static consistency checking, linguistic abstractions for web programming concerns, and automatically enforces security features for web applications. We have used WebDSL for over 10 years to create information systems for academic workflows with thousands of users. Based on our experiences with these applications, we improved the WebDSL compiler and runtime to increase robustness, performance, and security of applications. In this experience report, we reflect on the lessons learned and improvements made to the language runtime.
  • CPP 2020 [pdf, doi, bib, researchr]
    An intrinsically-typed definitional interpreter is a concise specification of dynamic semantics, that is executable and type safe by construction. Unfortunately, scaling intrinsically-typed definitional interpreters to more complicated object languages often results in definitions that are cluttered with manual proof work. For linearly-typed languages (including session-typed languages) one has to prove that the interpreter, as well as all the operations on semantic components, treat values linearly. We present new methods and tools that make it possible to implement intrinsically-typed definitional interpreters for linearly-typed languages in a way that hides the majority of the manual proof work. Inspired by separation logic, we develop reusable and composable abstractions for programming with linear operations using dependent types. Using these abstractions, we define interpreters for linear lambda calculi with strong references, concurrency, and session-typed communication in Agda.
  • SEFM 2020 [pdf, doi, bib, researchr]
    SDF3 is a syntax definition formalism that extends plain context-free grammars with features such as constructor declarations, declarative disambiguation rules, character-level grammars, permissive syntax, layout constraints, formatting templates, placeholder syntax, and modular composition. These features support the multi-purpose interpretation of syntax definitions, including derivation of type schemas for abstract syntax tree representations, scannerless generalized parsing of the full class of context-free grammars, error recovery, layout-sensitive parsing, parenthesization and formatting, and syntactic completion. This paper gives a high level overview of SDF3 by means of examples and provides a guide to the literature for further details.
  • SLE 2020 [pdf, doi, bib, researchr]
    The Stratego language supports program transformation by means of term rewriting with programmable rewriting strategies. Stratego's traversal primitives support concise definition of generic tree traversals. Stratego is a dynamically typed language because its features cannot be captured fully by a static type system. While dynamic typing makes for a flexible programming model, it also leads to unintended type errors, code that is harder to maintain, and missed opportunities for optimization. In this paper, we introduce a gradual type system for Stratego that combines the flexibility of dynamically typed generic programming, where needed, with the safety of statically declared and enforced types, where possible. To make sure that statically typed code cannot go wrong, all access to statically typed code from dynamically typed code is protected by dynamic type checks (casts). The type system is backwards compatible such that types can be introduced incrementally to existing Stratego programs. We formally define a type system for Core Gradual Stratego, discuss its implementation in a new type checker for Stratego, and present an evaluation of its impact on Stratego programs.
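    The boundary protection is the standard gradual-typing cast; a Scala analogue (with hypothetical Dyn/cast names rather than Stratego's actual term representation) shows the idea of checking a dynamically typed value before it enters statically typed code.

      object GradualBoundary {
        final case class Dyn(value: Any) // a dynamically typed value

        // The cast either yields a T or fails at the boundary, so the
        // statically typed code behind it cannot go wrong.
        def cast[T](d: Dyn)(implicit ct: scala.reflect.ClassTag[T]): T =
          d.value match {
            case t: T => t
            case other => throw new ClassCastException(
              s"expected ${ct.runtimeClass.getName}, got ${other.getClass.getName}")
          }

        def typedLength(s: String): Int = s.length // statically typed code

        def main(args: Array[String]): Unit =
          println(typedLength(cast[String](Dyn("hello")))) // 5
      }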
  • Zenodo 2020 [doi, bib, researchr]
    This is the artifact for the paper Gradually Typing Strategies, accepted at International Conference on Software Language Engineering.
  • Associativity and priority are well-known techniques to disambiguate expression grammars. In recent work we develop a direct semantics for disambiguation by associativity and priority rules and prove that a safe and complete disambiguation relation produces a safe and complete disambiguation. The proof approach relies on a correspondence between disambiguation and term rewriting such that safety of disambiguation corresponds to termination of the rewrite system and completeness of disambiguation corresponds to confluence of the rewrite system. In this extended abstract we illustrate that approach using diagrams.
  • PACMPL 3(OOPSLA) 2019 [pdf, bib, researchr]
  • A Research Agenda for Formal Methods in the Netherlands 2019 [doi, bib, researchr]
    Language workbenches support the high-level definition of (domain-specific) programming languages and the automatic derivation of implementations from such definitions. The mission of language workbench research is to increase the level of abstraction of language definitions and expand the range of tools that can be generated automatically from language definitions. In this note, I give an overview of research into language workbenches at TU Delft and the perspective of future research.
  • Second Workshop on Incremental Computing (IC 2019) 2019 [pdf, bib, researchr]
    We introduce a design approach for incremental compilers that we believe may be applicable to other languages. We demonstrate it on the critical case of Stratego, a term rewriting language with open extensibility features. After a brief overview of the open extensibility features, we show our compilation method, which is somewhere in between separate and incremental compilation. Our approach allows us to reuse almost all of the existing compiler while gaining great improvements in recompilation speed. We evaluate the new compiler with a benchmark on the version control history of a large Stratego project.
  • ECOOP 2019 [pdf, doi, bib, researchr]
    DynSem is a domain-specific language for concise specification of the dynamic semantics of programming languages, aimed at rapid experimentation and evolution of language designs. To maintain a short definition-to-execution cycle, DynSem specifications are meta-interpreted. Meta-interpretation introduces runtime overhead that is difficult to remove by using interpreter optimization frameworks such as the Truffle/Graal Java tools; previous work has shown order-of-magnitude improvements from applying Truffle/Graal to a meta-interpreter, but this is still far slower than what can be achieved with a language-specific interpreter. In this paper, we show how specifying the meta-interpreter using scope graphs, which encapsulate static name binding and resolution information, produces much better optimization results from Truffle/Graal. Furthermore, we identify that JIT compilation is hindered by large numbers of calls between small polymorphic rules and we introduce rule cloning to derive larger monomorphic rules at run time as a countermeasure. Our contributions improve the performance of DynSem-derived interpreters to within an order of magnitude of a handwritten language-specific interpreter.
  • ECOOP 2019 [pdf, doi, bib, researchr]
    Editor services assist programmers to more effectively write and comprehend code. Implementing editor services correctly is not trivial. This paper focuses on the specification of semantic editor services, those that use the semantic model of a program. The specification of refactorings is a common subject of study, but many other semantic editor services have received little attention. We propose a language-parametric approach to the definition of semantic editor services, using a declarative specification of the static semantics of the programming language, and constraint solving. Editor services are specified as constraint problems, and language specifications are used to ensure correctness. We describe our approach for the following semantic editor services: reference resolution, find usages, goto subclasses, code completion, and the extract definition refactoring. We do this in the context of Statix, a constraint language for the specification of type systems. We investigate the specification of editor services in terms of Statix constraints, and the requirements these impose on a suitable solver.
  • OOPSLA 2019 [pdf, doi, bib, researchr]
    New programming languages often lack good IDE support, as developing advanced semantic editor services takes additional effort. In previous work we discussed the operational requirements of a constraint solver that leverages the declarative type system specification of a language to provide language-parametric semantic editor services. In this work we describe the implementation of our solver as a two stage process: inference and search. An editor-service specific search strategy determines how and where the search is conducted, and when it terminates. We are currently implementing and evaluating this idea.
  • Second Workshop on Incremental Computing (IC 2019) 2019 [pdf, bib, researchr]
    PIE is precise, as dependencies of build tasks are exactly tracked using dynamic dependencies, enabling correct and minimal incremental builds. PIE is efficient, only checking and executing tasks that have been affected by a change. Finally, PIE is expressive, as build engineers write their build scripts in a full-fledged programming language, without having to resort to workarounds or complicated design patterns.
  • OOPSLA 2019 [pdf, doi, bib, researchr]
    Symbolic execution is a technique for automatic software validation and verification. New symbolic executors regularly appear for both existing and new languages and such symbolic executors are generally manually (re)implemented each time we want to support a new language. We propose to automatically generate symbolic executors from language definitions, and present a technique for mechanically (but as yet, manually) deriving a symbolic executor from a definitional interpreter. The idea is that language designers define their language as a monadic definitional interpreter, where the monad of the interpreter defines the meaning of branch points. Developing a symbolic executor for a language is a matter of changing the monadic interpretation of branch points. In this paper, we illustrate the technique on a language with recursive functions and pattern matching, and use the derived symbolic executor to automatically generate test cases for definitional interpreters implemented in our defined language.
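    In outline, the derivation reads like the following Scala sketch: the interpreter is written against a branch point, and the symbolic meaning of that branch point explores both outcomes while recording a path condition. The mini-language and the list-of-paths representation are simplifying assumptions, not the paper's formulation.

      object SymbolicExec {
        sealed trait Expr
        final case class Lit(b: Boolean) extends Expr
        final case class Sym(x: String) extends Expr // symbolic input
        final case class If(c: Expr, t: Expr, e: Expr) extends Expr

        type Path = List[String]           // conjunction of branch conditions
        type Branches[A] = List[(Path, A)] // one entry per feasible path

        def eval(e: Expr): Branches[Expr] = e match {
          case Lit(_) | Sym(_) => List((Nil, e))
          case If(c, t, el) =>
            eval(c).flatMap {
              case (pc, Lit(true))  => eval(t).map { case (p, v) => (pc ++ p, v) }
              case (pc, Lit(false)) => eval(el).map { case (p, v) => (pc ++ p, v) }
              case (pc, Sym(x)) => // branch point: explore both outcomes
                eval(t).map { case (p, v) => (pc ++ (x :: p), v) } ++
                eval(el).map { case (p, v) => (pc ++ (("!" + x) :: p), v) }
              case _ => Nil
            }
        }

        def main(args: Array[String]): Unit =
          eval(If(Sym("a"), Lit(true), If(Sym("b"), Lit(false), Lit(true))))
            .foreach(println) // one line per feasible path, with its path condition
      }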
  • Technical report UU-CS-2019-004, Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands, 2019 [pdf, doi, bib, researchr]
    On September 3 and 4, 2018, we organized a meeting on formal methods research in the Netherlands. The goal of the meeting was to create a Dutch formal methods community, to increase awareness of each other’s activities, and to find common grounds for collaborations. All researchers working on formal methods in the Netherlands were invited to contribute a 2-page abstract with their vision on the future of formal methods research. This document bundles these visions.
  • Programming 2(3) 2018 [pdf, doi, bib, researchr]
    Context. Context-free grammars are widely used for language prototyping and implementation. They allow formalizing the syntax of domain-specific or general-purpose programming languages concisely and declaratively. However, the natural and concise way of writing a context-free grammar is often ambiguous. Therefore, grammar formalisms support extensions in the form of declarative disambiguation rules to specify operator precedence and associativity, solving ambiguities that are caused by the subset of the grammar that corresponds to expressions. Inquiry. Implementing support for declarative disambiguation within a parser typically comes with one or more of the following limitations in practice: a lack of parsing performance, or a lack of modularity (i.e., disallowing the composition of grammar fragments of potentially different languages). The latter subject is generally addressed by scannerless generalized parsers. We aim to equip scannerless generalized parsers with novel disambiguation methods that are inherently performant, without compromising the concerns of modularity and language composition. Approach. In this paper, we present a novel low-overhead implementation technique for disambiguating deep associativity and priority conflicts in scannerless generalized parsers with lightweight data-dependency. Knowledge. Ambiguities with respect to operator precedence and associativity arise from combining the various operators of a language. While shallow conflicts can be resolved efficiently by one-level tree patterns, deep conflicts require more elaborate techniques, because they can occur arbitrarily nested in a tree. Current state-of-the-art approaches to solving deep priority conflicts come with a severe performance overhead. Grounding. We evaluated our new approach against state-of-the-art declarative disambiguation mechanisms. By parsing a corpus of popular open-source repositories written in Java and OCaml, we found that our approach yields speedups of up to 1.73x over a grammar rewriting technique when parsing programs with deep priority conflicts—with a modest overhead of 1–2% when parsing programs without deep conflicts. Importance. A recent empirical study shows that deep priority conflicts are indeed widespread in real-world programs. The study shows that in a corpus of popular OCaml projects on GitHub, up to 17% of the source files contain deep priority conflicts. However, there is no solution in the literature that addresses efficient disambiguation of deep priority conflicts, with support for modular and composable syntax definitions.
  • Programming 2(3) 2018 [pdf, doi, bib, researchr]
    Context. Software development pipelines are used for automating essential parts of software engineering processes, such as build automation and continuous integration testing. In particular, interactive pipelines, which process events in a live environment such as an IDE, require timely results for low-latency feedback, and persistence to retain low-latency feedback between restarts. Inquiry. Developing an incrementalized and persistent version of a pipeline is one way to reduce feedback latency, but requires implementation of dependency tracking, cache invalidation, and other complicated and error-prone techniques. Therefore, interactivity complicates pipeline development if timeliness and persistence become responsibilities of the pipeline programmer, rather than being supported by the underlying system. Systems for programming incremental and persistent pipelines exist, but do not focus on ease of development, requiring a high degree of boilerplate, increasing development and maintenance effort. Approach. We develop Pipelines for Interactive Environments (PIE), a Domain-Specific Language (DSL), API, and runtime for developing interactive software development pipelines, where ease of development is a focus. The PIE DSL is a statically typed and lexically scoped language. PIE programs are compiled to programs implementing the API, which the PIE runtime executes in an incremental and persistent way. Knowledge. PIE provides a straightforward programming model that enables direct and concise expression of pipelines without boilerplate, reducing the development and maintenance effort of pipelines. Compiled pipeline programs can be embedded into interactive environments such as code editors and IDEs, enabling timely feedback at a low cost. Grounding. Compared to the state of the art, PIE reduces the code required to express an interactive pipeline by a factor of 6 in a case study on syntax-aware editors. Furthermore, we evaluate PIE in two case studies of complex interactive software development scenarios, demonstrating that PIE can handle complex interactive pipelines in a straightforward and concise way. Importance. Interactive pipelines are complicated software artifacts that power many important systems such as continuous feedback cycles in IDEs and code editors, and live language development in language workbenches. New pipelines, and evolution of existing pipelines, are frequently necessary. Therefore, a system for easily developing and maintaining interactive pipelines, such as PIE, is important.
  • PACMPL 2(OOPSLA) 2018 [pdf, doi, bib, researchr]
    Scope graphs are a promising generic framework to model the binding structures of programming languages, bridging formalization and implementation, supporting the definition of type checkers and the automation of type safety proofs. However, previous work on scope graphs has been limited to simple, nominal type systems. In this paper, we show that viewing scopes as types enables us to model the internal structure of types in a range of non-simple type systems (including structural records and generic classes) using the generic representation of scopes. Further, we show that relations between such types can be expressed in terms of generalized scope graph queries. We extend scope graphs with scoped relations and queries. We introduce Statix, a new domain-specific meta-language for the specification of static semantics, based on scope graphs and constraints. We evaluate the scopes as types approach and the Statix design in case studies of the simply-typed lambda calculus with records, System F, and Featherweight Generic Java.
  • PACMPL 2(POPL) 2018 [pdf, doi, bib, researchr]
    A definitional interpreter defines the semantics of an object language in terms of the (well-known) semantics of a host language, enabling understanding and validation of the semantics through execution. Combining a definitional interpreter with a separate type system requires a separate type safety proof. An alternative approach, at least for pure object languages, is to use a dependently-typed language to encode the object language type system in the definition of the abstract syntax. Using such intrinsically-typed abstract syntax definitions allows the host language type checker to verify automatically that the interpreter satisfies type safety. Does this approach scale to larger and more realistic object languages, and in particular to languages with mutable state and objects? In this paper, we describe and demonstrate techniques and libraries in Agda that successfully scale up intrinsically-typed definitional interpreters to handle rich object languages with non-trivial binding structures and mutable state. While the resulting interpreters are certainly more complex than the simply-typed λ-calculus interpreter we start with, we claim that they still meet the goals of being concise, comprehensible, and executable, while guaranteeing type safety for more elaborate object languages. We make the following contributions: (1) A dependent-passing style technique for hiding the weakening of indexed values as they propagate through monadic code. (2) An Agda library for programming with scope graphs and frames, which provides a uniform approach to dealing with name binding in intrinsically-typed interpreters. (3) Case studies of intrinsically-typed definitional interpreters for the simply-typed λ-calculus with references (STLC+Ref) and for a large subset of Middleweight Java (MJ).
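    Without dependent types the full development does not carry over, but the flavor of "intrinsically typed" can be shown in Scala for a tiny fragment: object terms are indexed by their object types, so ill-typed terms are rejected by the host compiler and the interpreter needs no runtime checks. This sketch has no binders or state, so it is far simpler than the paper's Agda interpreters.

      object IntrinsicallyTyped {
        // Expr[T] is the type of object terms that evaluate to a T.
        sealed abstract class Expr[T] { def eval: T }
        final case class Lit(n: Int) extends Expr[Int] { def eval: Int = n }
        final case class Bool(b: Boolean) extends Expr[Boolean] { def eval: Boolean = b }
        final case class Add(l: Expr[Int], r: Expr[Int]) extends Expr[Int] {
          def eval: Int = l.eval + r.eval // no tag checks: the types guarantee Ints
        }
        final case class Cond[T](c: Expr[Boolean], t: Expr[T], e: Expr[T]) extends Expr[T] {
          def eval: T = if (c.eval) t.eval else e.eval
        }

        def main(args: Array[String]): Unit =
          println(Cond(Bool(true), Add(Lit(1), Lit(2)), Lit(0)).eval) // 3
        // Add(Lit(1), Bool(true)) does not compile: the term is ill-typed.
      }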
  • SLE 2018 [pdf, doi, bib, researchr]
    We present a tool architecture that supports migrating custom domain-specific language (DSL) implementations to a language workbench. We demonstrate an implementation of this architecture for models in the domains of defining component interfaces (IDL) and modeling system behavior (OIL) which are developed and used at a digital printer manufacturing company. Increasing complexity and the lack of DSL syntax and IDE support for existing implementations in Python based on XML syntax hindered their evolution and adoption. A reimplementation in Spoofax using modular language definition enables composition between IDL and OIL and introduces more concise DSL syntax and IDE support. The presented tool supports migrating to new implementations while being backward compatible with existing syntax and related tooling.
  • PPPJ 2018 [pdf, doi, bib, researchr]
    DynSem is a domain-specific language for concise specification of the dynamic semantics of programming languages, aimed at rapid experimentation and evolution of language designs. DynSem specifications can be executed to interpret programs in the language under development. To enable fast turnaround during language development, we have developed a meta-interpreter for DynSem specifications, which requires minimal processing of the specification. In addition to fast development time, we also aim to achieve fast run times for interpreted programs. In this paper we present the design of a meta-interpreter for DynSem and report on experiments with JIT compiling the application of the meta-interpreter on the Graal VM. By interpreting specifications directly, we have minimal compilation overhead. By specializing pattern matches, maintaining call-site dispatch chains and using native control-flow constructs we gain significant run-time performance. We evaluate the performance of the meta-interpreter when applied to the Tiger language specification running a set of common benchmark programs. Specialization enables the Graal VM to JIT compile the meta-interpreter giving speedups of up to factor 15 over running on the standard Oracle Java VM.
  • SLE 2018 [pdf, doi, bib, researchr]
    In layout-sensitive languages, the indentation of an expression or statement can influence how a program is parsed. While some of these languages (e.g., Haskell and Python) have been widely adopted, there is little support for software language engineers in building tools for layout-sensitive languages. As a result, parsers, pretty-printers, program analyses, and refactoring tools often need to be handwritten, which decreases the maintainability and extensibility of these tools. Even state-of-the-art language workbenches have little support for layout-sensitive languages, restricting the development and prototyping of such languages. In this paper, we introduce a novel approach to declarative specification of layout-sensitive languages using layout declarations. Layout declarations are high-level specifications of indentation rules that abstract from low-level technicalities. We show how to derive an efficient layout-sensitive generalized parser and a corresponding pretty-printer automatically from a language specification with layout declarations. We validate our approach in a case-study using a syntax definition for the Haskell programming language, investigating the performance of the generated parser and the correctness of the generated pretty-printer against 22191 Haskell files.
  • SLE 2018 [pdf, doi, bib, researchr]
    To provide empirical evidence to what extent migration of business logic to an incremental computing language (ICL) is useful, we report on a case study on a learning management system. Our contribution is to analyze a real-life project, how migrating business logic to an ICL affects information system validatability, performance, and development effort. We find that the migrated code has better validatability; it is straightforward to establish that a program ‘does the right thing’. Moreover, the performance is better than the previous hand-written incremental computing solution. The effort spent on modeling business logic is reduced, but integrating that logic in the application and tuning performance takes considerable effort. Thus, the ICL separates the concerns of business logic and performance, but does not reduce effort.
  • ASE 2018 [pdf, doi, bib, researchr]
    Incremental build systems are essential for fast, reproducible software builds. Incremental build systems enable short feedback cycles when they capture dependencies precisely and selectively execute build tasks efficiently. A much overlooked feature of build systems is the expressiveness of the scripting language, which directly influences the maintainability of build scripts. In this paper, we present a new incremental build algorithm that allows build engineers to use a full-fledged programming language with explicit task invocation, value and file inspection facilities, and conditional and iterative language constructs. In contrast to prior work on incrementality for such programmable builds, our algorithm scales with the number of tasks affected by a change and is independent of the size of the software project being built. Specifically, our algorithm accepts a set of changed files, transitively detects and re-executes affected build tasks, but also accounts for new task dependencies discovered during building. We have evaluated the performance of our algorithm in a real-world case study and confirm its scalability.
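    A deliberately simplified Scala sketch of the two central ingredients, recording dependencies dynamically while a task runs and invalidating transitively from changed files, is given below; the paper's actual algorithm additionally handles dependencies discovered during re-execution, persistence, and precise file stamps, and all names here are made up.

      object MiniBuild {
        type File = String
        var files: Map[File, String] = Map.empty // stand-in file system

        final case class Task(key: String, run: Ctx => String)

        // The context records file and task dependencies as they occur.
        final class Ctx {
          var fileDeps: Set[File] = Set.empty
          var taskDeps: Set[String] = Set.empty
          def read(f: File): String = { fileDeps += f; files.getOrElse(f, "") }
          def require(t: Task): String = { taskDeps += t.key; MiniBuild.require(t) }
        }

        private var cache = Map.empty[String, (String, Set[File], Set[String])]

        def require(t: Task): String = cache.get(t.key) match {
          case Some((out, _, _)) => out // unaffected: reuse the cached output
          case None =>
            val ctx = new Ctx
            val out = t.run(ctx) // execute, recording dynamic dependencies
            cache += t.key -> ((out, ctx.fileDeps, ctx.taskDeps))
            out
        }

        // Drop tasks that read a changed file, then transitively their requirers;
        // a later require re-executes exactly the affected tasks.
        def fileChanged(changed: Set[File]): Unit = {
          var dirty = cache.collect { case (k, (_, fs, _)) if fs.exists(changed) => k }.toSet
          var grew = true
          while (grew) {
            val more = cache.collect { case (k, (_, _, ts)) if ts.exists(dirty) => k }.toSet
            grew = (more -- dirty).nonEmpty
            dirty ++= more
          }
          cache --= dirty
        }
      }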
  • WWW 2018 [pdf, doi, bib, researchr]
    Modern web applications are interactive. Reactive programming languages and libraries are the state-of-the-art approach for declaratively specifying these interactive applications. However, programs written with these approaches contain error-prone boilerplate code for efficiency reasons. In this paper we present PixieDust, a declarative user-interface language for browser-based applications. PixieDust uses static dependency analysis to incrementally update a browser-DOM at runtime, without boilerplate code. We demonstrate that applications in PixieDust contain less boilerplate code than state-of-the-art approaches, while achieving on-par performance.
  • Poster at SPLASH 2018 [pdf, bib, researchr]
    Code completion is an editor service that suggests keywords and identifiers that are relevant at the caret location in the editor, from which the user can choose to either insert one or continue typing. This reduces coding errors and aids in discovering the possibilities in a language or API. Providing a code completion editor service for a new programming language requires development effort in addition to the effort required for defining the language. The goal of this work is to automatically produce an intelligent editor-agnostic code completion editor service that is parameterized only by the declarative specification of the language. We implement our approach in the Spoofax language workbench, which enables language developers to provide a declarative specification of their new programming language. We will use the declarative specification to produce a platform-agnostic code completion editor service for Spoofax languages automatically.
  • Poster at SPLASH 2018 [pdf, bib, researchr]
    Stratego is a transformation language based on term rewriting with programmable rewriting strategies. A program in Stratego consists of named rewrite rules and strategies. When definitions have the same name, they contribute to the same rule. This works across files, thereby allowing extensibility. Due to this distribution of rules over modules, the Stratego compiler has always been a whole program compiler. Large Stratego programs are slow to compile as a result. In this work we present our approach to incremental compilation of Stratego. The approach may be useful for incremental compilation of other languages with similar cross-file features.
  • darts 3(2) 2017 [pdf, doi, bib, researchr]
    This artifact is based on IceDust2, a data modeling language with derived values. The provided package is designed to support the claims of the companion paper: in particular, it allows users to compile and run IceDust2 specifications. Instructions for building the IceDust2 compiler from source in Spoofax are also provided.
  • DLS 2017 [pdf, doi, bib, researchr]
    Grace is a dynamic object-oriented programming language designed to aid programming education. We present a formal model of, and give an operational semantics for, its object model and name resolution algorithm. Our main contributions are a systematic model of Grace’s name resolution using scope graphs, relating linguistic features to other languages, and an operationalization of this model in the form of an operational semantics which is readable and executable. The semantics are extensively tested against a reference Grace implementation.
  • SLE 2017 [pdf, doi, bib, researchr]
    We present FlowSpec, a declarative specification language for the domain of dataflow analysis. FlowSpec has declarative support for the specification of control flow graphs of programming languages, and dataflow analyses on these control flow graphs. We define the formal semantics of FlowSpec, which is rooted in Monotone Frameworks. We also discuss a prototype implementation of the language, built in the Spoofax Language Workbench. Finally, we evaluate the expressiveness and conciseness of the language with two case studies. These case studies are analyses for Green-Marl, an industrial, domain-specific language for graph processing. The first case study is a classical dataflow analysis, scaled to this full language. The second case study is a domain-specific analysis of Green-Marl.
  • ECOOP 2017 [pdf, doi, bib, researchr]
    Derived values are values calculated from base values. They can be expressed with views in relational databases, or with expressions in incremental or reactive programming. However, relational views do not provide multiplicity bounds, and incremental and reactive programming require significant boilerplate code in order to encode bidirectional derived values. Moreover, the composition of various strategies for calculating derived values is either disallowed, or not checked for producing derived values which will be consistent with the derived values they depend upon. In this paper we present IceDust2, an extension of the declarative data modeling language IceDust with derived bidirectional relations with multiplicity bounds and support for statically checked composition of calculation strategies. Derived bidirectional relations, multiplicity bounds, and calculation strategies all influence runtime behavior of changes to data, leading to hundreds of possible behavior definitions. IceDust2 uses a product-line based code generator to avoid explicitly defining all possible combinations, making it easier to reason about correctness. The type system allows only sound composition of strategies and guarantees multiplicity bounds. Finally, our case studies validate the usability of IceDust2 in applications.
  • SLE 2017 [pdf, doi, bib, researchr]
    Context-free grammars are suitable for formalizing the syntax of programming languages concisely and declaratively. Thus, such grammars are often found in reference manuals of programming languages, and used in language workbenches for language prototyping. However, the natural and concise way of writing a context-free grammar is often ambiguous. Safe and complete declarative disambiguation of operator precedence and associativity conflicts guarantees that all ambiguities arising from combining the operators of the language are resolved. Ambiguities can occur due to shallow conflicts, which can be captured by one-level tree patterns, and deep conflicts, which require more elaborate techniques. Approaches to solve deep priority conflicts include grammar transformations, which may result in large unambiguous grammars, or may require adapted parser technologies to include data-dependency tracking at parse time. In this paper we study deep priority conflicts "in the wild". We investigate the efficiency of grammar transformations to solve deep priority conflicts by using a lazy parse table generation technique. On top of lazily-generated parse tables, we define metrics, aiming to answer how often deep priority conflicts occur in real-world programs and to what extent programmers explicitly disambiguate programs themselves. By applying our metrics to a small corpus of popular open-source repositories we found that in OCaml, up to 17% of the source files contain deep priority conflicts.
  • darts 2(1) 2016 [pdf, doi, bib, researchr]
    Our paper introduces a systematic approach to the alignment of names in the static structure of a program, and memory layout and access during its execution. We develop a uniform memory model consisting of frames that instantiate the scopes in the scope graph of a program. This provides a language-independent correspondence between static scopes and run-time memory layout, and between static resolution paths and run-time memory access paths. The approach scales to a range of binding features, supports straightforward type soundness proofs, and provides the basis for a language-independent specification of sound reachability-based garbage collection. This Coq artifact showcases how our uniform model for memory layout in dynamic semantics provides structure to type soundness proofs. The artifact contains type soundness proofs mechanized in Coq for (supersets of) all languages in the paper. The type soundness proofs rely on a language-independent framework formalizing scope graphs and frame heaps.
  • ECOOP 2016 [pdf, doi, bib, researchr]
    Derived values are values calculated from base values. They can be expressed in object-oriented languages by means of getters calculating the derived value, and in relational or logic databases by means of (materialized) views. However, switching to a different calculation strategy (for example caching) in object-oriented programming requires invasive code changes, and the databases limit expressiveness by disallowing recursive aggregation. In this paper, we present IceDust, a data modeling language for expressing derived attribute values without committing to a calculation strategy. IceDust provides three strategies for calculating derived values in persistent object graphs: Calculate-on-Read, Calculate-on-Write, and Calculate-Eventually. We have developed a path-based abstract interpretation that provides static dependency analysis to generate code for these strategies. Benchmarks show that different strategies perform better in different scenarios. In addition we have conducted a case study that suggests that derived value calculations of systems used in practice can be expressed in IceDust.
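    The contrast between two of these strategies fits in a few lines of Scala; the Student/grades example is made up, and in IceDust the strategy is selected by an annotation on the derived attribute rather than written by hand.

      object DerivedValues {
        final class Student {
          private var grades: List[Double] = Nil
          private var cachedAvg: Option[Double] = None // Calculate-on-Write cache

          // Calculate-on-Read: recompute the derived value on every access.
          def averageOnRead: Double =
            if (grades.isEmpty) 0.0 else grades.sum / grades.size

          // Calculate-on-Write: update the cache whenever a base value changes.
          def addGrade(g: Double): Unit = {
            grades ::= g
            cachedAvg = Some(grades.sum / grades.size)
          }
          def averageOnWrite: Double = cachedAvg.getOrElse(0.0)
        }
      }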
  • Workshop on Live Programming Systems (LIVE) 2016 [pdf, bib, researchr]
    We would like to see live programming applied to language development, to get live language development. With live language development, a language developer gets fast feedback when they change their language, enabling experimentation with language design and development. In this paper, we describe what live language development is and why it is useful, and we analyze what is needed to achieve live language development. Moreover, we describe our work in progress in supporting live language development in the Spoofax language workbench.
  • ECOOP 2016 [pdf, doi, bib, researchr]
    Semantic specifications do not make a systematic connection between the names and scopes in the static structure of a program, and memory layout and access during its execution. In this paper, we introduce a systematic approach to the alignment of names in static semantics and memory in dynamic semantics, building on the scope graph framework for name resolution. We develop a uniform memory model consisting of frames that instantiate the scopes in the scope graph of a program. This provides a language-independent correspondence between static scopes and run-time memory layout, and between static resolution paths and run-time memory access paths. The approach scales to a range of binding features, supports straightforward type soundness proofs, and provides the basis for a language-independent specification of sound reachability-based garbage collection.
  • PEPM 2016 [pdf, doi, bib, researchr]
    In previous work, we introduced scope graphs as a formalism for describing program binding structure and performing name resolution in an AST-independent way. In this paper, we show how to use scope graphs to build static semantic analyzers. We use constraints extracted from the AST to specify facts about binding, typing, and initialization. We treat name and type resolution as separate building blocks, but our approach can handle language constructs -- such as record field access -- for which binding and typing are mutually dependent. We also refine and extend our previous scope graph theory to address practical concerns including ambiguity checking and support for a wider range of scope relationships. We describe the details of constraint generation for a model language that illustrates many of the interesting static analysis issues associated with modules and records.
  • GPCE 2016 [pdf, doi, bib, researchr]
    It is common practice to bootstrap compilers of programming languages. By using the compiled language to implement the compiler, compiler developers can code in their own high-level language and gain a large-scale test case. In this paper, we investigate bootstrapping of compiler-compilers as they occur in language workbenches. Language workbenches support the development of compilers through the application of multiple collaborating domain-specific meta-languages for defining a language's syntax, analysis, code generation, and editor support. We analyze the bootstrapping problem of language workbenches in detail, propose a method for sound bootstrapping based on fixpoint compilation, and show how to conduct breaking meta-language changes in a bootstrapped language workbench. We have applied sound bootstrapping to the Spoofax language workbench and report on our experience.
  • SCALA 2016 [pdf, doi, bib, researchr]
    In this paper, we report on our experience in teaching a course on concepts of programming languages at TU Delft based on Krishnamurthi's PAPL book with the definitional interpreter approach using Scala as meta-language and using the WebLab learning management system. In particular, we discuss our experience with encoding of definitional interpreters in Scala using case classes, pattern matching, and recursive functions; offering this material in the web-based learning management system WebLab; automated grading and feedback of interpreter submissions using unit tests; testing tests to force students to formulate tests, instead of just implementing interpreters; generation of tests based on a reference implementation to reduce the effort of producing unit tests; and the construction of a product line of interpreters in order to maximize reuse and consistency between reference implementations.
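    The style of interpreter the course is built around looks like this in Scala (case classes for abstract syntax, pattern matching, recursive functions); the arithmetic-with-let language is a representative classroom fragment, not one of the actual assignments.

      object DefinitionalInterpreter {
        sealed trait Exp
        final case class Num(n: Int) extends Exp
        final case class Plus(l: Exp, r: Exp) extends Exp
        final case class Let(x: String, e: Exp, body: Exp) extends Exp
        final case class Id(x: String) extends Exp

        type Env = Map[String, Int]

        // The meaning of the object language in terms of the meta-language.
        def eval(e: Exp, env: Env): Int = e match {
          case Num(n)        => n
          case Plus(l, r)    => eval(l, env) + eval(r, env)
          case Let(x, e1, b) => eval(b, env + (x -> eval(e1, env)))
          case Id(x)         => env(x)
        }

        def main(args: Array[String]): Unit =
          println(eval(Let("x", Num(1), Plus(Id("x"), Num(2))), Map.empty)) // 3
      }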
  • SLE 2016 [pdf, doi, bib, researchr]
    Principled syntactic code completion enables developers to change source code by inserting code templates, thus increasing developer efficiency and supporting language exploration. However, existing code completion systems are ad-hoc and neither complete nor sound. They are not complete, providing only a few code templates for selected programming languages. They also are not sound, proposing code templates that yield invalid programs when inserted. This paper presents a generic framework that automatically derives complete and sound syntactic code completion from the syntax definition of arbitrary languages. A key insight of our work is to provide an explicit syntactic representation for incomplete programs using placeholders. This enables us to address the following challenges for code completion separately: (i) completing incomplete programs by replacing placeholders with code templates, (ii) injecting placeholders into complete programs to make them incomplete, and (iii) introducing lexemes and placeholders into incorrect programs through error-recovery parsing to make them correct so we can apply one of the previous strategies. We formalize our framework and provide an implementation in the Spoofax Language Workbench.
  • ISoLA 2016 [pdf, doi, bib, researchr]
    Software is widely used, and society increasingly depends on its reliability. However, software has become so complex and it evolves so quickly that we fail to keep it under control. Therefore, we propose intents: fundamental laws that capture a software system’s intended behavior (resilient, secure, safe, sustainable, etc.). The realization of this idea requires novel theories, algorithms, tools, and techniques to discover, express, verify, and evolve software intents. Thus, future software systems will be able to verify themselves that they meet their intents. Moreover, they will be able to respond to deviations from intents through self-correction. In this article we propose a research agenda, outlining which novel theories, algorithms and tools are required.
  • Grace in Spoofax: Readable Specification and Implementation in One
    Presented at GRACE 2016 [bib, researchr]
  • Language Workbench Challenge (LWC@SLE) 2016 [pdf, bib, researchr]
    Language workbenches are tools that help language designers to design and implement (domain-specific) programming languages, aiming to produce a full featured programming environment from a high-level language description. A recent paper, resulting from a series of language workbench challenge workshops, describes a collection of benchmark problems for language workbench research [5]. In this paper, we describe solutions to two of these benchmark problems in the Spoofax Language Workbench [6], i.e. default formatting in Section 3 and skeleton editing in Section 4. In addition, we introduce a new benchmark problem — bootstrapping of meta-languages in a workbench — and describe the support for bootstrapping we developed for Spoofax in Section 2.
  • dagstuhl-reports 5(2) 2015 [pdf, doi, bib, researchr]
    This report documents the program and outcomes of Dagstuhl Seminar 15062 “Domain-Specific Languages”, which took place February 1–6, 2015. The seminar was motivated on the one hand by the high interest in domain-specific languages in academia and industry and on the other hand by the observation that the community is divided into largely disconnected subdisciplines (e.g., internal, external, visual, model-driven). The seminar included participants across these subdisciplines and included overview talks, technical talks, demos, discussion groups, and an industrial panel. This report collects the abstracts of talks and other activities at the seminar and summarizes the outcomes of the seminar.
  • SCP 97 2015 [pdf, doi, bib, researchr]
    In this essay, I argue that linguistic abstraction should be used systematically as a tool to capture our emerging understanding of domains of computation. Moreover, to enable that systematic application, we need to capture our understanding of the domain of linguistic abstraction itself in higher-level meta languages. The argument is illustrated with examples from the SDF, Stratego, Spoofax, and WebDSL projects in which I explore these ideas.
  • Comp. Lang., Syst. & Struct. 44 2015 [pdf, doi, bib, researchr]
    Language workbenches are environments for simplifying the creation and use of computer languages. The annual Language Workbench Challenge (LWC) was launched in 2011 to allow the many academic and industrial researchers in this area an opportunity to quantitatively and qualitatively compare their approaches. We first describe all four LWCs to date, before focussing on the approaches used, and results generated, during the third LWC. We give various empirical data for ten approaches from the third LWC. We present a generic feature model within which the approaches can be understood and contrasted. Finally, based on our experiences of the existing LWCs, we propose a number of benchmark problems for future LWCs.
  • OOPSLA 2015 [pdf, doi, bib, researchr]
    Federated conferences such as SPLASH are complex organizations composed of many parts (co-located conferences, symposia, and workshops), and are put together by many different people and committees. Developing the website for such a conference requires a considerable effort, and is often reinvented for each edition of a conference using software that provides little to no support for the domain. In this paper, we give a high-level overview of the design of Conf.Researchr.Org, a domain-specific content management system developed to support the production of large conference web sites, which is being used for the federated conferences of ACM SIGPLAN.
  • ESOP 2015 [pdf, doi, bib, researchr]
    We describe a language-independent theory for name binding and resolution, suitable for programming languages with complex scoping rules including both lexical scoping and modules. We formulate name resolution as a two-stage problem. First a language-independent scope graph is constructed using language-specific rules from an abstract syntax tree. Then references in the scope graph are resolved to corresponding declarations using a language-independent resolution process. We introduce a resolution calculus as a concise, declarative, and language-independent specification of name resolution. We develop a resolution algorithm that is sound and complete with respect to the calculus. Based on the resolution calculus we develop language-independent definitions of α-equivalence and rename refactoring. We illustrate the approach using a small example language with modules. In addition, we show how our approach provides a model for a range of name binding patterns in existing languages.
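    Restricted to lexical parent edges only, the two-stage idea can be sketched in Scala: construct the scope graph, then resolve a reference by walking edges to the nearest visible declaration. The actual resolution calculus also covers imports, visibility policies, and ambiguous resolutions, which this sketch omits.

      object ScopeGraphs {
        final case class Scope(id: Int)
        final case class Graph(decls: Map[Scope, Set[String]], parent: Map[Scope, Scope])

        // Resolve reference x from scope s: the nearest enclosing declaration wins.
        def resolve(g: Graph, s: Scope, x: String): Option[Scope] =
          if (g.decls.getOrElse(s, Set.empty)(x)) Some(s)
          else g.parent.get(s).flatMap(p => resolve(g, p, x))

        def main(args: Array[String]): Unit = {
          val (s0, s1) = (Scope(0), Scope(1))
          val g = Graph(Map(s0 -> Set("x"), s1 -> Set("y")), Map(s1 -> s0))
          println(resolve(g, s1, "x")) // Some(Scope(0)): reached via the parent edge
        }
      }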
  • RTA 2015 [pdf, doi, bib, researchr]
    The formal semantics of a programming language and its implementation are typically separately defined, with the risk of divergence such that properties of the formal semantics are not properties of the implementation. In this paper, we present DynSem, a domain-specific language for the specification of the dynamic semantics of programming languages that aims at supporting both formal reasoning and efficient interpretation. DynSem supports the specification of the operational semantics of a language by means of statically typed conditional term reduction rules. DynSem supports concise specification of reduction rules by providing implicit build and match coercions based on reduction arrows and implicit term constructors. DynSem supports modular specification by adopting implicit propagation of semantic components from I-MSOS, which allows omitting propagation of components such as environments and stores from rules that do not affect those. DynSem supports the declaration of native operators for delegation of aspects of the semantics to an external definition or implementation. DynSem supports the definition of auxiliary meta-functions, which can be expressed using regular reduction rules and are subject to semantic component propagation. DynSem specifications are executable through automatic generation of a Java-based AST interpreter.
  • Technical report TUD-SERG-2015-006, Delft University of Technology, Software Engineering Research Group, 2015 [pdf, bib, researchr]
    We extend and combine two existing declarative formalisms, the scope graphs of Neron et al. and type constraint systems, to build a language-independent theory that can describe both name and type resolution for realistic languages with complex scope and typing rules. Unlike conventional static semantics presentations, our approach maintains a clear separation between scoping and typing concerns, while still being able to handle language constructs, such as class field access, for which name and type resolution are necessarily intertwined. We define a constraint scheme that can express both typing and name binding constraints, and give a formal notion of constraint satisfiability together with a sound algorithm for finding solutions in important special cases. We describe the details of constraint generation for a model language that illustrates many of the interesting resolution issues associated with modules, classes, and records. Our constraint generator and solver have been implemented in the Spoofax Language Workbench.
  • Technical report TUD-SERG-2015-009, Software Engineering Research Group, Delft University of Technology, 2015 [pdf, bib, researchr, ]
    In previous work, we introduced scope graphs as a formalism for describing program binding structure and performing name resolution in an AST-independent way. In this paper, we show how to use scope graphs to build static semantic analyzers. We use constraints extracted from the AST to specify facts about binding, typing, and initialization. We treat name and type resolution as separate building blocks, but our approach can handle language constructs—such as record field access—for which binding and typing are mutually dependent. We also refine and extend our previous scope graph theory to address practical concerns including ambiguity checking and support for a wider range of scope relationships. We describe the details of constraint generation for a model language that illustrates many of the interesting static analysis issues associated with modules and records.
  • IEEE Software 31(5) 2014 [pdf, doi, bib, researchr, ]
    IDEs are essential for programming language developers, and state-of-the-art IDE support is mandatory for programming languages to be successful. Although IDE features for mainstream programming languages are typically implemented manually, this often isn't feasible for programming languages that must be developed with significantly fewer resources. The Spoofax language workbench is a platform for developing textual programming languages with state-of-the-art IDE support. Spoofax is a comprehensive environment that integrates syntax definition, name binding, type analysis, program transformation, code generation, and declarative specification of IDE components. It also provides high-level languages for each of these aspects. These languages are highly declarative, abstracting over the implementation of IDE features and letting engineers focus on language design.
  • OOPSLA 2014 [pdf, doi, bib, researchr, ]
    The realization of a language design requires multiple artifacts that redundantly encode the same information. This entails significant effort for language implementors, and often results in late detection of errors in language definitions. In this paper we present a proof-of-concept language designer's workbench that supports generation of IDEs, interpreters, and verification infrastructure from a single source. This constitutes a first milestone on the way to a system that fully automates language implementation and verification.
  • SLE 2014 [pdf, doi, bib, researchr, ]
    Object-oriented programming languages support concise navigation of relations represented by references. However, relations are not first-class citizens and bidirectional navigation is not supported. The relational paradigm provides first-class relations, but with bidirectional navigation through verbose queries. We present a systematic analysis of approaches to modeling and navigating relations. By unifying and generalizing the features of these approaches, we developed the design of a data modeling language that features first-class relations, n-ary relations, native multiplicities, bidirectional relations and concise navigation.
  • AOSD 2014 [pdf, doi, bib, researchr, ]
    Program generators and transformations are hard to implement correctly, because the implementation needs to generically describe how to construct programs, for example, using templates or rewrite rules. We apply dynamic analysis to program generators in order to support developers in finding bugs and identifying the source of the bug. Our analysis focuses on syntactic language constraints and checks that generated programs are syntactically well-formed. To retain a language's grammar as the unique specification of the language's syntax, we devised mechanisms to derive the analysis from the grammar. Moreover, we designed a run-time system to support the modular activation/deactivation of the analysis, so that generators do not require adaptation. We have implemented the analysis for the Stratego term-rewriting language and applied it in case studies based on Spoofax and SugarJ.
  • AOSD 2014 [pdf, doi, bib, researchr, ]
    Effectively applying linguistic abstraction to emerging domains of computation requires the ability to rapidly develop software languages. However, a software language is a complex software system in its own right and can take significant effort to design and implement. We are currently investigating a radical separation of concerns in language definition by designing high-level declarative meta-languages specialized to the various concerns of language definition that can be used as the single source of production quality (incremental) semantic operations and as a model for reasoning about language properties.
  • AOSD 2014 [pdf, doi, bib, researchr, ]
    A key problem in metaprogramming and specifically in generative programming is to guarantee that generated code is well-formed with respect to the context-free and context-sensitive constraints of the target language. We propose typesmart constructors as a dynamic approach to enforcing the well-formedness of generated code. A typesmart constructor is a function that is used in place of a regular constructor to create values, but it may reject the creation of values if the given data violates some language-specific constraint. While typesmart constructors can be implemented individually, we demonstrate how to derive them automatically from a grammar, so that the grammar remains the sole specification of a language's syntax and is not duplicated. We have integrated support for typesmart constructors into the run-time system of Stratego to enforce usage of typesmart constructors implicitly whenever a regular constructor is called. We evaluate the applicability, performance, and usefulness of typesmart constructors for syntactic constraints in a compiler for MiniJava developed with Spoofax and in various language extensions of Java and Haskell implemented with SugarJ and SugarHaskell.
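    The mechanism can be pictured in a few lines of Scala (an illustrative sketch with invented names such as SmartVar; Stratego's actual run-time system intercepts regular constructor calls implicitly rather than requiring explicit calls to a smart constructor):

      sealed trait Exp
      final case class Var(name: String) extends Exp

      object SmartVar {
        private val Ident = "[A-Za-z][A-Za-z0-9_]*".r
        // Used in place of the regular Var constructor: it rejects values
        // that would make the generated code syntactically ill-formed.
        def apply(name: String): Either[String, Var] = name match {
          case Ident() => Right(Var(name))
          case _       => Left(s"not a well-formed identifier: '$name'")
        }
      }

      object Demo {
        def main(args: Array[String]): Unit = {
          println(SmartVar("x"))     // Right(Var(x))
          println(SmartVar("1 + 2")) // Left(...): the constructor refuses the value
        }
      }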
  • SCP 78(10) 2013 [pdf, doi, bib, researchr, ]
    Attribute grammars are a powerful specification paradigm for many language processing tasks, particularly semantic analysis of programming languages. Recent attribute grammar systems use dynamic scheduling algorithms to evaluate attributes on demand. In this paper, we show how to remove the need for a generator, by embedding a dynamic approach in a modern, object-oriented and functional programming language. The result is a small, lightweight attribute grammar library that is part of our larger Kiama language processing library. Kiama’s attribute grammar library supports a range of advanced features including cached, uncached, higher order, parameterised and circular attributes. Forwarding is available to modularise higher order attributes and decorators abstract away from the details of attribute value propagation. Kiama also implements new techniques for dynamic extension and variation of attribute equations. We use the Scala programming language because of its support for domain-specific notations and emphasis on scalability. Unlike generators with specialised notation, Kiama attribute grammars use standard Scala notations such as pattern-matching functions for equations, traits and mixins for composition and implicit parameters for forwarding. A benchmarking exercise shows that our approach is practical for realistic language processing.
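    The embedding idea is small enough to sketch in plain Scala; the following mimics the cached-attribute mechanism and pattern-matching equations described above, but it is a toy re-implementation, not Kiama's actual API:

      // A cached attribute: a memoised function from nodes to values whose
      // equation is written as an ordinary pattern-matching function.
      class Attr[T, U](equation: T => U) extends (T => U) {
        private val cache = scala.collection.mutable.Map.empty[T, U]
        def apply(node: T): U = cache.get(node) match {
          case Some(v) => v
          case None    => val v = equation(node); cache.update(node, v); v
        }
      }

      object Demo {
        def attr[T, U](equation: T => U): Attr[T, U] = new Attr(equation)

        sealed trait Exp
        final case class Num(n: Int)         extends Exp
        final case class Add(l: Exp, r: Exp) extends Exp

        // The equation reads like an attribute grammar rule; evaluation is
        // demand-driven and each node's value is computed at most once.
        val size: Attr[Exp, Int] = attr {
          case Num(_)    => 1
          case Add(l, r) => 1 + size(l) + size(r)
        }

        def main(args: Array[String]): Unit =
          println(size(Add(Num(1), Add(Num(2), Num(3))))) // 5
      }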
  • SoSyM 12(1) 2013 [pdf, doi, bib, researchr, ]
    Data validation rules constitute the constraints that data input and processing must adhere to in addition to the structural constraints imposed by a data model. Web modeling tools do not make all types of data validation explicit in their models, hampering full code generation and model expressivity. Web application frameworks do not offer a consistent interface for data validation. In this paper, we present a solution for the integration of declarative data validation rules with user interface models in the domain of web applications, unifying syntax, mechanisms for error handling, and semantics of validation checks, and covering value well-formedness, data invariants, input assertions, and action assertions. We have implemented the approach in WebDSL, a domain-specific language for the definition of web applications.
  • This book covers the design, implementation, and use of DSLs in detail. It consists of four parts. Part 1 introduces DSLs in general and discusses their advantages and drawbacks. It also defines important terms and concepts and introduces the case studies used in most of the remainder of the book. Part 2 discusses the design of DSLs, independent of implementation techniques. It discusses seven design dimensions, explains a number of reusable language paradigms, and points out a number of process-related issues. Part 3 provides details about the implementation of DSLs, with lots of code. It uses three state-of-the-art but quite different language workbenches: JetBrains MPS, Eclipse Xtext, and TU Delft's Spoofax. Part 4 discusses the use of DSLs for requirements, architecture, implementation, and product line engineering, as well as their roles as a developer utility and for implementing business logic.
  • SLE 2013 [pdf, doi, bib, researchr, ]
    IDEs depend on incremental name and type analysis for responsive feedback for large projects. In this paper, we present a language-independent approach for incremental name and type analysis. Analysis consists of two phases. The first phase analyzes lexical scopes and binding instances and creates deferred analysis tasks. A task captures a single name resolution or type analysis step. Tasks might depend on other tasks and are evaluated in the second phase. Incrementality is supported at the file and task level. When a file changes, only that file is recollected, and only those tasks affected by changes in the collected data are reevaluated. The analysis neither re-parses nor re-traverses unchanged files, even if they are affected by changes in other files. We implemented the approach as part of the Spoofax Language Workbench and evaluated it for the WebDSL web programming language.
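    In sketch form, the second phase behaves roughly as follows (illustrative Scala with invented names, not the Spoofax task engine's API). A real engine would also compare old and new results to cut off propagation when a recomputed value is unchanged:

      // One deferred step: an id, the ids it depends on, and a computation
      // over previously computed results.
      final case class Task(id: String, deps: Set[String], compute: Map[String, String] => String)

      class TaskEngine {
        private val results = scala.collection.mutable.Map.empty[String, String]

        // Tasks are assumed to be topologically ordered by their deps.
        // Only tasks that are new, or reachable from a changed input, re-run.
        def evaluate(tasks: Seq[Task], changed: Set[String]): Unit = {
          var dirty = changed
          for (t <- tasks)
            if (!results.contains(t.id) || t.deps.exists(dirty)) {
              results(t.id) = t.compute(results.toMap)
              dirty += t.id // downstream tasks now see this task as changed
            }
        }

        def result(id: String): Option[String] = results.get(id)
      }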
  • SLE 2013 [pdf, doi, bib, researchr, ]
    Language workbenches are tools that provide high-level mechanisms for the implementation of (domain-specific) languages. Language workbenches are an active area of research that also receives many contributions from industry. To compare and discuss existing language workbenches, the annual Language Workbench Challenge was launched in 2011. Each year, participants are challenged to realize a given domain-specific language with their workbenches as a basis for discussion and comparison. In this paper, we describe the state of the art of language workbenches as observed in the previous editions of the Language Workbench Challenge. In particular, we capture the design space of language workbenches in a feature model and show where in this design space the participants of the 2013 Language Workbench Challenge reside. We compare these workbenches based on a DSL for questionnaires that was realized in all workbenches.
  • ICMT 2013 [pdf, doi, bib, researchr, ]
    In modern Integrated Development Environments (IDEs), textual editors are interactive and can handle intermediate, incomplete, or otherwise erroneous texts while still providing editor services such as syntax highlighting, error marking, outline views, and hover help. In this paper, we present an approach for the robust synchronization of interactive textual and graphical editors. The approach recovers from errors during parsing and text-to-model synchronization, preserves textual and graphical layout in the presence of erroneous texts and models, and provides synchronized editor services such as selection sharing and navigation between editors. It was implemented for synchronizing textual editors generated by the Spoofax language workbench and graphical editors generated by the Graphical Modeling Framework.
  • TOPLAS 34(4) 2012 [pdf, doi, bib, researchr, ]
    Integrated development environments (IDEs) increase programmer productivity, providing rapid, interactive feedback based on the syntax and semantics of a language. Unlike conventional parsing algorithms, scannerless generalized-LR parsing supports the full set of context-free grammars, which is closed under composition, and hence can parse languages composed from separate grammar modules. To apply this algorithm in an interactive environment, this paper introduces a novel error recovery mechanism. Our approach is language-independent, and relies on automatic derivation of recovery rules from grammars. By taking layout information into consideration, it can efficiently provide natural recovery suggestions.
  • LDTA 2012 [pdf, doi, bib, researchr, ]
    Syntax discoverability has been a crucial advantage of structure editors for new users of a language. Despite this advantage, structure editors have not been widely adopted. Based on immediate parsing and analyses, modern textual code editors are also increasingly syntax-aware: structure and textual editors are converging into a new editing paradigm that combines text and templates. Current text-based language workbenches require redundant specification of the ingredients for a template-based editor, which is detrimental to the quality of syntactic completion, as consistency and completeness of the definition cannot be guaranteed. In this paper we describe the design and implementation of a specification language for syntax definition based on templates. It unifies the specification of parsers, unparsers and template-based editors. We evaluate the template language by application to two domain-specific languages used for tax benefits and mobile applications.
  • ASE 2012 [pdf, doi, bib, researchr, ]
    Evaluation of parse error recovery techniques is an open problem. The community lacks objective standards and methods to measure the quality of recovery results. This paper proposes an automated technique for recovery evaluation that offers a solution for two main problems in this area. First, a representative test set is generated by a mutation-based fuzzing technique that applies knowledge about common syntax errors. Second, the quality of the recovery results is automatically measured using an oracle-based evaluation technique. We evaluate the validity of our approach by comparing results obtained by automated evaluation with results obtained by manual inspection. The evaluation shows a clear correspondence between our quality metric and human judgement.
  • Companion to the 27th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2012, part of SPLASH 2012, Tucson, AZ, USA, October 19 - 26, 2012 2012 [pdf, doi, bib, researchr, ]
    In textual software languages, names are used to identify program elements such as variables, methods, and classes. Name analysis algorithms resolve names in order to establish references between definitions and uses of names. In this poster, we present the Spoofax Name Binding Language (NBL), a declarative meta-language for the specification of name binding and scope rules, which departs from the programmatic encodings of name binding provided by regular approaches. NBL aspires to become the universal language for name binding, which can be used next to BNF definitions in reference manuals, as well as serve the generation of implementations.
  • LDTA 2012 [pdf, doi, bib, researchr, ]
    The implementation of refactorings for new languages requires considerable effort from the language developer. We aim at reducing that effort by using language-generic techniques. This paper focuses on behavior preservation, in particular the preservation of static name bindings. To detect name binding violations, we implement a technique that reuses the name analysis defined in the compiler front end. Some languages offer the possibility to access variables using qualified names. As a refinement to violation detection, we show that name analysis can be defined as a reusable traversal strategy that can be applied to restore name bindings by creating qualified names. These techniques offer an efficient and reliable solution; the semantics of the language is implemented only once, with the compiler being the single source of truth. We evaluate our approach by implementing a language-generic rename refactoring, which we apply to two domain-specific languages and a subset of the Java language.
  • OOPSLA 2012 [pdf, doi, bib, researchr, ]
    Software is rapidly moving from the desktop to the Web. The Web provides a generic user interface that allows ubiquitous access, instant collaboration, integration with other online services, and avoids installation and configuration on desktop computers. For software development, the Web presents a shift away from developer workstations as a silo, and has the promise of closer collaboration and improved feedback through innovations in Web-based interactive development environments (IDEs). Moving IDEs to the Web is not just a matter of porting desktop IDEs; a fundamental reconsideration of the IDE architecture is necessary in order to realize the full potential that the combination of modern IDEs and the Web can offer. This paper discusses research challenges and opportunities in this area, guided by a pilot study of a web IDE implementation.
  • SLE 2012 [pdf, doi, bib, researchr, ]
    In textual software languages, names are used to reference elements like variables, methods, classes, etc. Name resolution analyses these names in order to establish references between definition and use sites of elements. In this paper, we identify recurring patterns for name bindings in programming languages and introduce a declarative metalanguage for the specification of name bindings in terms of namespaces, definition sites, use sites, and scopes. Based on such declarative name binding specifications, we provide a language-parametric algorithm for static name resolution during compile-time. We discuss the integration of the algorithm into the Spoofax Language Workbench and show how its results can be employed in semantic editor services like reference resolution, constraint checking, and content completion.
  • JSC 46(2) 2011 [pdf, doi, bib, researchr, ]
    Modern web application development frameworks provide web application developers with high-level abstractions to improve their productivity. However, their support for static verification of applications is limited. Inconsistencies in an application are often not detected statically, but appear as errors at run-time. The reports about these errors are often obscure and hard to trace back to the source of the inconsistency. A major part of this inadequate consistency checking can be traced back to the lack of linguistic integration of these frameworks. Parts of an application are defined with separate domain-specific languages, which are not checked for consistency with the rest of the application. Examples include regular expressions, query languages and XML-based languages for definition of user interfaces. We give an overview and analysis of typical problems arising in development with frameworks for web application development, with Ruby on Rails, Lift and Seam as representatives. To remedy these problems, in this paper, we argue that domain-specific languages should be designed from the ground up with static verification and cross-aspect consistency checking in mind, providing linguistic integration of domain-specific sub-languages. We show how this approach is applied in the design of WebDSL, a domain-specific language for web applications, by examining how its compiler detects inconsistencies not caught by web frameworks, providing accurate and clear error messages. Furthermore, we show how this consistency analysis can be expressed with a declarative rule-based approach using the Stratego transformation language.
  • GPCE 2011 [pdf, doi, bib, researchr, ]
    Tool support is vital to the effectiveness of domain-specific languages. With language workbenches, domain-specific languages and their tool support can be generated from a combined, high-level specification. This paper shows how such a specification can be extended to describe a debugger for a language. To realize this, we introduce a meta-language for coordinating the debugger that abstracts over the complexity of writing a debugger by hand. We describe the implementation of a language-parametric infrastructure for debuggers that can be instantiated based on this specification. The approach is implemented in the Spoofax language workbench and validated through realistic case studies with the Stratego transformation language and the WebDSL web programming language.
  • OOPSLA 2011 [pdf, doi, bib, researchr, ]
    SugarJ is a Java-based programming language that provides extensible surface syntax, static analyses, and IDE support. SugarJ extensions are organized as libraries; conventional import statements suffice to activate and compose language extensions. We demonstrate how programmers can use SugarJ to modularly extend Java's syntax, semantic analyses and IDE support.
  • SLE 2011 [pdf, doi, bib, researchr, ]
    Metamodel evolution requires model migration. To correctly migrate models, evolution needs to be made explicit. Manually describing evolution is error-prone and redundant. Metamodel matching offers a solution by automatically detecting evolution, but is only capable of detecting primitive evolution steps. In practice, primitive evolution steps are jointly applied to form a complex evolution step, which has the same effect on a metamodel as the sum of its parts, yet generally has a different effect in migration. Detection of complex evolution is therefore needed. In this paper, we present an approach to reconstruct complex evolution between two metamodel versions, using a matching result as input. It supports operator dependencies and mixed, overlapping, and incorrectly ordered complex operator components. It also supports interference between operators, where the effect of one operator is partially or completely hidden from the target metamodel by other operators.
  • Companion to the 26th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2011, part of SPLASH 2011, Portland, OR, USA, October 22 - 27, 2011 2011 [pdf, doi, bib, researchr, ]
    SugarJ is a Java-based programming language that provides extensible surface syntax, static analyses, and IDE support. SugarJ extensions are organized as libraries; conventional import statements suffice to activate and compose language extensions. We illustrate how programmers can use SugarJ to modularly extend Java’s syntax, semantic analyses and IDE support.
  • GPCE 2011 [pdf, doi, bib, researchr, ]
    Large software projects consist of code written in a multitude of different (possibly domain-specific) languages, which are often deeply interspersed even in single files. While many proposals exist on how to integrate languages semantically and syntactically, the question of how to support this scenario in integrated development environments (IDEs) remains open: How can standard IDE services, such as syntax highlighting, outlining, or reference resolving, be provided in an extensible and compositional way, such that an open mix of languages is supported in a single file? Based on our library-based syntactic extension language for Java, SugarJ, we propose to make IDEs extensible by organizing editor services in editor libraries. Editor libraries are libraries written in the object language, SugarJ, and hence activated and composed through regular import statements on a file-by-file basis. We have implemented an IDE for editor libraries on top of SugarJ and the Eclipse-based Spoofax language workbench. We have validated editor libraries by evolving this IDE into a fully-fledged and schema-aware XML editor as well as an extensible LaTeX editor, which we used for writing this paper.
  • OOPSLA 2011 [pdf, doi, bib, researchr, ]
    Mobl is a new language designed to declaratively construct mobile web applications. Mobl integrates languages for user interface design, styling, data modeling, querying and application logic into a single, unified language that is flexible, expressive, enables early detection of errors, and has good IDE support.
  • OOPSLA 2011 [pdf, doi, bib, researchr, ]
    The reliability of compilers, interpreters, and development environments for programming languages is essential for effective software development and maintenance. They are often tested only as an afterthought. Languages with a smaller scope, such as domain-specific languages, often remain untested. General-purpose testing techniques and test case generation methods fall short in providing a low-threshold solution for test-driven language development. In this paper we introduce the notion of a language-parametric testing language (LPTL) that provides a reusable, generic basis for declaratively specifying language definition tests. We integrate the syntax, semantics, and editor services of a language under test into the LPTL for writing test inputs. This paper describes the design of an LPTL and the tool support provided for it, shows use cases using examples, and describes our implementation in the form of the Spoofax testing language.
  • SPLC 2011 [pdf, doi, bib, researchr, ]
    This paper investigates the application of domain-specific languages in product line engineering (PLE). We start by analyzing the limits of expressivity of feature models. Feature models correspond to context-free grammars without recursion, which prevents the expression of multiple instances and references. We then show how domain-specific languages (DSLs) can serve as a middle ground between feature modeling and programming. They can be used in cases where feature models are too limited, while keeping the separation between problem space and solution space provided by feature models. We then categorize useful combinations between configuration with feature models and construction with DSLs and provide an integration of DSLs into the conceptual framework of PLE. Finally, we show how the use of a consistent, unified formalism for models, code, and configuration can yield important benefits for managing variability and traceability. We illustrate the concepts with several examples from industrial case studies.
  • GPCE 2011 [pdf, doi, bib, researchr, ]
    WebDSL is a domain-specific language for the implementation of dynamic web applications with a rich data model. It provides developers with object-oriented data modeling concepts but abstracts over implementation details for persisting application data in relational databases. When the underlying data model of an application evolves, persisted application data has to be migrated. While implementing migration at the database level breaks the abstractions provided by WebDSL, an implementation at the data model level requires intermingling migration with application code. In this paper, we present a domain-specific language for the coupled evolution of data models and application data. It allows data model evolution to be specified as a separate concern at the data model level and can be compiled to migration code at the database level. Its linguistic integration with WebDSL enables static checks for evolution validity and correctness.
  • OOPSLA 2011 [pdf, doi, bib, researchr, ]
    A new generation of mobile touch devices, such as the iPhone, iPad and Android devices, is equipped with powerful, modern browsers. However, regular websites are not optimized for the specific features and constraints of these devices, such as limited screen real estate, unreliable Internet access, touch-based interaction patterns, and features such as GPS. While recent advances in web technology enable web developers to build web applications that take advantage of the unique properties of mobile devices, developing such applications exposes a number of problems, specifically: developers are required to use many loosely coupled languages with limited tool support, and application code is often verbose and imperative. We introduce mobl, a new language designed to declaratively construct mobile web applications. Mobl integrates languages for user interface design, styling, data modeling, querying and application logic into a single, unified language that is flexible, expressive, enables early detection of errors, and has good IDE support.
  • OOPSLA 2011 [pdf, doi, bib, researchr, ]
    The Spoofax testing language provides a new approach to testing domain-specific languages as they are developed. It allows test cases to be written using fragments of the language under test, providing full IDE support for writing test cases and supporting tests for language syntax, semantics, and editor services.
  • SLE 2011 [pdf, doi, bib, researchr, ]
    Transformations and semantic analysis for source-to-source transformations such as refactorings are most effectively implemented using an abstract representation of the source code. An intrinsic limitation of transformation techniques based on abstract syntax trees is the loss of layout, i.e. comments and whitespace. This is especially relevant in the context of refactorings, which produce source code for human consumption. In this paper, we present an algorithm for fully automatic source code reconstruction for source-to-source transformations. The algorithm preserves the layout and comments of the unaffected parts and reconstructs the indentation of the affected parts, using a set of clearly defined heuristic rules to handle comments.
  • Proceedings of the Workshop on Intermediate Representations
    Florent Bouchez, Sebastian Hack, Eelco Visser (editors).
    The intermediate representation is the core of any program transformation tool. Its design has a significant impact on the simplicity, efficiency, and effectiveness of program transformations. The developments in concurrent programming, integrated development environments, and domain-specific languages pose new requirements on intermediate representations. This workshop provides a forum to discuss current trends and experiences in the design, implementation, and application of intermediate representations.
  • Programming the Mobile Web with Mobl
    Technical report TUD-SERG-2011-01, Delft University of Technology, 2011 [bib, researchr, ]
    A new generation of mobile touch devices, such as the iPhone, Android and iPad, is equipped with powerful, modern browsers. However, regular websites are not optimized for the specific features and constraints of these devices, such as limited screen real estate, unreliable Internet access, touch-based interaction patterns, and features such as GPS. While recent advances in web technology enable web developers to build web applications that take advantage of the unique properties of mobile devices, developing such applications is not a clean, well-integrated experience. Developers are required to use many loosely coupled languages with limited tool support, and application code is often verbose and imperative. We introduce mobl, a new language designed to declaratively construct mobile web applications. Mobl integrates languages for user interface design, data modeling and querying, scripting and web services into a single, unified language that is flexible, expressive, enables early detection of errors, and has good IDE support. We illustrate the design of the language with the implementation of ConfPlan, an application for keeping track of the schedule of conference events.
  • SCP 75(7) 2010 [pdf, doi, bib, researchr, ]
    Software written in one language often needs to construct sentences in another language, such as SQL queries, XML output, or shell command invocations. This is almost always done using unhygienic string manipulation, the concatenation of constants and client-supplied strings. A client can then supply specially crafted input that causes the constructed sentence to be interpreted in an unintended way, leading to an injection attack. We describe a more natural style of programming that yields code that is impervious to injections by construction. Our approach embeds the grammars of the guest languages (e.g. SQL) into that of the host language (e.g. Java) and automatically generates code that maps the embedded language to constructs in the host language that reconstruct the embedded sentences, adding escaping functions where appropriate. This approach is generic, meaning that it can be applied with relative ease to any combination of context-free host and guest languages.
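    The essence of the approach can be sketched as follows (hypothetical Scala, not the paper's generated code): quoted guest-language fragments become constructors, and escaping is applied exactly where client-supplied strings enter the sentence, so no call site can forget it.

      sealed trait Sql
      final case class Lit(sql: String)      extends Sql // trusted query text
      final case class Str(value: String)    extends Sql // client-supplied data
      final case class Cat(parts: List[Sql]) extends Sql // concatenation

      object Render {
        private def escape(s: String): String = s.replace("'", "''")

        def render(q: Sql): String = q match {
          case Lit(sql)   => sql
          case Str(v)     => "'" + escape(v) + "'" // escape at the boundary
          case Cat(parts) => parts.map(render).mkString
        }

        def main(args: Array[String]): Unit = {
          val userInput = "x' OR '1'='1" // attempted injection
          val q = Cat(List(Lit("SELECT * FROM users WHERE name = "), Str(userInput)))
          println(render(q)) // quotes are doubled, so the attack is inert
        }
      }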
  • IEEE Software 27(5) 2010 [pdf, doi, bib, researchr, ]
    WebDSL is a domain-specific language for Web information systems that maintains separation of concerns while integrating its sublanguages, enabling consistency checking and reusing common language concepts.
  • SoSyM 9(3) 2010 [pdf, doi, bib, researchr, ]
    The realization of model-driven software development requires effective techniques for implementing code generators for domain-specific languages. This paper identifies techniques for improving separation of concerns in the implementation of generators. The core technique is code generation by model transformation, that is, the generation of a structured representation (model) of the target program instead of plain text. This approach enables the transformation of code after generation, which in turn enables the extension of the target language with features that allow better modularity in code generation rules. The technique can also be applied to ‘internal code generation’ for the translation of high-level extensions of a DSL to lower-level constructs within the same DSL using model-to-model transformations. This paper refines our earlier description of code generation by model transformation with an improved architecture for the composition of model-to-model normalization rules, solving the problem of combining type analysis and transformation. Instead of coarse-grained stages that alternate between normalization and type analysis, we have developed a new style of type analysis that can be integrated with normalizing transformations in a fine-grained manner. The normalization strategy has a simple extension interface and integrates non-local, context-sensitive transformation rules. We have applied the techniques in a realistic case study of domain-specific language engineering, i.e. the code generator for WebDSL, using Stratego, a high-level transformation language that integrates model-to-model, model-to-code, and code-to-code transformations.
  • ENTCS 253(7) 2010 [pdf, doi, bib, researchr, ]
    Attribute grammars are a powerful specification paradigm for many language processing tasks, particularly semantic analysis of programming languages. Recent attribute grammar systems use dynamic scheduling algorithms to evaluate attributes by need. In this paper, we show how to remove the need for a generator, by embedding a dynamic approach in a modern, object-oriented programming language to implement a small, lightweight attribute grammar library. The Kiama attribution library has similar features to current generators, including cached, uncached, circular, higher-order and parameterised attributes, and implements new techniques for dynamic extension and variation of attribute equations. We use the Scala programming language because of its combination of object-oriented and functional features, support for domain-specific notations and emphasis on scalability. Unlike generators with specialised notation, Kiama attribute grammars use standard Scala notations such as pattern-matching functions for equations and mixins for composition. A performance analysis shows that our approach is practical for realistic language processing.
  • ENTCS 253(7) 2010 [pdf, doi, bib, researchr, ]
    Modern IDEs increase developer productivity by incorporating many different kinds of editor services. These can be purely syntactic, such as syntax highlighting, code folding, and an outline for navigation; or they can be based on the language semantics, such as in-line type error reporting and resolving identifier declarations. Building all these services from scratch requires both the extensive knowledge of the sometimes complicated and highly interdependent APIs and extension mechanisms of an IDE framework, and an in-depth understanding of the structure and semantics of the targeted language. This paper describes Spoofax/IMP, a meta-tooling suite that provides high-level domain-specific languages for describing editor services, relieving editor developers from much of the framework-specific programming. Editor services are defined as composable modules of rules coupled to a modular SDF grammar. The composability provided by the SGLR parser and the declaratively defined services allows embedded languages and language extensions to be easily formulated as additional rules extending an existing language definition. The service definitions are used to generate Eclipse editor plugins. We discuss two examples: an editor plugin for WebDSL, a domain-specific language for web applications, and the embedding of WebDSL in Stratego, used for expressing the (static) semantic rules of WebDSL.
  • OOPSLA 2010 [pdf, doi, bib, researchr, ]
    Domain-specific languages (DSLs) provide high expressive power focused on a particular problem domain. They provide linguistic abstractions and specialized syntax specifically designed for a domain, allowing developers to avoid boilerplate code and low-level implementation details. Language workbenches are tools that integrate all aspects of the definition of domain-specific or general-purpose software languages and the creation of a programming environment from such a definition. To count as a language workbench, a tool needs to satisfy basic requirements for the integrated definition of syntax, semantics, and editor services, and preferably also support language extension and composition. Within these requirements there is ample room for variation in the design of a language workbench. In this tutorial, we give an introduction to the state of the art in textual DSLs and language workbenches. We discuss the main requirements and variation points in the design of language workbenches, and describe two points in the design space using two state-of-the-art language workbenches. Spoofax is an example of a parser-based language workbench, while MPS represents language workbenches based on projectional editors.
  • WRLA 2010 [pdf, doi, bib, researchr, ]
    This paper presents the main results and conclusions of the Third Rewrite Engines Competition (REC III). This edition of the competition took place as part of the 8th Workshop on Rewriting Logic and its Applications (WRLA 2010), and the systems ASF+SDF, Maude, Stratego/XT, Tom, and TXL participated in it.
  • SCAM 2010 [pdf, doi, bib, researchr, ]
    Software platforms such as the Java Virtual Machine or the .NET CLR have their own ecosystem of a core programming language or instruction set, libraries, and developer community. Programming languages can target multiple software platforms to increase interoperability or to boost performance. Introducing a new compiler backend for a language is the first step towards targeting a new platform, translating the language to the platform's language or instruction set. Programs written in modern languages generally make extensive use of APIs, based on the runtime system of the software platform, introducing additional portability concerns. They may use APIs that are implemented by platform-specific libraries. Libraries may perform platform-specific operations, make direct native calls, or make assumptions about performance characteristics of operations or about the file system. This paper proposes to use aspect weaving to invasively adapt programs and libraries to address such portability concerns, and identifies four classes of aspects for this purpose. We evaluate this approach through a case study where we retarget the Stratego program transformation language towards the Java Virtual Machine.
  • OOPSLA 2010 [pdf, doi, bib, researchr, ]
    Spoofax is a language workbench for efficient, agile development of textual domain-specific languages with state-of-the-art IDE support. Spoofax integrates language processing techniques for parser generation, meta-programming, and IDE development into a single environment. It uses concise, declarative specifications for languages and IDE services. In this paper we describe the architecture of Spoofax and introduce idioms for high-level specifications of language semantics using rewrite rules, showing how analyses can be reused for transformations, code generation, and editor services such as error marking, reference resolving, and content completion. The implementation of these services is supported by language-parametric editor service classes that can be dynamically loaded by the Eclipse IDE, allowing new languages to be developed and used side-by-side in the same Eclipse environment.
  • OOPSLA 2010 [pdf, doi, bib, researchr, ]
    Syntax definitions are pervasive in modern software systems, and serve as the basis for language processing tools like parsers and compilers. Mainstream parser generators pose restrictions on syntax definitions that follow from their implementation algorithm. They hamper evolution, maintainability, and compositionality of syntax definitions. The pureness and declarativity of syntax definitions is lost. We analyze how these problems arise for different aspects of syntax definitions, discuss their consequences for language engineers, and show how the pure and declarative nature of syntax definitions can be regained.
  • SLE 2010 [pdf, doi, bib, researchr, ]
    In meta-programming with concrete object syntax, meta-programs can be written using the concrete syntax of manipulated programs. Quotations of concrete syntax fragments and anti-quotations for meta-level expressions and variables are used to manipulate the abstract representation of programs. These small, isolated fragments are often ambiguous and must be explicitly disambiguated with quotation tags or types, using names from the non-terminals of the object language syntax. Discoverability of these names has been an open issue, as they depend on the (grammar) implementation and are not part of the concrete syntax of a language. Based on advances in interactive development environments, we introduce interactive disambiguation to address this issue, providing real-time feedback and proposing quick fixes in case of ambiguities.
  • OOPSLA 2010 [pdf, doi, bib, researchr, ]
    Spoofax is a language workbench for efficient, agile development of textual domain-specific languages with state-of-the-art IDE support. It provides a comprehensive environment that integrates syntax definition, program transformation, code generation, and declarative specification of IDE components.
  • Generative Programming And Component Engineering, Proceedings of the Ninth International Conference on Generative Programming and Component Engineering, GPCE 2010, Eindhoven, The Netherlands, October 10-13, 2010
    Eelco Visser, Jaakko Järvi (editors).
    Welcome to the Ninth International Conference on Generative Programming and Component Engineering (GPCE’10). GPCE is a venue for researchers and practitioners interested in software components and program generation, and how these technologies can increase programmer productivity, improve software quality, and shorten the time-to-market of software products. One goal of GPCE is to foster cross-fertilization between the software engineering and the programming languages research communities. It is thus fitting that the GPCE conference co-locates with the Third International Conference on Software Language Engineering (SLE’10). GPCE/SLE features jointly organized tutorial lectures, a joint keynote address, and the FOSD workshop, which attract audiences of both conferences. The technical programs of the two conferences complement each other well, and are likely to be of interest to attendees of both. This volume contains eighteen technical papers, including one tool demonstration, that were accepted to the GPCE conference this year. The program committee selected these eighteen papers from the 59 submitted manuscripts. Each manuscript was reviewed by at least three, most often four, program committee members. Final selections were made after a week-long program committee meeting. This volume also includes the extended abstract of the keynote presentation by Martin Erwig and the abstract of the keynote presentation by Ralf Lämmel.
  • Technical report TUD-SERG-2010-010, Software Engineering Research Group, Delft University of Technology, 2010 [pdf, doi, bib, researchr, ]
    This paper describes the workflow for performing systematic literature reviews with the researchr digital library environment.
  • ENTCS 238(3) 2009 [pdf, doi, bib, researchr, ]
    The 2nd Rewrite Engines Competition (REC) was held as part of the 7th Workshop on Rewriting Logic and its Applications (WRLA 2008). Five systems participated in this edition of the competition, namely ASF+SDF, Maude, Stratego/XT, Termware, and Tom. We explain here how the competition was organized and conducted, and present its main results and conclusions.
  • SLE 2009 [pdf, doi, bib, researchr, ]
    Module systems, separate compilation, deployment of binary components, and dynamic linking have enjoyed wide acceptance in programming languages and systems. In contrast, the syntax of languages is usually defined in a non-modular way, cannot be compiled separately, cannot easily be combined with the syntax of other languages, and cannot be deployed as a component for later composition. Grammar formalisms that do support modules use whole-program compilation. Current extensible compilers focus on source-level extensibility, which requires users to compile the compiler with a specific configuration of extensions. A compound parser needs to be generated for every combination of extensions. The generation of parse tables is expensive, which is a particular problem when the composition configuration is not fixed to enable users to choose language extensions. In this paper we introduce an algorithm for parse table composition to support separate compilation of grammars to parse table components. Parse table components can be composed (linked) efficiently at runtime, i.e. just before parsing. While the worst-case time complexity of parse table composition is exponential (like the complexity of parse table generation itself), for realistic language combination scenarios involving grammars for real languages, our parse table composition algorithm is an order of magnitude faster than computation of the parse table for the combined grammars.
  • OOPSLA 2009 [pdf, doi, bib, researchr, ]
    Integrated development environments (IDEs) increase programmer productivity, providing rapid, interactive feedback based on the syntax and semantics of a language. A heavy burden lies on developers of new languages to provide adequate IDE support. Code generation techniques provide a viable, efficient approach to semi-automatically produce IDE plugins. Key components for the realization of plugins are the language's grammar and parser. For embedded languages and language extensions, constituent IDE plugin modules and their grammars can be combined. Unlike conventional parsing algorithms, scannerless generalized-LR parsing supports the full set of context-free grammars, which is closed under composition, and hence can parse language embeddings and extensions composed from separate grammar modules. To apply this algorithm in an interactive environment, this paper introduces a novel error recovery mechanism, which allows it to be used with files with syntax errors -- common in interactive editing. Error recovery is vital for providing rapid feedback in case of syntax errors, as most IDE services depend on the parser -- from syntax highlighting to semantic analysis and cross-referencing. We base our approach on the principles of island grammars, and derive permissive grammars with error recovery productions from normal SDF grammars. To cope with the added complexity of these grammars, we adapt the parser to support backtracking. We evaluate the recovery quality and performance of our approach using a set of composed languages, based on Java and Stratego.
  • CLOUD 2009 [pdf, doi, bib, researchr, ]
    Hospital environments are currently primarily device-oriented: software services are installed, often manually, on specific devices. For instance, an application to view MRI scans may only be available on a limited number of workstations. The medical world is changing to a service-oriented environment, which means that every software service should be available on every device. However, these devices have widely varying capabilities, ranging from powerful workstations to PDAs, and high-bandwidth local machines to low-bandwidth remote machines. To support running applications in such an environment, we need to treat the hospital machines as a cloud, where components of the application are automatically deployed to machines in the cloud with the required capabilities and connectivity. In this paper, we suggest an architecture for applications in such a cloud, in which components are reliably and automatically deployed on the basis of a declarative model of the application using the Nix package manager.
  • SLE 2009 [pdf, doi, bib, researchr, ]
    Data validation rules constitute the constraints that data input and processing must adhere to in addition to the structural constraints imposed by a data model. Web modeling tools do not address data validation concerns explicitly, hampering full code generation and model expressivity. Web application frameworks do not offer a consistent interface for data validation. In this paper, we present a solution for the integration of declarative data validation rules with user interface models in the domain of web applications, unifying syntax, mechanisms for error handling, and semantics of validation checks, and covering value well-formedness, data invariants, input assertions, and action assertions. We have implemented the approach in WebDSL, a domain-specific language for the definition of web applications.
  • OOPSLA 2009 [pdf, doi, bib, researchr, ]
    WebDSL is a domain-specific language for the development of web applications that integrates data-models, user-interface models, actions, validation, access control, and workflow. The compiler verifies the consistency of applications and generates complete implementations in Java or Python. We illustrate the key concepts of the language with a small web application.
  • SLE 2009 [pdf, doi, bib, researchr, ]
    Parser generators are an indispensable tool for rapid language development. However, they often fall short of the finesse of a hand-crafted parser, built with the language semantics in mind. One area where generated parsers have provided unsatisfactory results is that of error recovery. Good error recovery is both natural, giving recovery suggestions in line with the intention of the programmer; and flexible, allowing it to be adapted according to language insights and language changes. This paper describes a novel approach to error recovery, taking into account not only the context-free grammar, but also indentation usage. We base our approach on an extension of the SGLR parser that supports fine-grained error recovery rules and can be used to parse complex, composed languages. We take a divide-and-conquer approach to error recovery: using indentation, erroneous regions of code are identified. These regions constrain the search space for applying recovery rules, improving performance and ensuring recovery suggestions local to the error. As a last resort, erroneous regions can be discarded. Our approach also integrates bridge parsing to provide more accurate suggestions for indentation-sensitive language constructs such as scopes. We evaluate our approach by comparison with the JDT Java parser used in Eclipse.
  • SLE 2009 [pdf, doi, bib, researchr, ]
    Intermediate languages are used in compiler construction to simplify retargeting compilers to multiple machine architectures. In the implementation of domain-specific languages (DSLs), compilers typically generate high-level source code, rather than low-level machine instructions. DSL compilers target a software platform, i.e. a programming language with a set of libraries, deployable on one or more operating systems. DSLs enable targeting multiple software platforms if their abstractions are platform-independent. While transformations from DSL to each targeted platform are often conceptually very similar, there is little reuse between transformations due to syntactic and API differences of the target platforms, making supporting multiple platforms expensive. In this paper, we discuss the design and implementation of PIL, a Platform Independent Language, an intermediate language providing a layer of abstraction between DSL and target platform code, abstracting from syntactic and API differences between platforms, thereby removing the need for platform-specific transformations. We discuss the use of PIL in an implementation of WebDSL, a DSL for building web applications.
  • CC 2009 [pdf, doi, bib, researchr, ]
    Attribute grammars are a powerful specification formalism for tree-based computation, particularly for software language processing. Various extensions have been proposed to abstract over common patterns in attribute grammar specifications. These include various forms of copy rules to support non-local dependencies, collection attributes, and expressing dependencies that are evaluated to a fixed point. Rather than implementing extensions natively in an attribute evaluator, we propose attribute decorators that describe an abstract evaluation mechanism for attributes, making it possible to provide such extensions as part of a library of decorators. Inspired by strategic programming, decorators are specified using generic traversal operators. To demonstrate their effectiveness, we describe how to employ decorators in name, type, and flow analysis.
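    As a rough picture of what a decorator does, this plain-Scala sketch (invented names, not the paper's Stratego-based notation) turns a partial, local equation into a non-local attribute by delegating to the parent node, the classic copy-rule pattern:

      final case class Node(label: String, parent: Option[Node])

      object Decorators {
        // The 'up' decorator: where the local equation is undefined,
        // delegate to the parent, giving non-local propagation for free.
        def up[U](equation: PartialFunction[Node, U]): Node => Option[U] = {
          def go(n: Node): Option[U] = equation.lift(n).orElse(n.parent.flatMap(go))
          go
        }

        def main(args: Array[String]): Unit = {
          val cls    = Node("class", None)
          val method = Node("method", Some(cls))
          // The equation only answers at class nodes; the decorator adds
          // the copy-rule behaviour for every other node.
          val enclosingClass = up[Node] { case n @ Node("class", _) => n }
          println(enclosingClass(method)) // Some(Node(class,None))
        }
      }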
  • ENTCS 203(2) 2008 [pdf, doi, bib, researchr, ]
    Program transformation systems provide powerful analysis and transformation frameworks as well as concise languages for language processing, but instantiating them for every subject language is an arduous task, most often resulting in half-completed frontends. Compilers provide mature frontends with robust parsers and type checkers, but solving language processing problems in general-purpose languages without transformation libraries is tedious. Reusing these frontends with existing transformation systems is therefore attractive. However, for this reuse to be optimal, the functional logic found in the frontend should be exposed to the transformation system – simple data serialization of the abstract syntax tree is not enough, since this fails to expose important compiler functionality, such as import graphs, symbol tables and the type checker. In this paper, we introduce a novel and general technique for combining term-based transformation systems with existing language frontends. The technique is presented in the context of a scriptable analysis and transformation framework for Java built on top of the Eclipse Java compiler. The framework consists of an adapter automatically extracted from the abstract syntax tree of the compiler and an interpreter for the Stratego program transformation language. The adapter allows the Stratego interpreter to rewrite directly on the compiler AST. We illustrate the applicability of our system with scripts written in Stratego that perform framework and library-specific analyses and transformations.
  • ENTCS 203(2) 2008 [pdf, doi, bib, researchr, ]
    A wide range of parser generators are used to generate parsers for programming languages. The grammar formalisms that come with parser generators provide different approaches for defining operator precedence. Some generators (e.g. YACC) support precedence declarations, others require the grammar to be unambiguous, thus encoding the precedence rules. Even if the grammar formalism provides precedence rules, a particular grammar might not use them. The result is grammar variants implementing the same language. For the C language, the GNU Compiler uses YACC with precedence rules, the C-Transformers uses SDF without priorities, while the SDF library does use priorities. For PHP, Zend uses YACC with precedence rules, whereas PHP-front uses SDF with priority and associativity declarations. The variance between grammars raises the question of whether the precedence rules of one grammar are compatible with those of another. This is usually not obvious, since some languages have complex precedence rules. Also, for some parser generators the semantics of precedence rules is defined operationally, which makes it hard to reason about their effect on the defined language. We present a method and tool for comparing the precedence rules of different grammars and parser generators. Although it is undecidable whether two grammars define the same language, this tool provides support for comparing and recovering precedence rules, which is especially useful for reliable migration of a grammar from one grammar formalism to another. We evaluate our method by the application to non-trivial mainstream programming languages, such as PHP and C.
  • SCP 72(1-2) 2008 [pdf, doi, bib, researchr, ]
    Stratego/XT is a language and toolset for program transformation. The Stratego language provides rewrite rules for expressing basic transformations, programmable rewriting strategies for controlling the application of rules, concrete syntax for expressing the patterns of rules in the syntax of the object language, and dynamic rewrite rules for expressing context-sensitive transformations, thus supporting the development of transformation components at a high level of abstraction. The XT toolset offers a collection of flexible, reusable transformation components, and tools for generating such components from declarative specifications. Complete program transformation systems are composed from these components.
  • ICMT 2008 [pdf, doi, bib, researchr, ]
    The realization of model-driven software development requires effective techniques for implementing code generators. In this paper, we present a case study of code generation by model transformation with Stratego, a high-level transformation language based on the paradigm of rewrite rules with programmable strategies that integrates model-to-model, model-to-code, and code-to-code transformations. The use of concrete object syntax guarantees syntactic correctness of code patterns, and enables the subsequent transformation of generated code. The composability of strategies supports two dimensions of transformation modularity. Vertical modularity is achieved by designing a generator as a pipeline of model-to-model transformations that gradually transforms a high-level input model to an implementation. Horizontal modularity is achieved by supporting the definition of plugins which implement all aspects of a language feature. We discuss the application of these techniques in the implementation of WebDSL, a domain-specific language for dynamic web applications with a rich data model.
  • MoDELS 2008 [pdf, doi, bib, researchr, ]
    Workflow languages are designed for the high-level description of processes and are typically not suitable for the generation of complete applications. In this paper, we present WebWorkFlow, an object-oriented workflow modeling language for the high-level description of workflows in web applications. Workflow descriptions define procedures operating on domain objects. Procedures are composed using sequential and concurrent process combinators. WebWorkFlow is an embedded language, extending WebDSL, a domain-specific language for web application development, with workflow abstractions. The extension is implemented by means of model-to-model transformations. Rather than providing an exclusive workflow language, WebWorkFlow supports interaction with the underlying WebDSL language. WebWorkFlow supports most of the basic workflow control patterns.
  • OOPSLA 2008 [pdf, doi, bib, researchr, ]
    Language extensions increase programmer productivity by providing concise, often domain-specific syntax, and support for static verification of correctness, security, and style constraints. Language extensions can often be realized through translation to the base language, supported by preprocessors and extensible compilers. However, various kinds of extensions require further adaptation of a base compiler's internal stages and components, for example to support separate compilation or to make use of low-level primitives of the platform (e.g., jump instructions or unbalanced synchronization). To allow for a more loosely coupled approach, we propose an open compiler model based on normalization steps from a high-level language to a subset of it, the core language. We developed such a compiler for a mixed Java and (core) bytecode language, and evaluate its effectiveness for composition mechanisms such as traits, as well as statement-level and expression-level language extensions.
  • MoDELS 2008 [pdf, doi, bib, researchr, ]
    Like most software artifacts, meta-models can evolve. Their evolution requires conforming models to co-evolve along with them. Coupled evolution supports this. Its applicability is not limited to the modeling domain; evolving grammars and database schemas face the same problem. Existing approaches to coupled evolution focus on a single, homogeneous domain. They solve the co-evolution problems locally and repeatedly. In this paper we present a systematic, heterogeneous approach to coupled evolution. It provides an automatically derived domain specific transformation language; a means of executing transformations at the top level; a derivation of the coupled bottom-level transformation; and it allows for generic abstractions from elementary transformations. The feasibility of the architecture is evaluated by applying it to data model evolution.
  • WCRE 2008 [doi, bib, researchr, ]
    Domain-specific languages (DSLs) improve programmer productivity by providing high-level abstractions for the development of applications in a particular domain. However, the smaller distance to the application domain entails more frequent changes to the language. As a result, existing DSL models need to be converted to the new version. Manual conversion is tedious and error prone. This paper presents an approach to support DSL evolution by generation of convertors between DSLs. By analyzing the differences between DSL meta-models, a mapping is reverse engineered which can be used to generate reengineering tools to automatically convert models between different versions of a DSL. The approach has been implemented for the Microsoft DSL Tools infrastructure in two tools called DSLCompare and ConverterGenerator. The approach has been evaluated by means of three case studies taken from the software development practice at the company Avanade.
  • LDTA 2008 [pdf, bib, researchr, ]
    Integrated Development Environments (IDEs) increase productivity by providing a rich user interface and rapid feedback for a specific language. Creating an editor for a specific language is not a trivial undertaking, and is a cumbersome task even when working with an extensible framework such as Eclipse. A new IBM-guided effort, the IMP framework, relieves the IDE developer from a significant portion of the required work by providing various abstractions for this. For embedded languages, such as embedded regular expressions, SQL queries, or code generation templates, its LALR parser generator falls short, however. Scannerless parsing with SGLR enables concise, modular definition of such languages. In this paper, we present an integration of SGLR into IMP, demonstrating that a scannerless parser can be successfully integrated into an IDE. Given an SDF syntax definition, the sdf2imp tool automatically generates an editor plugin based on the IMP API, complete with syntax checking, syntax highlighting, outline view, and code folding. Using declarative domain-specific languages, these services can be customized, and using the IMP metatooling framework the editor can be extended with other features.
  • OOPSLA 2008 [pdf, doi, bib, researchr, ]
    WebDSL is a domain-specific language for the implementation of dynamic web applications with a rich data model. It consists of a core language with constructs to define entities, pages and business logic. Higher-level abstractions, modeling access control and workflow, are defined in a modular fashion as extensions of the core language.
  • DSM 2008 [pdf, bib, researchr, ]
    Application frameworks encapsulate domain knowledge in a reusable library, providing abstractions for a particular domain. As such, they can form the basis for domain-specific languages, which may offer notational constructs, static analysis, and optimizations specific for the domain. Additional abstractions can be incrementally added on top of a domain-specific language, following an inductive approach to its design, evolving the language as new domain insights are acquired. A problem arises when such additions do not align well with the underlying framework. In this paper, we give several examples of this problem and describe scenarios for dealing with it.
  • ICWE 2008 [pdf, doi, bib, researchr, ]
    In this paper, we present the extension of WebDSL, a domain-specific language for web application development, with abstractions for declarative definition of access control. The extension supports the definition of a wide range of access control policies concisely and transparently as a separate concern. In addition to regulating the access to pages and actions, access control rules are used to infer navigation options not accessible to the current user, preventing the presentation of inaccessible links. The extension is an illustration of a general approach to the design of domain-specific languages for different technical domains to support separation of concerns in application development, while preserving linguistic integration. This approach is realized by means of a transformational semantics that weaves separately defined aspects into an integrated implementation.
  • GTTSE 2007 [pdf, doi, bib, researchr, ]
    The goal of domain-specific languages (DSLs) is to increase the productivity of software engineers by abstracting from low-level boilerplate code. Introduction of DSLs in the software development process requires a smooth workflow for the production of DSLs themselves. This requires technology for designing and implementing DSLs, but also a methodology for using that technology. That is, a collection of guidelines, design patterns, and reusable DSL components that show developers how to tackle common language design and implementation issues. This paper presents a case study in domain-specific language engineering. It reports on a project in which the author designed and built WebDSL, a DSL for web applications with a rich data model, using several DSLs for DSL engineering: SDF for syntax definition and Stratego/XT for code generation. The paper follows the stages in the development of the DSL. The contributions of the paper are three-fold. (1) A tutorial in the application of the specific SDF and Stratego/XT technology for building DSLs. (2) A description of an incremental DSL development process. (3) A domain-specific language for web applications with rich data models. The paper concludes with a survey of related approaches.
  • Proceedings 1st International Workshop on Model-Driven Software Evolution 2007 [pdf, bib, researchr]
  • GPCE 2007 [pdf, doi, bib, researchr, ]
    Software written in one language often needs to construct sentences in another language, such as SQL queries, XML output, or shell command invocations. This is almost always done using unhygienic string manipulation, the concatenation of constants and client-supplied strings. A client can then supply specially crafted input that causes the constructed sentence to be interpreted in an unintended way, leading to an injection attack. We describe a more natural style of programming that yields code that is impervious to injections by construction. Our approach embeds the grammars of the guest languages (e.g., SQL) into that of the host language (e.g., Java) and automatically generates code that maps the embedded language to constructs in the host language that reconstruct the embedded sentences, adding escaping functions where appropriate. This approach is generic, meaning that it can be applied with relative ease to any combination of host and guest languages.
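    The principle can be sketched in a few lines (a Python sketch with invented names; the paper generates such code from embedded grammars rather than using a runtime template class): the query is kept as a template with holes, and escaping is applied automatically when a hole is filled, so client data can never alter the sentence structure.

    ```python
    # Sketch only: structure comes from the template, never from the data.

    def sql_escape(value):
        # Quote a value as an SQL string literal, doubling embedded quotes.
        return "'" + str(value).replace("'", "''") + "'"

    class Query:
        def __init__(self, template, holes):
            self.template, self.holes = template, holes

        def render(self):
            # Escaping is inserted at every hole, mirroring the escaping
            # functions that generated assimilation code would insert.
            return self.template.format(*(sql_escape(h) for h in self.holes))

    q = Query("SELECT * FROM users WHERE name = {}", ["Robert'; DROP TABLE--"])
    print(q.render())
    # SELECT * FROM users WHERE name = 'Robert''; DROP TABLE--'
    ```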
  • MoDELS 2007 [pdf, doi, bib, researchr, ]
    Language libraries extend regular libraries with domain-specific notation. More precisely, a language library is a combination of a domain-specific language embedded in the general-purpose host language, a regular library implementing the underlying functionality, and an assimilation transformation that maps embedded DSL fragments to host language code. While the basic architecture for realizing language libraries is the same for all applications, there are many design choices to be made in the design of a particular combination of library, guest language syntax, host language, and assimilation. In this paper, we give an overview of the design space for syntax embeddings and assimilations for the realization of language libraries.
  • Proceedings of the Seventh Workshop on Language Descriptions, Tools and Applications (LDTA 2007) 2007 [pdf, bib, researchr]
  • Proceedings of the 2007 ACM SIGPLAN Workshop on Partial Evaluation and Semantics-based Program Manipulation, Nice, France, January 15-16, 2007
    Ganesan Ramalingam, Eelco Visser (editors).
  • FUIN 69(1-2) 2006 [pdf, doi, bib, researchr, ]
    The applicability of term rewriting to program transformation is limited by the lack of control over rule application and by the context-free nature of rewrite rules. The first problem is addressed by languages supporting user-definable rewriting strategies. The second problem is addressed by the extension of rewriting strategies with scoped dynamic rewrite rules. Dynamic rules are defined at run-time and can access variables available from their definition context. Rules defined within a rule scope are automatically retracted at the end of that scope. In this paper, we explore the design space of dynamic rules, and their application to transformation problems. The technique is formally defined by extending the operational semantics underlying the program transformation language Stratego, and illustrated by means of several program transformations in Stratego, including constant propagation, bound variable renaming, dead code elimination, function inlining, and function specialization.
  • ENTCS 147(1) 2006 [pdf, doi, bib, researchr, ]
    Properties such as logging, persistence, debugging, tracing, distribution, performance monitoring and exception handling occur in most programming paradigms and are normally very difficult or even impossible to modularize with traditional modularization mechanisms because they are cross-cutting. Recently, aspect-oriented programming has enjoyed recognition as a practical solution for separating these concerns. In this paper we describe an extension to the Stratego term rewriting language for capturing such properties. We show our aspect language offers a concise, practical and adaptable solution for dealing with unanticipated algorithm extension for forward data-flow propagation and dynamic type checking of terms. We briefly discuss some of the challenges faced when designing and implementing an aspect extension for and in a rule-based term rewriting system.
  • GTTSE 2006 [pdf, doi, bib, researchr, ]
    General-purpose programming languages provide limited facilities for expressing domain-specific concepts in a natural manner. All domain concepts need to be captured using the same generic syntactic and semantic constructs. Generative programming methods and program transformation techniques can be used to overcome this lack of abstraction in general-purpose languages. In this tutorial we describe the MetaBorg method for embedding domain-specific languages, tailored syntactically and semantically to the application domain at hand, in a general-purpose language. MetaBorg is based on Stratego/XT, a language and toolset for the implementation of program transformation systems, which is used for the definition of syntactic embeddings and assimilation of the embedded constructs into the surrounding code. We illustrate MetaBorg with three examples. JavaSwul is a custom-designed language for implementing graphical user interfaces, which provides high-level abstractions for component composition and event handling. JavaRegex is a new embedding of regular expression matching and string rewriting. JavaJava is an embedding of Java in Java for generating Java code. For these cases we show how Java programs in these domains become dramatically more readable, and we give an impression of the implementation of the language embeddings.
  • PEPM 2006 [pdf, doi, bib, researchr, ]
    Stratego/XT is a language and toolset for program transformation. The Stratego language provides rewrite rules for expressing basic transformations, programmable rewriting strategies for controlling the application of rules, concrete syntax for expressing the patterns of rules in the syntax of the object language, and dynamic rewrite rules for expressing context-sensitive transformations, thus supporting the development of transformation components at a high level of abstraction. The XT toolset offers a collection of flexible, reusable transformation components, as well as declarative languages for deriving new components. Complete program transformation systems are composed from these components. In this paper we give an overview of Stratego/XT 0.16.
  • OOPSLA 2006 [pdf, doi, bib, researchr, ]
    Aspect-Oriented Programming (AOP) is attracting attention from both research and industry, as illustrated by the ever-growing popularity of AspectJ, the de facto standard AOP extension of Java. From a compiler construction perspective AspectJ is interesting as it is a typical example of a compositional language, i.e., a language composed of a number of separate languages with different syntactical styles: in addition to plain Java, AspectJ includes a language for defining pointcuts and one for defining advices. Language composition represents a non-trivial challenge for conventional parsing techniques. First, combining several languages with different lexical syntax leads to considerable complexity in the lexical states to be processed. Second, as new language features for AOP are being explored, many research proposals are concerned with further extending the AspectJ language, resulting in a need for an extensible syntax definition. This paper shows how scannerless parsing elegantly addresses the issues encountered by conventional techniques when parsing AspectJ. We present the design of a modular, extensible, and formal definition of the lexical and context-free aspects of the AspectJ syntax in the Syntax Definition Formalism SDF, which is implemented by a scannerless, generalized-LR parser (SGLR). We introduce grammar mixins as a novel application of SDF's modularity features, which allows the declarative definition of different keyword policies and combination of extensions. We illustrate the modular extensibility of our definition with syntax extensions taken from current research on aspect languages. Finally, benchmarks show the reasonable performance of scannerless generalized-LR parsing for this grammar.
  • JSC 40(1) 2005 [pdf, doi, bib, researchr, ]
    Program transformation is the mechanical manipulation of a program in order to improve it relative to some cost function and is understood broadly as the domain of computation where programs are the data. The natural basic building blocks of the domain of program transformation are transformation rules expressing a "one-step" transformation on a fragment of a program. The ultimate perspective of research in this area is a high-level, language parametric, rule-based program transformation system, which supports a wide range of transformations, admitting efficient implementations that scale to large programs. This situation has not yet been reached, as trade-offs between different goals need to be made. This survey gives an overview of issues in rule-based program transformation systems, focusing on the expressivity of rule-based program transformation systems and in particular on transformation strategies available in various approaches. The survey covers term rewriting, extensions of basic term rewriting, tree parsing strategies, systems with programmable strategies, traversal strategies, and context-sensitive rules.
  • GPCE 2005 [pdf, doi, bib, researchr, ]
    In meta programming with concrete object syntax, object-level programs are composed from fragments written in concrete syntax. The use of small program fragments in such quotations and the use of meta-level expressions within these fragments (anti-quotation) often leads to ambiguities. This problem is usually solved through explicit disambiguation, resulting in considerable syntactic overhead. A few systems manage to reduce this overhead by using type information during parsing. Since this is hard to achieve with traditional parsing technology, these systems provide specific combinations of meta and object languages, and their implementations are difficult to reuse. In this paper, we generalize these approaches and present a language independent method for introducing concrete object syntax without explicit disambiguation. The method uses scannerless generalized-LR parsing to parse meta programs with embedded object-level fragments, which produces a forest of all possible parses. This forest is reduced to a tree by a disambiguating type checker for the meta language. To validate our method we have developed embeddings of several object languages in Java, including AspectJ and Java itself.
  • CC 2005 [pdf, doi, bib, researchr, ]
    Data-flow transformations used in optimizing compilers are also useful in other programming tools such as code generators, aspect weavers, domain-specific optimizers, and refactoring tools. These applications require source-to-source transformations rather than transformations on a low-level intermediate representation. In this paper we describe the composition of source-to-source data-flow transformations in the program transformation language Stratego. The language supports the high-level specification of transformations by means of rewriting strategy combinators that allow a natural modeling of data- and control-flow without committing to a specific source language. Data-flow facts are propagated using dynamic rewriting rules. In particular, we introduce the concept of dependent dynamic rewrite rules for modeling the dependencies of data-flow facts on program entities such as variables. The approach supports the combination of analysis and transformation, the combination of multiple transformations, the combination with other types of transformations, and the correct treatment of variable binding constructs and lexical scope to avoid free variable capture.
  • SCM 2005 [pdf, doi, bib, researchr, ]
    The deployment of services --- sets of running programs that provide some useful facility on a system or network --- is typically implemented through a manual, time-consuming and error-prone process. For instance, system administrators must deploy the necessary software components, edit configuration files, start or stop processes, and so on. This is often done in an ad hoc style with no reproducibility, violating proper configuration management practices. In this paper we show that build management, software deployment and service deployment can be integrated into a single formalism. We do this in the context of the Nix software deployment system, and show that its advantages --- co-existence of versions and variants, atomic upgrades and rollbacks, and component closure --- extend naturally to service deployment. The approach also elegantly extends to distributed services. In addition, we show that the Nix expression language can simplify the implementation of crosscutting variation points in services.
  • SCAM 2005 [pdf, doi, bib, researchr, ]
    The transformation language Stratego provides high-level abstractions for implementation of a wide range of transformations. Our aim is to integrate transformation in the software development process and make it available to programmers. This requires the transformations provided by the programming environment to be extensible. This paper presents a case study in the implementation of extensible programming environments using Stratego, by developing a small collection of language extensions and several typical transformations for these languages.
  • Technical report UU-CS-2005-030, Department of Information and Computing Sciences. Universiteit Utrecht, 2005 [pdf, bib, researchr, ]
    Properties such as logging, persistence, debugging, tracing, distribution, performance monitoring and exception handling occur in most programming paradigms and are normally very difficult or even impossible to modularize with traditional modularization mechanisms because they are cross-cutting. Recently, aspect-oriented programming has enjoyed recognition as a practical solution for separating these concerns. In this paper we describe an extension to the Stratego term rewriting language for capturing such properties. We show our aspect language offers a concise, practical and adaptable solution for dealing with unanticipated algorithm extension for forward data-flow propagation and dynamic type checking of terms. We briefly discuss some of the challenges faced when designing and implementing an aspect extension for and in a rule-based term rewriting system.
  • Informatie 46(1) 2004 [pdf, bib, researchr, ]
    Every respectable software system nowadays offers the possibility of changing its properties during its lifetime. To model these variation points, the Jaquard project TraCE is developing transparent configuration environments.
  • OOPSLA 2004 [pdf, doi, bib, researchr, ]
    Application programmers' interfaces give access to domain knowledge encapsulated in class libraries without providing the appropriate notation for expressing domain composition. Since object-oriented languages are designed for extensibility and reuse, the language constructs are often sufficient for expressing domain abstractions at the semantic level. However, they do not provide the right abstractions at the syntactic level. In this paper we describe MetaBorg, a method for providing concrete syntax for domain abstractions to application programmers. The method consists of embedding domain-specific languages in a general purpose host language and assimilating the embedded domain code into the surrounding host code. Instead of extending the implementation of the host language, the assimilation phase implements domain abstractions in terms of existing APIs leaving the host language undisturbed. Indeed, MetaBorg can be considered a method for promoting APIs to the language level. The method is supported by proven and available technology, i.e. the syntax definition formalism SDF and the program transformation language and toolset Stratego/XT. We illustrate the method with applications in three domains: code generation, XML generation, and user-interface construction.
  • LISA 2004 [pdf, doi, bib, researchr, ]
    Existing systems for software deployment are neither safe nor sufficiently flexible. Primary safety issues are the inability to enforce reliable specification of component dependencies, and the lack of support for multiple versions or variants of a component. This renders deployment operations such as upgrading or deleting components dangerous and unpredictable. A deployment system must also be flexible (i.e., policy-free) enough to support both centralised and local package management, and to allow a variety of mechanisms for transferring components. In this paper we present Nix, a deployment system that addresses these issues through a simple technique of using cryptographic hashes to compute unique paths for component instances.
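    The core mechanism can be approximated in a few lines of Python (a deliberate simplification with invented names, not Nix's actual algorithm): a component's path is a cryptographic hash over its complete build description, including the paths of its dependencies, so versions and variants never collide and can coexist.

    ```python
    # Sketch: hash the full build description into the component's path.
    import hashlib
    import json

    def store_path(name, version, build_inputs, store="/nix/store"):
        # The inputs are themselves store paths, i.e. hashes, so the
        # description captures the dependency graph recursively.
        descr = json.dumps({"name": name, "version": version,
                            "inputs": sorted(build_inputs)}, sort_keys=True)
        digest = hashlib.sha256(descr.encode()).hexdigest()[:32]
        return f"{store}/{digest}-{name}-{version}"

    glibc23 = store_path("glibc", "2.3", [])
    glibc24 = store_path("glibc", "2.4", [])
    print(store_path("hello", "1.0", [glibc23]))
    print(store_path("hello", "1.0", [glibc24]))
    # Different dependency -> different path: both variants coexist.
    ```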
  • ICSE 2004 [pdf, doi, bib, researchr, ]
    The deployment of software components frequently fails because dependencies on other components are not declared explicitly or are declared imprecisely. This results in an incomplete reproduction of the environment necessary for proper operation, or in interference between incompatible variants. In this paper we show that these deployment hazards are similar to pointer hazards in memory models of programming languages and can be countered by imposing a memory management discipline on software deployment. Based on this analysis we have developed a generic, platform and language independent, discipline for deployment that allows precise dependency verification; exact identification of component variants; computation of complete closures containing all components on which a component depends; maximal sharing of components between such closures; and concurrent installation of revisions and variants of components. We have implemented the approach in the Nix deployment system, and used it for the deployment of a large number of existing Linux packages. We compare its effectiveness to other deployment systems.
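    The closure computation behind this discipline is ordinary reachability over the component reference graph, analogous to tracing live objects from a root in a garbage collector; a minimal sketch (invented names, not the Nix implementation):

    ```python
    # Sketch: the deployment closure of a component is everything it
    # transitively references, computed as graph reachability.

    def closure(component, references):
        todo, seen = [component], set()
        while todo:
            c = todo.pop()
            if c not in seen:
                seen.add(c)
                todo.extend(references.get(c, []))
        return seen

    refs = {"app": ["libfoo", "libbar"], "libfoo": ["libc"], "libbar": ["libc"]}
    print(sorted(closure("app", refs)))  # ['app', 'libbar', 'libc', 'libfoo']
    ```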
  • Generative Programming and Component Engineering: Third International Conference, GPCE 2004, Vancouver, Canada, October 24-28, 2004. Proceedings
    Gabor Karsai, Eelco Visser (editors).
  • Dagstuhl 2003 [pdf, doi, bib, researchr, ]
    Stratego/XT is a framework for the development of transformation systems aiming to support a wide range of program transformations. The framework consists of the transformation language Stratego and the XT collection of transformation tools. Stratego is based on the paradigm of rewriting under the control of programmable rewriting strategies. The XT tools provide facilities for the infrastructure of transformation systems including parsing and pretty-printing. The framework addresses the entire range of the development process; from the specification of transformations to their composition into transformation systems. This chapter gives an overview of the main ingredients involved in the composition of transformation systems with Stratego/XT, where we distinguish the abstraction levels of rules, strategies, tools, and systems.
  • AOSD 2003 [pdf, doi, bib, researchr, ]
    Strategic programming is a generic programming idiom for processing compound data such as terms or object structures. At the heart of the approach is the separation of two concerns: basic data-processing computations vs. traversal schemes. Actual traversals are composed by passing the former as arguments to the latter. Traversal schemes can be defined by the strategic programmer using a combinator style that relies on primitives for layered traversal. In this paper, we take a look at strategic programming from an aspect-oriented programming perspective. Throughout the paper, we compare strategic programming with adaptive programming, which is a well-established aspectual approach to the traversal of object structures. We start from the observation that aspect-oriented programming terms, e.g., crosscutting, join point, and advice, can be instantiated for aspectual traversal approaches.
  • SCAM 2003 [pdf, doi, bib, researchr, ]
    The use of a high-level, abstract coding style can greatly increase developer productivity. For numerical software, this can result in drastically reduced run-time performance. High-level, domain-specific optimisations can eliminate much of the overhead caused by an abstract coding style, but current compilers have poor support for domain-specific optimisation. In this paper we present CodeBoost, a source-to-source transformation tool for domain-specific optimisation of C++ programs. CodeBoost performs parsing, semantic analysis and pretty-printing, and transformations can be implemented either in the Stratego program transformation language, or as user-defined rewrite rules embedded within the C++ program. CodeBoost has been used with great success to optimise numerical applications written in the Sophus high-level coding style. We discuss the overall design of the CodeBoost transformation framework, and take a closer look at two important features of CodeBoost: user-defined rules and totem annotations. We also show briefly how CodeBoost is used to optimise Sophus code, resulting in applications that run twice as fast, or more.
  • SCAM 2003 [pdf, doi, bib, researchr, ]
    Array processing languages such as APL, Matlab and Octave rely on dynamic typechecking by the interpreter rather than static typechecking and are designed for user convenience with a syntax close to mathematical notation. Functions and operators are highly overloaded. The price to be paid for this flexibility is computational performance, since the run-time system is responsible for type checking, array shape determination, function call dispatching, and handling possible run-time errors. In order to produce efficient code, an Octave compiler should address those issues at compile-time as much as possible. In particular, static type and shape inferencing can improve the quality of the generated code. In this paper we discuss how overloading in dynamically typed Octave programs can be resolved by program specialization. We discuss the typing issues in compilation of Octave programs and give an overview of the implementation of the specializer in the transformation language Stratego.
  • Dagstuhl 2003 [pdf, doi, bib, researchr, ]
    AUTOBAYES is a fully automatic, schema-based program synthesis system for statistical data analysis applications. Its core component is a schema library, i.e., a collection of generic code templates with associated applicability constraints which are instantiated in a problem-specific way during synthesis. Currently, AUTOBAYES is implemented in Prolog; the schemas thus use abstract syntax (i.e., Prolog terms) to formulate the templates. However, the conceptual distance between this abstract representation and the concrete syntax of the generated programs makes the schemas hard to create and maintain. In this paper we describe how AUTOBAYES is retrofitted with concrete syntax. We show how it is integrated into Prolog and describe how the seamless interaction of concrete syntax fragments with AUTOBAYES's remaining "legacy" meta-programming kernel based on abstract syntax is achieved. We apply the approach to gradually migrate individual schemas without forcing a disruptive migration of the entire system to a different meta-programming language. First experiences show that a smooth migration can be achieved. Moreover, it can result in a considerable reduction of the code size and improved readability of the code. In particular, abstracting out fresh-variable generation and second-order term construction allows the formulation of larger continuous fragments.
  • LOPSTR 2003 [pdf, doi, bib, researchr, ]
    Program generation and transformation systems work on two language levels, the object-level (i.e., the language of the manipulated programs), and the meta-level (i.e., the implementation language of the system itself). The meta-level representations of object-level program fragments are usually built in an essentially syntax-free fashion using the operations provided by the meta-language. However, syntax matters and a large conceptual distance between the two languages makes it difficult to maintain and extend such systems. Here we describe how an existing Prolog-based system can gradually be retrofitted with concrete object-level syntax using the approach outlined in [5], thus shrinking this distance.
  • Technical report UU-CS-2003-048, Institute of Information and Computing Sciences, Utrecht University, 2003 [pdf, bib, researchr, ]
    Transformation techniques are spreading from application in compilers to general use in generative programming and document processing. Since transformation requires operations such as pattern matching, generic structure traversal, and querying, which are not normally provided by general-purpose programming languages, many tools have been developed to provide higher-level support for the implementation of transformations. These tools come in many flavors each with their own merits and based on different paradigms, which makes comparison difficult. In this paper, we consider transformation from the point of view of mechanics and develop a classification of transformation mechanisms that provides a reference for comparing tools developed for different applications, using different implementations, and in different programming paradigms. To do so we distinguish three fundamental aspects of transformation mechanisms: scope, direction, and stages. We apply this classification in a discussion of design patterns for transformation, characterization of several typical transformations, and a systematic comparison of eleven representative transformation tools.
  • ENTCS 70(6) 2002 [pdf, doi, bib, researchr, ]
    Data-flow optimizations are usually implemented on low-level intermediate representations. This is not appropriate for source-to-source optimizations, which reconstruct a source level program after transformation. In this paper we show how constant propagation, a well known data-flow optimization problem, can be implemented on abstract syntax trees in Stratego, a rewriting system extended with programmable rewriting strategies for the control over the application of rules and dynamic rewrite rules for the propagation of information.
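    The essence of the technique can be rendered in plain Python (an informal sketch with invented names; the paper expresses it with rewriting strategies and dynamic rewrite rules): at each constant assignment a rewrite for that variable is defined on the fly, and it is retracted as soon as the variable loses its constant value.

    ```python
    # Sketch of constant propagation with dynamically defined rules:
    # 'env' plays the role of the set of dynamic rewrite rules.

    def propagate(stmts):
        env, out = {}, []
        for op, *args in stmts:
            if op == "assign":
                x, e = args
                if isinstance(e, str):          # variable reference
                    e = env.get(e, e)           # apply dynamic rule, if any
                if isinstance(e, int):
                    env[x] = e                  # define rule: x -> e
                else:
                    env.pop(x, None)            # retract: x not constant
                out.append(("assign", x, e))
            elif op == "print":
                (x,) = args
                out.append(("print", env.get(x, x)))
        return out

    print(propagate([("assign", "a", 1), ("print", "a"),
                     ("assign", "b", "a"), ("print", "b")]))
    # [('assign', 'a', 1), ('print', 1), ('assign', 'b', 1), ('print', 1)]
    ```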
  • ENTCS 65(3) 2002 [pdf, doi, bib, researchr, ]
    Programming language semantics based on pure rewrite rules suffers from the gap between the rewriting strategy implemented in rewriting engines and the intended evaluation strategy. This paper shows how programmable rewriting strategies can be used to implement interpreters for programming languages based on rewrite rules. The advantage of this approach is that reduction rules are first class entities that can be reused in different strategies, even in other kinds of program transformations such as optimizers. The approach is illustrated with several interpreters for the lambda calculus based on implicit and explicit (parallel) substitution, different strategies including normalization, eager evaluation, lazy evaluation, and lazy evaluation with updates. An extension with pattern matching and choice shows that such interpreters can easily be extended.
  • GPCE 2002 [pdf, doi, bib, researchr, ]
    Meta programs manipulate structured representations, i.e., abstract syntax trees, of programs. The conceptual distance between the concrete syntax meta-programmers use to reason about programs and the notation for abstract syntax manipulation provided by general purpose (meta-) programming languages is too great for many applications. In this paper it is shown how the syntax definition formalism SDF can be employed to fit any meta-programming language with concrete syntax notation for composing and analyzing object programs. As a case study, the addition of concrete syntax to the program transformation language Stratego is presented. The approach is then generalized to arbitrary meta-languages.
  • RTA 2002 [pdf, doi, bib, researchr, ]
    Instruction selection (mapping IR trees to machine instructions) can be expressed by means of rewrite rules. Typically, such sets of rewrite rules are highly ambiguous. Therefore, standard rewriting engines based on fixed, exhaustive strategies are not appropriate for the execution of instruction selection. Code generator generators use special purpose implementations employing dynamic programming. In this paper we show how rewriting strategies for instruction selection can be encoded concisely in Stratego, a language for program transformation based on the paradigm of programmable rewriting strategies. This embedding obviates the need for a language dedicated to code generation, and makes it easy to combine code generation with other optimizations.
  • CC 2002 [pdf, doi, bib, researchr, ]
    In this paper we present the fusion of generalized LR parsing and scannerless parsing. This combination supports syntax definitions in which all aspects (lexical and context-free) of the syntax of a language are defined explicitly in one formalism. Furthermore, there are no restrictions on the class of grammars, thus allowing a natural syntax tree structure. Ambiguities that arise through the use of unrestricted grammars are handled by explicit disambiguation constructs, instead of implicit defaults that are taken by traditional scanner and parser generators. Hence, a syntax definition becomes a full declarative description of a language. Scannerless generalized LR parsing is a viable technique that has been applied in various industrial and academic projects.
  • CSMR 2002 [pdf, doi, bib, researchr, ]
    The reverse and reengineering research communities have a strong tradition of collecting, organizing, and unifying research results. Typical examples include an explicit taxonomy, dedicated web sites, an annotated bibliography, as well as efforts in exchange formats and tool evaluation. In this paper we describe and evaluate the use of a web authoring system to integrate such efforts. To that end, we propose the "Reengineering Wiki", which uses Wiki technology to enable web site visitors themselves to maintain and organize pages devoted to their topics of interest. This paper covers web authoring criteria, an introduction to wiki technology, typical wiki usage, and an evaluation of wiki-based systems. Moreover, the paper discusses the organization and contents of the Reengineering Wiki, and concludes with an invitation to participate in the Reengineering Wiki project.
  • Strategic programming is an idiom for generic programming where the concept of a strategy plays a central role. A strategy is a generic, data-processing action. Strategies are first-class citizens, as witnessed by a combinator style. Two important characteristics of strategies are that they can traverse into compound data, and that they can be customized by type-specific actions. We provide a general definition of strategic programming, and we demonstrate how this idiom can be realized inside several programming language paradigms.
  • Proceedings of the 2002 ACM SIGPLAN Workshop on Rule-Based Programming, Pittsburgh, Pennsylvania, USA, 2002
    Bernd Fischer, Eelco Visser (editors).
  • ENTCS 59(4) 2001 [pdf, doi, bib, researchr, ]
    The applicability of term rewriting to program transformation is limited by the lack of control over rule application and by the context-free nature of rewrite rules. The first problem is addressed by languages supporting user-definable rewriting strategies. This paper addresses the second problem by extending rewriting strategies with scoped dynamic rewrite rules. Dynamic rules are generated at run-time and can access variables available from their definition context. Rules generated within a rule scope are automatically retracted at the end of that scope. The technique is illustrated by means of several program transformations: bound variable renaming, function inlining, and dead function elimination.
  • ENTCS 57 2001 [pdf, doi, bib, researchr, ]
    Abstract programming supports the separation of logical concerns from issues of control in program construction. While this separation of concerns leads to reduced code size and increased reusability of code, its main disadvantage is the computational overhead it incurs. Fusion techniques can be used to combine the reusability of abstract programs with the efficiency of specialized programs. In this paper we illustrate some of the ways in which rewriting strategies can be used to separate the definition of program transformation rules from the strategies under which they are applied. Doing so supports the generic definition of program transformation components. Fusion techniques for strategies can then be used to specialize such generic components. We show how the generic innermost rewriting strategy can be optimized by fusing it with the rules to which it is applied. Both the optimization and the programs to which the optimization applies are specified in the strategy language Stratego. The optimization is based on small transformation rules that are applied locally under the control of strategies, using special knowledge about the contexts in which the rules are applied.
  • ENTCS 44(2) 2001 [pdf, doi, bib, researchr, ]
    XT bundles existing and newly developed program transformation libraries and tools into an open framework that supports component-based development of program transformations. We discuss the roles of XT's constituents in the development process of program transformation tools, as well as some experiences with building program transformation systems with XT.
  • ENTCS 57 2001 [pdf, doi, bib, researchr, ]
    Program transformation is used in a wide range of applications including compiler construction, optimization, program synthesis, refactoring, software renovation, and reverse engineering. Complex program transformations are achieved through a number of consecutive modifications of a program. Transformation rules define basic modifications. A transformation strategy is an algorithm for choosing a path in the rewrite relation induced by a set of rules. This paper surveys the support for the definition of strategies in program transformation systems. After a discussion of kinds of program transformation and choices in program representation, the basic elements of a strategy system are discussed and the choices in the design of a strategy language are considered. Several styles of strategy systems as provided in existing languages are then analyzed.
  • CC 2001 [pdf, doi, bib, researchr, ]
    The Asf+Sdf Meta-environment is an interactive development environment for the automatic generation of interactive systems for constructing language definitions and generating tools for them. Over the years, this system has been used in a variety of academic and commercial projects ranging from formal program manipulation to conversion of COBOL systems. Since the existing implementation of the Meta-environment started exhibiting more and more characteristics of a legacy system, we decided to build a completely new, component-based, version. We demonstrate this new system and stress its open architecture.
  • RTA 2001 [pdf, doi, bib, researchr, ]
    Program transformation is used in many areas of software engineering. Examples include compilation, optimization, synthesis, refactoring, migration, normalization and improvement [15]. Rewrite rules are a natural formalism for expressing single program transformations. However, using a standard strategy for normalizing a program with a set of rewrite rules is not adequate for implementing program transformation systems. It may be necessary to apply a rule only in some phase of a transformation, to apply rules in some order, or to apply a rule only to part of a program. These restrictions may be necessary to avoid non-termination or to choose a specific path in a non-confluent rewrite system. Stratego is a language for the specification of program transformation systems based on the paradigm of rewriting strategies. It supports the separation of strategies from transformation rules, thus allowing careful control over the application of these rules. As a result of this separation, transformation rules are reusable in multiple different transformations and generic strategies capturing patterns of control can be described independently of the transformation rules they apply. Such strategies can even be formulated independently of the object language by means of the generic term traversal capabilities of Stratego. In this short paper I give a description of version 0.5 of the Stratego system, discussing the features of the language (Section 2), the library (Section 3), the compiler (Section 4) and some of the applications that have been built (Section 5). Stratego is available as free software under the GNU General Public License from http://www.stratego-language.org.
  • Technical report UU-CS-2001-42, Department of Information and Computing Sciences, Utrecht University, 2001 [pdf, bib, researchr, ]
    Traversals over the object structure are widely used in object-oriented programming, in particular in language processing applications. The visitor pattern separates computation from traversal by specifying the computations that should be performed at each object in a separate visitor class. This makes the implementation of different computations reusing the same traversal scheme possible. However, navigation through the object structure is fixed in the accept methods implemented by the objects that are traversed. This makes it difficult to use other navigation orders. In this paper, we introduce the Guide pattern that describes the separation of navigation from computation and object structure using a double-dispatching iterator. The pattern makes it possible to implement a whole range of navigation schemes for an object structure. Using a self-dispatching approach based on reflective method lookup such navigation schemes can be made reusable for whole classes of object structures (implementing a common interface). The efficiency of this approach is provided by caching method lookups. We extend the approach to generic navigation through arbitrary object structures using reflective field lookup. This results in a generalization of the Walkabout class of Palsberg and Jay with a huge performance improvement in Java, making the Walkabout usable in practice.
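    The separation the Guide pattern argues for can be sketched compactly (Python instead of the paper's Java, and without the double dispatch and reflective caching; names are invented): the navigation order lives in a guide object, the computation in a visitor, and neither is hard-wired into the object structure.

    ```python
    # Sketch: guides own navigation, visitors own computation.

    class Node:
        def __init__(self, name, *children):
            self.name, self.children = name, children

    class TopDownGuide:
        def traverse(self, node, visitor):
            visitor.visit(node)
            for child in node.children:
                self.traverse(child, visitor)

    class BottomUpGuide:
        def traverse(self, node, visitor):
            for child in node.children:
                self.traverse(child, visitor)
            visitor.visit(node)

    class PrintVisitor:
        def visit(self, node):
            print(node.name)

    tree = Node("root", Node("left"), Node("right"))
    TopDownGuide().traverse(tree, PrintVisitor())   # root, left, right
    BottomUpGuide().traverse(tree, PrintVisitor())  # left, right, root
    ```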
  • Technical report SEN-R0113, Software Engineering (SEN), CWI, 2001 [pdf, bib, researchr]
  • AMAI 29(1-4) 2000 [pdf, doi, bib, researchr, ]
    Stratego is a domain-specific language for the specification of program transformation systems. The design of Stratego is based on the paradigm of rewriting strategies: user-definable programs in a little language of strategy operators determine where and in what order transformation rules are (automatically) applied to a program. The separation of rules and strategies supports modularity of specifications. Stratego also provides generic features for specification of program traversals. In this paper we present a case study of Stratego as applied to a non-trivial problem in program transformation. We demonstrate the use of Stratego in eliminating intermediate data structures from functional programs (also known as deforestation) via the warm fusion algorithm of Launchbury and Sheard. This algorithm has been specified in Stratego and embedded in a fully automatic transformation system for kernel Haskell. The entire system consists of about 2600 lines of specification code, which breaks down into 1850 lines for a general framework for Haskell transformation and 750 lines devoted to a highly modular, easily extensible specification of the warm fusion transformer itself. Its successful design and construction provides further evidence that programs generated from Stratego specifications are suitable for integration into real systems, and that rewriting strategies are a good paradigm for the implementation of such systems.
  • Workshop on Generic Programming (WGP 2000) 2000 [pdf, bib, researchr, ]
    Many language processing operations have a generic underlying algorithm. However, these generic algorithms either have to be implemented specifically for the language under consideration or the language needs to be encoded in a generic format that the generic algorithm works on. Stratego is a language for program transformation that supports both specific and generic views of data types. A Stratego program defines a transformation on first-order ground terms. Transformation rules define single transformation steps. Transformation rules are combined into transformation strategies by means of combinators that determine where and in what order rules are applied. These combinators include: primitives for traversal to the direct subterms of a node, allowing the definition of many kinds of full term traversals; full control over recursion in traversals; patterns as first-class citizens; generic term construction and deconstruction. These features create a setting in which it is possible to combine generic traversal with data type specific pattern matching, and separating logic (transformation, pattern matching) from control (traversal). This makes it possible to give language independent descriptions of language processing operations that can be instantiated to a specific language by providing the patterns of the relevant constructs. These generic algorithms only touch relevant constructors and do not need to know the entire datatype, making the algorithms insensitive to changes in the abstract syntax that do not affect the constructors relevant to the operation. Stratego is currently implemented by compilation to C code. All constructs of the language are implemented directly, i.e., the compiled program is as large as the specification, in contrast to approaches that rely on preprocessing or program generation which may have a scaling problem when dealing with large languages. The approach to generic programming in Stratego is illustrated by means of several examples including free variable extraction, bound variable renaming, substitution and syntactic unification.
  • Strategies in Automated Deduction (STRATEGIES'99) 1999 [pdf, bib, researchr]
  • RTA 1999 [pdf, doi, bib, researchr, ]
    Stratego is a language for the specification of transformation rules and strategies for applying them. The basic actions of transformations are matching and building instantiations of first-order term patterns. The language supports concise formulation of generic and data type-specific term traversals. One of the unusual features of Stratego is the separation of scope from matching, allowing sharing of variables through traversals. The combination of first-order patterns with strategies forms an expressive formalism for pattern matching. In this paper we discuss three examples of strategic pattern matching: (1) Contextual rules allow matching and replacement of a pattern at an arbitrary depth of a subterm of the root pattern. (2) Recursive patterns can be used to characterize concisely the structure of languages that form a restriction of a larger language. (3) Overlays serve to hide the representation of a language in another (more generic) language. These techniques are illustrated by means of specifications in Stratego.
  • TCS 199(1-2) 1998 [pdf, doi, bib, researchr, ]
    Context-free grammars are used in several algebraic specification formalisms instead of first-order signatures for the definition of the structure of algebras, because grammars provide better notation than signatures. The rigidity of these first-order structures enforces a choice between strongly typed structures with little genericity or generic operations over untyped structures. In two-level signatures level 1 defines the algebra of types used at level 0 providing the possibility to define polymorphic abstract data types. Two-level grammars are the grammatical counterpart of two-level signatures. This paper discusses the correspondence between context-free grammars and first-order signatures, the extension of this correspondence to two-level grammars and signatures, examples of the usage of two-level grammars for polymorphic syntax definition, a restriction of the class of two-level grammars for which the parsing problem is decidable, a parsing algorithm that yields a minimal and finite set of most general parse trees for this class of grammars, and a proof of its correctness.
  • ENTCS 15 1998 [pdf, doi, bib, researchr, ]
    System S is a calculus providing the basic abstractions of term rewriting: matching and building terms, term traversal, combining computations and handling failure. The calculus forms a core language for implementation of a wide variety of rewriting languages, or more generally, languages for specifying tree transformations. In this paper we show how a conventional rewriting language based on conditional term rewriting can be implemented straightforwardly in System S. Subsequently we show how this implementation can be extended with features such as matching conditions, negative conditions, default rules, non-strictness annotations and alternative evaluation strategies.
  • ICFP 1998 [pdf, doi, bib, researchr, ]
    We describe a language for defining term rewriting strategies, and its application to the production of program optimizers. Valid transformations on program terms can be described by a set of rewrite rules; rewriting strategies are used to describe when and how the various rules should be applied in order to obtain the desired optimization effects. Separating rules from strategies in this fashion makes it easier to reason about the behavior of the optimizer as a whole, compared to traditional monolithic optimizer implementations. We illustrate the expressiveness of our language by using it to describe a simple optimizer for an ML-like intermediate representation. The basic strategy language uses operators such as sequential composition, choice, and recursion to build transformers from a set of labeled unconditional rewrite rules. We also define an extended language in which the side-conditions and contextual rules that arise in realistic optimizer specifications can themselves be expressed as strategy-driven rewrites. We show that the features of the basic and extended languages can be expressed by breaking down the rewrite rules into their primitive building blocks, namely matching and building terms in variable binding environments. This gives us a low-level core language which has a clear semantics, can be implemented straightforwardly and can itself be optimized. The current implementation generates C code from a strategy specification.
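    A rough rendering of such a strategy core (a Python sketch, assuming terms are tuples and representing failure as None; this is not the paper's language):

    ```python
    # Strategies are partial functions on terms; None signals failure.

    def seq(s1, s2):                 # sequential composition
        def go(t):
            r = s1(t)
            return None if r is None else s2(r)
        return go

    def choice(s1, s2):              # left-biased choice
        def go(t):
            r = s1(t)
            return r if r is not None else s2(t)
        return go

    def try_(s):                     # recover from failure with identity
        return choice(s, lambda t: t)

    def all_children(s):             # apply s to all direct subterms
        def go(t):
            if isinstance(t, tuple):
                kids = [s(k) for k in t[1:]]
                return None if None in kids else (t[0], *kids)
            return t
        return go

    def bottomup(s):                 # generic traversal via recursion
        return lambda t: seq(all_children(bottomup(s)), s)(t)

    def eval_add(t):                 # rule: Add(Int(i), Int(j)) -> Int(i+j)
        if (isinstance(t, tuple) and t[0] == "Add"
                and all(isinstance(k, tuple) and k[0] == "Int" for k in t[1:])):
            return ("Int", t[1][1] + t[2][1])
        return None                  # the rule fails on other terms

    term = ("Add", ("Add", ("Int", 1), ("Int", 2)), ("Int", 3))
    print(bottomup(try_(eval_add))(term))  # ('Int', 6)
    ```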
  • International Workshop on Parsing Technology (IWPT 1997) 1997 [pdf, bib, researchr, ]
    Disambiguation methods for context-free grammars enable concise specification of programming languages by ambiguous grammars. A disambiguation filter is a function that selects a subset from a set of parse trees---the possible parse trees for an ambiguous sentence. The framework of filters provides a declarative description of disambiguation methods independent of parsing. Although filters can be implemented straightforwardly as functions that prune the parse forest produced by a generalized parser, this can be too inefficient for practical applications. In this paper the optimization of parsing schemata, a framework for high-level description of parsing algorithms, by disambiguation filters is considered in order to find efficient parsing algorithms for declaratively specified disambiguation methods. As a case study, the optimization of the parsing schema of Earley's parsing algorithm by two filters is investigated. The main result is a technique for the generation of efficient LR-like parsers for ambiguous grammars modulo priorities.
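    The notion of a filter is easy to state as code; the following Haskell sketch (with an invented example filter) shows a filter as a plain function on the parse trees of a sentence, independent of any parser:

    ```haskell
    data Tree = Leaf String | Node String [Tree] deriving (Eq, Show)

    -- A filter selects the intended trees from the set of parse trees
    -- of an ambiguous sentence, without reference to a parser.
    type Filter = [Tree] -> [Tree]

    -- Example of a 'logical' filter: keep only the smallest trees.
    size :: Tree -> Int
    size (Leaf _)    = 1
    size (Node _ ts) = 1 + sum (map size ts)

    fewestNodes :: Filter
    fewestNodes [] = []
    fewestNodes ts = [t | t <- ts, size t == minimum (map size ts)]
    ```

    The optimization studied in the paper amounts to pushing such a filter into the parsing schema, so that rejected trees are never built in the first place.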
  • ASF+SDF 1997 [pdf, bib, researchr, ]
    User-definable strategies for the application of rewrite rules provide a means to construct transformation systems that apply rewrite rules in a controlled way. This paper describes a strategy language and its interpretation. The language is used to control the rewriting of terms using labeled rewrite rules. Rule labels are atomic strategies. Compound strategies are formed by means of sequential composition, nondeterministic choice, left choice, fixed point recursion, and two primitives for expressing term traversal. Several complex strategies such as bottom-up and top-down application and (parallel) innermost and (parallel) outermost reduction can be defined in terms of these primitives. The paper contains two case studies of the application of strategies.
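    A sketch of how such compound strategies can be derived from the primitives, in Haskell rather than the strategy language itself (combinator names invented here):

    ```haskell
    import Control.Monad ((>=>))

    data Term = App String [Term] deriving (Eq, Show)
    type Strategy = Term -> Maybe Term

    -- Traversal primitive: apply s to all immediate subterms.
    allS :: Strategy -> Strategy
    allS s (App f ts) = App f <$> traverse s ts

    -- Left choice; try s, falling back to the identity on failure.
    choice :: Strategy -> Strategy -> Strategy
    choice s1 s2 t = maybe (s2 t) Just (s1 t)

    tryS :: Strategy -> Strategy
    tryS s = choice s Just

    -- Compound strategies by fixed-point recursion over the primitives.
    topdown, bottomup :: Strategy -> Strategy
    topdown  s = s >=> allS (topdown s)
    bottomup s = allS (bottomup s) >=> s

    -- Innermost normalization: rewrite exhaustively, innermost first.
    innermost :: Strategy -> Strategy
    innermost s = bottomup (tryS (s >=> innermost s))
    ```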
  • PhD thesis, University of Amsterdam, 1997 [pdf, bib, researchr, ]
    Language prototyping is the activity of designing and testing definitions of new or existing computer languages. An important aspect of a language definition is the definition of its syntax. The subjects of this thesis are new formalisms and techniques that support the development and prototyping of syntax definitions. There are four main subjects: (1) Techniques for parsing and disambiguation of context-free languages. (2) Design and implementation of a new syntax definition formalism. (3) Design of a multi-level algebraic specification formalism. (4) Study of polymorphic syntax definition.
  • Technical report P9708, Programming Research Group, University of Amsterdam, 1997 [pdf, bib, researchr, ]
    Character classes are used in syntax definition formalisms as compact representations of sets of characters. A character class is a list of characters and ranges of characters. For instance [A-Z0-9] describes the set containing all uppercase characters and all digits. One set of characters can be represented in many ways with character classes. In this paper an algebraic specification of character classes is presented. We define a normalization of character classes that results in unique, most compact normal forms such that equality of character classes becomes syntactic equality of their normal forms.
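    A possible normalization along these lines, as a Haskell sketch (the representation and names are invented here, not taken from the report): sort the ranges, then merge overlapping and adjacent ones.

    ```haskell
    import Data.List (sortOn)

    -- A character class is a list of ranges; a single character c is
    -- the range (c, c).  [A-Z0-9] is [('A','Z'), ('0','9')].
    type Range = (Char, Char)
    type CharClass = [Range]

    -- Normalize: drop empty ranges, sort by lower bound, then merge
    -- overlapping or adjacent ranges, yielding a unique, most compact
    -- normal form.  Two classes denote the same set iff their normal
    -- forms are syntactically equal.
    normalize :: CharClass -> CharClass
    normalize = merge . sortOn fst . filter (\(lo, hi) -> lo <= hi)
      where
        merge ((a, b) : (c, d) : rs)
          | fromEnum c <= fromEnum b + 1 = merge ((a, max b d) : rs)
        merge (r : rs) = r : merge rs
        merge []       = []
    ```

    For example, normalize [('A','Z'),('0','9'),('a','k'),('h','z')] yields [('0','9'),('A','Z'),('a','z')], so set equality reduces to syntactic equality of normal forms.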
  • Technical report P9706, Programming Research Group, University of Amsterdam, 1997 [pdf, bib, researchr, ]
    In the next chapters we present the design and specification of a family of syntax definition formalisms. The kernel of this family of formalisms is formed by context-free grammars. A number of orthogonal extensions to the kernel are defined. Many of these extensions are defined in terms of the primitives of the kernel by means of normalization functions. This provides a framework for constructing new formalisms by adapting and extending previous ones. Included in the family are the following extensions of context-free grammars: uniform definition of lexical and context-free syntax, variables, disambiguation by priorities, follow restrictions and reject productions, a rich set of regular expressions defined in terms of context-free productions, character classes, aliases, and parameterized modules with hidden imports and renamings. The accumulation of these extensions is the syntax definition formalism SDF. This chapter provides an introduction to SDF and gives an overview of the design and specification of the family of formalisms.
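    To give a flavor of such normalization functions (a hypothetical rendering in Haskell; the report's actual definitions are written in ASF+SDF): an extension such as the iteration A* is translated into kernel productions, so the underlying machinery never sees it.

    ```haskell
    -- Symbols of the kernel: sorts, literals, and (as an extension) A*.
    data Symbol = Sort String | Lit String | Star Symbol
      deriving (Eq, Show)

    -- A production: a list of body symbols deriving a result symbol.
    data Prod = Prod [Symbol] Symbol deriving (Eq, Show)

    -- The extension is normalized away: A* is defined by the two
    -- kernel productions    -> A*    and    A* A -> A*  .
    normStar :: Symbol -> [Prod]
    normStar a = [ Prod [] (Star a)
                 , Prod [Star a, a] (Star a) ]
    ```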
  • Technical report P9707, Programming Research Group, University of Amsterdam, 1997 [pdf, bib, researchr, ]
    Current deterministic parsing techniques have a number of problems. These include the limitations of parser generators for deterministic languages and the complex interface between scanner and parser. Scannerless parsing is a parsing technique in which lexical and context-free syntax are integrated into one grammar and are all handled by a single context-free analysis phase. This approach has a number of advantages, including discarding the scanner and lexical disambiguation by means of the context in which a lexical token occurs. Scannerless parsing generates a number of interesting problems as well. Integrated grammars do not fit the requirements of conventional deterministic parsing techniques. A plain context-free grammar formalism leads to unwieldy grammars if all lexical information is included. Lexical disambiguation needs to be reformulated for use in context-free parsing. The scannerless generalized-LR parsing approach presented in this paper solves these problems. Grammar normalization is used to support an expressive grammar formalism without complicating the underlying machinery. Follow restrictions are used to express longest-match lexical disambiguation. Reject productions are used to express the prefer-keywords rule for lexical disambiguation. The SLR parser generation algorithm is adapted to implement disambiguation by general priority and associativity declarations and to interpret follow restrictions. Generalized-LR parsing is used to provide dynamic lookahead and to support parsing of arbitrary context-free grammars, including ambiguous ones. An adaptation of the GLR algorithm supports the interpretation of grammars with reject productions.
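    Two of these disambiguation devices are simple to state as checks; the Haskell sketch below (invented names, not the implementation from the report) mimics a follow restriction and a reject production.

    ```haskell
    import Data.Char (isAlphaNum)

    -- Longest match as a follow restriction: a recognized identifier
    -- may not be directly followed by a character that could extend it.
    followOk :: String -> Int -> Bool
    followOk input end =
      end >= length input || not (isAlphaNum (input !! end))

    -- 'Prefer keywords' as a reject production: a keyword is rejected
    -- as an identifier, so "if" can only parse as the keyword.
    keywords :: [String]
    keywords = ["if", "then", "else"]

    acceptAsId :: String -> Bool
    acceptAsId tok = tok `notElem` keywords
    ```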
  • Technical report P9717, Programming Research Group, University of Amsterdam, 1997 [pdf, bib, researchr, ]
    Priority and associativity declarations are used to disambiguate ambiguous fragments of context-free grammars. Usually this concerns expression grammars. It is possible to describe the same language by means of a context-free grammar alone, but only by using auxiliary non-terminals and extra chain productions, resulting in a grammar that generates different and larger trees and requires extra parse steps. In this paper we introduce a grammar transformation that translates a context-free grammar with priorities to a character class grammar that generates only trees without priority conflicts. The transformed grammar has the property that each production corresponds to a production in the original grammar and that no extra productions are used. The parse trees over the transformed grammar are therefore isomorphic to parse trees over the original grammar.
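    The property established by the transformation can be phrased as a predicate on parse trees; the following Haskell sketch (invented representation, ignoring argument positions for brevity) states what "without priority conflicts" means.

    ```haskell
    -- Parse trees labeled with the production that built each node.
    data Tree = Tok String | Node String [Tree] deriving (Eq, Show)

    -- (p, q) in conflicts: production q may not be a direct child of
    -- production p.  Real priority declarations also constrain the
    -- argument position; this sketch ignores that.
    conflicts :: [(String, String)]
    conflicts = [("Times", "Plus")]  -- E*E -> E  above  E+E -> E

    -- A tree without priority conflicts; the grammar transformation in
    -- the paper arranges that only such trees can be built at all.
    conflictFree :: Tree -> Bool
    conflictFree (Tok _)     = True
    conflictFree (Node p ts) = all ok ts && all conflictFree ts
      where
        ok (Node q _) = (p, q) `notElem` conflicts
        ok _          = True
    ```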
  • TOSEM 5(1) 1996 [pdf, doi, bib, researchr, ]
    Good documentation is important for the production of reusable and maintainable software. For the production of accurate documentation it is necessary that the original program text is not copied manually to obtain a typeset version. Apart from being tedious, this will invariably introduce errors. The production of tools that support the production of legible and accurate documentation is a software engineering challenge in itself. We present an algebraic approach to the generation of tools that produce typographically effective presentations of computer programs. A specification of a formatter is generated from the context-free grammar of a (programming) language. These generated formatters translate abstract syntax trees of programs into box expressions. Box expressions are translated by language-independent interpreters of the box language into ASCII or TeX. The formatting rules that are generated can easily be tuned in order to get the desired formatting of programs. We demonstrate this by means of real-life applications. Furthermore, we give a practical solution for the problem of formatting comments, which occur in the original text. The formatter generation approach proposed in this article can be used to generate formatting programs for arbitrary programming environments. Our formatter generation approach can be used to automatically generate formatters that have to be programmed explicitly in other systems.
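    A toy version of a box language and its ASCII interpretation, as a Haskell sketch (names invented here; the article's box language is considerably richer):

    ```haskell
    -- A fragment of a box language: text, horizontal composition, and
    -- vertical composition.
    data Box = Text String | H [Box] | V [Box]

    -- Render a box to lines of ASCII.
    render :: Box -> [String]
    render (Text s) = [s]
    render (V bs)   = concatMap render bs
    render (H [])   = [""]
    render (H bs)   = foldr1 beside (map render bs)

    -- Paste two blocks side by side, padding height and width as needed.
    beside :: [String] -> [String] -> [String]
    beside ls rs = zipWith (++) (padBlock ls) (take h (rs ++ repeat ""))
      where
        h = max (length ls) (length rs)
        w = maximum (0 : map length ls)
        padBlock xs = take h (map padLine xs ++ repeat (replicate w ' '))
        padLine s   = s ++ replicate (w - length s) ' '
    ```

    A formatter in this style maps each production of the grammar to a box constructor; a separate interpreter like render (or a TeX generator) fixes the concrete output format.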
  • Language Prototyping. An Algebraic Specification Approach 1996 [pdf, bib, researchr, ]
    This chapter introduces a modular, applicative, multi-level equational specification formalism that supports algebraic specification with user-definable type constructors, polymorphic functions and higher-order functions. Specifications consist of one or more levels numbered 0 to n. Level 0 defines the object-level terms. Level 1 defines the types used in the signature of level 0. In general, the terms used as types in level n are defined in level n+1. This setup makes the algebra of types and the algebra of types of types, etc., user-definable. The applicative term structure makes functions first-class citizens and facilitates higher-order functions. The use of variables in terms used as types provides polymorphism (including higher-order polymorphism, i.e., abstraction over type constructors). Functions and variables can be overloaded. Specifications can be divided into modules. Modules can be imported at several levels by means of a specification lifting operation. Equations define the semantics of terms over a signature. The formalism also allows equations over types, by means of which many type systems can be described. The typechecker presented in this chapter does not take type equations into account. The specification, in Asf+Sdf, of the syntax, type system and semantics of the formalism is presented in three stages: (1) untyped equational specifications, (2) applicative one-level specifications, (3) modular multi-level specifications. The definition of a typechecker for stages (2) and (3) is divided into four parts: (a) well-formedness judgements verifying type correctness of fully annotated terms and specifications, (b) non-well-formedness rules giving descriptive error messages for the cases not covered under (a), (c) a type assignment function annotating the terms in a plain specification with types, and (d) a typechecking function which checks well-formedness after applying type assignment. These functions are defined uniformly for all levels of a specification. Aside from defining a new specification formalism, this chapter illustrates the use of Asf+Sdf for the design and prototyping of sophisticated specification formalisms.
  • ASF+SDF 1995 [pdf, bib, researchr, ]
    In this paper we design a syntax definition formalism as a family of formalisms. Starting with a small kernel, several features for syntax definition are designed orthogonally to each other. This provides a framework for constructing new formalisms by adapting and extending old ones. The formalism is developed with the algebraic specification formalism ASF+SDF. It provides the following features: lexical and context-free syntax, variables, disambiguation by priorities, regular expressions, character classes and modular definitions. New are the uniform treatment of lexical syntax, context-free syntax and variables, the treatment of regular expressions by normalization yielding abstract syntax without auxiliary sorts, regular expressions as results of productions, and modules with hidden imports and renamings.
  • Proceedings of the ASMICS Workshop on Parsing Theory 1994 [pdf, bib, researchr, ]
    An ambiguous context-free grammar defines a language in which some sentences have multiple interpretations. For conciseness, ambiguous context-free grammars are frequently used to define even completely unambiguous languages, and numerous disambiguation methods exist for specifying which interpretation is the intended one for each sentence. The existing methods can be divided into 'parser-specific' methods that describe how some parsing technique deals with ambiguous sentences and 'logical' methods that describe the intended interpretation without reference to a specific parsing technique. We propose a framework of filters to describe and compare a wide range of disambiguation problems in a parser-independent way. A filter is a function that selects from a set of parse trees (the canonical representation of the interpretations of a sentence) the intended trees. The framework enables us to define several general properties of disambiguation methods. The expressive power of filters is illustrated by several case studies. Finally, a start is made with the study of efficient implementation techniques for filters by exploiting the commutativity of parsing steps and filter steps for certain classes of filters.
  • Technical report P9420, Programming Research Group, University of Amsterdam, 1994 [pdf, bib, researchr, ]
    We define a translation from an intermediate box language for pretty printing to TeX. This translation can be used as a back-end for pretty printers in documentation tools for programming languages. The translation is formulated in an executable algebraic specification formalism. An important aspect of the translation is the transformation of boxes according to a set of equations. These equations preserve the text formatting semantics of boxes which is also defined algebraically. New in this approach is that algebraic transformations of box terms are used to circumvent the limitations of the typesetter. The TeX generator, which translates the box language to TeX code, is a component of documentation tools generated for the programming environments developed with the ASF+SDF meta-environment, but can also be used as a separate tool. As a case study, the construction of a typesetter for the process specification formalism PSF is shown.
  • During the development of the ASF+SDF compiler it became clear that a simplification of the back-end could be achieved by eliminating associative lists and conditional equations in an early phase of compilation. In this thesis the transformation of conditional equational term rewrite systems with associative lists to term rewrite systems without lists is described by means of an algebraic specification. To make as much of the specification as possible reusable, the specification is written in a style called Combinatory Algebraic Specification. In this style it is possible to define polymorphic higher-order functions (or combinators) and use these as arguments to functions and as results of computations. Using this style a vast library of general-purpose combinators is developed. The specification of the combinatory language, the library of combinators, and the specification of the transformation are written in a Literate Specification style, meaning that the specification and the text explaining it are integrated.