Chapter 5. Internal DSL design in Ruby, Groovy, and Clojure – DSLs in Action

This chapter covers

  • Making DSLs concise using duck typing and metaprogramming
  • Implementing a trade-processing DSL in Ruby
  • Improving our order-processing DSL using Groovy
  • Thinking differently about the trade-processing DSL in Clojure
  • Common pitfalls with each language

The best way to learn new paradigms and design techniques is to look at real implementations using the best languages that support them. In chapter 4, you saw quite a few idioms and patterns that can help you develop expressive internal DSLs. For this chapter, I’ve selected three of today’s most popular languages on the JVM. You’re going to go through the exercise of building real-world DSLs using them.

Before going into the details of what we’re going to do, figure 5.1 is a roadmap that I plan to follow in this chapter.

Figure 5.1. Roadmap of the chapter

The languages I’ve selected for this chapter are dynamically typed. I start the chapter by discussing some of the attributes of dynamically typed languages that make good DSLs and why I selected Ruby, Groovy, and Clojure for our discussion. Then we’ll jump right into the implementation part and take you successively through the process of implementing complete DSLs using each of the three languages. We discuss the main features that these languages offer that you’ll frequently use in designing DSLs and some of the rationale for when to select which pattern for your implementation.

At the end of this chapter, you’ll have an overall idea of how to approach designing a DSL in languages that offer similar capabilities. Because we’ll implement complete DSLs, you’ll learn how to think in terms of the DSL that you design and you’ll program your implementation language to fit the syntax that you want to provide to the user.

This chapter is going to be programming intensive; be prepared for lots of code coming your way and have your language interpreters handy. The examples are small and illustrative and I promise you’ll have as much fun trying them out as I had writing them. The book’s appendixes contain a short refresher for each of these three languages. Feel free to peek for a bootstrap in case you’re unfamiliar with any of them. If you’re new to the concept of development using multiple languages (also known as polyglot development), there’s an introduction for bootstrapping in appendix G. But before we start in on the details, let’s look at the rationale behind choosing the languages that I did.

5.1. Making DSLs concise with dynamic typing

One of the important attributes that an internal DSL adds on top of the underlying language is an enhanced readability of the domain semantics. The internal DSL translates the implementation to the domain user in terms of the language that he understands.


When your nonprogrammer domain expert looks at a DSL script, he should be able to understand the domain rules from it. This result is the real value that a DSL adds to improving the communication path between the developer and the domain person. I’m not evangelizing that every nonprogrammer domain person should be able to write programs using the DSL, but he should at least be able to understand the domain semantics from a DSL snippet.


A program written in a dynamically typed language doesn’t contain type annotations; by nature it’s visually less noisy and tells you what the programmer intends to do. This leads to better readability of the code, one of the prime attributes that differentiates any typical API from a DSL. In the following subsections, I’ll discuss three of the most important characteristics that shape DSLs developed using dynamically typed languages:

  • Enhanced readability, because there are no type annotations (section 5.1.1)
  • Duck typing, which refers to the way you think of designing contracts in your DSLs (section 5.1.2)
  • Metaprogramming, one way to get rid of boilerplate code from your DSL implementation (section 5.1.3)

5.1.1. Readability

As a DSL reader, you expect the language to flow smoothly, without any unnecessary complexity. The type system of a programming language can potentially add to the accidental complexity of a DSL. If you implement your internal DSL in a language that has a verbose type system like Java, there’s a good chance that the resulting DSL will require you to plug many unnecessary type annotations into your abstractions. In a dynamically typed language, you don’t need to provide type annotations, so the intent of the programmer is much clearer than it is in an alternative implementation in a corresponding statically typed language. Still, things won’t necessarily be easier when it comes to understanding the implementation behind the intent. (You’ll see more examples of this when we discuss common pitfalls of dynamic language-based DSL implementation in section 5.5.) On the whole, dynamically typed languages offer a more succinct syntax, which results in enhanced readability of the DSL and its implementation.

Although the readability of a DSL is obvious, there’s another aspect of dynamic languages that plays an important role when you design and implement internal DSLs. They’re low in ceremony and rich in semantics. They’re also quite a bit more concise than statically typed languages and, although you’re still dealing with abstraction hierarchies, the thinking behind them is different. I’m referring to the way an abstraction responds to a message that’s sent to it.

5.1.2. Duck typing

Dynamic typing isn’t necessarily weak typing. You invoke a message on an object, and if the object satisfies the contract that the message asks for, you get a response; otherwise, the message propagates up the object hierarchy chain until one of its ancestors satisfies the contract. If it reaches the root without any handler capable of responding to the message, you get a NoMethodError. There’s no compile-time check that statically determines whether a message invoked on an object is valid. Rather, you can change the set of methods and properties to which an object responds during runtime. For any specific message, if an object supports the message at the time it’s invoked, it’s considered to be a valid invocation. This process is typically known as duck typing, and is implemented in languages like Ruby, Groovy, and Clojure. You can also implement duck typing in statically typed languages like Scala; we’ll discuss that in chapter 6.

Implementing Polymorphism with duck typing

What does duck typing in dynamic languages buy you when you’re implementing DSLs? Once again, the simple answer is that you get a concise implementation at the expense of static type safety. You don’t need to statically declare interfaces or have inheritance hierarchies to implement polymorphism. As long as the receiver of a message implements the right contract, it can respond to the message meaningfully. Figure 5.2 shows how to implement polymorphism using duck typing.

Figure 5.2. Polymorphism through duck typing. The abstractions Foo and Bar don’t have any common base class, but we can treat them polymorphically in languages that support duck typing.

Now let’s look at an example from our domain. First, we’ll do a Java implementation using interfaces and then demonstrate the conciseness that duck typing offers with an implementation in Ruby.

A Trade Domain Example

Trades that are executed in the stock exchange can be of various types, depending on the type of instrument being traded. (I’m sure you’re now familiar with trades, instruments, and executions. If you’ve forgotten some of the concepts, refresh your knowledge by reading the sidebars in earlier chapters.) A security trade involves trading equities or fixed incomes. A forex trade involves exchanging foreign currencies in the form of spot or swap transactions. With a statically typed language like Java, you would typically model the two abstractions as specializations of an interface, say, Trade. You would also define the inheritance chain statically, as in the following snippet:

interface Trade {
  float valueOf();
}

class SecurityTrade implements Trade {
  public float valueOf() { /*..*/ }
}

class ForexTrade implements Trade {
  public float valueOf() { /*..*/ }
}

Now, if you have a method that needs to calculate the cash value of any kind of Trade supplied to it, you’ll implement it as:

public float cashValue(Trade trade) {
  //..
}

Here, the argument to cashValue is constrained to the upper bound of the static type that implements the valueOf method. It’s statically checked by the Java compiler. Now let’s compare this implementation to one that implements cash_value using duck typing, as shown in the following listing.

Listing 5.1. Polymorphism with duck typing
class SecurityTrade
  ## ..
  def value_of
    ## ..
  end
end

class ForexTrade
  ## ..
  def value_of
    ## ..
  end
end

def cash_value(trade)
  ## ..
end
No extras clutter up this implementation, and there’s no static inheritance relationship; cash_value works as long as you give it something that implements a value_of method. You might be thinking, what if I send it an unrelated object that doesn’t implement the value_of method? It blows up during runtime, of course. That’s the reason you should have comprehensive coverage of unit tests that test your contracts.

For unit testing, you can create mocks easily too, because you don’t have to jump through the hoops of ensuring static type safety. Remember, with languages that offer duck typing, you don’t test for types and you don’t try to emulate static typing with your dynamic language. It’s a different way of thinking about abstraction design. Your test suites will test whether your abstractions implement the contract that they’re supposed to provide to your clients.

Duck typing makes you write code that’s free of statically checked constraints. One immediate effect is that your DSL implementation becomes much more concise, but at the same time your intentions are clear. We discussed this in section 4.2.2 when we implemented expressive decorators through dynamic mixins in Ruby. The final DSL looked like this:'r-123', 'a-123', 'i-123', 20000).with TaxFee, Commission

Note that we mixed in the modules TaxFee and Commission with Trade, and we used Ruby’s duck typing to compute the total cash value of the trade.

Next up, let’s revisit one technique that we saw in chapter 4. Metaprogramming is used by dynamically typed languages to save you from writing repetitive boilerplate code in your applications.

5.1.3. Metaprogramming—again!

Besides making your code free of type annotations, how does dynamic typing lead to concise DSLs? One obvious answer is by keeping you from writing repetitive code structures and instead generating them through the machinery of the language itself. Having a concise DSL API is as important to the DSL user as having a concise implementation is to you, the DSL implementer. Using their capabilities to introduce new methods and properties at runtime, both Ruby and Groovy have awesome metaprogramming facilities, which we discussed in sections 2.3.1 and 4.2. Let’s return to one example to reiterate the conciseness that dynamic typing offers for DSL implementations. The following listing demonstrates how you can use runtime metaprogramming and closures to implement an XML builder in Groovy.

Listing 5.2. XML builder in Groovy: the power of dynamic metaprogramming
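The listing body is not reproduced in this text. Here is a sketch of the kind of builder script it describes, using Groovy’s groovy.xml.MarkupBuilder; the element names follow the methods mentioned in the discussion below, and the exact markup is an assumption:

```groovy
import groovy.xml.MarkupBuilder

def writer = new StringWriter()
def builder = new MarkupBuilder(writer)

// MarkupBuilder defines none of these methods; every undefined call
// is intercepted at runtime and turned into an XML element
builder.order {
  instrument('IBM')
  quantity(100)
  price(94)
}

println writer.toString()
```

Each nested closure becomes a nested element, so this script emits an order element containing instrument, quantity, and price children.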

In this example, Groovy’s MarkupBuilder doesn’t know anything about the methods order, instrument, quantity, or price. The language runtime uses dynamic method dispatch and employs Groovy’s methodMissing() hook to intercept all undefined method calls. You can use similar techniques in Ruby. Dynamically typed languages provide an interceptor for all undefined methods. This technique makes programs much more concise and dynamic, but also preserves the expressiveness that you need.

We’ve just looked at the three attributes that you associate with a DSL that’s implemented using a dynamic language. The first one, readability, describes the surface syntax of the DSL script. The other two attributes, duck typing and metaprogramming, have more to do with the underlying implementation techniques. Let’s find out what features Ruby, Groovy, and Clojure possess that help you create and implement expressive DSLs.

5.1.4. Why Ruby, Groovy, and Clojure?

Ruby, Groovy, and Clojure each possess all three attributes of dynamically typed languages that make them great hosts for implementing internal DSLs. Table 5.1 contains an overview of these language features.

Table 5.1. Ruby, Groovy, and Clojure features that make them great choices for your internal DSLs


  • Ruby: Flexible syntax, no type annotations, and strong literal support. Supports duck typing; you can use respond_to? to check whether an object responds to a specific message. Has strong support for reflective and generative metaprogramming.
  • Groovy: Flexible syntax, optional type annotations, and strong literal support. Supports duck typing; you get polymorphism without a common base class. Has strong support for runtime metaprogramming through the Groovy metaobject protocol (MOP).
  • Clojure: Syntax is flexible but bound by the prefix form of expressions, as in other Lisp variants. You can provide optional type hints to speed up method dispatch, which avoids reflection in Java calls. Supports duck typing as in Ruby or Groovy. Implements compile-time metaprogramming through macros. Clojure is malleable enough to be extended to fit the requirements of your DSL.

Even though Ruby, Groovy, and Clojure have some of the same characteristics, they’re different enough for us to discuss them separately in the context of DSL implementation. All of them run on the JVM, have strong metaprogramming support, and are fast becoming mainstream development languages. Yet one of the areas in which they differ is the way they integrate with the JVM. Figure 5.3 summarizes some of the areas where the three languages are alike, as well as those where they differ.

Figure 5.3. Ruby, Groovy, and Clojure present an interesting mix for DSL implementation

In this chapter, we’ll explore internal DSL implementation in all three languages. In the course of our discussion, we’ll see the features that each of these languages offer and also look back at how they map to the implementation of the patterns we discussed extensively in chapter 4.

5.2. A trade-processing DSL in Ruby

We’re going to develop a complete use case in this section: a DSL for creating new security trades and computing their cash values using pluggable business rules. After you execute the DSL, you’ll get an instance of a Trade abstraction that you can use in various ways, depending on your application’s functionality. We’ll start with a modest implementation and make incremental changes, making it more and more expressive and domain rich. Figure 5.4 shows a roadmap of what we’ll do in each iteration as the DSL evolves.

Figure 5.4. How we’ll enrich our Ruby DSL to implement trade processing. At every stage, we’ll make the DSL richer by using the abstraction capability that Ruby offers and add more domain functionality.

Throughout our journey, Bob will act as our mentor, pointing out all the inadequacies and areas of improvement and helping us mold our design into the shape that fits into the glove of an expressive DSL. It’s up to Ruby to help us comply with Bob’s requests.


Code Assistance

In all of the following sections that have rich code snippets, I’ll include a sidebar that contains the prerequisites of the language features that you need to know to appreciate the implementation details. Feel free to refer to the appropriate language cheat sheet in the appendixes before you proceed.


Keep our goal in mind: Bob should be able to understand the DSL and verify whether it violates any of his business rules.

5.2.1. Getting started with an API

API designs start out rather rusty. If you’re working with a dynamic language, you always start with a body of clay and mold it iteratively to make it more expressive.


Ruby tidbits you need to know

  • How are classes and objects defined in Ruby? Ruby is object-oriented (OO) and follows the usual conventions of other OO languages for defining a class. Ruby also has its own object model, with functionalities that let you change, inspect, and extend objects at runtime through metaprogramming.
  • How do you use the hash to implement a variable argument list? In Ruby, you can pass a hash as an argument to a method to emulate keyword arguments.
  • Basics of Ruby metaprogramming. The Ruby object model has lots of artifacts like classes, objects, instance methods, class methods, singleton methods, and so on, that enable reflective and generative metaprogramming. You can dig into the Ruby object model at runtime and change behaviors or generate code dynamically.


Consider the following code snippet that our API designers came up with as the first version of the DSL:

Bob saw this and yelled, “Hey! This looks too technical for me. What are those weird constructs that I need to invoke to get an instrument? That’s not how I interpret an instrument when I get a trade.”

Bob has a point, which I’ll address shortly. But before I do, let me reiterate that a DSL never comes out right the first time. A DSL always evolves iteratively. That snippet is still an ordinary API with the usual readability that Ruby offers. It doesn’t feel like a fluid sentence that Bob can roll off his tongue while he’s tending to his usual chores in the trading business. Even so, this code gives us the baseline from which we’ll move forward.

The base abstractions

Every DSL design starts with a set of basic abstractions, on which you build your domain-friendly language. We’ll call this approach bottom-up programming, where larger abstractions grow from smaller pieces and ultimately end up with the expressiveness that your domain expert wants.

We’ll start our DSL design with a set of APIs for basic domain entities like SecurityTrade and Instrument. The following listing provides the base Ruby abstractions that implement them.

Listing 5.3. SecurityTrade in Ruby (Iteration 1)

In this listing, notice the hash h in the create class method that’s used to provide the named arguments for unitprice, principal, and tax. Using a hash to implement named arguments is a common idiom in Ruby. Another interesting trick is employed in create, where we use metaprogramming to set up the implicit context of the receiver and populate the trade instance with values from the hash h. We discussed how to set up an implicit context in section 4.2.1.

Listing 5.4 is the implementation of the Instrument class. There’s nothing fancy about it, except that we’re not making it immutable yet. For the current version of the DSL, you could’ve made it an immutable value object. We’ve kept it a mutable object for reasons that’ll be clear to you in the next section when we use its mutability to come up with an expressive instrument creation DSL.

Listing 5.4. Instrument traded in Ruby
class Instrument
  attr_accessor :name, :quantity

  def initialize(name)
    @name = name
  end

  def to_s
    "(Name: " + @name.to_s +
    "/Quantity: " + @quantity.to_s + ")"
  end
end

The final piece of this section is the class TradeDSL, which is just a skeleton of things to follow:

require 'security_trade'

class TradeDSL
  def new_trade(ref_no, account, buy_sell, instrument, attributes)
    SecurityTrade.create(ref_no, account, buy_sell, instrument, attributes)
  end
end

Our DSL has just started taking its first steps. As we proceed with the iterations in the following sections, you’ll notice how TradeDSL evolves in expressiveness as we add more and more functionalities to it.

A DSL facade

The class TradeDSL also demonstrates the important technique of how you can decouple the DSL syntax from the underlying implementation. On the one hand, this class offers the surface syntax of the DSL to the user. On the other hand, it wraps the base abstractions to provide a layer on top of the underlying implementation. Figure 5.5 illustrates this aspect of DSL structure.

Figure 5.5. A DSL facade offers an expressive API to the user. It also keeps the core implementation structures from being exposed.

Remember, when you design a DSL, be sure to provide a single point of interaction to the user. In this context, the TradeDSL class plays the role of a DSL facade. Currently, it only wraps the create method of the SecurityTrade class. In the course of our subsequent iterations, we’ll build up enough meat in this abstraction so that it becomes self-sufficient and caters to the users’ requirements. But right now we need to deal with Bob’s problems with the instrument creation part of the DSL. Here’s where a little bit of monkey business can come in handy.

5.2.2. A little bit of monkey-patching

The next step in the evolution of the TradeDSL class is to make it easier for Bob to create an instrument. He needs to be able to ask for 100 shares of IBM the way he’s used to doing on his trading desk. The result we want is something like the following, which shows the trade creation DSL that identifies the instrument being traded: 'T-12435',
  'acc-123', :buy, 100.shares.of('IBM'),
  'unitprice' => 200, 'principal' => 120000, 'tax' => 5000

The voodoo that Bob previously had to deal with to create an instrument using unnecessary syntactic constructs is gone, and is replaced by a more natural language that Bob speaks in his regular trading business: 100.shares.of('IBM'). Now Bob’s pretty happy! How did we achieve that?

Listing 5.5 is an implementation of the methods shares and of that we’re silently introducing as methods of the Numeric class. Numeric is a built-in class in Ruby, but you can open any class and introduce new properties or methods into it. People call this monkey patching, and many detractors discourage this practice. As with any other superpower, monkey patching has risks and pitfalls. Any standard Ruby text (see [1] in section 5.7) will warn you when you’re overstepping your limits. But when you use it judiciously, monkey patching makes your DSL flow.


Ruby tidbits you need to know

Monkey patching means introducing new properties or methods into an already existing class. In Ruby, you can open up an existing class and introduce new methods or properties that augment its behavior. This is a powerful feature; so powerful that you might be tempted to misuse it.


Listing 5.5. Instrument DSL using monkey patching

This listing completes our first iteration toward shaping up our trade DSL. Note how our DSL is getting more expressive as the core abstractions evolve into larger wholes. We’ve removed the noise that was generated when we created the instrument in the snippet at the beginning of section 5.2.1. But we still have quite a few syntactic oddities when we consider the natural language of expression that Bob wants. With Ruby, we can push the limits even further. Our TradeDSL facade is lean enough to go for it. In the next section, we’ll flesh it out with more syntactic sugar for the final DSL that Bob will use.

5.2.3. Rolling out a DSL interpreter

What is expressive enough? The answer to this question is debatable, depending on the perspective of your DSL user. To a user who’s a programmer familiar with Ruby, the DSL that we came up with in iteration 1 would likely qualify as a fairly expressive one. Even a nonprogramming domain expert can figure out what’s going on at the macro level, though he might be a little irritated with the additional syntax that it has. With a language as expressive as Ruby, we can push it to the limit and try to make it more aligned with the way Bob speaks at his trading desk.


Ruby tidbits you need to know

  • How to define multiline strings using “here” documents. Use this technique when you want to define a string literal in place within the source code instead of externalizing it elsewhere.
  • How to define class methods. Class methods (or singleton methods) are instance methods of the Ruby singleton class. For more details, look at [1] in section 5.7.
  • Using evals in Ruby and how they work with metaprogramming. One of the most powerful features of Ruby is its ability to evaluate a string or a block of code dynamically during runtime. You get a number of flavors of evals that you can use in various contexts.
  • Regular expression processing in Ruby. Ruby has built-in support for regular expressions, which is extremely useful in pattern matching and text processing.


Adding an Interpreter

We’ve already developed a fairly expressive syntax for TradeDSL in section 5.2.2 that also nicely captures the domain semantics. Still, it looks too technical for Bob, who’s used to a more fluid expression of the trading language in his domain.

In our second iteration, we’re going to roll out an interpreter that’ll interpret Bob’s language, chop off the frills, and extract the essence needed to build the necessary abstractions. Here’s how it’ll look when we’re done:

str = <<END_OF_STRING
  new_trade 'T-12435' for account 'acc-123'
                      to buy 100 shares of 'IBM',
                      at UnitPrice=100, Principal=12000, Tax=500

puts str

Now that we have the core abstractions in place, we’re going to start adding to the syntactic sugar of our DSL. As promised earlier, the language for trade processing is steadily evolving.

What do we need to add to our TradeDSL class to make it feel like the code in the previous snippet? Listing 5.6 is another iteration of TradeDSL, the facade that we talked about in section 5.2.2. It rolls out a small interpreter that processes the user input before passing it on to SecurityTrade.

Listing 5.6. Trade DSL in Ruby, as an interpreter (Iteration 2)

Before going through the details of what this code does, let’s look at a diagrammatic representation of how Bob’s language is being interpreted. Figure 5.6 traces this sequence.

Figure 5.6. How a sample TradeDSL script is interpreted by the code in listing 5.6 to generate Ruby objects. An instance of security_trade is generated through the DSL interpreter.

Try to understand the way this figure corresponds with the DSL implementation in listing 5.6. Recognize any of the techniques that we discussed in chapter 4? Well, in the listing, we have quite a few of them embedded within the code. The techniques recur from time to time in various forms and implementations. Look at the following list to discover some of them:

  • Method const_missing uses runtime metaprogramming (discussed in section 4.4) to convert any undefined constants to strings.
  • instance_eval in method interpret sets up the implicit context (discussed in section 4.2.1) of an instance of TradeDSL for executing the method new_trade.
  • Method parse uses regular expressions to process the user input and converts it into a form suitable for invoking the instance method new_trade.

For a more detailed discussion about Ruby metaprogramming techniques, see [5] in section 5.7.

Speaking Bob’s language

Consider all this from a DSL user’s point of view. He can use this DSL to write trade generation snippets using the same language that he does in his everyday business. We’ve provided some bubble words in the DSL to make it more aligned with his normal vocabulary. As a user, Bob can now enter these DSL strings into a file that he can load and process to generate instances of SecurityTrade. Even when he gets trade data from upstream front-office systems, he can use this DSL to generate instances of SecurityTrade and save them to his database.

In the next section, we’ll enhance the DSL to incorporate a few business rules and make it more friendly to the users who are programmers and who want to enrich the trades that Bob generates so they can be used in the next step of the trading cycle.

5.2.4. Adding domain rules as decorators

Although Bob is happy with the current form that generates trades, he has some concerns about the next step of the trading cycle where we need to enrich the trades using some of the domain rules. We assured him that we’re working on it and will get back to him as soon as we’ve reached a level of expressiveness that he can comprehend. Let’s discuss this iteration, which enhances the DSL and enriches the trade.


Ruby tidbits you need to know

  • How to define and use Ruby blocks. Blocks are used to implement lambdas and closures in Ruby.
  • How you implement mixins using Modules. Ruby modules are yet another way to group artifacts that can be included in your classes as mixins.
  • How you chain mixins to design decorators
  • Duck typing. In Ruby, an object responds to a message if it implements the method by that name. Whether the object implements the method is not statically checked; you can change the object during runtime. If it quacks like a duck, it is a duck in Ruby.


Trade DSL: where we stand now

We’ve already discussed quite a bit about our evolving DSL. Before we add the trade enrichment part, let’s step back and look at where we stand. Figure 5.7 says it all. We’ve developed the trade generation script that produces an instance of SecurityTrade. As part of trade enrichment, we’ll add business rules that are candidates for being modeled as a DSL.

Figure 5.7. We’ve developed the DSL for trade generation. Now we’ll add business rules as DSLs to compute cash value of the trade.

When the trades reach the back office of a securities trading organization, cash values and static data need to be added so that they can be passed on to the next step in the processing pipeline. In section 4.2.2, we discussed how to compute the cash value of a trade, also known as the net settlement value. After we receive trades in the back office, we need to invoke domain rules on the trades to compute their cash values. These domain rules vary across stock exchanges, the types of instruments traded, and a number of other factors. To keep our current scope simple, we’re assuming a fixed set of rules. We’re going to enrich our DSL to invoke those rules on the generated trades.

Implementing domain rules

The following rules apply to the trades that Bob generates:

  • The cash value of a trade depends on the principal amount, the tax or fee amount, and the broker commission amount.
  • If the incoming trade stream contains any of these amounts, we’ll honor them; otherwise, we need to compute them from the individual trades as per the following business rules.
  • The following business rules apply to every trade:

    • The principal amount is the product of the unit price and the quantity, both of which are parts of the trade object.
    • The tax or fee is calculated as a fixed percentage of the principal amount.
    • The broker commission is calculated as a fixed percentage of the principal amount.

With these rules as part of the implementation, the following listing shows how the DSL is being used by the users to enrich trades.

Listing 5.7. Using the trade DSL

The DSL generates an instance of SecurityTrade that gets passed into the Ruby block. In the block, the trade is enriched as a side effect of mutating the instance that it takes. All this is disciplined and idiomatic Ruby programming. We’re using the conciseness that Ruby offers, along with the domain semantics that we add to the language, to make it more expressive.

Notice how we’re making the domain rules pluggable in our DSL by abstracting the computation logic of TaxFee and BrokerCommission. All the DSL user needs to do is wire up the necessary components with the CashValueCalculator class. The technique that we use for wiring them up is called mixin-based programming, which we already discussed in section 4.2.2. Here, the mixins act as decorators of the main class CashValueCalculator.

To make the method accept an additional block as an argument, we need to make the following small change. The rest of the DSL remains the same.

Listing 5.8. Trade DSL in Ruby: blocks for side effects (Iteration 3)

Now let’s go back to listing 5.7 and look into the implementation of the decorators that we added transparently to the CashValueCalculator instance.

Ruby DSL with decorators

Listing 5.7 shows an instance of how you can add syntactic sugar on top of core abstractions like TaxFee and BrokerCommission. And unlike static languages, we can do all this dynamically through the magic of metaprogramming. The following listing implements the complete DSL that computes the cash value of a given trade.

Listing 5.9. Calculating the cash value of the trade
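A minimal, hedged sketch of the calculator and its decorating mixins, assuming a Hash-based trade and illustrative percentage rates (the real listing derives the rates from the domain rules stated earlier). The with method extends the instance with each module at runtime, and each module’s value calls super up the mixin chain with no static inheritance relationship, which is the duck typing and metaprogramming that table 5.2 highlights.

```ruby
# Sketch only: trade representation and rates are assumptions.
class CashValueCalculator
  attr_reader :trade

  def initialize(trade)
    @trade = trade
  end

  # base value: just the principal amount
  def value
    trade[:principal]
  end

  # mix the supplied modules into this instance at runtime
  def with(*modules)
    modules.inject(self) { |calculator, mod| calculator.extend(mod) }
  end
end

module TaxFee
  def value
    super + trade[:principal] * 0.2    # assumed 20% tax/fee
  end
end

module BrokerCommission
  def value
    super + trade[:principal] * 0.05   # assumed 5% commission
  end
end

trade = { :principal => 20_000 }
calc = CashValueCalculator.new(trade).with(TaxFee, BrokerCommission)
calc.value
```

Because extend inserts each module into the singleton class’s ancestor chain, the last module mixed in is the first one consulted, and super walks back down toward the base implementation.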

Aha! Now we have the DSL implementation ready with a friendly surface syntax that Bob can understand, and an expressive implementation that speaks the language of the domain. Table 5.2 contains a quick recap of how this Ruby implementation of the DSL embodies the three attributes of dynamically typed languages that we talked about in section 5.1.

Table 5.2. Dynamic languages and the Ruby DSL


Supporting Ruby features shown in listing 5.9

Readability Malleable syntax, array literals, and optional parentheses make the code in the initialize method clear and concise. It clearly advertises the domain rule that asks us to honor the cash value components if they come with the input trade, and to calculate them otherwise. The total cash value of the trade is computed implicitly by the modules that you mix in with the CashValueCalculator instance. The DSL in listing 5.7 nicely abstracts away the implementation of the net cash value calculation, while explicitly telling the user which components take part in the computation. In fact, the user supplies the components that he wants to use in computing the final net value.
Duck typing Note how the value method in TaxFee and BrokerCommission uses super without any static inheritance relationship. This is an example of duck typing. You can plug in any module that has a value method and things will be chained in magically.
Metaprogramming The with method acts as the combinator that lets us compose the mixins through runtime extensions of the participating modules.

This completes the Ruby implementation of the trading DSL. I set up one problem from a real-life use case at the beginning of the section and demonstrated how you can solve it using a DSL-based approach. Now that you’ve implemented it, it appears to be the most idiomatic way to implement the domain functionality that we set out to model. We used Ruby, exploited its powers of flexible syntax, duck typing, and metaprogramming, and finally arrived at a language that a domain expert can comprehend. As we completed the implementation step-by-step, I highlighted all the features that make Ruby a great language for internal DSL implementation. The idea wasn’t to show off the power of Ruby, but to reiterate how a DSL-based approach can complement a powerful language to make extensible abstractions.

In the next section, we’ll talk about DSL implementation in another language that, like Ruby, offers dynamic typing and has powerful metaprogramming abilities, but also has a more seamless model of integration with the JVM. You used this language in chapters 2 and 3 when we designed an order-processing DSL using it. It’s Groovy, and we’ll use it to improve on your earlier implementations of the same DSL.


Have you started wondering why we’ve been looking at so many languages when most of the time you’ll be using only one for your development? In real-life application development, if you’re designing DSLs, ideally you should be using the language that best fits the solution domain. Remember, it’s the DSL syntax and semantics that matter the most; the language you use for implementation is only a means of getting there. The richer the set of idioms up your sleeve, the more options you have to use when you’re designing your DSL.


5.3. The order-processing DSL: the final frontier in Groovy

Groovy as a language offers capabilities that are similar to Ruby’s: dynamic typing and strong runtime metaprogramming power. The main difference between the two is that Groovy shares its object model with Java, which gives it more seamless integration capabilities than Ruby; in fact, Groovy is often touted as a DSL for Java. For this reason, choose Groovy as the implementation language when you’re designing DSLs that need to fit into the ecosystem of a Java application.

In this section, we’ll revisit the order-processing DSL that you implemented first in chapter 2 and worked with again in chapter 3. We won’t focus on the features of Groovy that we’ve already discussed while implementing the trade DSL in Ruby. We’ll talk more about one single, stand-out feature in Groovy metaprogramming that you’ll use often when you’re designing an internal DSL.

We’ll start with a brief recap of the earlier iterations of the order-processing DSL. Then I’ll identify the drawbacks and we’ll improve on our earlier attempts until we have the final version of implementation.

5.3.1. The order-processing DSL so far

We’ve already discussed quite a few options for Groovy implementations. Figure 5.8 offers a brief recap.

Figure 5.8. A look at the alternatives we implemented in our order-processing DSL in earlier chapters

In section 2.2.3, we did an end-to-end Groovy implementation that executed the DSL from Groovy using GroovyShell. GroovyShell takes the DSL definition as well as the script and executes it using the evaluate method. In section 3.2.1, we changed the DSL and used Java 6 scripting engine APIs to eval the DSL. In section 3.2.3, we explored yet another option that was an improvement over the one we used in section 3.2.1. Instead of using the Java ClassLoader, we used GroovyClassLoader from within the Java application to load the DSL for order processing.

All the options that we’ve explored so far have a common drawback, related to the way we used Groovy metaprogramming concepts. In this section, we’ll improve our earlier attempts by implementing a better model of Groovy metaprogramming to drive your DSL.

5.3.2. Controlling the scope of metaprogramming

In all the earlier approaches to this DSL, we injected methods to existing Groovy classes by adding methods to their MetaClass.


Groovy tidbits you need to know

  • ExpandoMetaClass and how it does metaprogramming. A special artifact of Groovy metaprogramming that lets you dynamically add methods, constructors, properties, and static methods using a neat closure syntax.
  • Closures and delegates. A closure in Groovy is a lambda that can be defined in one place and executed somewhere else, much like with Ruby blocks. The delegate is usually the enclosing object of the closure, but you can change it during runtime.
  • Class declaration in Groovy. It’s similar to Java, minus the verbosity of types. You also get that Groovy concise syntax.
  • How Groovy categories manage the scope of metaprogramming. Categories in Groovy are an alternative to ExpandoMetaClass for metaprogramming. Using categories, you can control the scope within which the changes to the meta-objects are visible within your application.


Look at this snippet from listing 3.1 where we added properties like shares and of to the Integer class:

Integer.metaClass.getShares = { -> delegate }
Integer.metaClass.of = { instrument ->  [instrument, delegate] }

That code led us to write DSL scripts as follows (from listing 3.2):

newOrder.to.buy(100.shares.of('IBM')) {
  limitPrice   300
  allOrNone    true
  valueAs      {qty, unitPrice -> qty * unitPrice - 500}
}

We did this injection using Groovy’s ExpandoMetaClass, which lets you add methods, properties, constructors, and static methods to an existing class during runtime. The problem with ExpandoMetaClass is that the properties or methods that you inject into a class are available globally. When you’re writing an application, changing the behavior of all instances of a class across all the threads of the JVM is rarely a recommended practice, but ExpandoMetaClass does exactly that, making your changes visible to all other users of the class. Global changes are also an issue with Ruby monkey patching; they can have adverse impacts on other users, introducing incompatibilities in the ways they look at the class and method definitions.

A fine-grained control over the scope of metaprogramming is a feature that you should always keep in mind when you’re implementing Groovy DSLs. This is precisely the reason why we have a separate section about Groovy implementation.

The Groovy MOP and categories

The Groovy MOP gives you yet another option for making smart and controlled injections into existing classes. But instead of making these added properties visible globally, it restricts the scope to within a block of code. You define classes, called categories, where you define additional methods that you want to inject. Programmers use categories extensively in Groovy to produce expressive DSLs. (For a more detailed explanation of Groovy categories, see [2] in section 5.7.) Let’s use categories and re-engineer our order-processing DSL to its new, improved form. The basic abstraction that captures an Order in Groovy is shown in the following listing.

Listing 5.10. Order class in Groovy
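A plausible sketch of the Order abstraction, assuming the attribute names implied by the DSL script that follows. The real listing may carry more state, such as the quantity and unit price that the valueAs closure uses.

```groovy
// Sketch only: field and method names are assumptions based on the DSL script.
class Order {
  private String security
  private int quantity
  private int limitPrice
  private boolean allOrNone
  private Closure valueCalculation

  Order(String security, int quantity) {
    this.security = security
    this.quantity = quantity
  }

  def limitPrice(int price)  { limitPrice = price }
  def allOrNone(boolean b)   { allOrNone = b }
  def valueAs(Closure c)     { valueCalculation = c }

  String toString() { "$quantity shares of $security" }
}
```

Methods like limitPrice and valueAs exist so that the closure delegated to an Order instance can read like declarative order attributes.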

As part of our DSL, we need to give the user the flexibility of a little language for expressing the quantity of shares that he wants to buy or sell as 200.IBM.shares. We’ll do this using Groovy categories. But we need a helper class that abstracts this expression and allows the user to include the rest of the order description as a closure. Let’s call this class Stock. Here’s the class definition:
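A hedged sketch of Stock under the same assumptions. The key point, as the discussion of listing 5.12 explains, is that shares sets the closure’s delegate to the wrapped order before invoking it.

```groovy
// Sketch only: a thin wrapper that turns 200.IBM.shares { ... } into an Order.
class Stock {
  private Order order

  Stock(Order order) {
    this.order = order
  }

  // run the closure with the order as its delegate, so limitPrice,
  // allOrNone, and valueAs resolve against the order instance
  def shares(Closure closure) {
    closure.delegate = order
    closure()
    order
  }
}
```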

Before proceeding any further with the implementation, let me introduce the new order-processing DSL in use, so that you can follow the implementation as we move ahead. Here’s the DSL script that Bob can use for ordering his stock transactions.

Listing 5.11. Order-processing DSL script
buy 200.GOOG.shares {
    limitPrice 300
    valueAs {qty * unitPrice - 500}
}

buy 200.IBM.shares {
    limitPrice 300
    valueAs {qty * unitPrice - 500}
}

buy 200.MSOFT.shares {
    limitPrice 300
    valueAs {qty * unitPrice - 500}
}

In this listing, we need to add methods to class Integer. We’ll do that using Groovy categories this time.

The basic DSL

The first category is shown in the following listing. This category will help us build instances of Stock.

Listing 5.12. Adding methods to Integer using categories
class StockCategory {
  static Stock getGOOG(Integer self) {
    new Stock(new Order("GOOG", self))
  }

  static Stock getIBM(Integer self) {
    new Stock(new Order("IBM", self))
  }

  static Stock getMSOFT(Integer self) {
    new Stock(new Order("MSOFT", self))
  }
}

You can see that 200.IBM gives us an instance of Stock using StockCategory, which is defined in the listing. On this instance of Stock, we invoke the method shares. This method takes a closure that contains the rest of the order details as an argument. When we defined the Stock class, we set the delegate of the closure that shares takes to the order instance. Doing so sets up the correct context when we specify limitPrice, allOrNone, and valueAs in the script in listing 5.11. Note that in real-life projects we can generate this code from the list of stocks in the database.

Now we’ve got the basic engine of the DSL ready. We need to add one last category to make the script smarter, then finish it off with a Java launcher.

5.3.3. Rounding it off

Look at listing 5.11 once again. Processing for each individual order starts with buy. This means that we need to inject a method buy to the Groovy Script class. Let’s do this using another category:

class OrderCategory {
  static void buy(Script self, Order o) {
    println "Buy: $o"
  }

  static void sell(Script self, Order o) {
    println "Sell: $o"
  }
}

For this demonstration, we just want to print the order that the user has entered and check that all its attributes are set correctly. In real-life projects, you should be writing meaningful domain logic that processes the order and does other things.

This step completes the DSL implementation in Groovy. All we need to do now is write a runner that runs this DSL, and then invoke the runner code from within a Java application. Here’s the Groovy code that runs the DSL using the categories that we defined above:
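A sketch of what DslRunner might look like, assuming the DSL script lives in a file named order.dsl (both the runner structure and the file name are assumptions).

```groovy
// DslRunner.groovy: hypothetical sketch.
// The categories apply only inside the use {} block, so buy/sell and
// the Integer properties are visible to the evaluated script alone.
use(StockCategory, OrderCategory) {
  new GroovyShell(this.class.classLoader).evaluate(new File("order.dsl"))
}
```

Because GroovyShell evaluates the script on the current thread, the category methods registered by use are in scope while the DSL script runs, and they vanish once the block exits.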

In this snippet, note that the additional methods that we inject into the existing classes are available only within the scope denoted by the use {} block. Finally, here’s the Java application that invokes DslRunner:

public class LaunchFromJava {
  public static void main(String[] args) throws Exception {
    // one way to launch: evaluate the Groovy runner script from Java
    new groovy.lang.GroovyShell().evaluate(new java.io.File("DslRunner.groovy"));
  }
}
Ta-da! You’ve just seen how a DSL implementation evolves in Groovy. Figure 5.9 depicts the translation of the DSL script through the semantic model to the execution phase.

Figure 5.9. How the Groovy DSL script gets transformed into the Semantic model and finally into the Execution model

This concludes our miniseries of DSL implementation in Groovy. In the next section, we’ll implement a completely different flavor of DSL using the power of Clojure.

5.4. Thinking differently in Clojure

In this section, you’re going to see how you can implement a use case for computing the cash value of a trade in Clojure. (If you need a reminder, section 4.2.2 contains a sidebar that discusses what I mean by the cash value of a trade.) We’ll use a DSL-based approach, building smaller domain abstractions bottom up and then composing them using Clojure combinators.

We implemented this same use case in section 5.2.4 using Ruby. So why are we dealing with it again? Ruby is a language that offers a completely different paradigm than Clojure. Ruby is OO and uses runtime metaprogramming as the primary tool for DSL implementation. Clojure is mostly functional, with strong compile-time metaprogramming capabilities using macros. It’s no surprise you need to think differently in Clojure than in Ruby or Groovy. Even when you implement a DSL for the same use case, a Clojure-based implementation might be entirely different from a Ruby-based one. Here I’ve intentionally picked the same use case we used for Ruby just to demonstrate how selecting another host language can influence design decisions differently. Look at table 5.3 for some of the key differentiators in Clojure that stand out with respect to Ruby. For a more detailed discussion of Clojure as a language, you can see [6] in section 5.7.

Table 5.3. Think differently when you’re implementing a DSL in Clojure

DSL implementation in Ruby

DSL implementation in Clojure

Think in terms of objects and modules and how to wire them up during runtime using the power of metaprogramming. Think in terms of the functions of the use case and how to compose them using Clojure sequences that operate on lambdas.
Use tricks like method_missing, const_missing, and other dynamic metaprogramming features to make the DSL concise and expressive. Use macros to convert DSL syntax to normal Clojure forms—all during compile time.
A DSL implemented in Ruby or Groovy might not feel like the native syntax of the language. A DSL implemented in Clojure looks like Clojure code because its structure is based on s-expressions.

To fully appreciate the differences between the two implementations, I strongly advise you to go back and reread the information about the Ruby implementation before I take you through the Clojure one.

5.4.1. Building a domain object

To start building a DSL, we need some of the underlying abstractions that form the core of the domain model. For this reason, our first step is to design the trade abstraction and define a factory method (shown in listing 5.13) that generates trade objects from an external source.



A Factory method is a design pattern that provides a single point of interaction for the creation of instances of a family of objects.



Clojure tidbits you need to know

  • Basic function definition and syntax of the language. The syntax of Clojure is like Lisp; the prefix notation might catch you off guard. In case you’re not used to it, go through the basics by reviewing [4] in section 5.7.
  • Defining a Map data structure. Map is a data structure that’s used often in Clojure to implement the class-like structures of OO programming.


The source can be any data source that your system interacts with; for example, the attributes can come from a web request, flat file, or database. The factory method extracts information from the request and builds a map that represents the attributes of a trade.

Listing 5.13. Trade generation in Clojure
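A sketch of the trade factory consistent with the REPL session shown later in section 5.4.3 (the key names follow the request map used there).

```clojure
;; Sketch only: builds a trade map from an incoming request map.
(defn trade
  [request]
  {:ref-no     (:ref-no request)
   :account    (:account request)
   :instrument (:instrument request)
   :principal  (* (:unit-price request) (:quantity request))
   :tax-fees   {}})
```

The :tax-fees entry starts out empty; the decorators we build next fill it in.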

Clojure is implemented on top of objects, though it presents a functional model of programming to users. In this example, we implement abstractions as name-value pairs in the form of a Clojure Map. Note that trade is a function that builds up the necessary abstraction with the relevant information from the input request. The input request is also a Map, which can be treated as a function of its keys. When we extract values out of the Map, we use the same syntax as we do for a function invocation. For example, (:account request) extracts the value of the account key from the Map.

The method trade clearly expresses the domain intents and semantics. The map literal syntax enables named arguments, which map domain concepts directly into program elements and makes the code expressive. The map tax-fees is still a placeholder that we need to fill up when we enrich the generated trade in the next section.

5.4.2. Enriching domain objects using decorators

The next step is to enrich the base abstraction with additional features that make it usable in a real-world use case of a trading lifecycle. We’re going to use decorators to do this, the same way we did with the Ruby implementation in section 5.2.4 to enrich the trade with tax and fee components. But unlike the Ruby implementation, we’ll use compile-time metaprogramming and macros to implement the same behavior in Clojure.


Designing a DSL involves mapping the syntax that you want to the underlying semantics of the language. You need to change the way you think when you use a different language for implementation.



Clojure tidbits you need to know

  • Higher-order functions. Clojure supports higher-order functions where you can use functions as first-class values. You can pass functions as parameters, accept one as a return value, and so on.
  • Macros are the most important secret sauce for developing DSLs in Clojure. Macros are the building blocks of compile-time metaprogramming.
  • Let binding and lexical scope. You can define bindings at precisely the scope you need, no matter how narrow it is.
  • Understanding the Clojure standard library functions. A wealth of them are documented at the Clojure site.
  • Immutable data structures. Clojure offers immutable and persistent data structures. By persistence I mean that you have access to all earlier versions, even after mutating a data structure. Look at [4] in section 5.7 for details.
  • Some standard combinators like reduce and ->. Combinators let you write concise and expressive code structures in Clojure. Combinators are functions that take other functions as parameters.


But how do you add behaviors to an abstraction dynamically without adding any runtime performance overhead? Clojure lets you do that using compile-time mixins. Let’s see how.

Using Clojure combinators

Suppose we have the construct with-tax-fee that introduces additional behaviors within an already existing Clojure function to add tax and fees to our trade. In the following snippet, if we apply with-tax-fee to our trade function, we get a new function that has the additional mappings for :tax and :commission stacked on top of the existing set.

(with-tax-fee trade
 (with-values :tax 12)
 (with-values :commission 23))

In this snippet, with-tax-fee acts as the decorator to the trade function. Now you can execute trade with a request and tax and commission components will be filled up with 12% and 23% of the principal amount, respectively. (Tax and commission are usually expressed as percentages of the principal amount of a trade.)

If you’re not the implementer of the DSL, you’re not really bothered about what it takes to implement constructs like with-tax-fee or with-values. You can use them as combinators and develop your abstractions for the trade DSL. But in this section, we’re discussing DSL implementations. So our next step is to see what it takes to implement a function that decorates another function with an additional behavior. Here’s an implementation of with-values.

Listing 5.14. Wrap trade with additional behavior
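A hedged sketch of with-values, assuming tax and commission are expressed as percentages of the principal. Note that the trade function is threaded in as the first argument by with-tax-fee.

```clojure
;; Sketch only: decorate trade-fn with one tax/fee component.
(defn with-values
  [trade-fn tax-fee-type percentage]
  (fn [request]
    (let [trdval    (trade-fn request)
          principal (:principal trdval)]
      (assoc-in trdval [:tax-fees tax-fee-type]
                (/ (* principal percentage) 100)))))
```

The returned anonymous function calls the original trade-fn, then uses assoc-in to produce a new Map with the extra :tax-fees entry; the original Map is never mutated.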

The combinator with-values does quite a bit to augment the output of the trade function with additional behavior. Even though this isn’t a book on Clojure, let’s look into this code more closely in table 5.4 to get an idea of how it abstracts the complexity to give clients a simpler API.

Table 5.4. Dissecting a Clojure API

Clojure feature

How the DSL uses it

Higher-order functions, an essential part of the recipe that you’ll use when implementing DSLs The first argument that with-values takes is a function. The with-values function also returns another function, which is characteristic of a language that supports functions as first-class values. Because Clojure supports higher-order functions, you can pass functions as parameters, get them as return values, and treat them like any other data type in the language. fn denotes an anonymous function in Clojure. The anonymous function that with-values returns is the one that augments the input function trade with additional behavior to populate :tax-fees.
Evaluation in a lexical context to control scope We invoke trade on the argument that the new function takes and augment the resultant Map with tax-fee values. The bindings in a let are sequential; note that we use trdval in the next binding for principal.
Immutability and the ability to implement persistent data structures In the last step in the listing, we add tax-fee as the key and the value parameter as its value, and add the entry to the Map that trade returned. The original Map doesn’t get mutated. Clojure implements immutable and persistent data structures: for every invocation, assoc-in returns a new Map that augments the original Map with the key and value specified as arguments.
Functions that compose naturally The fact that with-values returns a function helps implement chaining, so we can write code like the following:

(with-tax-fee trade
  (with-values :tax 12)
  (with-values :commission 23))

In this code, we chain two invocations of with-values with the original trade function. This chaining is what we mean by composability, which languages like Clojure offer by implementing functions as first-class values.

But how does with-tax-fee integrate with with-values to give us the new trade function? That’s what we’ll turn to next.

Decorators using higher-order functions

Before we look at with-tax-fee, here’s a little something that forms the basis of our decorator implementation. One thing is becoming clearer. Unlike the Ruby implementation, in which we focused on objects, Clojure provides you with interesting tricks to deal more with functions. The whole idea when you’re implementing DSLs is to explore some of the idioms that fit the Clojure landscape more naturally. The following snippet shows an interesting trick you can do with function threading.

(def trade
    (-> trade
        (with-values :tax 20)
        (with-values :commission 30)))

The function -> threads its first argument through the forms that are the subsequent arguments. Function threading makes implementing a decorator trivial in Clojure, because you can redefine the original function by threading it through the decorators using ->. The implementation of a decorator shown in listing 5.15 uses this technique and is taken from Compojure, the web development framework in Clojure. If you’re not familiar with Clojure, the concepts we’ve just been discussing will take some time to gel. But when you get the feel of how you can compose larger functional abstractions out of smaller ones, you’ll appreciate the beauty that those four short lines of code can bring to your implementation.

Wrapping it up with a Clojure macro

Instead of making the user do all this, why not wrap up the function threading stuff with a Clojure macro that reads more simply and intuitively, and at the same time abstract the same magic without any runtime overhead? That’s what’s happening in the following listing.

Listing 5.15. Decorator in Clojure
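A hedged sketch of the macro pair, following the redef technique borrowed from Compojure that the text describes: redef rebinds a var’s root value while keeping its metadata, and with-tax-fee threads the function through its decorators.

```clojure
;; Sketch only: compile-time decorator machinery.
(defmacro redef
  "Rebind the root value of the named var, preserving its metadata."
  [name value]
  `(let [m# (meta (var ~name))]
     (alter-var-root (var ~name) (constantly ~value))
     (alter-meta! (var ~name) merge m#)))

(defmacro with-tax-fee
  "Thread fn-name through the given decorators and rebind it."
  [fn-name & decorators]
  `(redef ~fn-name (-> ~fn-name ~@decorators)))
```

Expanding (with-tax-fee trade (with-values :tax 12)) yields (redef trade (with-values trade :tax 12)), so the decorated function silently replaces the original trade binding.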

That bit of code completes with-tax-fee, the Clojure version of a compile-time decorator that lets you add behaviors to existing abstractions in a completely noninvasive manner. with-tax-fee is implemented as a macro, which gets expanded during the macro-expansion phase of compilation and generates the code that it encapsulates.

Before decorating the input function, we need to redefine the root binding of the function while preserving its metadata. The macro redef does this for us. This process is different from what happens in Ruby, where all metaprogramming is done during the execution phase. As we discussed earlier, during runtime we don’t have any meta-objects in Clojure; they’re all resolved during macro expansion.

We’ve done lots of stuff to our implementation and come up with a DSL that adds tax and fee calculation logic to a trade abstraction. With the decorated trade function, we can now define an API that computes the cash value of the trade. The features of Clojure that you’ve seen so far make this implementation a meaningful abstraction for the domain. The API is explicit about what it does with the trade to compute its net cash value. A person familiar with the domain and the language can understand right away what the function tries to achieve.

This implementation is a testimony to the succinctness of Clojure. Clojure is a dense language that lets you program at a higher level of abstraction. The last expression in the snippet packs a powerful punch: reduce is a combinator that recurses over the sequence and applies the function (+) that’s passed to it.
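A minimal sketch of such a net-value function, assuming the tax/fee components simply add to the principal (the destructuring key names follow the trade map built earlier).

```clojure
;; Sketch only: principal plus the sum of all tax/fee components.
(defn net-value
  [{:keys [principal tax-fees]}]
  (reduce + principal (vals tax-fees)))
```

With the decorated trade from the REPL session, (net-value (trade request)) folds the principal and every :tax-fees value into a single cash amount.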

What we’ve accomplished

Before we move on to the next step in running our DSL, let’s step back for a moment and take stock in table 5.5 of what you’ve achieved so far in implementing the DSL for the cash value calculation of the trade.

Table 5.5. Evolving our DSL

Step in the evolution of the DSL

Implementation details

1 Designed the base abstraction for trade We used a factory method trade that does the following:
  1. Accepts data from an external source
  2. Generates a trade object in the form of a Clojure Map
2 Injected additional behaviors into the domain object. Changed trade function to one with additional behaviors for tax and fee injected for cash value calculation.
Techniques used:
  • Decorator pattern
  • Clojure macros
How to get tax and fee to populate trade:
  1. Define the with-values function that augments the output of the trade function with behaviors.
  2. Add tax and fee to the output of the trade function using the Decorator pattern
  3. Define the with-tax-fee macro that enables the multiple application of with-values on an existing function.
Note: with-tax-fee uses compile-time metaprogramming and has no runtime overhead.
3 Defined the net-value function for cash value calculation of trade The net-value function accepts the trade function that we modified in step 2 and also takes the following actions:
  1. Gets the principal from the trade
  2. Gets the tax and fees from the trade
  3. Computes the net cash value using the specified domain logic

Clojure is a language with a philosophy that’s different than that of Ruby, Groovy, or Java. Clojure is idiomatically functional, despite being developed on top of Java’s object system. You need to think differently when you’re programming in Clojure. As you saw in this section, we didn’t have to do any special magic to design our DSL in Clojure. It’s just Clojure’s natural way of programming.

Before we go any further, let’s look at figure 5.10 which illustrates the lifecycle of a DSL script written in Clojure.

Figure 5.10. DSL script to execution model for Clojure. Pay attention to the series of steps that the DSL script goes through before it’s ready for execution. As we discussed in chapter 1, the semantic model bridges the DSL script and the execution model.

Feeling tired? We still have one last bit of business left with the Clojure DSL—the instant gratification of seeing your DSL in action within a Clojure REPL (read-eval-print-loop). Get yourself a cup of coffee and a few cookies if you need caffeine and sugar to invigorate you. You might need a pick-me-up, because in the next section we’re going to go interactive. You’ll interact directly with the Clojure interpreter and run the DSL that you designed in this section.

5.4.3. A DSL session at the REPL

A dynamic language like Clojure gives you the pleasure of directly interacting with the language runtime through a REPL. Using the REPL, you can immediately see your DSL in action, make changes online, and feel the effect of the changed behaviors instantly. You should definitely use this feature for the seamless evolution of your DSL.

For the cash value calculation logic, our DSL looks as simple as (net-value (trade request)), which is as concise and expressive as possible. You can create a trade instantly, run your DSL in the REPL, and make changes to the trade function by adding more domain rules as decorators. Here’s a look at a sample session at the Clojure REPL with the DSL we’ve implemented so far:

user> (def request {:ref-no "r-123", :account "a-123",
                    :instrument "i-123", :unit-price 20,
                    :quantity 100})

user> (trade request)
{:ref-no "r-123", :account "a-123", :instrument "i-123",
  :principal 2000, :tax-fees {}}

user> (with-tax-fee trade
        (with-values :tax 12)
        (with-values :commission 23))

user> (trade request)
{:ref-no "r-123", :account "a-123", :instrument "i-123",
  :principal 2000, :tax-fees {:commission 460, :tax 240}}

user> (with-tax-fee trade
        (with-values :vat 12))

user> (trade request)
{:ref-no "r-123", :account "a-123", :instrument "i-123",
  :principal 2000, :tax-fees {:vat 240, :commission 460, :tax 240}}

user> (net-value (trade request))

One of the most important qualities of a DSL is the ability to hide complex implementations behind simple-to-use APIs that model the domain vocabulary. This session at the Clojure REPL demonstrates this simplicity. DSLs always make you feel like you’re using a language that models the one a trader speaks at his dealing desk. In this case, it happens to have a Clojure implementation underneath.

For every new paradigm, there comes a set of pitfalls that you, as a designer, need to be aware of. So far you’ve seen quite a few patterns, idioms, and best practices that should guide your thought processes while you’re implementing DSLs. In the next section, I’ll talk about some of the pitfalls that you should stay clear of.

5.5. Recommendations to follow

So far this chapter has been a positive experience for you. We’ve discussed DSL implementation in three of the most popular dynamic languages on the JVM. You’ve seen lots of idioms and implementation techniques and actually implemented a couple of useful snippets of DSL from our domain of stock trading applications. But you always have to pay the piper, and no matter how easy all this might seem, there are some potential problems that we’ve got to talk about.

Instead of picking up three completely different examples, I’ve intentionally selected examples for this section that are broadly related to each other. The idea is to highlight the fact that even with the same problem in hand, you’ll need to employ different techniques to solve it, depending on the repertoire of your language. What you can do using dynamic metaprogramming in Ruby might be better solved using a different idiom in Clojure. It’s extremely important to learn to use the right tool for the right job. While you’re figuring out which tool does what, you’ll stumble on the most common pitfalls that might catch you off guard. Let’s discuss some of them from the perspective of DSL development.

5.5.1. Honor the principle of least complexity

When you’re implementing an internal DSL, select the least complex idiom of the host language that best fits in the solution model. You’ll frequently see developers use metaprogramming techniques when they could’ve done the same thing without them. A common example of this in Ruby is the use of monkey patching. (Remember monkey patching? It’s the technique in which you open up a class and make changes to methods and properties. Doing this is particularly dangerous in Ruby because these changes are always applied globally.) In many situations, instead of opening up a class and introducing new methods in it, you can instead define a new Module in Ruby that contains those methods and include the Module in the target class.

5.5.2. Strive for optimal expressivity

If you try too hard for the nirvana of expressivity in your DSL, you'll introduce unwarranted complexity into your implementation. Make the language only as expressive as your users require. The Ruby DSL that we rolled out in section 5.2.2 was expressive enough for a programmer to comprehend the semantics of the domain. Here it is once again as a quick reference:

  'T-12435',
    'acc-123', :buy, 100.shares.of('IBM'),
    'unitprice' => 200, 'principal' => 120000, 'tax' => 5000

Expressive enough! But you might be asking, why did we go for the interpreter version of the DSL? For a couple of reasons. First, I wanted to take the DSL to the next level so that it would be acceptable to the Bobs on our team. Bob was the first person to complain about the accidental complexity of our DSL, and the interpreter version was close to what he would normally say at his trading desk. Second, I wanted to demonstrate how far you can stretch the dynamism that Ruby offers. But in real life, when you're designing DSLs, keep in mind the level of expressivity that fits the profile of your user.

5.5.3. Avoid diluting the principles of well-designed abstractions

You’ll often be in situations when you’ll be tempted to make the DSL more verbose, thinking that it will make your language more acceptable to the users. One of the most common impacts of such an attempt is that it violates the principles of well-designed abstractions that we discussed in chapter 1. Introducing bubble words or frills in a language can lead to decreased encapsulation and increased visibility of the internals of your implementation. It can also make your abstractions unnecessarily mutable. Listing 5.5 showed a common example of this trade-off; we made the Instrument abstraction mutable so that we could build a nice DSL around the instrument-creation logic. Look back at listing 5.7 where we exploited this mutability property to make our DSL more expressive.

This is not to say that you should never bother with expressivity. Remember that designing a language is always an exercise of making trade-offs and compromises. Be sure to critically evaluate whatever decision you make and whatever compromises you make in your abstractions. And always keep your design principles aligned with the profile of the user who’ll be using your DSL.

5.5.4. Avoid language cacophony

It’s a common allegation that DSLs don’t compose. A particular DSL is targeted to solve a specific problem of a domain. When you design a DSL for a trading application, you always think in terms of making it expressive with reference to the problem domain that you’re modeling. You really don’t think about how your DSL will integrate with another third-party DSL that does ledger accounting and maintains client portfolios.

Even though you can’t know everything, always try to design abstractions that compose easily. Functions compose more naturally than objects. And if you’re using a language that supports higher-order functions like Ruby, Groovy, or Clojure, always focus on building combinators that can be chained together to form little languages. Check out appendix A, where I discuss the advantages of composable abstractions and their impact on concurrency.

If your abstractions don’t compose, your DSL will feel chaotic to use. Language artifacts will stand lonely and forlorn and will never feel natural to your domain users.

These pitfalls are some of the most common ones that you should be aware of while you’re designing DSLs. It’s extremely important to carefully select the subset of languages that you’re going to use for implementing your DSL. Keep all the integration requirements of your DSL in mind and honor the principles of designing good abstractions.

5.6. Summary

Congratulations! You’ve just reached the end of our discussion about implementing internal DSLs in dynamically typed languages. I chose Ruby, Groovy, and Clojure as the three implementation languages mainly because of the variety that they offer as residents of the JVM.

JRuby is the Java implementation of Ruby that provides a bridge for it to interoperate with the Java object model. It comes with the strength of Ruby metaprogramming and adds to it the power of Java interoperability. Groovy is known as the Java DSL and shares the same object model as Java. Clojure, despite being implemented on top of the Java object model, offers the strong functional programming paradigm of Lisp.

In this chapter, we discussed how you can implement typical, real-life trading application use cases using these three languages. Ruby offers strong metaprogramming capabilities that can make your DSL dynamic at runtime, which enables you to compose and build higher-order abstractions. Groovy offers capabilities during runtime that are similar to those of Ruby, but interoperates with Java more seamlessly because it shares the same object model.

You implemented the final version of our order-processing DSL in Groovy, which we started way back in chapter 2. Through this example, you also got an idea of how a typical DSL evolves through an iterative process of incremental improvement. Clojure is the Lisp that runs on the JVM and comes with the awesome power of compile-time metaprogramming, also known as macros. You saw how to use macros to make a DSL expressive and concise, all without the runtime overhead that the metaobject protocol incurs in many other languages.

At the end of the day, if you always keep in mind the compromises and trade-offs that you need to make when designing your DSL, you'll do well. After all, every language design is an exercise in balancing expressivity against implementation overhead. For a DSL, the primary objective is to make your code fully reveal its intentions, which is the best way to improve the communication path between the developer and the domain expert.


Key takeaways & best practices

  • Be aware of all the metaprogramming tricks available with Ruby when you design an internal DSL. But always remember that metaprogramming has its own costs, with respect to both code complexity and performance metrics.
  • Prefer Groovy categories to ExpandoMetaClass to control the scope of metaprogramming.
  • Monkey patching in Ruby is always tempting, but it operates in the global namespace. Use monkey patching in DSL implementation judiciously.
  • Clojure is a functional language, though it’s implemented on top of Java. Design your DSL around domain functions if you’re using Clojure. Use the power of functional programming through higher-order functions and closures to design the semantic model of your DSL.


Now that you’ve completed this journey along the road of DSL design using the three most popular, most dynamic languages on the JVM, you must’ve developed a familiarity with the basic idioms that support a DSL implementation. Choosing the correct idiom of a given language is the most important aspect of development, one that shapes how expressive your DSL APIs will be. This chapter is a significant step forward for you, giving you a baseline from which to delve more into idiomatic implementation techniques in Ruby, Groovy, and Clojure.

In the next chapter, we'll look at DSL implementation from the other side of the typing fence. We'll discuss how static typing helps shape DSL implementations, and you'll complete a fun-filled exercise developing internal DSLs in Scala.

5.7. References

  1. Thomas, Dave, Chad Fowler, and Andy Hunt. 2009. Programming Ruby 1.9: The Pragmatic Programmers’ Guide, Third Edition. The Pragmatic Bookshelf.
  2. Subramaniam, Venkat. 2008. Programming Groovy: Dynamic Productivity for the Java Developer. The Pragmatic Bookshelf.
  3. Perrotta, Paolo. 2010. Metaprogramming Ruby: Program Like the Ruby Pros. The Pragmatic Bookshelf.
  4. Halloway, Stuart. 2009. Programming Clojure. The Pragmatic Bookshelf.
  5. Abelson, Harold, Gerald Jay Sussman, and Julie Sussman. 1996. Structure and Interpretation of Computer Programs, Second Edition. The MIT Press.