This example consists of two bidirectional transformations: from (i) an AST parsed from a folder containing files with lists of word pairs (representing dictionaries), to (ii) a dictionary model with proper typing and some cross-tree references (so really a graph), and finally to (iii) a box with partitions containing cards (also known as a Leitner system, often used to memorize flashcards).
In the diagram below (click to enlarge), the three models involved are depicted for a simple example.
In the topmost model (the AST), the root folder myLibrary contains two subfolders english and french, each containing files representing the content of a dictionary.
In numbers1-10.dictionary, for example, the parsed tree shows the name of the dictionary, the author, and a series of ENTRY nodes, each with two children. The first child, e.g., vier : four, contains a pair of words in two languages, separated by a colon (":"). The second child, e.g., beginner, represents the difficulty of the word.
This is depicted here as an AST, but it can easily be parsed from (and unparsed to) a textual file with a simple template.
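To make the file format concrete, here is a minimal sketch of how one ENTRY could be parsed into a word pair plus difficulty. The function name and the returned dictionary shape are illustrative assumptions, not part of eMoflon or the actual TGG:

```python
def parse_entry(pair_line: str, difficulty_line: str) -> dict:
    """Split a colon-separated word pair, e.g. 'vier : four',
    and attach the difficulty from the second child node.
    (Illustrative sketch only -- not eMoflon's parser.)"""
    left, right = (word.strip() for word in pair_line.split(":"))
    return {"word": left, "translation": right, "difficulty": difficulty_line.strip()}

entry = parse_entry("vier : four", "beginner")
# entry == {"word": "vier", "translation": "four", "difficulty": "beginner"}
```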
The second model is an actual Library model, now with nice explicit types (Library, Shelf, Dictionary, Author), and references.
This example already shows that some references are cross-tree, e.g., different dictionaries can share the same author.
The final model is a so-called "learning box" (Leitner system), used to memorize flashcards.
The idea here is that dictionaries can be transformed into such a box to memorize the entries.
The box consists of a number of partitions with cards, and each time a card is remembered correctly it moves to the next partition. If a card is forgotten it moves all the way back to the very first partition.
When all entries have been memorized, the box can be emptied and transformed back into a dictionary, which is more convenient for looking up specific words now and then.
Remember that each entry in the dictionary has a "difficulty" level, and this is adjusted while memorizing the flashcards in the box (cards that need to be repeated again and again are more difficult than others).
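The box mechanics described above can be sketched as follows. This is an illustrative model under assumed names (Card, answer, a three-partition box with difficulty levels 0-2), not eMoflon's generated code: a correct answer promotes a card one partition and lowers its difficulty, a wrong answer sends it back to the first partition and raises its difficulty:

```python
class Card:
    """One flashcard in a three-partition Leitner box (sketch, assumed API)."""

    LAST_PARTITION = 2  # assuming three partitions: 0, 1, 2

    def __init__(self, front: str, back: str, difficulty: int = 1):
        self.front, self.back = front, back
        self.difficulty = difficulty  # 0 = easiest .. 2 = hardest
        self.partition = 0            # every card starts in the first partition here

    def answer(self, correct: bool) -> None:
        if correct:
            # remembered: move to the next partition, card gets "easier"
            self.partition = min(self.partition + 1, self.LAST_PARTITION)
            self.difficulty = max(self.difficulty - 1, 0)
        else:
            # forgotten: all the way back to the first partition, card gets "harder"
            self.partition = 0
            self.difficulty = min(self.difficulty + 1, self.LAST_PARTITION)
```

The adjusted difficulty is exactly what flows back into the dictionary when the box is transformed back, so cards that kept falling back to partition 0 end up marked as more difficult entries.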
The dictionary model is more or less a straightforward abstraction of the AST with explicit types and references.
Author nodes with the same email address are all transformed to the same author, who is then shared among dictionaries.
Some conventions (colon separated values, etc.) are used to extract further information and make things explicit in the model.
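The author-merging step can be illustrated with a small sketch: Author nodes parsed from different files are folded into one shared object, keyed by email address. The function and data shapes are assumptions for illustration, not the actual TGG rules:

```python
def merge_authors(parsed_authors: list[tuple[str, str]]) -> dict:
    """Merge (name, email) pairs from separate AST Author nodes into
    one shared author per email address (illustrative sketch only)."""
    by_email: dict[str, dict] = {}
    for name, email in parsed_authors:
        author = by_email.setdefault(email, {"name": name, "email": email, "dictionaries": 0})
        author["dictionaries"] += 1  # count how many dictionaries share this author
    return by_email
```

In the actual model, the result of this identification is a genuine cross-tree reference: several Dictionary nodes point at one and the same Author node.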
The connection between dictionaries and corresponding learning boxes is a bit more interesting: the structure of the learning box remains essentially the same (three partitions, wired the same way, representing three levels of difficulty), and Cards obviously correspond to Entries.
The initial placement of cards in partitions is controlled via a set of attribute conditions.
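The effect of these attribute conditions can be sketched as a simple mapping from an entry's difficulty to its initial partition. The level names beyond "beginner" and the exact mapping are assumptions for illustration; the real conditions live in the TGG rules:

```python
# Assumed level names and mapping -- the actual attribute conditions
# are specified in the TGG, not in code like this.
DIFFICULTY_TO_PARTITION = {
    "beginner": 0,      # hardest to remember: start in the first partition
    "intermediate": 1,
    "expert": 2,        # already well known: start in the last partition
}

def initial_partition(difficulty: str) -> int:
    """Place a card according to its entry's difficulty level."""
    return DIFFICULTY_TO_PARTITION[difficulty]
```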
References
This chain of transformations is used as the running example in the eMoflon handbook. Please refer to it for a detailed, step-by-step description.
Artefacts
A virtual machine hosted on Share is available with a workspace containing all (Ecore) metamodels, the concrete example used above, and two TGGs going from AST to Dictionary to Leitner's System: