Author: | Andreas Rumpf |
---|---|
Version: | 0.9.2 |
"Abstraction is layering ignorance on top of reality." -- unknown
Directory structure
The Nimrod project's directory structure is:
Path | Purpose |
---|---|
bin | generated binary files |
build | generated C code for the installation |
compiler | the Nimrod compiler itself; note that this code has been translated from a bootstrapping version written in Pascal, so the code is not a poster child of good Nimrod code |
config | configuration files for Nimrod |
dist | additional packages for the distribution |
doc | the documentation; it is a bunch of reStructuredText files |
lib | the Nimrod library |
web | website of Nimrod; generated by nimweb from the *.txt and *.tmpl files |
Bootstrapping the compiler
As of version 0.8.5 the compiler is maintained in Nimrod. (The first versions were implemented in Object Pascal.) The Python-based build system has been rewritten in Nimrod too.
Compiling the compiler is a simple matter of running:

```
nimrod c koch.nim
./koch boot
```

For a release version use:

```
nimrod c koch.nim
./koch boot -d:release
```
The koch program is Nimrod's maintenance script. It is a replacement for make and shell scripting with the advantage that it is much more portable.
Coding Guidelines
- Use CamelCase, not underscored_identifiers.
- Indent with two spaces.
- Max line length is 80 characters.
- Provide spaces around binary operators if that enhances readability.
- Use a space after a colon, but not before it.
- Start types with a capital T, unless they are pointers which start with P.
Porting to new platforms
Porting Nimrod to a new architecture is pretty easy: since C is (within certain limits) the most portable programming language and Nimrod generates C code, porting the code generator is not necessary.
POSIX-compliant systems on conventional hardware are usually pretty easy to port: add the platform to the platform module (if it is not already listed there), check that the os and system modules work, and recompile Nimrod.
The only case where things aren't so easy is when the garbage collector needs some assembler tweaking to work. The standard version of the GC uses C's setjmp function to store all registers on the hardware stack. A new platform may need to replace this generic code with some assembler code.
Runtime type information
Runtime type information (RTTI) is needed for several aspects of the Nimrod programming language:
- Garbage collection: the most important reason for RTTI. Generating traversal procedures instead would produce bigger code and is likely to be slower on modern hardware, as dynamic procedure binding is hard to predict.
- Complex assignments: sequences and strings are implemented as pointers to resizeable buffers, but Nimrod requires copying for assignments. Apart from using RTTI the compiler could generate copy procedures for any type that needs one. However, this would make the code bigger, and the RTTI is likely already there for the GC.
We already know the type information as a graph in the compiler. Thus we need to serialize this graph as RTTI for C code generation. Look at the file lib/system/hti.nim for more information.
The compiler's architecture
Nimrod uses the classic compiler architecture: a lexer/scanner feeds tokens to a parser. The parser builds a syntax tree that is used by the code generator. This syntax tree is the interface between the parser and the code generator, and understanding it is essential to understanding most of the compiler's code.
In order to compile Nimrod correctly, type-checking has to be separated from parsing. Otherwise generics cannot work.
Short description of Nimrod's modules
Module | Description |
---|---|
nimrod | main module: parses the command line and calls main.MainCommand |
main | implements the top-level command dispatching |
nimconf | implements the config file reader |
syntaxes | dispatcher for the different parsers and filters |
filter_tmpl | standard template filter (#! stdtempl) |
lexbase | buffer handling of the lexical analyser |
lexer | lexical analyser |
parser | Nimrod's parser |
renderer | Nimrod code renderer (AST back to its textual form) |
options | contains global and local compiler options |
ast | type definitions of the abstract syntax tree (AST) and node constructors |
astalgo | algorithms for containers of AST nodes; converting the AST to YAML; the symbol table |
passes | implements the pass manager for passes over the AST |
trees | some algorithms for nodes; this module is less important |
types | module for traversing type graphs; also contains several helpers for dealing with types |
sigmatch | contains the matching algorithm that is used for proc calls |
semexprs | contains the semantic checking phase for expressions |
semstmts | contains the semantic checking phase for statements |
semtypes | contains the semantic checking phase for types |
seminst | instantiation of generic procs and types |
semfold | contains code to deal with constant folding |
semthreads | deep program analysis for threads |
evals | contains an AST interpreter for compile time evaluation |
pragmas | semantic checking of pragmas |
idents | implements a general mapping from identifiers to an internal representation (PIdent) that is used so that a simple id-comparison suffices to say whether two Nimrod identifiers are equivalent |
ropes | implements long strings represented as trees for lazy evaluation; used mainly by the code generators |
transf | transformations on the AST that need to be done before code generation |
cgen | main file of the C code generator |
ccgutils | contains helpers for the C code generator |
ccgtypes | the generator for C types |
ccgstmts | the generator for statements |
ccgexprs | the generator for expressions |
extccomp | this module calls the C compiler and linker; interesting if you want to add support for a new C compiler |
The syntax tree
The syntax tree consists of nodes which may have an arbitrary number of children. Types and symbols are represented by other nodes, because they may contain cycles. The AST changes its shape after semantic checking. This is needed to make life easier for the code generators. See the "ast" module for the type definitions. The macros module contains many examples of how the AST represents each syntactic structure.
How the RTL is compiled
The system module contains the part of the RTL which needs support from compiler magic (and the stuff that needs to be in it because the spec says so). The C code generator generates the C code for it just like for any other module. However, calls to some procedures like addInt are inserted by the CCG. Therefore the module magicsys contains a table (compilerprocs) with all symbols that are marked as compilerproc; compilerprocs are needed by the code generator. A magic proc is not the same as a compilerproc: a magic is a proc that needs compiler magic for its semantic checking, while a compilerproc is a proc that is used by the code generator.
Compilation cache
The implementation of the compilation cache is tricky: There are lots of issues to be solved for the front- and backend. In the following sections global means shared between modules or property of the whole program.
Frontend issues
Methods and type converters
Nimrod contains language features that are global. The best example for that are multi methods: Introducing a new method with the same name and some compatible object parameter means that the method's dispatcher needs to take the new method into account. So the dispatching logic is only completely known after the whole program has been translated!
Other features that are implicitly triggered cause problems for modularity too. Type converters fall into this category:
```nim
# module A
converter toBool(x: int): bool = result = x != 0
```

```nim
# module B
import A
if 1:
  echo "ugly, but should work"
```
If in the above example module B is re-compiled but A is not, then B needs to be aware of toBool even though toBool is not referenced explicitly in B.
Both the multi method and the type converter problems are solved by storing them in special sections in the ROD file that are loaded unconditionally when the ROD file is read.
Generics
If we generate an instance of a generic, we'd like to re-use that instance across module boundaries if possible. However, this is not possible if the compilation cache is enabled, so then we give up and cache generics only per module, not per project. This means that --symbolFiles:on costs some efficiency. A better solution would be to persist the instantiations in a global per-project cache; this might be implemented in later versions.
Backend issues
- Init procs must not be "forgotten" to be called.
- Files must not be "forgotten" to be linked.
- Anything that is contained in nim__dat.c is shared between modules implicitly.
- Method dispatchers are global.
- DLL loading via dlsym is global.
- Emulated thread vars are global.
However the biggest problem is that dead code elimination breaks modularity! To see why, consider this scenario: The module G (for example the huge Gtk2 module...) is compiled with dead code elimination turned on. So none of G's procs is generated at all.
Then module B is compiled that requires G.P1. Ok, no problem, G.P1 is loaded from the symbol file and G.c now contains G.P1.
Then module A (which depends on B and G) is compiled and B and G are left unchanged. A requires G.P2.
So now G.c MUST contain both P1 and P2. But we haven't even loaded P1 from the symbol file, nor do we want to, because then we would quickly restore large parts of the whole program. We also don't want to store P1 in B.c, because that would mean storing every symbol where it is referred from, which ultimately means the main module, i.e. putting everything into a single C file.
There is however another solution: The old file G.c containing P1 is merged with the new file G.c containing P2. This is the solution that is implemented in the C code generator (have a look at the ccgmerge module). The merging may lead to cruft (aka dead code) in generated C code which can only be removed by recompiling a project with the compilation cache turned off. Nevertheless the merge solution is way superior to the cheap solution "turn off dead code elimination if the compilation cache is turned on".
Debugging Nimrod's memory management
The following paragraphs are mostly a reminder for myself. Things to keep in mind:
- Segmentation faults can have multiple reasons: One that is frequently forgotten is that stack overflow can trigger one!
- If an assertion in Nimrod's memory manager or GC fails, the stack trace keeps allocating memory! Thus a stack overflow may happen, hiding the real issue.
- What seems to be a C code generation problem is often a bug resulting from not producing prototypes, so that some types default to cint. Testing without the -w option helps!
The Garbage Collector
Introduction
I use the term cell here to refer to everything that is traced (sequences, refs, strings). This section describes how the new GC works.
The basic algorithm is Deferred Reference Counting with cycle detection. References on the stack are not counted, for better performance and easier C code generation.
Each cell has a header consisting of an RC and a pointer to its type descriptor. However, the program does not know about these, so they are stored at negative offsets. In the GC code the type PCell denotes a pointer decremented by the right offset, so that the header can be accessed easily. It is extremely important that a pointer is not confused with a PCell, as this would lead to memory corruption.
The CellSet data structure
The GC depends on an extremely efficient data structure for storing a set of pointers - this is called a TCellSet in the source code. Inserting, deleting and searching are done in constant time. However, modifying a TCellSet during traversal leads to undefined behaviour.
```nim
type
  TCellSet # hidden

proc CellSetInit(s: var TCellSet)         # initialize a new set
proc CellSetDeinit(s: var TCellSet)       # empty the set and free its memory
proc incl(s: var TCellSet, elem: PCell)   # include an element
proc excl(s: var TCellSet, elem: PCell)   # exclude an element
proc `in`(elem: PCell, s: TCellSet): bool # tests membership
iterator elements(s: TCellSet): (elem: PCell)
```
All the operations have to perform efficiently. Because a TCellSet can become huge, a hash table alone is not suitable for this.
We use a mixture of bitset and hash table for this. The hash table maps pages to a page descriptor. The page descriptor contains a bit for any possible cell address within this page. So including a cell is done as follows:
- Find the page descriptor for the page the cell belongs to.
- Set the appropriate bit in the page descriptor indicating that the cell points to the start of a memory block.
Removing a cell is analogous - the bit has to be set to zero. Single page descriptors are never deleted from the hash table. This is not needed, as the data structure needs to be rebuilt periodically anyway.
Complete traversal is done in this way:
```
for each page descriptor d:
  for each bit in d:
    if bit == 1:
      traverse the pointer belonging to this bit
```
Further complications
In Nimrod the compiler cannot always know if a reference is stored on the stack or not. This is caused by var parameters. Consider this example:
```nim
proc setRef(r: var ref TNode) =
  new(r)

proc usage =
  var r: ref TNode
  setRef(r)      # here we should not update the reference counts, because
                 # r is on the stack
  setRef(r.left) # here we should update the refcounts!
```
We have to decide at runtime whether the reference is on the stack or not. The generated code looks roughly like this:
```c
void setRef(TNode** ref) {
  unsureAsgnRef(ref, newObj(TNode_TI, sizeof(TNode)));
}
void usage(void) {
  TNode* r = NULL;
  setRef(&r);
  setRef(&r->left);
}
```
Note that for systems with a contiguous stack (which most systems have) the check whether the ref is on the stack is very cheap (only two comparisons).
Code generation for closures
Code generation for closures is implemented by lambda lifting.
Design
A closure proc var can call ordinary procs of the default Nimrod calling convention. But not the other way round! A closure is implemented as a tuple[prc, env]. env can be nil, implying a call without a closure environment. This means that a call through a closure generates an if, but the interoperability is worth the cost of the if. Thunk generation would be possible too, but it's slightly more effort to implement.
Tests with GCC on AMD64 showed that it's really beneficial if the 'environment' pointer is passed as the last argument, not as the first argument.
Proper thunk generation is harder because the proc that is to wrap could stem from a complex expression:
receivesClosure(returnsDefaultCC[i])
A thunk would need to call 'returnsDefaultCC[i]' somehow, and that requires passing the function to call. So we'd end up with two indirect calls instead of one. Another, much more severe problem with this solution is that it's not GC-safe to pass a proc pointer around via a generic ref type.
Example code:
```nim
proc add(x: int): proc (y: int): int {.closure.} =
  return proc (y: int): int =
    return x + y

var add2 = add(2)
echo add2(5) #OUT 7
```
This should produce roughly this code:
```nim
type
  PEnv = ref object
    x: int # data

proc anon(y: int, e: PEnv): int =
  return y + e.x

proc add(x: int): tuple[prc, data] =
  var env: PEnv
  new env
  env.x = x
  result = (anon, env)

var add2 = add(2)
let tmp = if add2.data == nil: add2.prc(5)
          else: add2.prc(5, add2.data)
echo tmp
```
Beware of nesting:
```nim
proc add(x: int): proc (y: int): proc (z: int): int {.closure.} {.closure.} =
  return lambda (y: int): proc (z: int): int {.closure.} =
    return lambda (z: int): int =
      return x + y + z

var add24 = add(2)(4)
echo add24(5) #OUT 11
```
This should produce roughly this code:
```nim
type
  PEnvX = ref object
    x: int # data
  PEnvY = ref object
    y: int
    ex: PEnvX

proc lambdaZ(z: int, ey: PEnvY): int =
  return ey.ex.x + ey.y + z

proc lambdaY(y: int, ex: PEnvX): tuple[prc, data: PEnvY] =
  var ey: PEnvY
  new ey
  ey.y = y
  ey.ex = ex
  result = (lambdaZ, ey)

proc add(x: int): tuple[prc, data: PEnvX] =
  var ex: PEnvX
  new ex
  ex.x = x
  result = (lambdaY, ex)

var tmp = add(2)
var add24 = tmp.prc(4, tmp.data)
echo add24.prc(5, add24.data)
```
We could get rid of nesting environments by always inlining inner anon procs. More useful is escape analysis and stack allocation of the environment, however.
Alternative
Process the closure of all inner procs in one pass and accumulate the environments. This is however not always possible.
Accumulator
```nim
proc GetAccumulator(start: int): proc (): int {.closure.} =
  var i = start
  return lambda: int =
    inc i
    return i

proc p =
  var delta = 7
  proc accumulator(start: int): proc (): int =
    var x = start-1
    result = proc (): int =
      x = x + delta
      inc delta
      return x

  var a = accumulator(3)
  var b = accumulator(4)
  echo a() + b()
```
Internals
Lambda lifting is implemented as part of the transf pass. The transf pass generates code to set up the environment and to pass it around. However, this pass does not change the types! So we have some kind of mismatch here: on the one hand the proc expression becomes an explicit tuple, on the other hand the tyProc(ccClosure) type is not changed. For C code generation it's also important that the hidden formal param is void* and not something more specialized. However, the more specialized env type needs to be passed to the backend somehow. We deal with this by modifying s.ast[paramPos] to contain the formal hidden parameter, but not s.typ!