Compiler

A compiler is a computer program that translates a program written in one computer language (called the source language) into an equivalent program written in another computer language (called the target language).

Introduction and history

Most compilers translate source code written in a high level language to object code or machine language that may be directly executed by a computer or a virtual machine. However, translation from a low level language to a high level one is also possible; a program that does this is normally known as a decompiler if it reconstructs a high level language program which could have generated the low level code. Compilers also exist which translate from one high level language to another, or sometimes to an intermediate language that still needs further processing; these are sometimes known as cascaders.

Typical compilers output so-called object files, which basically contain machine code augmented by information about the name and location of entry points and of external calls (references to functions not contained in the object). A set of object files, which need not all have come from the same compiler provided the compilers share a common output format, may then be linked together to create the final executable, which can be run directly by a user.
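
To make this concrete, here is a minimal sketch in Python of the idea, using an invented toy object format (a dictionary holding code, entry-point names, and unresolved external references); real object formats such as ELF are far more elaborate:

    # A toy "object file": code is a list of instructions; "defs" maps
    # entry-point names to offsets within the code; "refs" lists positions
    # whose call target is an external name still to be resolved.
    # (Format invented for illustration only.)

    def link(objects):
        """Concatenate object code and patch external calls to final offsets."""
        code, symbols, fixups = [], {}, []
        for obj in objects:
            base = len(code)
            for name, offset in obj["defs"].items():
                symbols[name] = base + offset      # record entry points
            for pos, name in obj["refs"]:
                fixups.append((base + pos, name))  # remember unresolved call sites
            code.extend(obj["code"])
        for pos, name in fixups:
            op, _ = code[pos]
            code[pos] = (op, symbols[name])        # resolve the external call
        return code

    main_obj = {"code": [("call", None), ("halt", None)],
                "defs": {"main": 0}, "refs": [(0, "square")]}
    lib_obj  = {"code": [("mul", None), ("ret", None)],
                "defs": {"square": 0}, "refs": []}

    print(link([main_obj, lib_obj]))
    # [('call', 2), ('halt', None), ('mul', None), ('ret', None)]

The linker's essential job, resolving names to final addresses, is visible in the last loop of link.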

Several experimental compilers were developed in the 1950s, but the FORTRAN team led by John Backus at IBM is generally credited with having introduced the first complete compiler, in 1957. COBOL was also an early language to get a compiler. The idea quickly caught on, and most of the principles of compiler design were developed during the 1960s. During the 1990s a large number of free compilers and compiler development tools were developed for all kinds of languages, both as part of the GNU project and other open-source initiatives. Some of them are considered to be of high quality, and their free source code makes a nice read for anyone interested in modern compiler concepts.

Types of compilers

A compiler may produce code intended to run on the same type of computer and operating system ("platform") as the compiler itself runs on. This is sometimes called a native-code compiler. Alternatively, it might produce code designed to run on a different platform. This is known as a cross compiler. Cross compilers are very useful when bringing up a new hardware platform (see bootstrapping). A "source to source compiler" is a type of compiler that takes a high level language as its input and outputs a high level language. For example, an automatic parallelizing compiler will frequently take in a high level language program as input and then transform the code and annotate it with parallel code annotations (e.g. OpenMP) or language constructs (e.g. Fortran's DOALL statements).

  • One-pass compiler, like early compilers for Pascal
    • The compilation is done in one pass, so it is very fast.
  • Threaded code compiler (or interpreter), like most implementations of FORTH
    • This kind of compiler can be thought of as a database lookup program: it simply replaces given strings in the source with given binary code (a minimal sketch of the idea follows this list). The level of this binary code can vary; in fact, some FORTH compilers can compile programs that don't even need an operating system.
  • Incremental compiler, like many Common Lisp systems
  • Stage compiler that compiles to the assembly language of a theoretical machine, like some Prolog implementations
    • This Prolog machine is also known as the Warren abstract machine (or WAM). Byte-code compilers are a subtype of this; Java and Python, for example, are compiled to byte-code for a virtual machine.
  • A retargetable compiler is a compiler that can relatively easily be modified to generate code for different CPU architectures. The object code it produces is frequently of lesser quality than that produced by a compiler developed specifically for one processor. Retargetable compilers are often also cross compilers. GCC is an example of a retargetable compiler.
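
The following is a minimal FORTH-flavoured sketch in Python of the threaded code idea mentioned above: each word of the source is looked up in a dictionary and replaced by a directly callable routine. The word set and stack behaviour here are invented for the example:

    # A toy threaded-code "compiler": each source word is looked up in a
    # dictionary and replaced by the routine it names; running the program
    # just calls those routines in order. (Word set invented for the example.)
    stack = []

    WORDS = {
        "dup": lambda: stack.append(stack[-1]),
        "*":   lambda: stack.append(stack.pop() * stack.pop()),
        ".":   lambda: print(stack.pop()),
    }

    def compile_words(source):
        """Translate source text into a list of directly callable routines."""
        threaded = []
        for word in source.split():
            if word in WORDS:
                threaded.append(WORDS[word])
            else:                                 # numbers become push routines
                threaded.append(lambda n=int(word): stack.append(n))
        return threaded

    def run(threaded):
        for routine in threaded:
            routine()

    run(compile_words("7 dup * ."))   # prints 49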

Compiled vs. interpreted languages

Many people divide higher level programming languages into two categories: compiled languages and interpreted languages. In fact, most of these languages can be implemented either through compilation or through interpretation; the categorisation merely reflects which method is most commonly used to implement a given language. (Some interpreted languages, however, cannot easily be implemented through compilation, especially those which allow self-modifying code.)
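
To illustrate that the same language can be implemented either way, here is a sketch in Python of a tiny invented expression language with both an interpreter and a compiler: the interpreter evaluates the tree every time it runs, while the compiler translates it once into target code (here, Python itself):

    # The same tiny expression language implemented both ways.
    # An expression is a number or a tuple (op, left, right). (Invented AST.)
    import operator

    OPS = {"+": operator.add, "*": operator.mul}

    def interpret(node):
        """Interpreter: walk the tree and evaluate it on the spot."""
        if isinstance(node, tuple):
            op, left, right = node
            return OPS[op](interpret(left), interpret(right))
        return node

    def compile_expr(node):
        """Compiler: translate the tree once into target code (here, Python)."""
        if isinstance(node, tuple):
            op, left, right = node
            return f"({compile_expr(left)} {op} {compile_expr(right)})"
        return str(node)

    expr = ("+", 1, ("*", 2, 3))
    print(interpret(expr))                    # 7, evaluated by tree-walking
    compiled = eval("lambda: " + compile_expr(expr))
    print(compiled())                         # 7, but translation happened once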

Compiler design

In the past, compilers were divided into many passes[1] to save space. A pass in this context is a run of the compiler through the source code of the program being compiled, building up the compiler's internal data (such as the evolving symbol table and other assisting data). When a pass is finished, the compiler can free the internal data space that pass needed. This 'multipass' method of compiling was the common compiler technology of its time, largely because the main memories of host computers were small relative to the source code and data.

Many modern compilers share a common 'two stage' design. The front end translates the source language into an intermediate representation. The second stage is the back end, which works with the internal representation to produce code in the output language. The front end and back end may operate as separate passes, or the front end may call the back end as a subroutine, passing it the intermediate representation.

This approach mitigates complexity by separating the concerns of the front end, which typically revolve around language semantics, error checking, and the like, from the concerns of the back end, which concentrates on producing output that is both efficient and correct. It also has the advantage of allowing a single back end to serve multiple source languages, and similarly allows different back ends to serve different targets.
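
A sketch of the payoff, using an invented three-address intermediate representation: the same IR, as some front end might produce it, can be handed to different toy back ends for different targets:

    # One invented intermediate representation, two toy back ends.
    # IR: a list of (op, dest, a, b) tuples produced by some front end.
    IR = [("load", "t0", 2, None),
          ("load", "t1", 3, None),
          ("add",  "t2", "t0", "t1")]

    def backend_register(ir):
        """Emit register-style pseudo-assembly (invented syntax)."""
        lines = []
        for op, dest, a, b in ir:
            operands = f"{a}" if b is None else f"{a}, {b}"
            lines.append(f"{op.upper()} {dest}, {operands}")
        return lines

    def backend_stack(ir):
        """Emit code for a toy stack machine instead (invented syntax).
        Assumes straight-line code whose operands arrive in stack order."""
        lines = []
        for op, dest, a, b in ir:
            if op == "load":
                lines.append(f"PUSH {a}")
            elif op == "add":
                lines.append("ADD")
        return lines

    print(backend_register(IR))   # ['LOAD t0, 2', 'LOAD t1, 3', 'ADD t2, t0, t1']
    print(backend_stack(IR))      # ['PUSH 2', 'PUSH 3', 'ADD']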

Certain languages can be compiled in a single pass because of their design, in particular rules requiring that variables and other objects be declared, and that executable procedures be predeclared, before they are referenced or used. The Pascal programming language is well known for this capability, and in fact many Pascal compilers are themselves written in Pascal, thanks to the rigid specification of the language and the possibility of compiling Pascal programs in a single pass.
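
The declare-before-use rule is what makes a single pass sufficient: while scanning top to bottom, every name must already be in the symbol table by the time it is used. A minimal Python sketch, with an invented two-word statement format:

    # Single-pass declare-before-use check over invented "var"/"use" statements.
    def single_pass(statements):
        symbols = set()                 # the symbol table grows as we scan once
        for line_no, stmt in enumerate(statements, 1):
            kind, name = stmt.split()
            if kind == "var":
                symbols.add(name)       # declaration: record it and move on
            elif name not in symbols:   # use: must already have been declared
                raise NameError(f"line {line_no}: '{name}' used before declaration")
        print("single pass: ok")

    single_pass(["var x", "use x", "var y", "use y"])   # ok
    # single_pass(["use x", "var x"])  # would raise: used before declaration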

Compiler front end

The compiler front end itself consists of multiple phases, each informed by formal language theory:

  1. Lexical analysis - breaking the source code text into small pieces ('tokens' or 'terminals'), each representing a single atomic unit of the language, for instance a keyword, identifier or symbol name. The token language is typically a regular language, so a finite state automaton constructed from a regular expression can be used to recognize it. This phase is also called lexing or scanning.
  2. Syntax analysis - identifying the syntactic structure of the source code. This phase focuses only on structure: it checks the order of the tokens and recognizes the hierarchical structures in the code. This phase is also called parsing.
  3. Semantic analysis - recognizing the meaning of the program code and beginning to prepare for output. Type checking is done in this phase, and most compiler errors show up here.
  4. Intermediate language generation - an equivalent of the original program is created in an intermediate language (phases 1, 2 and 4 are sketched in code after this list).
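
Here is a minimal front end in Python for an invented grammar of parenthesised sums, combining a regular-expression lexer (phase 1), a recursive-descent parser (phase 2), and the generation of three-address intermediate code (phase 4); semantic analysis is trivial for this language and is omitted:

    import re

    def lex(text):
        """Phase 1: the tokens form a regular language, recognised by one regex."""
        for number, symbol in re.findall(r"\s*(?:(\d+)|([+()]))", text):
            yield ("NUM", int(number)) if number else ("OP", symbol)
        yield ("EOF", None)

    def front_end(text):
        """Phases 2 and 4: recursive-descent parser for the invented grammar
        expr -> term ('+' term)* ; term -> NUM | '(' expr ')'
        emitting three-address code as it recognises each structure."""
        tokens = list(lex(text))
        pos, code, next_temp = 0, [], 0

        def new_temp():
            nonlocal next_temp
            next_temp += 1
            return f"t{next_temp}"

        def term():
            nonlocal pos
            kind, value = tokens[pos]
            pos += 1
            if kind == "NUM":
                return value
            assert value == "(", "syntax error"
            inner = expr()
            pos += 1                              # consume the closing ')'
            return inner

        def expr():
            nonlocal pos
            left = term()
            while tokens[pos] == ("OP", "+"):
                pos += 1
                right = term()
                temp = new_temp()
                code.append((temp, "=", left, "+", right))  # three-address code
                left = temp
            return left

        expr()
        return code

    for inst in front_end("(1 + 2) + 3"):
        print(*inst)
    # t1 = 1 + 2
    # t2 = t1 + 3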

Compiler back end

While there are applications where only the compiler front end is necessary, such as static language verification tools, a real compiler hands the intermediate representation generated by the front end to the back end, which produces a functionally equivalent program in the output language. This is done in multiple steps:

  1. Compiler analysis - gathering program information from the intermediate representation of the input source files. Typical analyses are variable define-use and use-define chains, data dependence analysis, alias analysis, etc. Accurate analysis is the basis for any compiler optimization. The call graph and control flow graph are usually also built during the analysis phase.
  2. Optimization - the intermediate language representation is transformed into functionally equivalent but faster (or smaller) forms. Popular optimizations are inlining, dead code elimination, constant propagation, loop transformation, register allocation and even automatic parallelization (two of these are sketched in code after this list).
  3. Code generation - the transformed intermediate language is translated into the output language, usually the native machine language of the system. This involves resource and storage decisions, such as deciding which variables to fit into registers and memory, and the selection and scheduling of appropriate machine instructions (see also the Sethi-Ullman algorithm).
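
Continuing the invented three-address code from the front end sketch, here is a back-end style sketch in Python of two of the optimizations named above: constant propagation and folding, followed by a naive dead code elimination for straight-line code:

    def fold_constants(code):
        """Constant propagation and folding over the three-address code."""
        known = {}                                # temp name -> known constant
        out = []
        for dest, _, a, op, b in code:
            a = known.get(a, a)                   # propagate known constants
            b = known.get(b, b)
            if op == "+" and isinstance(a, int) and isinstance(b, int):
                known[dest] = a + b               # fold: compute at compile time
            else:
                out.append((dest, "=", a, op, b))
        return out

    def eliminate_dead(code, live):
        """Naive dead code elimination for straight-line code: drop
        instructions whose result is never used afterwards."""
        used = set(live)
        kept = []
        for inst in reversed(code):
            dest, _, a, op, b = inst
            if dest in used:
                kept.append(inst)
                used.update(x for x in (a, b) if isinstance(x, str))
        return list(reversed(kept))

    code = [("t1", "=", 1, "+", 2),
            ("t2", "=", "t1", "+", "x"),
            ("t3", "=", "t1", "+", "x")]          # t3 is never used afterwards
    folded = fold_constants(code)
    print(folded)                    # [('t2', '=', 3, '+', 'x'), ('t3', '=', 3, '+', 'x')]
    print(eliminate_dead(folded, live={"t2"}))    # [('t2', '=', 3, '+', 'x')]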

Notes

  1. A pass has also been known as a parse in some textbooks. The idea is that the source code is parsed by gradual, iterative refinement to produce the completely translated object code at the end of the process. There is, however, some dispute over using parse for all of these phases (passes), since some of them, e.g. object code generation, are arguably not parsing as such.

