Compile Time

The time at which a language processor such as cc or javac translates the source code. Contrast with Run Time, Link Time, Load Time, Expand Time, Read Time, Build Time, Test Time.

Know what your language and tools do or can be asked to do at compile time or other times.

Is the distinction between Compile Time, Link Time, and Build Time really useful when they all happen in a single button click in most decent development environments?

The Compile Time versus Run Time distinction is part of the definition of macros -- not just C macros (which some avoid altogether in C++), but also e.g. semantic Lisp Macros -- arbitrary Lisp code run at Compile Time (or, better, Read Time) rather than at Run Time -- and Html templates. And since some tests are extremely time consuming, it's hard to imagine Test Time ever disappearing altogether.
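To illustrate the split in a mainstream language (a minimal C++ sketch, not Lisp -- constexpr is far weaker than a Lisp Macro, and square is just an invented example), the same function can be made to run at either Compile Time or Run Time:

  // Minimal sketch; assumes C++11 or later.
  #include <cstdio>

  constexpr int square(int n) { return n * n; }

  int main(int argc, char**) {
      constexpr int a = square(7); // constexpr forces Compile Time evaluation
      int b = square(argc);        // argc is unknown until Run Time
      std::printf("%d %d\n", a, b);
      return 0;
  }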

And throughout the history of computing, Link Time has not always been a separate phase, but it is nonetheless an important phase whenever shared libraries are supported. Furthermore, there is sometimes a Load Time phase distinct from Link Time.

No matter how fast these different phases are, to a systems implementor they are nonetheless distinct phases, which don't go away simply because the user doesn't notice them! -- Doug Merritt

But they might go away if the systems implementor redesigns the system in a non-traditional way. I've expanded a bit on the points below by way of explanation.


The distinction between Compile Time and Run Time becomes blurred with Just In Time compilers/interpreters and Trans Meta's "Code Morphing". More generally, it becomes blurred in any language that comes with a full compiler as a standard part of its runtime.

The distinction between Compile Time and Link Time becomes blurred with Aspect Oriented Programming that can potentially invade 'shared libraries' or modules.

The distinction between Compile Time and Test Time can become blurred in languages that support Meta Programming and also offer a mechanism both for Compile Time evaluation and for intentionally creating a Compile Time failure (e.g. through the Type System, or some sort of language-supported assertions mechanism). Of course, this will only handle a certain subset of tests and code-proofs; it is likely that the extremely time-consuming tests, or those that may have large side-effects, will still be kept distinct.
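For instance (a minimal C++ sketch, assuming C++11 or later; gcd and the messages are invented for illustration), a test can be pushed into Compile Time so that a failure breaks the build rather than waiting for Test Time:

  // gcd is evaluated by the compiler inside static_assert.
  constexpr int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }

  // Checked at Compile Time; a false condition stops compilation --
  // in effect, a Compile Time test failure.
  static_assert(gcd(12, 18) == 6, "gcd broken for 12, 18");
  static_assert(gcd(7, 5) == 1, "gcd broken for coprime inputs");

  int main() { return 0; }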

The distinction between Compile Time and Load Time and Run Time can become blurred in languages and frameworks that allow for an Event Driven Programming process model. If the language and framework are designed to work with one another, merely having a static (on disk) and properly named compiled object could qualify, essentially, as a 'loaded program', complete with access to persistent state and other services, that will spin off a short procedure to handle incoming messages. In fact, intentionally blurring Compile Time, Load Time, and Run Time is listed (indirectly) among the desired New Os Features: treating compiled objects and files and such equally as part of a 'naked' object framework that can receive messages, expose procedures, have value-components, etc. The trick is getting rid of the procedural service loop - the 'main' method - since once that is gone the framework can easily support millions of processes and objects and files, statically, stored on disk when not in immediate use, all on equal grounds. It is likely that a Wiki Ide would need to take this approach, and I know that RK's Blue Abyss was heading for it. It is very likely that this will be the process and application model of the future (undoubtedly combined with some sort of Immediate Mode Gui + CSS when representing processes to the user).

Finally, even the distinction between Build Time and Compile Time can become blurred in languages that support control of automated linking and resource acquisition from within the language (thus avoiding the need for a makefile or third-party-language 'script' to control the build). Traditionally, Build Time is used for resource acquisition and explicit combination (e.g. the makefiles and scripts, multi-language issues, compiling CORBA templates into C++ objects, etc.). If we ever intend to have a One Language Environment (where we can get everything done without leaving the language), compile-time resource acquisition and linking will need to be part of it, in addition to obviating the need for third-party support for Meta Programming (i.e. some fairly generic syntax extension must be possible - maybe even semantics extension via some form of Aspect Oriented Programming). Of course, if we have Aspect Oriented Programming, Compile Time and Link Time need to be combined anyway - it wouldn't be a bad idea to go ahead and sweep in Build Time resource-acquisition at the Same Time (or Just In Time), so that the language can handle and metaprogram with arbitrary resources - not just other libraries. The language would be far more flexible for it, and programmers wouldn't need to work with multiple languages just to get things done.

If one blurs Build Time, Compile Time, Link Time, Load Time, and Run Time into BCLLRTime, then the only distinctions left are Code Time, BCLLRTime, and Test Time. For the end-user (the programmer) it may be beneficial for the framework to blur Code Time and BCLLRTime via some sort of automated compile (e.g. much like the Wiki, where the 'source' for the page can be edited and the result seen immediately). And blurring whatever portion of Test Time can be merged into BCLLRTime without harm wouldn't be bad at all - but even as far into the future as this eye can see, it will still be quite reasonable to have the 'object' expose a test procedure (or a port accepting a test command), or to use one program to test another, for the 'big' tests.

Things become a lot simpler for the language and framework users if BCLLRTime is blurred (even if it is blurred more by the framework than by the language) - not just for the programmers, but also for any services the programmers choose to render as programs-creating-programs (objects creating objects). It's like the difference between a Wiki where one can immediately see what one edits, and one where a third-party 'moderator' must come in, approve, and press a button to compile the edited page into HTML before using it. With this blurred time, one ends up with systems like Qed Wiki, where simply writing up a page and drag-dropping components together can essentially create a new 'built', 'compiled', 'linked', 'loaded', and 'running' service or application that is there and available (and immediately testable) even after you close the browser and walk away. The idea for Wiki Ide embraces this possibility, since that's what Wikis are all about - providing a service immediately, upon pressing one button - like 'save'.


Regardless of the translation technology involved (traditional Ahead Of Time compilation, JIT compilation, compilation to Byte Code which is then executed on a Virtual Machine, interpreted text, etc.) there are still several useful thresholds to be crossed. Starting with the latest and working backwards:

Before/after the program starts running. In the days of C, this was easy to determine--a program was officially running when main() was called; neither the programmer nor the user had much control over what happened before that. In more modern languages, lots of things happen before main()--static constructors in Cee Plus Plus, class initializers in Java, automatic startup of multiple threads in concurrent programs, etc. Many of these happen in an indeterminate order as well. A more useful definition for "running", then, might be this: a program is "running" when it starts processing and reacting to user input (whether reading from a file or the console; examining environment variables, command-line arguments, or registry entries; or reacting to mouse or other controller events).
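A small C++ sketch of this before-main() activity (the Announce class and the messages are invented; within one translation unit these constructors run top to bottom, but across translation units the order is unspecified -- the "indeterminate order" mentioned above):

  #include <cstdio>

  struct Announce {
      explicit Announce(const char* who) { std::printf("constructed: %s\n", who); }
  };

  // Constructors of namespace-scope objects run before main() is entered.
  static Announce first("first static");
  static Announce second("second static");

  int main() {
      // By the definition above, the program only starts "running" here,
      // once it can begin reacting to input.
      std::printf("main() begins\n");
      return 0;
  }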

Before/after the program is loaded into memory. Again, this is not so clear-cut in modern environments. In the past, one mainly had to worry about overlays. Today, overlays are no longer used in most environments (thankfully), but one must worry about dynamically-loaded libraries and components (sometimes loaded under the control of an already-running program; sometimes loaded automatically by the operating system) and the like--to say nothing of modern virtual-memory architectures which load program pages on demand, and programming languages which are capable of compiling code (from text!) on the fly. But this threshold is still important, because before it occurs the program (and/or the tools that built the program) can make few or no assumptions about the environment it is run under; after loading starts, the program (or the language/system runtime) can start to discover the environment and react accordingly.
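As a concrete example of loading under a running program's control, here is a minimal POSIX sketch using dlopen (the library name ./libplugin.so and the symbol plugin_init are hypothetical):

  // Sketch: loading a shared library after the program has started running.
  // POSIX only; link with -ldl on older glibc systems.
  #include <dlfcn.h>
  #include <cstdio>

  int main() {
      void* handle = dlopen("./libplugin.so", RTLD_NOW);
      if (!handle) {
          std::fprintf(stderr, "load failed: %s\n", dlerror());
          return 1;
      }
      // Only after this extra Load Time step can the symbol be looked up.
      void* sym = dlsym(handle, "plugin_init");
      if (sym) {
          auto init = reinterpret_cast<void (*)()>(sym);
          init();
      }
      dlclose(handle);
      return 0;
  }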


It's an interesting question in the context of the GNU LGPL (www.gnu.org), which is very C-oriented, because the LGPL refers extensively to the differences between compile time, link time, and run time in determining what is or is not a derived work. Is it possible that the LGPL makes no sense for languages where the distinction is not as clear as it is in C/C++? --Steven Newton

Common Lisp makes distinctions between read time, compile time, load time and run time, but there are enough differences between CL and C/C++ that the LGPL is not recommended. The Lisp Lesser General Public Licence (www.cliki.net) is similar in spirit to the LGPL but defines what is and isn't a derivative work of a CL library. -- Thomas Atkins

It should be noted that many of the Free Software Foundation's claims regarding what constitutes a derivative work are untested. Quite a few attorneys think that dynamically linking to a library is insufficient to create a derived work. There is a long-standing debate on this topic.


See original on c2.com