Computational Resources and Efficiency in Lisp

Summary: In good Lisp implementations, the use of computational resources such as CPU time and memory is typically comparable to, or better than, that in other languages. Comparisons of various implementations are periodically posted in newsgroups. Some factors affecting application performance are discussed here.

Disk Space

Lisp implementations, especially for Common Lisp, usually contain a compiler, an extensive library, and program development tools, and the installed size is often measured in the tens of megabytes. However, because all of this is integrated within a single program, it often takes up far less space on disk than the sum of the corresponding pieces for other languages.

Comparing the size of completed applications is more difficult. Some Lisp implementations require that a completed application be run from within the development system. Others allow the creation of stand-alone executables which may contain varying amounts of the development system and libraries.

In both cases, files may be shared among several applications. A development-system executable may be shared directly, with each application defined as a set of files that can be loaded into it. Operating-system-specific shared-library techniques may also be available, so that the common library elements are held in a single file on disk; the various executables then contain only references to the shared library, not copies of it.

Many Lisp dialects allow one to create and execute arbitrary functions at run time, making it difficult for the system to determine what may or may not be used while running an application. Some implementations provide libraries that can be specified by the user or deliberately left out of an application, as in other languages. Other systems provide a guided, semi-automatic "tree-shaker" that removes those elements determined to be unnecessary.
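
As a small, hedged illustration of the difficulty (the function name is made up for this sketch), the standard COMPILE function can turn data available only at run time into new compiled code, which no delivery tool can fully anticipate by inspecting the sources:

  (defun make-scaler (factor)
    ;; Build a fresh function from a run-time value and compile it on the
    ;; spot; a tree-shaker cannot know in advance what such code will call.
    (compile nil `(lambda (x) (* ,factor x))))

  ;; (funcall (make-scaler 3) 7)  =>  21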

Memory

Early Lisp systems used far less than one megabyte of memory. Modern systems with an integrated development environment may use several tens of megabytes, and some applications use hundreds of megabytes. The application-building options mentioned above can often deliver executables with small memory usage; in fact, Lisp is often used in embedded systems, for example in robotics.

In addition to the quantity of memory used, the arrangement and access performance of memory is important. This is a function of Lisp memory management.

Lisp applications traditionally keep a relatively higher percentage of data on the heap rather than on the usually faster stack. This is the cost of avoiding dangerous and hard-to-debug memory leaks, and other modern languages are now moving in this direction as well. Some Lisp compilers can keep more data on the stack when the data can be shown, or declared, to have dynamic extent -- i.e., it is not accessible after the defining function returns.
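
A minimal sketch of such a declaration, using the standard DYNAMIC-EXTENT declaration (the function name is illustrative); whether the temporary list is actually stack-allocated depends on the compiler:

  (defun sum-of-squares (a b c)
    (let ((temp (list (* a a) (* b b) (* c c))))
      ;; Promise that TEMP is not used after SUM-OF-SQUARES returns, so a
      ;; supporting compiler may allocate the list on the stack.
      (declare (dynamic-extent temp))
      (reduce #'+ temp)))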

Regardless of where the memory comes from, each simple Lisp datum often carries an additional word of memory compared to other languages, used for type and/or memory-management purposes. In many systems this overhead is lessened by clever implementation techniques such as pointer tagging, or by using immediate data when it fits in one machine word. When type information is available to the compiler, primitive machine data is often used directly. For larger structures, such as those used in object-oriented programming, the difference compared with other object-oriented language implementations is either non-existent or negligible.

Finally, many Lisp implementations provide some way to use type information at compile time to avoid needing to determine the type of a datum by inspection at run time. Nonetheless, many Lisp programs will need to access type information at run time. This operation may involve additional, relatively expensive memory accesses.
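
For example (a sketch; how much the declarations help depends on the implementation), an undeclared addition must examine its arguments at run time, while a fully declared one can compile to a machine-level add:

  ;; Generic: + must inspect the types of X and Y at run time.
  (defun add (x y)
    (+ x y))

  ;; Declared: a good compiler can emit an untagged machine addition.
  (defun fixnum-add (x y)
    (declare (type fixnum x y)
             (optimize (speed 3) (safety 0)))
    (the fixnum (+ x y)))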

Processor Resources: Speed

Some of the world's fastest supercomputers are programmed in Lisp. For the rest of us, the performance of Lisp applications is generally comparable to that of other languages doing the same work. Several factors can affect this.

Library

One factor in application performance is the quality of the library utilities used. The availability of such a large range of well-integrated, complete library utilities as part of the Common Lisp language standard encourages users to rely on system-provided operations rather than reinventing them for each application. This makes the speed of the library utilities even more important in Lisp than in some other languages.
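
For instance (an illustrative sketch, with made-up function names), the standard sequence functions usually serve better than a hand-written loop:

  ;; COUNT-IF and REMOVE-IF-NOT are part of the standard library and are
  ;; typically well tuned by the implementation.
  (defun count-negatives (numbers)
    (count-if #'minusp numbers))

  (defun keep-positives (numbers)
    (remove-if-not #'plusp numbers))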

The maturity of Lisp and its many implementations, and the tradition of code sharing within the community, ensure that most libraries are well optimized. Still, a purpose-built utility written with knowledge of the constraints of a particular application can sometimes do better. On the other hand, good Lisp compilers supporting code inlining and declarations can often eliminate the need for purpose-built utilities.
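
A sketch of what this can look like, using the standard INLINE declaration (the utility name is made up); with good type information at each call site, the open-coded calls can often match a hand-specialized version:

  ;; Ask the compiler to open-code every call to this small utility.
  (declaim (inline squared-distance))

  (defun squared-distance (x1 y1 x2 y2)
    (+ (* (- x2 x1) (- x2 x1))
       (* (- y2 y1) (- y2 y1))))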

Compiler

The consistency and simplicity of Lisp allow good compilers to be written for it. In some cases, "closed compilation" of a complete application, using inlining, partial evaluation, and other techniques, can produce code far more efficient than would otherwise be possible.

In addition, Common Lisp allows a rich set of declarations that the compiler can use to generate efficient code in the more usual "incremental compilation" and "file-based compilation" environments.
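
A hedged sketch of such declarations in an ordinary compiled file (the function name is illustrative, and the benefit depends on the compiler):

  ;; A file-level request for speed, plus local type declarations that let
  ;; the compiler specialize the arithmetic and the array accesses.
  (declaim (optimize (speed 3) (safety 1)))

  (defun dot-product (a b)
    (declare (type (simple-array double-float (*)) a b))
    (loop for x of-type double-float across a
          for y of-type double-float across b
          sum (* x y) of-type double-float))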

Language Constructs

The most important factor in application performance is often the complexity of the algorithms used in the application code. By directly supporting a rich, integrated variety of programming styles and data structures, Lisp makes it easy for programmers to use appropriate algorithms.

If it is easier and faster to write arbitrary applications in Lisp than in some other languages, it is also easier to write inefficient programs in Lisp. The language makes it easy to use many different kinds of data structures and algorithms, including ones that may be inappropriate for the problem at hand. Unfortunately, some Lisp texts and students add to the problem by concentrating on data structures and techniques, such as lists and recursion, that may be difficult or impossible to use in other languages. Students then use these everywhere, even when they are not appropriate.
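
A small sketch of the same lookup written two ways (the names are made up); neither is wrong, but the association list searches linearly while the hash table offers near-constant-time access, which matters once the data grows:

  ;; Linear search through an association list; fine for a handful of keys.
  (defun lookup-alist (key pairs)
    (cdr (assoc key pairs :test #'equal)))

  ;; Hash-table lookup; a better fit when there are many keys.
  (defun lookup-table (key table)
    (gethash key table))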

In many programming-language texts, examples of bad programming style are given as counterexamples, shown to illustrate how something important is obscured from the reader. As a general rule, the counterexamples in Lisp books obscure the implementation details, while the counterexamples in texts for other languages obscure the programmer's intent. There is a lesson in this:

In Lisp, it is easy to obscure implementation details while making the programmer's intent clear. This can lead to productive, clear, easily maintained code, which may, sometimes, be inefficient.

In other languages, it is easy to obscure the programmer's intent while making the implementation details clear. This can lead to efficient code which may, sometimes, be unclear, unproductive, or even incorrect.