There are basically 3 answers to the question:
Lisp is really old, and pretty easy to implement as languages go, so various lisps explored most of the
landscape before anyone else did. Lisp was the first to get GC, the first with first-class functions, etc.
(see Paul Graham's list of Lisp firsts for a fuller account).
Sadly, something essentially killed lisp innovation around the time Common Lisp got standardized, so it has
lost some of its lead to a few other languages, most notably Haskell.
To an unusual degree, someone who codes in lisp is on nearly equal footing with the implementer. In most languages,
there are whole classes of things that the implementer can do but that ordinary users cannot. For instance,
in C, it is impossible to define new infix operators, so if you want to write 10^3 or 10**3, tough luck: hack the
compiler if you care enough. User-defined Lisp functions are on almost equal footing with built-in Lisp functions, and
with macros you can even make your own syntactic extensions, so if infix math is necessary, you can add it yourself.
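As a sketch of the kind of syntactic extension macros allow, here is a toy infix macro in Common Lisp (the name `infix` is my own, and it handles only simple binary expressions):

```lisp
;; A toy macro that lets you write simple binary math infix-style.
;; (infix 10 expt 3) rewrites to (expt 10 3) at macro-expansion time,
;; before the compiler ever sees it.
(defmacro infix (left op right)
  `(,op ,left ,right))

;; Usage:
;; (infix 10 expt 3)  ; => 1000
;; (infix 2 + 3)      ; => 5
```

The key point is that this is ordinary user code, not a compiler patch: the macro runs at expansion time and produces the same list structure the compiler consumes anyway.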
Every language gets parsed, turned into an AST (Abstract Syntax Tree), and then compiled or interpreted.
Usually, a very large amount of work goes into the parsing step (Perl and C++ are probably the best examples
of this; both are basically unparseable). The actual results, modulo implementation bugs, are determined by
the meaning of the AST in whatever semantics the language embodies. This means that the programmer needs to
understand the syntax and semantics of at least a subset of their language to write code. Lisp syntax is
trivial: stuff between parens is a list, the first element of a list is the function, and the rest are its arguments;
and its semantics are fairly simple in many implementations (note: this is more true of Scheme than of Common Lisp).
That means that, even if everything else were equal, lisp would win a bit, because it would be so easy to parse
and would not require you to remember much.
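A quick illustration of how little parsing machinery this takes: in Common Lisp, the built-in reader already turns source text directly into the nested lists that serve as the AST.

```lisp
;; READ-FROM-STRING is essentially the whole "parser":
;; source text in, nested list structure (the AST) out.
(read-from-string "(+ 1 (* 2 3))")
;; primary value => (+ 1 (* 2 3))
;; a list whose first element is the symbol +, ready to evaluate or transform
```

There is no separate grammar to learn and no separate AST representation: what you type, what macros manipulate, and what the compiler consumes are all the same list structure.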