<div dir="ltr">Hi everybody,<br> I am trying to port (a
part of) the parsec library to Scheme. I have read the paper and the
Haskell source of parsec, but I still have a problem. Maybe the
problem comes from my lack of parser knowledge, but I have to bother you.<br>
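To show what I mean by "simulating laziness", here is a rough sketch of the idea (in Python only because it is easy to paste; my real code is Scheme, and the names here are made up):

```python
# A rough sketch (Python standing in for my Scheme code, which uses
# delay/force-style thunks) of how I simulate laziness: the rest of
# the input is wrapped in a zero-argument thunk and only computed
# when the parser actually needs the next character.

def delay(f):
    """Memoizing thunk, like Scheme's (delay ...) forced by (force ...)."""
    cell = {"forced": False, "value": None}
    def force():
        if not cell["forced"]:
            cell["value"] = f()
            cell["forced"] = True
        return cell["value"]
    return force

def make_stream(s, i=0):
    """A lazy stream of characters: (head, delayed-tail) pairs."""
    if i >= len(s):
        return None  # end of input
    return (s[i], delay(lambda: make_stream(s, i + 1)))

stream = make_stream("abc")
head, rest = stream      # nothing past 'a' is materialized yet
second, _ = rest()       # forcing the tail produces 'b'
```

Even with memoization, every thunk allocates a closure, which is where I suspect my extra time and memory go.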
Scheme does not support lazy evaluation by default, so I simulated
laziness when implementing finite lookahead. The result was not what I
expected: it slowed the parser down and used more memory, instead of
speeding it up and reducing its space leak. If finite lookahead were a
real gain, it would outweigh the cost of simulating lazy evaluation,
which is not much. After analyzing my code, I drew a conclusion (which
may be wrong): if the grammar you write is LL(1), that is, if you never
use the "try" combinator, then the predictive parser and the
backtracking parser have the same time complexity when they both
succeed, because in an LL(1) grammar any error after consuming input
makes the whole parse fail. Since the predictive parser and the
backtracking parser behave the same on LL(1) grammars, the backtracking
parser will only use more memory than the predictive parser when it
fails. Is my conclusion right? Does parsec perform better than a
backtracking parser on a pure LL(1) grammar when the parse succeeds? I
am very grateful for your answer, thanks.<br></div>
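P.S. To make my conclusion concrete, here is a toy sketch (again in Python; the grammar and all names are made up for illustration). For the LL(1) grammar S -> 'a' S | 'b', every alternative is decided by its first character and a failing alternative never consumes input, so on a successful parse both parsers run in linear time; the backtracking one only re-reads at most one character per choice point:

```python
# Toy comparison on the LL(1) grammar S -> 'a' S | 'b'.
# Character reads are counted so the two strategies can be compared.

def make_counter(s):
    reads = {"n": 0}
    def at(i):
        reads["n"] += 1
        return s[i] if i < len(s) else None
    return at, reads

def predictive(s):
    """One token of lookahead picks the branch; never re-reads."""
    at, reads = make_counter(s)
    def S(i):
        c = at(i)
        if c == 'a':
            return S(i + 1)
        if c == 'b':
            return i + 1        # position after a successful parse
        return None
    return S(0), reads["n"]

def backtracking(s):
    """Tries alternatives in order; re-reads position i on backtrack."""
    at, reads = make_counter(s)
    def S(i):
        if at(i) == 'a':        # first alternative: 'a' S
            r = S(i + 1)
            if r is not None:
                return r
        if at(i) == 'b':        # second alternative: 'b'
            return i + 1
        return None
    return S(0), reads["n"]

pos_p, reads_p = predictive("aab")    # succeeds after 3 reads
pos_b, reads_b = backtracking("aab")  # succeeds after 4 reads
```

On "aab" both return the same end position; the backtracking version does exactly one extra read (re-testing the final character), which is a constant factor, not a complexity difference.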