Due to the hardcoded implementation of the scanner's DFAs, the generated code consists largely of cascaded if statements, something like:
if (the current character has the code x) goto ...;
else if (the current character has the code y) goto ...;
Such a sequence of if statements is generated for each state of each DFA, and one if must be taken for each class of characters that is valid in that state. However, certain characters occur a lot more often than others in typical grammars (e.g. in C, although the character set is the whole ASCII set, the vast majority of a C source file consists of printable character codes). It is therefore possible to arrange these cascaded ifs so that those most likely to be taken are placed at the beginning of each sequence.
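As an illustration of the idea (this is a hand-written sketch, not cppcc's actual generated code; the state and function names are invented), here is one state's dispatch cascade with the character classes that profiling showed to be most frequent tested first, so that typical input leaves the cascade after one or two comparisons:

```cpp
#include <cctype>

// Possible next states reachable from this (hypothetical) DFA state.
enum NextState { ST_IDENT, ST_NUMBER, ST_SKIP, ST_OTHER };

NextState dispatch(unsigned char c) {
    // Letters dominate typical C source, so test them first...
    if (std::isalpha(c) || c == '_') return ST_IDENT;
    // ...then whitespace, then digits, then everything else.
    else if (c == ' ' || c == '\t' || c == '\n') return ST_SKIP;
    else if (std::isdigit(c)) return ST_NUMBER;
    else return ST_OTHER;
}
```

Without profile data, the same branches might be emitted in an arbitrary order (say, digits first), forcing the common case through extra comparisons on every character.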
Activating this optimization in cppcc is easy: just set the PROFILING_FILE option to the name of the file to be used for gathering profile information as the parser runs.
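Assuming the option goes in the grammar's options block (the exact syntax and file name here are assumptions, not taken from a real grammar), this might look like:

```
OPTIONS {
    PROFILING_FILE = "scanner.prof";
}
```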
Obtaining an optimized scanner is a two-step process:
First, set the PROFILING_FILE option and re-generate the scanner's sources. Because the profiling data file does not yet exist, cppcc will figure out that it has to be created, and will generate a scanner that gathers profiling data as it runs and then dumps it into that file. With this first scanner, a dummy program must be created that just reads typical input (something like ``training the scanner''). The scanner contains a special destructor that writes the gathered data to disk.
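The gathering mechanism can be sketched roughly as follows (the class name, file format, and layout are my assumptions for illustration; cppcc's actual instrumentation may differ): each taken branch bumps a counter, and a destructor flushes the counters to the profiling file when the scanner is destroyed.

```cpp
#include <fstream>

// Sketch of a first-pass profiling helper (hypothetical, not cppcc's
// actual generated code): counts how often each character code is seen,
// and dumps the counts to the profiling file in its destructor.
class BranchProfile {
    unsigned long counts_[256] = {};  // one counter per character code
    const char* path_;                // name of the profiling data file
public:
    explicit BranchProfile(const char* path) : path_(path) {}

    // Called on every taken branch of the cascaded ifs.
    void hit(unsigned char c) { ++counts_[c]; }

    // The ``special destructor'': write the gathered data to disk.
    ~BranchProfile() {
        std::ofstream out(path_);
        for (int c = 0; c < 256; ++c)
            if (counts_[c]) out << c << ' ' << counts_[c] << '\n';
    }
};
```

Running the dummy training program over typical input then leaves behind a file of (character code, frequency) pairs for cppcc to consume in the second pass.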
Once this is done, the resulting profile file exists and will be used by cppcc when re-generating the scanner. So all that's left to do is to run cppcc again on the unmodified grammar. This time, the scanner will be generated taking into account the statistics found in the profiling file.
As of yet, I haven't obtained any astonishing speedups. I have a few explanations for this (one might be that most processors do this sort of optimization anyway - it is essentially just another form of branch prediction). I am relying on you folks to contribute some statistical data on this - or some valid explanation of why the whole idea is worthless :-)