Fix #2333 - Make sure lexer skips whitespace on non-token

Comments and multi-line comments produce no token per se during
lexing, so the lexer loops to find another token.
The issue was that we were not skipping whitespace after finding
such a non-token.

Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
This commit is contained in:
Brice Figureau 2009-06-12 13:40:15 +02:00 committed by James Turnbull
Parent 5fbf63ce78
Commit d3323331e9
3 changed files with 12 additions and 1 deletion


@@ -417,7 +417,10 @@ class Puppet::Parser::Lexer
             final_token, value = munge_token(matched_token, value)
-            next unless final_token
+            unless final_token
+                skip()
+                next
+            end
             if match = @@pairs[value] and final_token.name != :DQUOTE and final_token.name != :SQUOTE
                 @expected << match
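The shape of the fix can be illustrated with a minimal, self-contained sketch. This is not Puppet's actual lexer; the `tokens` helper and its regexes are hypothetical stand-ins. A comment match produces no token, so the scan loop must skip trailing whitespace before trying the next token rule, exactly as the added `skip()` call does above.

```ruby
require 'strscan'

# Toy lexer: comments produce no token. Without the skip after a
# comment match, the whitespace in "*/ \ntest" would be left in the
# buffer and the NAME rule would fail to match.
def tokens(input)
  s = StringScanner.new(input)
  result = []
  s.skip(/\s+/)                      # leading whitespace
  until s.eos?
    if s.scan(%r{/\*.*?\*/}m)        # multi-line comment: no token produced
      s.skip(/\s+/)                  # the fix: skip whitespace after a non-token
      next
    end
    if (word = s.scan(/\w+/))
      result << [:NAME, word]
    else
      raise "lex error at #{s.rest.inspect}"
    end
    s.skip(/\s+/)                    # whitespace between tokens
  end
  result
end

tokens("/* 1\n\n */ \ntest")  # => [[:NAME, "test"]]
```

The input string mirrors the one used in the new spec below: a multi-line comment followed by whitespace, a newline, and a bare word.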


@@ -472,6 +472,10 @@ describe Puppet::Parser::Lexer, "when lexing comments" do
         @lexer.getcomment.should == "2\n"
     end
+    it "should skip whitespace before lexing the next token after a non-token" do
+        @lexer.string = "/* 1\n\n */ \ntest"
+        @lexer.fullscan.include?([:NAME, "test"]).should be_true
+    end
 end
 # FIXME: We need to rewrite all of these tests, but I just don't want to take the time right now.


@@ -4,3 +4,7 @@ file {
     "/tmp/multilinecomments": content => "pouet"
 }
 */
+/* and another one for #2333, the whitespace after the
+end comment is here on purpose */