Many grammars have mutually recursive production rules, JSON being a prime example (a value can be an array or an object, both of which in turn contain values). It is not a mark of bad design.

With regard to using parser combinator frameworks: if the host language allows mutually recursive definitions (e.g. Haskell), then there is no problem. In languages which do not (e.g. Java), one trick is to allow for lazily initialised parsers - I used this in my framework. An example of its use, for a JSON grammar, is here.

The lazily initialised parser is jvalue:

private static final Parser.Ref<Character, Node> jvalue = Parser.ref(); 

which is declared at the beginning of the grammar, and initialised at the end:

static { jvalue.set( choice( jnull, jbool, jnumber, jtext, jarray, jobject ).label("JSON value") ); } 

The Parser.Ref type is itself a Parser (i.e. it implements the Parser interface).
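For anyone curious how such a reference can work, here is a minimal sketch under assumed, simplified types (the framework's real parse signature differs): Ref is itself a Parser that delegates to whatever parser it is later given via set(), which breaks the cyclic initialisation order.

// Minimal sketch of a lazily initialised parser reference (assumed,
// simplified - not the framework's actual source). Ref implements Parser
// and delegates to the parser supplied later via set().
interface Parser<I, A> {
    // Simplified parse signature, for illustration only.
    A parse(java.util.List<I> input);

    static <I, A> Ref<I, A> ref() {
        return new Ref<>();
    }

    final class Ref<I, A> implements Parser<I, A> {
        private Parser<I, A> impl;

        public void set(Parser<I, A> impl) {
            this.impl = impl;
        }

        @Override
        public A parse(java.util.List<I> input) {
            if (impl == null) {
                throw new IllegalStateException("Parser.Ref used before set()");
            }
            return impl.parse(input); // delegate to the real parser
        }
    }
}

In the grammar above, jvalue can therefore be mentioned by jarray and jobject before it has been defined, and the set() call at the end of the static initialiser closes the cycle.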

One of the prime advantages of the parser combinator approach is that it provides a DSL for defining grammars which is hosted in the programming language you're already using. This brings its own benefits: the semantic actions associated with your production rules are subject to the type-checking provided by the host language - i.e. your rules have to be well-typed. Another benefit is being able to extend the DSL with your own combinators.
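As a hedged illustration of that last point, a custom combinator can be written as an ordinary method that composes existing ones. The sketch below is hypothetical - chr, sepBy, andL, andR and map stand in for whatever sequencing, repetition and mapping primitives the framework actually provides - but it shows the shape of such an extension:

// Hypothetical sketch of a user-defined combinator: a comma-separated,
// bracket-delimited list. chr, sepBy, andL and andR are assumed names for
// primitives that most combinator libraries offer in some form.
static <T> Parser<Character, java.util.List<T>> bracketed(Parser<Character, T> element) {
    return chr('[')
           .andR(element.sepBy(chr(',')))  // keep only the list of elements
           .andL(chr(']'));                // discard the closing bracket
}

// A JSON array rule might then read: jarray = bracketed(jvalue).map(Node::array)
// If the semantic action passed to map returns the wrong type, the code will
// not compile - that is the type-checking benefit described above.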
