These instructions might get edited a bit over the next couple of days. I'll try to flag changes.
As usual, check the write-up instructions first.
Especially for the test corpus section, but also in general, it will be helpful to keep notes along the way as you do your grammar development.
Requirements for this assignment
Before making any changes to your grammar for this lab, run a baseline test suite instance. If you decide to add items to your test suite for the material covered here, consider doing so before modifying your grammar so that your baseline can include those examples. (Alternatively, if you add examples in the course of working on your grammar and want to make the snapshot later, you can do so using the grammar you turned in for Lab 6.)
The negation library is more robust than in previous years, so we expect that in most cases the output is working or close to working.
The goal of this section is to parse one more sentence from your test corpus than you could before starting this section. In most cases, that will mean parsing one sentence total. In your write-up, you should document what you had to add to get the sentence working. Note that it is possible to get full credit here even if the sentence ultimately doesn't parse, by documenting what you still have to get working.
This is a very open-ended part of the lab (even more so than usual), which means: A) you should get started early and post to GoPost so I can assist in developing analyses of whatever additional phenomena you run across, and B) you'll have to restrain yourselves; the goal isn't to parse the whole test corpus this week ;-).
In constructing your test suite for this phenomenon in a previous lab, you were asked to find the following:
In the following, I'll share the TDL I've developed in a small English grammar, for two possibilities:
Your goal for this part of the lab is to use this as a jumping-off point to handle wh questions as they manifest in your language. Of course, I expect languages to differ in the details, so please start early and post to GoPost so we can work it out together.
Type and entry definitions for the wh pronouns (used in both versions):
wh-pronoun-noun-lex := norm-hook-lex-item & basic-icons-lex-item &
  [ SYNSEM [ LOCAL [ CAT [ HEAD noun,
                           VAL [ SPR < >,
                                 SUBJ < >,
                                 COMPS < >,
                                 SPEC < > ] ],
                     CONT [ RELS < ! [ LBL #larg,
                                       ARG0 #ind & ref-ind ],
                                     [ PRED "wh_q_rel",
                                       ARG0 #ind,
                                       RSTR #harg ] ! >,
                            HCONS < ! [ HARG #harg,
                                        LARG #larg ] ! > ] ],
             NON-LOCAL.QUE < ! #ind ! > ] ].

what := wh-pronoun-noun-lex &
  [ STEM < "what" >,
    SYNSEM.LKEYS.KEYREL.PRED "_thing_n_rel" ].

who := wh-pronoun-noun-lex &
  [ STEM < "who" >,
    SYNSEM.LKEYS.KEYREL.PRED "_person_n_rel" ].
topormid-coord-phrase :+ [ SYNSEM.NON-LOCAL #nl,
                           LCOORD-DTR.SYNSEM.NON-LOCAL #nl,
                           RCOORD-DTR.SYNSEM.NON-LOCAL #nl ].

bottom-coord-phrase :+ [ SYNSEM.NON-LOCAL #nl,
                         NONCONJ-DTR.SYNSEM.NON-LOCAL #nl ].
Note that all of these phrase structure rules require instances in rules.tdl.
basic-head-filler-phrase :+ [ ARGS < [ SYNSEM.LOCAL.COORD - ],
                                     [ SYNSEM.LOCAL.COORD - ] > ].

wh-ques-phrase := basic-head-filler-phrase & interrogative-clause & head-final &
  [ SYNSEM.LOCAL.CAT [ MC bool,
                       VAL #val,
                       HEAD verb & [ FORM finite ] ],
    HEAD-DTR.SYNSEM.LOCAL.CAT [ MC na,
                                VAL #val & [ SUBJ < >,
                                             COMPS < > ] ],
    NON-HEAD-DTR.SYNSEM.NON-LOCAL.QUE < ! ref-ind ! > ].

extracted-comp-phrase := basic-extracted-comp-phrase &
  [ SYNSEM.LOCAL.CAT.HEAD verb ].

extracted-subj-phrase := basic-extracted-subj-phrase &
  [ SYNSEM.LOCAL.CAT.HEAD verb ].
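For concreteness, the corresponding instances in rules.tdl might look like the following. The instance names here are just illustrative; use whatever naming convention your grammar already follows:

```tdl
; Illustrative rule instances in rules.tdl (names are up to you).
wh-ques := wh-ques-phrase.
ex-comp := extracted-comp-phrase.
ex-subj := extracted-subj-phrase.
```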
Note that all of these phrase structure rules require instances in rules.tdl.
wh-int-cl := clause & head-compositional & head-only &
  [ SYNSEM [ LOCAL.CAT [ VAL #val,
                         MC bool ],
             NON-LOCAL non-local-none ],
    C-CONT [ RELS < ! ! >,
             HCONS < ! ! >,
             HOOK.INDEX.SF ques ],
    HEAD-DTR.SYNSEM [ LOCAL.CAT [ HEAD verb & [ FORM finite ],
                                  VAL #val & [ SUBJ < >,
                                               COMPS < > ] ],
                      NON-LOCAL [ SLASH < ! ! >,
                                  REL < ! ! >,
                                  QUE < ! ref-ind ! > ] ] ].
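As with the other version, this clause type needs an instance in rules.tdl. A minimal sketch (the instance name is illustrative):

```tdl
; Illustrative rule instance in rules.tdl for this clause type.
wh-int := wh-int-cl.
```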
The general head-subj type assumes that QUE is empty, which won't fly in this case, so we need to redefine it. In the pseudo-English grammar, I did it this way:
eng-subj-head-phrase := head-valence-phrase & head-compositional &
                        basic-binary-headed-phrase &
  [ SYNSEM phr-synsem &
           [ LOCAL.CAT [ POSTHEAD +,
                         HC-LIGHT -,
                         VAL [ SUBJ < >,
                               COMPS #comps,
                               SPR #spr ] ] ],
    C-CONT [ HOOK.INDEX.SF prop-or-ques,
             RELS < ! ! >,
             HCONS < ! ! >,
             ICONS < ! ! > ],
    HEAD-DTR.SYNSEM.LOCAL.CAT.VAL [ SUBJ < #synsem >,
                                    COMPS #comps,
                                    SPR #spr ],
    NON-HEAD-DTR.SYNSEM #synsem & canonical-synsem &
                        [ LOCAL [ CAT [ VAL [ SUBJ olist,
                                              COMPS olist,
                                              SPR olist ] ] ],
                          NON-LOCAL [ SLASH 0-dlist & [ LIST < > ],
                                      REL 0-dlist ] ] ].
... and then had subj-head in rules.tdl instantiate this type instead of subj-head-phrase.
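In TDL terms, that change to rules.tdl is just repointing the existing instance at the new type. A sketch, assuming your instance is called subj-head:

```tdl
; rules.tdl: instantiate the new type instead of subj-head-phrase.
subj-head := eng-subj-head-phrase.
```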
If your language has head-opt-subj, this will need to be rewritten similarly.
For each of the following phenomena, please include the following in your write-up:
Phenomena:
In addition, your write-up should include a statement of the current coverage of your grammar over your test suite (using the numbers you can get from Analyze | Coverage and Analyze | Overgeneration in [incr tsdb()]) and a comparison between your baseline test suite run and your final one for this lab (see Compare | Competence).
tar czf lab7.tgz *
(When I download your submission from CollectIt, it comes in a directory named with your UWNetID. The above method avoids extra directory structure inside that directory.)
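One quick sanity check (a suggestion, not a requirement): after creating the tarball, list its contents to confirm that your files sit at the top level, with no extra directory wrapping them. The file names below are illustrative.

```shell
# From inside your grammar directory:
tar czf lab7.tgz *        # package everything at the top level
tar tzf lab7.tgz          # list the contents to double-check
```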