Lab 6 (Due 2/14, 11:59pm)
Preliminaries
As usual, check the write-up instructions first.
There are several places in this lab
where I ask you to contact me if your grammar requires information
not in these instructions. Please read through this lab by class on
Monday, so we can start that conversation in a timely fashion.
Requirements for this assignment
- Make sure you have baseline test profiles for both your test corpus and your test suite, corresponding to your Lab 5 grammar.
- Make improvements to three phenomena from the following (prioritized) list:
- 1. Sentential complement verbs, with declarative complements
- 2. Matrix questions, both polar and wh-
- 3. Sentential complement verbs, with interrogative complements
- 4. Something else, based on Lab 5 feedback.
- 5. Something else, which will allow you to get one sentence from your test corpus (linguist-provided item file) working.
- Make sure that your grammar can still generate, and debug if necessary.
- Test your grammar using [incr tsdb()]. It should be part of your
test-development cycle. In addition, you'll need to run a final test
suite instance for this lab to submit along with your baseline.
- Write up the phenomena you have analyzed.
Before making any changes to your grammar for this lab, run a
baseline test suite instance. If you need to add items to your test
suite (to document the phenomena you are working on), consider doing
so before modifying your grammar so that your baseline can include
those examples. (Alternatively, if you add examples in the course of
working on your grammar and want to make the snapshot later, you can
do so using the grammar you turned in for Lab 5.)
Sentential complement verbs with declarative complements
- Read the testsuite instructions for embedded complement clauses and create examples for declarative complements only.
- Be sure your test suite contains negative examples illustrating
matrix clause syntax in embedded clauses and vice versa.
- Visit the customization script to see if it provides what you need for your language for this phenomenon. If so, create a customized grammar with the specifications for complement clauses included, diff that against a grammar customized from your previous choices file, and insert the diffs into your current grammar.
- Check whether the clausal complement verbs from the customization
system are working with declarative complements. If they aren't,
please post to Canvas so we can debug together.
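For orientation, here is a minimal sketch of what a clause-embedding verb type and entry can look like in tdl. Every name below (the supertype, the feature paths, the stem, the PRED value) is an illustrative assumption; check what your customized grammar actually defines before copying anything.

    ; Hypothetical clause-embedding verb type; the supertype and the
    ; constraints on the complement are placeholders:
    clausal-comp-verb-lex := main-verb-lex &
      [ SYNSEM.LOCAL.CAT.VAL.COMPS < [ LOCAL.CAT [ HEAD comp,
                                                   VAL [ SUBJ < >,
                                                         COMPS < > ] ] ] > ].

    ; Hypothetical entry built on that type:
    know := clausal-comp-verb-lex &
      [ STEM < "know" >,
        SYNSEM.LKEYS.KEYREL.PRED "_know_v_rel" ].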
Check your MRSs
Here is an example to give you an idea of what
we're looking for. (This is the "indexed MRS" view.)
I know that you sleep.
Note the qeq linking the ARG2 position
of _know_v_rel (h9) to the LBL of _sleep_v_rel (h15),
and the SF value of e16 (PROP).
Matrix yes-no questions
Read the testsuite instructions for matrix yes-no questions. If nothing in your language distinguishes yes-no questions from declaratives, then this isn't a good phenomenon to work on this week. If something does, create testsuite examples according to the directions.
Visit the customization script to see if it provides what you need for your language for this phenomenon. If so, create a customized grammar with the specifications for yes-no questions included, diff that against a grammar customized from your previous choices file, and insert the diffs into your current grammar.
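To give a concrete sense of what such a diff might contain: if, for instance, your language marks polar questions with a question particle, the new material could be as small as a particle entry and its type. The sketch below is invented for illustration; the stem "ka" and the supertype name are placeholders, not necessarily what the customization system emits.

    ; Invented example of a question particle entry:
    ka-qpart := qpart-lex-item &
      [ STEM < "ka" > ].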
Check that your test items parse and that the MRSs are
correct: Is the value of SF on the INDEX of the
clause ques? (For sentences not marked specifically as
questions, we assume they are actually ambiguous, since the
intonation might make them a question, and thus give them
[SF prop-or-ques].)
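For reference, the SF values form a small hierarchy, sketched below in approximate form; check matrix.tdl in your grammar for the actual definitions.

    ; Approximate shape of the sentential force hierarchy
    ; (verify the real type names in matrix.tdl):
    prop-or-ques := iforce.
    prop := prop-or-ques.
    ques := prop-or-ques.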
If your yes-no question doesn't parse, or if it does but not
with the right semantics, post to Canvas, and we will work out what
needs to be done.
Matrix wh questions
In constructing your testsuite for this phenomenon in a previous
lab, you were asked to find the following:
- The simplest case, where there is a
single clause and one argument is questioned: Who did the child
see?/Who saw the child?
- The shape of the wh words for core arguments. Do these vary with case, animacy, gender, something else?
- The possible positions of wh words: Do they appear where an ordinary argument would? Move to the beginning of the clause? Are both of these possible?
- What happens if the questioned argument belongs to a lower clause (e.g. Who did the observer think the child saw?)?
- Are there any other differences between wh questions and declaratives (or yes-no questions)? (For example, English requires subject-auxiliary inversion in the main clause of a matrix wh-question.)
- Are there are any differences between wh questions concerning subject and non-subject arguments? (For example, English does not do subject-auxiliary inversion if the questioned element is the main clause subject.)
- Optional: What happens with multiple wh elements in the same clause (e.g. Who saw what?)
The goal here is to check the semantics of your current analyses of these, and fix them as necessary. We also want to remove any spurious ambiguity associated with the rules added for wh questions. Examine your test items, with reference to the sample MRSs below, and report on Canvas what needs fixing.
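As part of that examination, it can help to look at the wh lexical entries themselves. Here is a hypothetical sketch, assuming your grammar has a wh pronoun type along these lines; the supertype and PRED value are placeholders, so match them to what your grammar actually uses.

    ; Hypothetical wh pronoun entry:
    who := wh-pronoun-noun-lex &
      [ STEM < "who" >,
        SYNSEM.LKEYS.KEYREL.PRED "_person_n_rel" ].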
Check your MRSs
Below are some sample MRSs for wh questions, considering both
subject and complement questions as well as matrix and embedded
questions. Please use these as a point of comparison when you
check your MRSs.
Who chases cars?
What do you think the dog chases?
Sentential complement verbs with interrogative complements
- Read the testsuite instructions for embedded complement clauses and create examples for interrogative complements.
- Check whether the clausal complement verbs can combine appropriately with interrogative (yes-no question only) complements. If they can't, post to Canvas and we'll work on fixing them.
- Make sure that clausal complement verbs are appropriately constrained to take declarative complements, interrogative complements, or both (see the sketch after this list).
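One common way to state that restriction is through the SF value of the complement's index. The sketch below is illustrative only, assuming types like those sketched earlier; verify the feature paths and type names against your own grammar before adopting them.

    ; Illustrative constraints restricting a verb to declarative
    ; vs. interrogative complements:
    decl-comp-verb-lex := clausal-comp-verb-lex &
      [ SYNSEM.LOCAL.CAT.VAL.COMPS.FIRST.LOCAL.CONT.HOOK.INDEX.SF prop ].

    ques-comp-verb-lex := clausal-comp-verb-lex &
      [ SYNSEM.LOCAL.CAT.VAL.COMPS.FIRST.LOCAL.CONT.HOOK.INDEX.SF ques ].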
Check your MRSs
I ask who chases the dog.
I ask whether you sleep.
Note the qeq linking the ARG2 position
of _ask_v_rel (h9) to the LBL of _sleep_v_rel (h15),
and the SF value of e16 (QUES).
Test corpus sentence
The goal of this section is to parse one more sentence from your
test corpus than you could before starting this section. In most
cases, that will mean parsing one sentence total. In your write-up,
you should document what you had to add to get the sentence working.
Note that it is possible to get full credit here even if the
sentence ultimately doesn't parse, by documenting what you worked
on and what you still have to get working.
This is a very open-ended part of the lab (even more
so than usual), which means: A) you should get started early
and post to Canvas so I can assist in developing analyses of
whatever additional phenomena you run across and B) you'll
have to restrain yourselves; the goal isn't to parse the whole
test corpus this week ;-). In fact, I won't have time to support
the extension of the grammars by more than one sentence each,
so please *stop* after one sentence.
- Look for some plausible candidate sentences in your
linguist-provided item file. These should be relatively short and
ideally have minimal additional grammatical phenomena beyond what we
have already covered.
- Examine the lexical items required for your target sentence(s).
Add any that should belong to lexical types you have already
created.
- Try parsing the test corpus again (or just your target sentence
from it).
- If your target sentence parses, check the MRS to see if it is
reasonable.
- If you were able to get a sentence with a correct MRS just by
adding lexical items to lexicon.tdl (see the sketch after this
list), please try one more sentence.
- If your target sentence doesn't parse, check to see whether
you still have lexical coverage errors. Fixing these may require
adapting existing lexical rules, adding lexical rules, and/or
adding lexical types. Post to Canvas for assistance.
- If your target sentence doesn't parse but your grammar does
find analyses for each lexical item, then examine the parse chart
to identify the smallest expected constituent that the grammar is
not finding, and debug from there. Do you have the phrase structure
rule that should be creating the constituent? If so, try interactive
unification to see why the expected daughters aren't forming
a phrase with that rule. Do you need to add a phrase structure rule?
Again, post to Canvas for assistance.
- Iterate until either the sentence parses or you at least have a clear
understanding of what you would need to add to get it parsing.
- Run your full test suite and test corpus after any changes you make to your
grammar to make sure you aren't breaking previous coverage/introducing spurious ambiguity.
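For the lexicon additions mentioned above, new entries should usually just reuse a lexical type you already have. A minimal hypothetical sketch follows; the stem, type name, and PRED are all placeholders for your language's actual items.

    ; Hypothetical entry reusing an existing lexical type:
    vashat := intransitive-verb-lex &
      [ STEM < "vashat" >,
        SYNSEM.LKEYS.KEYREL.PRED "_sleep_v_rel" ].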
Test your grammar
- Use your test suite to check the syntactic coverage of your grammar.
- Examine the semantic representations you assign to each of
the clause types.
- Check for overgeneration (syntactic forms associated with
one clause type showing up in other clause types, multiple parses
for single sentences with spurious clause type assignments or
lack of clausal semantics).
- Make sure your grammar still generates.
Write up your analyses
For each of the phenomena you worked on, please include
the following in your write-up:
- A descriptive statement of the facts of your language. (You should have already written such a statement in Lab 4, but please include it here so I can follow what is happening. If your understanding of the facts of the language has evolved in the meantime, please update the description appropriately.)
- Illustrative IGT examples from your testsuite.
- A statement of how you implemented the phenomenon (in terms of types you added/modified and particular tdl constraints). (Yes, I want to see actual tdl snippets.)
- If the analysis is not (fully) working, a description of the problems
you are encountering.
The phenomena to cover are:
- Matrix yes-no questions.
- Embedded complement clauses (declarative and interrogative).
- Wh questions.
- Phenomena required for your test corpus sentence.
- Anything you had to do to get generation working, or where you are stuck (if you are) in getting generation working.
In addition, your write-up should include a statement of the current
coverage of your grammar over your test suite (using numbers you can
get from Analyze | Coverage and Analyze | Overgeneration in [incr tsdb()])
and a comparison between your baseline test suite run and your final
one for this lab (see Compare | Competence).