The goal of this lab is to take the grammar you built last week and improve it to handle more phenomena, doing a little further testsuite development along the way. In particular, you will add testsuite coverage for one more phenomenon and improve the choices file for three phenomena. You'll also use [incr tsdb()] to test the resulting grammar and compare it to your final grammar from last week (= the starting point for this week).
Choose 3 additional phenomena to add to your testsuite, from among:
Add examples to your testsuite, according to
the general instructions for
testsuites and the formatting
instructions, illustrating the phenomena you worked on above. The
testsuite should include both positive and negative examples for each
phenomenon you work on in this section, but it doesn't need to be
exhaustive (since we're working with test corpora this year). I expect
these testsuites to gain about 20-30 new examples this week, though you
can do more if you find that useful. All examples should be simple enough that your grammar either parses them or fails to parse them only because of the one thing that's wrong with them.
Create a test suite skeleton
- Make a subdirectory called lab4 inside
tsdb/skeletons for your test suite.
- Edit tsdb/skeletons/Index.lisp to include a line for this
directory, e.g.:
(
((:path . "matrix") (:content . "matrix: A test suite created automatically from the test sentences given in the Grammar Matrix questionnaire."))
((:path . "corpus") (:content . "IGT provided by the linguist"))
((:path . "lab4") (:content . "Test suite collected for Labs 2-4."))
)
- Copy the .item file which is output by make_item
to tsdb/skeletons/lab4/item.
- Copy tsdb/skeletons/Relations to tsdb/skeletons/lab4/relations (notice the change from R to r).
For further detailed instructions, see Lab 3.
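The skeleton setup steps above can be sketched as shell commands. The walk-through below runs in a throwaway directory with stand-in files, since the real `make_item` output and `Relations` file live in your own grammar directory; in practice you would run just the `mkdir` and `cp` steps from your grammar root.

```shell
# Demonstration in a scratch directory with stand-in files; substitute
# your real grammar directory and the real make_item output in practice.
cd "$(mktemp -d)"
mkdir -p tsdb/skeletons
echo "stand-in Relations content" > tsdb/skeletons/Relations
echo "stand-in item lines" > lab4.item   # stand-in for the .item file from make_item

# The actual steps: create the lab4 skeleton directory, copy in the
# item file, and copy Relations to relations (note the lowercase r).
mkdir -p tsdb/skeletons/lab4
cp lab4.item tsdb/skeletons/lab4/item
cp tsdb/skeletons/Relations tsdb/skeletons/lab4/relations
```

Remember to also add the `lab4` line to `tsdb/skeletons/Index.lisp` by hand, as shown above.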
Improve the choices file for three phenomena
For the three phenomena you chose above, refine the choices file by hand. Please be sure to post lots of questions on Canvas as you work on this!
Make sure you can parse individual sentences
Once you have created your starter grammar (or each time you
create one, as you should iterate through grammar creation and
testing a few times as you refine your choices), try it out on a
couple of sentences interactively to see if it works:
- Load the grammar into the LKB.
- Using the parse dialog box (or 'C-c p' in emacs to get the parse
command inserted at your prompt), enter a sentence to parse.
- Examine the results. If the sentence does parse, check out the semantics (pop-up menu on the little trees). If it doesn't, look at the parse chart to see why not.
- Problems with lexical rules and lexical entries often become apparent here, too: If the LKB can't find an analysis for one of your words, it will say so, and (obviously) fail to parse the sentence.
Note that the questionnaire has a section for test sentences. If
you use this, then the parse dialog will be pre-filled with your test sentences.
Run both the test corpus and the testsuite
Following the same procedure as usual, do test runs over both the testsuite and the test corpus.
Again, collect the following information to provide in your write up:
- How many items parsed?
- What is the average number of parses per parsed item?
- How many parses did the most ambiguous item receive?
- What (new or different) sources of ambiguity can you identify, if any?
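As a sketch of the arithmetic behind the first three questions, suppose you have extracted one parse count per item from the [incr tsdb()] profile (0 for items that failed to parse). The file name and numbers below are made up for illustration:

```shell
# Hypothetical parse counts, one per item; 0 means the item didn't parse.
printf '%s\n' 0 1 3 2 0 4 > counts.txt

# Items parsed, average parses per parsed item, and the most ambiguous item.
awk '$1 > 0 { parsed++; total += $1; if ($1 > max) max = $1 }
     END { printf "parsed=%d avg=%.2f max=%d\n", parsed, total / parsed, max }' counts.txt
# prints: parsed=4 avg=2.50 max=4
```

Note that the average is over parsed items only, so items with 0 parses affect the first figure but not the second.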
Write up
Your write up should be a plain text file (not .doc, .rtf or .pdf)
which includes the following:
- Your answers to the questions about the initial and final [incr tsdb()] runs, for both test corpus and test suite, repeated here:
- How many items parsed?
- What is the average number of parses per parsed item?
- How many parses did the most ambiguous item receive?
- What sources of ambiguity can you identify?
- Documentation of the phenomena you have added to your testsuite,
illustrated with examples from the testsuite.
- Documentation of the choices you made in the customization
system, illustrated with examples from your test suite.
- This can be interleaved with the documentation of the phenomena
(so you describe each phenomenon and then the choices you used to add
an analysis of it to the grammar), but the documentation of the phenomenon and choices should be logically separate. Here's an example of what this should look like.
- If you're implementing phenomena that you put into the testsuite last week, please copy over the description of the phenomena from last week to provide context for your description of your implementation.
- Descriptions of any properties of your language illustrated
in your test suite but not covered by your starter grammar and/or
the customization system. (This can be brief if you like: e.g. "We added wh questions to the test suite, but didn't work on implementing them this week.")
- If you have identified ways (other than those you already reported) in which the automatically created choices file is particularly off-base, please report them here. If you can include IGT from the testsuite or your descriptive materials illustrating the problem, that is even better.