Lab 5 (due 5/3 11:59 pm)
This will be our last lab with the customization system. The goal is once again to improve the grammar from last week by extending it for additional phenomena. There are two different ways to do this:
- As in the last two weeks, pick three new phenomena, document them in your testsuite, and extend your choices file to account for them. This counts as 3 + 3 tasks (three testsuite, three choices).
- If you have phenomena documented in your testsuite not yet accounted for by your grammar, then you can work on those instead. In this case you might do less testsuite extension, but I still expect six total tasks.
Notes: If possible, I'd like every group to do clausal complements and clausal modifiers. Also, as with e.g. aspect, you don't have to do every single variation on these phenomena, just some.
As before, you'll also be using [incr tsdb()] to test the resulting grammar and compare it to your starting point from last week.
New phenomena:
Previous phenomena you may not have completed:
Initial testsuite run
- Create and run initial testsuite instances for both the linguist-provided data and your small testsuite, using the initial grammar.
Note: If your tsdb/ directory is inside a shared folder on VirtualBox, it will not work.
- For each of these, explore the results and collect the following information to provide in your write up:
- How many items parsed?
- What is the average number of parses per parsed item?
- How many parses did the most ambiguous item receive?
- What sources of ambiguity can you identify?
- For 4 items, do any of the parses look reasonable in the semantics?
Create a small testsuite for your additional phenomena
Add examples to your testsuite, according to
the general instructions for
testsuites and the formatting
instructions, illustrating the phenomena you worked on above. The
testsuite should have both positive and negative examples for each of
the phenomena you work on in this section, but it doesn't need to be
exhaustive (since we're working with test corpora this year). I expect
these testsuites to have about 20-30 examples total by the end of this
week, though you can do more if you find that useful. All examples
should be simple enough that your grammar can either parse them or
fail to parse them because of the one thing that's wrong with them.
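For concreteness, a single testsuite entry might look roughly like the following. The orth and orth-seg line types are the ones the make_item script reads (see the notes on make_item below); the other field names shown here are illustrative placeholders only, so follow the formatting instructions linked above for the exact required set:

```
orth: the sentence in your language's standard orthography
orth-seg: the same sentence with morpheme boundaries marked
gloss: a morpheme-by-morpheme gloss
translat: a free translation into English
```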
Create a test suite skeleton
- Make a subdirectory called lab5 inside
tsdb/skeletons for your test suite.
- Edit tsdb/skeletons/Index.lisp to include a line for this
directory, e.g.:
(
((:path . "matrix") (:content . "matrix: A test suite created automatically from the test sentences given in the Grammar Matrix questionnaire."))
((:path . "corpus") (:content . "IGT provided by the linguist"))
((:path . "lab5") (:content . "Test suite collected for Lab 5."))
)
- Download the python script make_item, make
sure it is executable, and run it on your test suite:
make_item testsuite.txt
Notes on make_item:
- This script is going to be pretty picky about the format
of your test suite. If you have questions, please post to Canvas (10 minute rule!).
- It requires python3, which is on the current version of the Ubuntu+LKB appliance.
- Alternatively, you can copy your testsuite and make_item over to patas and run them there, or install python3 (from http://python.org/download) on your host OS (Mac or Windows) and run make_item outside VirtualBox.
- If the above command is successful,
testsuite.txt.item
would be created in the working directory. If the testsuite contains errors, a lot of output may appear on stderr. It may be useful to redirect this into a file that you can use to go through
and correct the errors one at a time. For example:
./make_item testsuite.txt item 2>errs
The command just above attempts to create 'item' in the working directory, and stderr messages are redirected to the file 'errs'.
make_item contains a default mapping from testsuite line types into particular fields of the [incr tsdb()] item file. The default mapping puts 'orth' into 'i-input', the field which is the input to the grammar. If your grammar targets a different testsuite line, override the default mapping with the -m/--map option.
./make_item --map orth-seg i-input testsuite.txt item
The invocation above maps the orth-seg line into the input field.
You can run make_item with -h/--help to see a summary of the options.
- Copy the .item file which is output by make_item
to tsdb/skeletons/lab5/item.
- Copy tsdb/skeletons/Relations to tsdb/skeletons/lab5/relations (notice the change from R to r).
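The file-copying steps above can be sketched in the shell as follows (paths are relative to your grammar directory; the first two commands only create placeholder inputs so that the sketch runs standalone — in practice, testsuite.txt.item is produced by make_item and Relations already ships with your grammar):

```shell
# Placeholder inputs so this sketch is self-contained; in a real grammar,
# testsuite.txt.item comes from make_item and Relations already exists.
mkdir -p tsdb/skeletons
touch testsuite.txt.item tsdb/skeletons/Relations

# The actual setup steps: make the lab5 skeleton directory and copy in
# the item and relations files (note the lowercase 'relations').
mkdir -p tsdb/skeletons/lab5
cp testsuite.txt.item tsdb/skeletons/lab5/item
cp tsdb/skeletons/Relations tsdb/skeletons/lab5/relations
```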
Improve the choices file for your selected phenomena
For the phenomena you chose, refine the choices file by hand. Please be sure to post lots of questions on Canvas as you work on this!
Make sure you can parse individual sentences
Once you have created your starter grammar (or each time you
create one, as you should iterate through grammar creation and
testing a few times as you refine your choices), try it out on a
couple of sentences interactively to see if it works:
- Load the grammar into the LKB.
- Using the parse dialog box (or 'C-c p' in emacs to get the parse
command inserted at your prompt), enter a sentence to parse.
- Examine the results. If it does parse, check out the semantics (pop-up menu on the little trees). If it doesn't, look at the parse chart to see why not.
- Problems with lexical rules and lexical entries often become apparent here, too: If the LKB can't find an analysis for one of your words, it will say so, and (obviously) fail to parse the sentence.
Note that the questionnaire has a section for test sentences. If
you use this, then the parse dialog will be pre-filled with your test sentences.
Run both the test corpus and the testsuite
Following the same procedure as usual, do test runs over both the testsuite and the test corpus.
Again, collect the following information to provide in your write up:
- How many items parsed?
- What is the average number of parses per parsed item?
- How many parses did the most ambiguous item receive?
- What sources of ambiguity can you identify?
- For 4 newly parsing or otherwise fixed items, do any of the parses look reasonable in the semantics?
Write up
NB: While the test suite and choices file creation
is joint work, the write up should be done by one partner (the
other will get a turn next week). The writing partner should
have the non-writing partner review the write up and make suggestions.
Your write up should be a plain text file (not .doc, .rtf or .pdf)
which includes the following:
- Your answers to the questions about the initial and final [incr tsdb()] runs, for both test corpus and test suite, repeated here:
- How many items parsed?
- What is the average number of parses per parsed item?
- How many parses did the most ambiguous item receive?
- What sources of ambiguity can you identify?
- For 4 items, do any of the parses look reasonable in the semantics?
- Documentation of the phenomena you have added to your testsuite,
illustrated with examples from the testsuite.
- Documentation of the choices you made in the customization
system, illustrated with examples from your test suite.
- This can be interleaved with the documentation of the phenomena
(so you describe each phenomenon and then the choices you used to add
an analysis of it to the grammar), but the documentation of the phenomenon and choices should be logically separate. Here's an example of what this should look like.
- Descriptions of any properties of your language illustrated
in your test suite but not covered by your starter grammar and/or
the customization system.
- If you have identified ways (other than those you already reported) in which the automatically created choices file is particularly off-base, please report them here. If you can include IGT from the testsuite or your descriptive materials illustrating the problem, that is even better.