A Rule-Based Style and Grammar Checker


Diploma Thesis, 2003

76 Pages


Excerpt


Table of Contents

1 Introduction

2 Theoretical Background
2.1 Part-of-Speech Tagging
2.2 Phrase Chunking
2.3 Grammar Checking
2.3.1 Grammar Errors
2.3.2 Sentence Boundary Detection
2.4 Controlled Language Checking
2.5 Style Checking
2.6 False Friends
2.7 Evaluation with Corpora
2.7.1 British National Corpus
2.7.2 Mailing List Error Corpus
2.7.3 Internet Search Engines
2.8 Related Projects
2.8.1 Ispell and Aspell
2.8.2 Style and Diction
2.8.3 EasyEnglish
2.8.4 Critique
2.8.5 CLAWS as a Grammar Checker
2.8.6 GramCheck
2.8.7 Park et al.’s Grammar Checker
2.8.8 FLAG

3 Design and Implementation
3.1 Class Hierarchy
3.2 File and Directory Structure
3.3 Installation
3.3.1 Requirements
3.3.2 Step-by-Step Installation Guide
3.4 Spell Checking
3.5 Part-of-Speech Tagging
3.5.1 Constraint-Based Extension
3.5.2 Using the Tagger on the Command Line
3.5.3 Using the Tagger in Python Code
3.5.4 Test Cases
3.6 Phrase Chunking
3.7 Sentence Boundary Detection
3.8 Grammar Checking
3.8.1 Rule Features
3.8.2 Rule Development
3.8.3 Testing New Rules
3.8.4 Example: Of cause Typo Rule
3.8.5 Example: Subject-Verb Agreement Rule
3.8.6 Checks Outside the Rule System
3.9 Style Checking
3.10 Language Independence
3.11 Graphical User Interfaces
3.11.1 Communication between Frontend and Backend
3.11.2 Integration into KWord
3.11.3 Web Frontend
3.12 Unit Testing

4 Evaluation Results
4.1 Part-of-Speech Tagger
4.2 Sentence Boundary Detection
4.3 Style and Grammar Checker
4.3.1 British National Corpus
4.3.2 Mailing List Errors Corpus
4.3.3 Performance

5 Conclusion

6 Acknowledgments

7 Bibliography

A Appendix
A.1 List of Collected Errors
A.1.1 Document Type Definition
A.1.2 Agreement Errors
A.1.3 Missing Words
A.1.4 Extra Words
A.1.5 Wrong Words
A.1.6 Confusion of Similar Words
A.1.7 Wrong Word Order
A.1.8 Comma Errors
A.1.9 Whitespace Errors
A.2 Error Rules
A.2.1 Document Type Definition
A.2.2 Grammar Error Rules
A.2.3 Style/Word Rules
A.2.4 English/German False Friends
A.3 Penn Treebank Tag Set to BNC Tag Set Mapping
A.4 BNC Tag Set
A.4.1 List of C5 Tags
A.4.2 C7 to C5 Tag Set Mapping

1 Introduction

The aim of this thesis is to develop an Open Source style and grammar checker for the English language. Although all major Open Source word processors offer spell checking, none of them offer a style and grammar checker feature. Such a feature is not available as a separate free program either. Thus the result of this thesis will be a free program which can be used both as a stand-alone style and grammar checker and as an integrated part of a word processor.

The style and grammar checker described in this thesis takes a text and returns a list of possible errors. To detect errors, each word of the text is assigned its part-of-speech tag and each sentence is split into chunks, e.g. noun phrases. The text is then matched against all of the checker’s pre-defined error rules. If a rule matches, the text is assumed to contain an error at the position of the match. The rules describe errors as patterns of words, part-of-speech tags and chunks. Each rule also includes an explanation of the error, which is shown to the user.
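To illustrate the principle, the following minimal sketch in Python (the language the checker is implemented in) matches one error rule, expressed as a pattern of words and part-of-speech tags, against a tagged sentence. The rule format and the tag names are simplified for this illustration and are not the checker’s actual data structures.

# Minimal sketch: matching one error rule against a POS-tagged sentence.
# The rule format and the tags are simplified placeholders.

def match_rule(pattern, tagged_sentence):
    """Return the start positions at which the pattern matches.

    pattern: list of (word or None, tag or None) pairs; None is a wildcard.
    tagged_sentence: list of (word, tag) pairs.
    """
    matches = []
    for start in range(len(tagged_sentence) - len(pattern) + 1):
        for offset, (p_word, p_tag) in enumerate(pattern):
            word, tag = tagged_sentence[start + offset]
            if p_word is not None and p_word != word.lower():
                break
            if p_tag is not None and p_tag != tag:
                break
        else:                       # every pattern element matched
            matches.append(start)
    return matches

# Hypothetical rule for the "of cause" typo (cf. section 3.8.4).
rule = [("of", None), ("cause", None)]
sentence = [("Of", "IN"), ("cause", "NN"), ("I", "PP"), ("agree", "VB")]
print(match_rule(rule, sentence))   # [0]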

The software will be based on the system I developed previously [Naber]. The existing style and grammar checker and the part-of-speech tagger which it requires will be re-implemented in Python. The rule system will be made more powerful so that it can be used to express rules which describe errors on the phrase level, not just on the word level. The integration into word processors will be improved so that errors can be detected on-the-fly, i.e. during text input. For many errors the software will offer a correction which can be used to replace the incorrect text with a single mouse click.

The system’s rule-based approach is simple enough to enable users to write their own rules, yet it is powerful enough to catch many typical errors. Most rules are expressed in a simple XML format which not only describes the errors but also contains a helpful error message and example sentences. Errors which are too complicated to be expressed by rules in the XML file can be detected by rules written in Python. These rules can also easily be added and do not require any modification of the existing source code.
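The exact XML format is documented in section A.2. Purely for illustration, the following fragment shows the general shape such a rule might take and how it could be read with Python’s standard library; the element and attribute names here are invented for this sketch, not the checker’s actual schema.

import xml.etree.ElementTree as ET

# Hypothetical rule file in an invented format; the checker's real
# XML schema is defined in section A.2.
RULES_XML = """
<rules>
  <rule id="OF_CAUSE">
    <pattern>of cause</pattern>
    <message>Did you mean "of course"?</message>
    <example type="incorrect">Of cause I agree.</example>
  </rule>
</rules>
"""

for rule in ET.fromstring(RULES_XML).findall("rule"):
    print(rule.get("id"), "->", rule.findtext("message"))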

An error corpus will be assembled which will be used to test the software with real errors. The errors will be collected mostly from mailing lists and websites. The errors will be categorized and formatted as XML. Compared to the previous version, many new rules will be added which detect typical errors found in the error corpus.

To make sure that the software does not report too many errors for correct text it will also be tested with the British National Corpus (BNC). The parts of the BNC which were taken from published texts are supposed to contain only very few grammar errors and thus should produce very few warning messages when checked with this software.

There have been several scientific projects working on style and grammar checking (see section 2.8), but none of them are publicly available. This thesis and the software are available as Open Source at http://www.danielnaber.de/languagetool/.

2 Theoretical Background

Style and grammar checking is useful for the same reason that spell checking is useful: it helps people to write documents with fewer errors, i.e. better documents. Of course the style and grammar checker needs to fulfill some requirements to be useful:

It should be fast, i.e. fast enough for interactive use.

It should be well integrated into an existing word processor.

It should not complain too often about sentences which are in fact correct.

It should be possible to adapt it to personal needs.

And finally: it should be as complete as possible, i.e. it should find most errors in a text.

The many different kinds of errors which may appear in written text can be categorized in several different ways. For the purpose of this thesis I propose the following four categories:

Spelling errors: This is defined as an error which can be found by common spell checker software. Spell checkers simply compare the words of a text with a large list of known words. If a word is not in the list, it is considered incorrect. Similar words will then be suggested as alternatives.

Example: *Gemran1 (Ispell will suggest, among others, German)
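This lookup-and-suggest process can be sketched in a few lines of Python. The tiny word list and the use of a generic string similarity measure for the suggestions are assumptions made for the illustration; real spell checkers like Ispell use large dictionaries plus affix rules.

import difflib

# Toy word list; real spell checkers use dictionaries with many
# thousands of entries plus affix rules.
KNOWN_WORDS = {"german", "germane", "grammar", "the", "checker"}

def check_word(word):
    """Return None if the word is known, else a list of suggestions."""
    if word.lower() in KNOWN_WORDS:
        return None
    # Suggest the most similar known words.
    return difflib.get_close_matches(word.lower(), KNOWN_WORDS, n=3)

print(check_word("Gemran"))   # ['german', 'germane']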

Grammar errors: An error which causes a sentence not to comply with the English grammar rules. Unlike spell checking, grammar checking needs to make use of context information, so that it can find an error like this:

*Harry Potter bigger then than Titanic?2

Whether this error is caused by a typo or whether it is caused by a misunderstanding of the words then and than in the writer’s mind usually cannot be decided. This error cannot be found by a spell checker because both then and than are regular words. Since the use of then is clearly wrong here, this is considered a grammar error.

Grammar errors can be divided into structural and non-structural errors. Structural errors are those which can only be corrected by inserting, deleting, or moving one or more words. Non-structural errors are those which can be corrected by replacing an existing word with a different one.

Style errors: Using uncommon words and complicated sentence structures makes a text more difficult to understand, which is seldom desired. These cases are thus considered style errors. Unlike with grammar errors, which cases should be classified as style errors depends heavily on the situation and text type. For example, personal communication via email among friends allows creative use of language, whereas technical documentation should not suffer from ambiguities. Configurability is even more important for style checking than for grammar checking.

Example: But it [= human reason] quickly discovers that, in this way, its labours must remain ever incomplete, because new questions never cease to present themselves; and thus it finds itself compelled to have recourse to principles which transcend the region of experience, while they are regarded by common sense without distrust.

This sentence stems from Kant’s Critique of Pure Reason. It is 48 words long and most people will agree that it is very difficult to understand. The reasons are its length, difficult vocabulary (like transcend), and use of double negation (without distrust). With today’s demand for easy-to-understand documents, this sentence can be considered to have a style problem.

Semantic errors: A sentence which contains incorrect information which is neither a style error, grammar error, nor a spelling error. Since extensive world-knowledge is required to recognize semantic errors, these errors usually cannot be detected automatically.

Example: MySQL is a great editor for programming!

This sentence is neither true nor false – it simply does not make sense, as MySQL is not an editor, but a database. This cannot be known, however, without extensive world knowledge. World knowledge is a form of context, too, but it is far beyond what software can understand today.

I will not make a distinction between errors and mistakes in this thesis; instead, I will simply use the term error for all parts of a text which can be considered incorrect or poorly written.

Grammar (or syntax) refers to a system of rules describing what correct sentences have to look like. These rules somehow exist in people’s minds, so that for the vast majority of sentences people can easily decide whether a sentence is correct or not. It is possible to make up corner cases which make the intuitive correct/incorrect decision quite difficult. Software might handle these cases by showing a descriptive warning message to the user instead of an error message. Warning messages are also useful for style errors, and for grammar errors if the grammar rule is known to also match some correct sentences.

2.1 Part-of-Speech Tagging

Part-of-speech tagging (POS tagging, or just tagging) is the task of assigning each word its POS tag. It is not strictly defined what POS tags exist, but the most common ones are noun, verb, determiner, adjective and adverb. Nouns can be further divided into singular and plural nouns, verbs can be divided into past tense verbs and present tense verbs and so on.

The more POS tags there are, the more difficult it becomes – especially for an algorithm – to find the right tag for a given occurrence of a word, since many words can have different POS tags depending on their context. In English, many words can be used both as nouns and as verbs, for example house (a building) and house (to provide shelter). Only about 11.5% of all words are ambiguous with regard to their POS tags, but since these are the more frequently occurring words, about 40% of the words in a text are usually ambiguous [Harabagiu]. POS tagging is thus a typical disambiguation problem: all possible tags of a word are known and the appropriate one for a given context needs to be chosen.

Even by simply selecting the POS tag which occurs most often for a given word – without taking context into account – one can assign 91% of the words their correct tag. Taggers which are mostly based on statistical analysis of large corpora have an accuracy of 95-97% [Brill, 1992]. Constraint Grammar claims to have an accuracy of more than 99% [Karlsson, 1995].

One has to be cautious with the interpretation of these percentages: some taggers give up on ambiguous cases and will return more than one POS tag for a word in a given context. The chance that at least one of the returned POS tags is correct is obviously higher if more than one POS tag is returned. In other words, one has to distinguish between recall and precision (see section 2.7).

In the following I will describe Qtag 3.1 [Mason] as an example of a purely probabilistic tagger. Qtag is written in Java3 and it is freely available for non-commercial use. The source code is not available, but the basic algorithm is described in [Tufis and Mason, 1998]. Qtag always looks at a part of the text which is three tokens wide. Each token is looked up in a special dictionary which was built using a tagged corpus. This way Qtag finds the token’s possible tags. If the token is not in the dictionary, a heuristic tries to guess the token’s possible POS tags by looking for common suffixes at the token’s end. If the token is in the dictionary, then for each possible tag two probabilities are calculated: the probability of the token having the specified tag, and the context probability that the tag is preceded and followed by the tags of the words to the left and to the right. A joint probability is then calculated and the POS tag with the highest joint probability is chosen. An example can be found in section 3.5.

Qtag itself can be used for any language; the only thing one needs is a large tagged corpus so that Qtag can learn the word/tag probabilities and the tag sequence frequencies. The Qtag program is only 20 KB in size.
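The disambiguation step just described can be illustrated with a small Python sketch. The probability tables hold toy values, and Qtag’s smoothing and its suffix heuristic for unknown tokens are omitted.

# Sketch of Qtag-style disambiguation: combine the lexical probability
# of a tag with its context probability and take the best joint value.
# All numbers are toy values.

P_TAG_GIVEN_TOKEN = {            # P(tag | token), from a tagged corpus
    ("house", "NN"): 0.8,
    ("house", "VB"): 0.2,
}

P_TAG_GIVEN_CONTEXT = {          # P(tag | previous tag, next tag)
    ("DT", "NN", "SENT"): 0.7,
    ("DT", "VB", "SENT"): 0.1,
}

def best_tag(token, prev_tag, next_tag, candidate_tags):
    def joint(tag):
        p_lexical = P_TAG_GIVEN_TOKEN.get((token, tag), 0.0)
        p_context = P_TAG_GIVEN_CONTEXT.get((prev_tag, tag, next_tag), 0.0)
        return p_lexical * p_context
    return max(candidate_tags, key=joint)

# In "the house <sentence end>", house is disambiguated to a noun:
print(best_tag("house", "DT", "SENT", ["NN", "VB"]))   # NN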

Constraint Grammar is an example of a purely rule-based tagger. It is not freely available, but it is described in [Karlsson, 1995]. Like Qtag it starts by assigning each word all its possible tags from a dictionary. The rules erase all tags which lead to illegal tag sequences. All the rules are hand-written, which made the development of the Constraint Grammar a time-consuming and difficult task. The result is a tagger which has an accuracy of more than 99%.

As developing rules manually is difficult, there have been attempts to learn the rules automatically (some papers are quoted in [Lindberg and Eineborg, 1999]). Brill’s tagger is such an attempt [Brill, 1992]. It also starts by assigning each token all possible tags. It then tags a tagged corpus by assigning each token from the corpus its most probable tag, without taking context into account. The assigned tags are compared to the real tags and each mistagging is counted. Brill’s tagger then tries to come up with rules (called patches) which repair these errors. A rule usually says something like “if a token has tag A, but it is followed by tag X, then make it tag B”.

With 71 of these automatically built rules the system reaches an error rate of 5.1% which corresponds to a recall of 94.9%. Although Brill states that this result is difficult to compare to the results of other publications, he concludes that his rule-based tagger offers a performance similar to probabilistic taggers. One advantage of rule-based taggers is their compact representation of knowledge – 71 rules against several thousand values required by a probabilistic tagger. With today’s computer power this has become less of a problem. But the smaller number of rules is also supposed to make enhancements to the system easier.
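Applying such a patch is mechanical, as the following sketch shows; the patch list is invented for the illustration and does not reproduce Brill’s actual learned rules.

# Sketch: applying Brill-style patches of the form "if a token has
# tag A but is followed by tag X, change it to tag B". The patch
# below is invented.

PATCHES = [
    # (current_tag, following_tag, new_tag)
    ("VB", "NN", "JJ"),
]

def apply_patches(tagged):
    tagged = list(tagged)
    for cur, following, new in PATCHES:
        for i in range(len(tagged) - 1):
            if tagged[i][1] == cur and tagged[i + 1][1] == following:
                tagged[i] = (tagged[i][0], new)
    return tagged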

Both probabilistic and rule-based taggers need additional knowledge to approach a recall of 100%. [Lindberg and Eineborg, 1999] report promising results from adding linguistic knowledge to their tagger. Probabilistic taggers are said to have some advantages over rule-based ones: they are language independent, and there is no need to manually code rules for them. A discussion of these alleged advantages can be found in [Kiss, 2002].

One common problem is the tagging of idioms and phrases. For example, New York should be tagged as a noun for most applications, not as a sequence of adjective, noun. This of course is easy to achieve for many cases when the tagger is trained with a corpus which has the appropriate markup for such phrases.

2.2 Phrase Chunking

Phrase Chunking is situated between POS tagging and a full-blown grammatical analysis: whereas POS tagging only works on the word level, and grammar analysis (i.e. parsing) is supposed to build a tree structure of a sentence, phrase chunking assigns a tag to word sequences of a sentence.

Typical chunks are noun phrase (NP) and verb phrase (VP). Noun phrases typically consist of determiners, adjectives and nouns or pronouns. Verb phrases can consist of a single verb or of an auxiliary verb plus infinitive. For example, the dog, the big dog, the big brown dog are all examples of noun phrases. As the list of adjectives can become infinitely long, noun phrases can theoretically grow without limit. However, what is called noun phrase here is just an example, and just like in POS tagging everybody can make up his own chunk names and their meanings. The chunks found by a chunker do not necessarily need to cover the complete text – with only noun and verb phrases, as usually defined, this is not possible anyway.

Chunking works on a POS tagged text just like POS tagging works on words: either there are hand- written rules that describe which POS tag sequences build which chunks, or a probabilistic chunker is trained on a POS tagged and chunked text. These methods can be combined by transferring the knowledge of a probabilistic chunker to rules.
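A single hand-written chunk rule of this kind can be sketched as follows. The NP definition (an optional determiner, any adjectives, one or more nouns) and the Penn-Treebank-style tags are simplifications made for this example.

# Sketch of one hand-written chunk rule:
# NP := optional determiner, any adjectives, one or more nouns.

def chunk_np(tagged_sentence):
    """Return (start, end) index spans of simple noun phrases."""
    spans = []
    i, n = 0, len(tagged_sentence)
    while i < n:
        start = i
        if tagged_sentence[i][1] == "DT":
            i += 1
        while i < n and tagged_sentence[i][1] == "JJ":
            i += 1
        nouns = 0
        while i < n and tagged_sentence[i][1] in ("NN", "NNS"):
            i += 1
            nouns += 1
        if nouns:
            spans.append((start, i))
        elif start == i:
            i += 1              # no NP starts here, move on
        else:
            i = start + 1       # determiner/adjectives without a noun
    return spans

sentence = [("the", "DT"), ("big", "JJ"), ("brown", "JJ"),
            ("dog", "NN"), ("barks", "VBZ")]
print(chunk_np(sentence))   # [(0, 4)]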

As chunking requires a POS tagged text, its accuracy cannot be better than that of the POS tagger used. This is backed by the fact that even the best chunker listed on [Chunking] reaches a precision and recall of 94%, which is less than an average tagger can achieve. [Chunking] also lists many papers about chunking.

2.3 Grammar Checking

It turns out there are basically three ways to implement a grammar checker. I will refer to them with the following terms:

Syntax-based checking, as described in [Jensen et al, 1993]. In this approach, a text is completely parsed, i.e. the sentences are analyzed and each sentence is assigned a tree structure. The text is considered incorrect if the parsing does not succeed.

Statistics-based checking, as described in [Attwell, 1987]. In this approach, a POS-annotated corpus is used to build a list of POS tag sequences. Some sequences will be very common (for example determiner, adjective, noun as in the old man), others will probably not occur at all (for example determiner, determiner, adjective). Sequences which occur often in the corpus can be considered correct in other texts, too; uncommon sequences might be errors (a minimal sketch of this idea follows this list).

Rule-based checking, as it is used in this project. In this approach, a set of rules is matched against a text which has at least been POS tagged. This approach is similar to the statistics-based approach, but all the rules are developed manually.
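A minimal sketch of the statistics-based idea from the list above, assuming a trigram frequency table and a threshold that are invented for this example:

# Sketch of the statistics-based approach: flag POS tag trigrams that
# were rare or unseen in a training corpus. The frequencies and the
# threshold below are invented.

TRIGRAM_FREQ = {
    ("DT", "JJ", "NN"): 10543,   # "the old man" - very common
    ("DT", "DT", "JJ"): 0,       # "the a old" - does not occur
}

THRESHOLD = 5

def find_suspicious(tags):
    suspicious = []
    for i in range(len(tags) - 2):
        trigram = tuple(tags[i:i + 3])
        if TRIGRAM_FREQ.get(trigram, 0) < THRESHOLD:
            suspicious.append((i, trigram))
    return suspicious

print(find_suspicious(["DT", "DT", "JJ", "NN"]))   # [(0, ('DT', 'DT', 'JJ'))]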

The advantage of the syntax-based approach is that the grammar checking is always complete if the grammar itself is complete, i.e. the checker will detect any incorrect sentence, no matter how obscure the error is. Unfortunately, the checker will only recognize that the sentence is incorrect, it will not be able to tell the user what exactly the problem is. For this, extra rules are necessary that also parse ill-formed sentences. If a sentence can only be parsed with such an extra rule, it is incorrect. This technique is called constraint relaxation.

However, there is a major problem with the syntax-based approach: it requires a complete grammar which covers all types of texts one wants to check. Although there are many grammar theories, there is still no robust broad-coverage parser publicly available today. Also, parsers suffer from natural language ambiguities, so that usually more than one result is returned even for correct sentences.

Statistics-based parsers, on the other hand, bear the risk that their results are difficult to interpret: if the system produces a false alarm, the user will wonder why his input is considered incorrect, as there is no specific error message. Even developers would need access to the corpus on which the system was trained in order to understand the system’s judgment. Another problem is that someone has to set a threshold which separates the uncommon but correct constructs from the uncommon and incorrect ones. Surely this task could be passed on to the user, who would have to set some value between, say, 0 and 100. The idea of a threshold does however not really comply with the perception that sentences are – besides questions of style and constructed corner cases – usually either correct or incorrect.

Due to these problems with the other approaches, a strictly rule-based system will be developed in this thesis. Unlike a syntax-based checker, a rule-based checker will never be complete, i.e. there will always be errors it does not find. On the other hand, it has many advantages:

A sentence does not have to be complete to be checked, instead the software can check the text while it is being typed and give immediate feedback.

It is easy to configure, as each rule has an expressive description and can be turned on and off individually.

It can offer detailed error messages with helpful comments, even explaining grammar rules.

It is easily extendable by its users, as the rule system is easy to understand, at least for many simple but common error cases.

It can be built incrementally, starting with just one rule and then extending it rule by rule.

2.3.1 Grammar Errors

The number of grammar rules is extensive, even for a rather simple language like English [English G, 1981]. I will describe only a few of these grammar rules. Although English will be used for all example sentences, similar rules exist in other languages, too. The grammar rules described here are based on sentences from the corpus which violate these rules (see section A.1).

Subject-Verb Agreement In English, subject and verb have to agree with respect to number and person. For example, in *They is my favourite Canadian authors4, subject and verb disagree in number (they = plural, is = singular). In *He am running for president, subject and verb disagree in person (he = third person, am = first person of to be).

This of course is a rather simple case. Taking the perspective of a rule-based checker, which interprets the text as a sequence of tokens with POS tags, there are several special cases:

1. Subject and verb are separated, i.e. the verb does not occur directly after the subject: * The characters in Shakespeare’s Twelfth Night lives in a world that has been turned upside-down.
2. The subject can be a compound subject: *Christie and Prin is characters from Laurence’s The Diviners.
3. Book titles are singular: * Salman Rushdie’s Midnight’s Children are my favourite novel.

Agreement between Indefinite Article and the Following Word If the indefinite article is followed by a word whose pronunciation starts with a vowel sound, an has to be used instead of a. Software can guess a word’s pronunciation by looking at its first letter. If it is one of a, e, i, o, u, the word probably starts with a vowel – but there are exceptions. Here are some examples where the a,e,i,o,u rule applies, together with the correct indefinite article (a simple version of this check is sketched after the exceptions below):

a test, a car, a long talk

an idea, an uninteresting speech, an earthquake

Here are some exceptions:

a university, a European initiative

an hour, an honor
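Purely as an illustration, such a check might look as follows in Python; the exception lists are deliberately tiny, and a real implementation would need many more entries or pronunciation data.

# Sketch of the a/an heuristic: the first letter decides, with
# exception lists for words whose spelling and pronunciation
# disagree. The lists are deliberately tiny.

NEEDS_A = {"university", "european", "one"}   # vowel letter, consonant sound
NEEDS_AN = {"hour", "honor", "heir"}          # consonant letter, vowel sound

def indefinite_article(word):
    w = word.lower()
    if w in NEEDS_A:
        return "a"
    if w in NEEDS_AN:
        return "an"
    return "an" if w[0] in "aeiou" else "a"

for word in ("test", "idea", "university", "hour"):
    print(indefinite_article(word), word)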

Tag questions (..., isn’t it? etc) A tag question is often used in spoken language to obtain affirmation for a given statement. It is built by attaching a negated form of an auxiliary verb and the sentence’s subject to the end of the sentence. For example, It’s warm today becomes It’s warm today, isn’t it?. When the verb is already negated, it has to be attached in its non-negated form, as in It wasn’t very difficult, was it?

These tag questions are also used in email communication. For native German speakers who are not yet proficient in English they are difficult to master, as their German equivalent is much easier to use – one can just attach ..., oder? to the end of the sentence, no matter what subject and verb are used. Sometimes this is incorrectly translated directly into English, i.e. ..., or? is attached to a sentence.

Other Errors Many other errors are technically grammar errors, but are caused by a typo. Often the error suggests that it was caused by editing existing text but missing some words:

* Someone suggested said that it worked for him after he updated to Kernel 2.4.20.

The author of this sentence obviously wanted to replace said by suggested but then forgot to delete said (or vice versa).

Often similar words are mixed up and it is not possible to tell if the writer has made a typo or if he is not aware of the difference:

* Than my old email is nonsense.

Not surprisingly, the confusion between than and then also happens vice versa:

* It’s less controversial then one would think.

2.3.2 Sentence Boundary Detection

Grammaticality refers to sentences. One could argue that the following two sentences have a grammar error, because there is no agreement between the proper noun Peter and the personal pronoun she:

Peter leaves his house. She feels good.

We will not take such cases into account and simply define that this is an error on the semantic level. Instead we will focus on the question of what a sentence is. Human readers have an intuitive understanding of where a sentence starts and where it ends, but it is not that simple for computers. Just splitting a string at all the places where a period occurs is not enough, as the following artificial sentences show:

This is a sentence by Mr. Smith from the U.S.A. This is another sentence... A third one; using a semicolon? And yet another one, containing the number 15.45. “Here we go!”, he said.

Abbreviations, numbers and indirect speech are the problems here which make the task non-trivial for computers. [Walker et al, 2001] have evaluated three approaches to sentence boundary detection. Their focus is on automatic translation systems which require sentence boundary detection on their input. The three approaches are:

The direct implementation into a translation system, without using a higher level of description than the words and punctuation itself. This system uses an abbreviation lexicon. The implementation is inspired by regular expressions, but everything is directly implemented in a given programming language.

The rule-based representation of sentences as regular expressions. The regular expressions allow encoding the necessary knowledge in a declarative way (a rough sketch follows this list). Although it is not explicitly mentioned, this method seems to use an abbreviation lexicon, too.

The application of a machine learning algorithm which is trained on a tagged corpus. The algorithm weights several features of a potential sentence boundary like capitalization and occurrence in the abbreviation lexicon.
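As a rough illustration of the second approach, the following sketch splits after sentence-final punctuation that is followed by whitespace and a capital letter, unless the preceding token is in a small abbreviation lexicon. This is only a toy version of the idea, not the implementation evaluated in [Walker et al, 2001].

import re

# Sketch: split after ., ! or ? followed by whitespace and a capital
# letter, unless the token in front of it is a known abbreviation.
ABBREVIATIONS = {"Mr.", "Mrs.", "Dr.", "U.S.A.", "etc."}

def split_sentences(text):
    sentences = []
    start = 0
    for m in re.finditer(r'[.!?](?=\s+[A-Z"])', text):
        last_token = text[start:m.end()].split()[-1]
        if last_token in ABBREVIATIONS:
            continue
        sentences.append(text[start:m.end()].strip())
        start = m.end()
    rest = text[start:].strip()
    if rest:
        sentences.append(rest)
    return sentences

print(split_sentences("This is a sentence by Mr. Smith. Here is another."))
# ['This is a sentence by Mr. Smith.', 'Here is another.']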

The evaluation by [Walker et al, 2001] shows that the machine learning algorithm offers both a precision and a recall of about 98%. The method based on regular expressions yields about 96% precision and recall. The direct implementation reaches 98% precision but only 86% recall.

So even with the best sentence boundary detection, a style and grammar checker must still cope with an error rate of about 2% in the sentence boundaries. This is a serious problem for a parser which is supposed to work on those incorrectly chunked sentences, because any incorrectly added sentence boundary and any missed sentence boundary will almost always lead to unparseable input. For a rule-based system this is less of a problem: only rules which explicitly refer to sentence boundaries will be affected. These rules might incorrectly be triggered when a sentence boundary was incorrectly added, and they might remain untriggered when a sentence boundary was missed by the sentence boundary detection.

2.4 Controlled Language Checking

A controlled language is a natural language which is restricted by rules aiming at making the language simpler and thus easier to understand [Wojcik and Hoard, 1996]. In other words, a controlled language is a subset of a natural language. The most important aspect in developing a controlled language is to avoid ambiguity on all linguistic levels. A language without ambiguity has two important advantages over the common natural languages:

It is easier to understand, especially for people who are not native speakers of the original natural language. Even for native speakers the chance of misunderstandings is reduced. This not only makes reading documents easier, it can also be vitally important, for example in documents which describe the maintenance of airplane engines. In fact, AECMA Simplified English [AECMA] is used by the aerospace industry for exactly that purpose.

It is easier for a computer to parse, and thus easier to translate automatically. This promises great savings in the translation process, which is a difficult and expensive task when done completely manually.

The creation of controlled language documents can be supported by software which detects usage of unapproved terms and language constructs. [R.-Bustamente et al, 2000] describes the coverage of some of these controlled language checkers and notes that they have many goals in common with style and grammar checkers. The main difference is that style and grammar checkers work on unrestricted text and have to cope with unknown words and complicated sentences, whereas controlled language checkers work on restricted texts with a limited vocabulary of approved words. To keep word lists at a manageable size there is usually a rule which allows the use of product names even if they are not explicitly listed in the controlled language dictionary. There is no generic dictionary for everybody using controlled languages; instead, every industry or even every company will need its own dictionary.

In the context of this thesis the interesting question is to what extent a style and grammar checker for a natural language can be used to check controlled language documents. Typical restrictions for controlled languages might look like this:

Lexical restrictions: for example, a rule might say: use try only as a verb, not as a noun. This implies a semantic restriction, but since it is expressed as a syntactic restriction it can be found with a rule which triggers an error message when it encounters try as a noun.

Grammar restrictions: for example, rule 5.4 of AECMA says: In an instruction, write the verb in the imperative (”commanding”) form. Two example sentences are given in the AECMA documentation:

Approved: Remove oil and grease with a degreasing agent.

Not approved: Oil and grease are to be removed with a degreasing agent.

This might be implemented in a grammar checker by rules which forbid the use of phrases like is to and are to followed by the passive of verbs like remove, continue, set.

Semantic restrictions: for example, a rule might say: use noise only with its meaning unwanted sound, not as electronic interference (example taken from [Holmback et al, 2000, p. 125]). A style and grammar checker can only discover wrong usages like this if it has an integrated word disambiguation component. Alternatively, the checker can simply warn on every occurrence of noise, regardless of its meaning in the given context. This might annoy users if too many warnings are unjustified. If this specific test can be turned off, however, the warning might also be perceived as a handy reminder.

Style restrictions: a rule might demand keeping sentences shorter than 20 words and sticking to one topic per sentence. The length restriction is relatively easy to check, even without a grammar checker. It is enough to count the whitespace-delimited terms in a sentence. There may be special cases like words with hyphens where it is not clear if they should be counted as one or as two words. However, this is not important, as the exact maximum number of words does not matter that much – the message is: keep your sentences short. The remaining question is where a sentence starts and where it ends. Assuming that controlled language is mostly used for technical documentation and that this documentation is marked up using XML nowadays, it should be easy to distinguish the different periods. For example, an algorithm might assume that a sentence ends whenever a period occurs, unless it is marked up as an abbreviation like <abbr>etc.</abbr>.
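The length check itself is then almost trivial; the sketch below counts whitespace-delimited terms per sentence, with the 20-word limit taken from the rule above and sentence splitting assumed to have happened already.

# Sketch of the sentence-length restriction: count whitespace-
# delimited terms and warn when the limit is exceeded.
MAX_WORDS = 20

def check_length(sentences):
    warnings = []
    for i, sentence in enumerate(sentences):
        n = len(sentence.split())
        if n > MAX_WORDS:
            warnings.append((i, "sentence has %d words, limit is %d"
                                % (n, MAX_WORDS)))
    return warnings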

So it seems to be possible to implement many controlled language restrictions with a style and grammar checker for a natural language. Considering that a controlled language is a subset of a natural language, this is not surprising. Still it makes sense to develop checkers specialized for controlled language checking. The very idea of controlled languages is to use simple grammar, maybe so simple that it is possible to completely analyze each sentence automatically. With this analysis it is possible to implement more complicated restrictions than with a rule-based checker which works on shallowly analyzed text.

2.5 Style Checking

As mentioned, natural language grammar provides a clear distinction between sentences which are correct and those which are not. This is not the case for language style. Every writer has to make his own decision on what style is preferred in which context. This might easily lead to an incoherent style when a document is worked on collaboratively by several people. Style checking is especially useful in these situations, as it reminds writers of a style which was decided on before. Restrictions and rules about style might cover a broad range of things:

Choice of words: The use of foreign words might be disapproved because not everybody understands them easily. The style checker can then suggest a non-foreign replacement with the same or a very similar meaning. For example, terra incognita might be replaced by unknown territory. Similarly, it might suggest replacing words which are very uncommon by common synonyms, even if the uncommon word is not a foreign word. For example, the noun automobile, which occurs 230 times in the BNC, might be replaced with the noun car, which occurs about 27,000 times in the BNC5.

Simplicity: The length of a sentence might be limited, leading to simpler sentence structures which are easier to understand.

Punctuation: After a punctuation mark a space character is inserted, but no space is inserted before a punctuation mark (except the opening quote character and the opening parenthesis).

Dates, times and numbers should be written in a certain format. For example, a date like 5/7/1999 is ambiguous to many people and it should probably be replaced by 1999-07-05 or 1999-05-076.

Contracted forms like don’t might be disapproved and could be replaced by do not.

Texts which repeat a given noun too often might sound strange. Instead, pronouns or synonyms should be used to refer to the noun. This rule clearly shows how much “good” style depends on the kind of document: using synonyms to make texts sound better is a common means used by journalists and it is taught in schools. In technical documentation, however, being clear and unambiguous has a higher priority.

Often the important aspect of style is not that some specific rule is chosen, but that one style – no matter which one – is used throughout the document or even throughout a collection of documents. For example, using different date formats in one document is confusing for the reader and looks unprofessional.

2.6 False Friends

A “false friend” is a pair of words from two different languages where both words are similarly written or pronounced but carry a different meaning. Because the word sounds so familiar, a learner of the language will be inclined to use it incorrectly. Here are some English/German word pairs which are false friends:

become – bekommen (become means werden)

actual – aktuell (actual means tatsächlich)

bald – bald (bald means glatzköpfig)

It is noteworthy that these words are just as misleading for native speakers of German who learn English as they are for native speakers of English who learn German. As languages often have common roots, the problem is sometimes not just between two languages. For example, sensible (en) / sensibel (de) is a false friend, as is sensible (en) / sensible (fr). But sensibel (de) / sensible (fr) is not a false friend, as the meaning is the same in German and French. This is shown in the following diagram. The arrows mark a false friend relation:

[Diagram not included in this excerpt. It relates sensible (en), sensibel (de) and sensible (fr) via two meanings: meaning 1: showing reason; meaning 2: having emotional sensibility.]

False friends are of interest in the context of this thesis because they can easily be found with a rule-based style and grammar checker. It is just as easy to suggest an alternative meaning to the user. Once the user is aware of the problem, he can turn off this specific rule so he will not get further warnings about this word – all other false friends will still show warnings until they are turned off, too.

2.7 Evaluation with Corpora

A grammar checker system can be evaluated with two common measures known from information retrieval: precision and recall. These are values between 0 and 1 (sometimes expressed as a percentage) which are defined as follows:

precision = (number of sentences correctly flagged as incorrect) / (number of all sentences flagged as incorrect)

In other words, precision measures how many of the sentences flagged as incorrect by the software are indeed erroneous.

recall = (number of errors found by the software) / (number of all errors in the text)

Recall measures how many of the errors in a text are found, i.e. how complete the software is. Obviously a system with precision = 1 and recall = 1 can be considered to work “perfectly”. In practice, there is usually a tradeoff between both.
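Expressed in code, with generic count variables that are not tied to any particular evaluation:

# Precision and recall from raw counts, as defined above.
def precision(correctly_flagged, wrongly_flagged):
    """Fraction of flagged errors that are real errors."""
    return correctly_flagged / (correctly_flagged + wrongly_flagged)

def recall(correctly_flagged, missed_errors):
    """Fraction of real errors that were flagged."""
    return correctly_flagged / (correctly_flagged + missed_errors)

print(precision(80, 20))   # 0.8
print(recall(80, 40))      # 0.666...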

POS taggers are usually written to be robust, i.e. they cannot miss a word like a grammar checker can miss an error. Instead the tagger can return more than one tag per word, so precision is sometimes measured by the average number of tags per word. Then recall is the probability that one of the returned tags is correct. As one can see, it is easy to develop a tagger with recall 1: it just needs to return all possible tags of a word – which obviously leads to the worst precision possible.

To develop and evaluate a style and grammar checker, one needs a corpus of unedited text, as this kind of text reflects the common input to a style and grammar checker. All errors (except for spelling errors) need to be marked up in this corpus. As such a corpus is not available, one can also work with two corpora: one corpus – which should be free of errors – is used to optimize the precision so that the system does not give false alarms too often. The other corpus – which contains only sentences with grammar errors – is used to optimize the recall so that many errors are found. Nevertheless, a meaningful recall/precision value must be measured with a single corpus; everything else would be too biased.

Some corpora which can be used for certain aspects of the grammar checker development will now be described.

2.7.1 British National Corpus

The British National Corpus [BNC] is a commercial corpus of British English which contains 100 million words. It was built from 1991 to 1994, so it contains texts from before 1991, but not from after 1994. About 90% of the words stem from written language, the rest stems from spoken language. The corpus contains a broad range of text types, e.g. fictional texts, technical documentation, newspaper articles etc. Because the texts are copyrighted, only parts of them (start, middle, or end) are part of the BNC.

All the BNC’s texts are SGML-formatted. The markup consists of the usual meta information like author, headline and date, and it encloses paragraphs, sentences and words. Each word is assigned one of 61 tags (see section A.4). Some collocations are taken as a single word, e.g. up to is tagged as one preposition.

As the part of the BNC which is based on written language comes mostly from published sources, one can assume that it contains very few grammar errors. Style is, as mentioned before, mostly a matter of definition, so one cannot make statements about the presumable number of ”style errors” in the BNC.

The BNC itself may not be given to people who do not have a BNC license, but its license explicitly has no restrictions on the use of the results of research with the BNC. So when developing a software system, one may use the BNC during development, but neither the BNC nor parts of it may be part of the resulting software.

Technically, the BNC comes as a set of compressed SGML data files and with software to query the data. However, this software has several drawbacks: it is a client/server system, and as the server (sarad) only works on Unix/Linux and the client (SARA) only works on Windows, it requires two computers even if there is only one user and there is no actual need for network access to the server. The Unix/Linux version of the server only comes with a very simple command line tool (solve) for querying. Furthermore, the server and this query tool are difficult to set up by today’s standards. The installation instructions on the web are partially out of date7.

The BNC can also be queried on the BNC web page at http://sara.natcorp.ox.ac.uk/lookup.html, even without a BNC license. The query language makes it possible, for example, to search for words which occur as a given part of speech and to use regular expressions. Unfortunately, the search facility is rather limited in other respects: a search for only POS tags without a specified word is not possible. A query like dog, which results in 7800 matches, might take 40 seconds, but only the first 50 matches are shown and there is no way to get the remaining matches. No POS annotations are displayed, and the matched words themselves are not highlighted, which makes scanning the result more difficult. During my tests, there were also several timeout and server errors, so that some queries were only answered after several tries or not at all.

As an alternative query system a software called [BNCweb] is available. It is web-based and requires a running BNC server and a MySQL database. According to its description on the web it seems to be feature-rich, but it is not available for free.

2.7.2 Mailing List Error Corpus

So far no corpus specialized in grammar errors is publicly available8. Because of the importance of an error corpus for optimizing the style and grammar checker, a new corpus was developed.

The corpus contains 224 errors found mainly on international public mailing lists used for Open Source software development9. Most messages discuss technical issues like programming or the software development process. Many of the writers on those mailing lists are not native speakers of English. However, as the native language of the writers is often not obvious, this information has not been recorded. Some sentences have been shortened when it was clear that the error is completely unrelated to the part which was removed. Other sentences have been slightly edited, e.g. to fix spelling errors which are not part of the grammar error. The distribution of errors is shown in the following table:

[Table not included in this excerpt: the distribution of the 224 collected errors over the error categories of section A.1.]

The corpus data is biased insofar as only errors are listed which I noticed during normal reading of messages. In other words, no systematic search was done, and only those errors are listed which I could intuitively recognize as such. A more sophisticated approach to building an error corpus is described in [Becker et al, 1999]. The complete corpus and a description of its XML file format can be found in section A.1.

2.7.3 Internet Search Engines

Finally, it is sometimes practical to use Internet search engines for finding given words or phrases. For example, the error corpus contains the following part of a sentence:

*But even if it’s looking fine, the is the problem that...

In this fragment, the is is obviously an erroneous phrase. One might consider a rule which suggests there is as a replacement whenever the is occurs. It is necessary to test if the is is really always incorrect. I will limit this section to Google, because it is the search engine which covers the largest number of pages. At the time of writing, the Google homepage claimed: “Searching 3,083,324,652 web pages”.

The query "the is" (note that the quotes are part of the query) used on Google returns 519,000 matches. Usually Google ignores very common words like the and who, but not in phrase queries, which are carried out when the search terms are surrounded by quotes. Some of the matches are:

1. What the !@#$ is this?
2. Jef Raskin - THE Is Not An Editor... So What Is It?
3. About the IS Associates
4. The Is Ought Problem
5. What the is this site for?

Here one can see several reasons why the is might occur in a text: Match 1 contains a sequence of special characters which Google does not consider a word, so it pretends the and is are subsequent words here. Matches 2, 3 and 4 only match because Google’s queries are always interpreted case-insensitively. THE and IS are names (probably acronyms) here and would not have been matched with a case-sensitive search. Finally, match 5 is an actual error, so the rule which says that the is is always wrong works correctly in this case. Unfortunately the huge number of matches prevents manual checks for each match, so it is not possible to use this result to prove that the rule is always correct. Still, the few matches which have been checked are a good indication that the rule is useful.

Obviously Google is not useful as a replacement for a real corpus like the BNC. Firstly, Google only knows about words, not about POS tags. Google does not even offer stemming, i.e. the term talks will be matched only when searching for talks, not when searching for talk or talking. Secondly, Google can be limited to search only pages in a given language, but it cannot be told to search only pages which have been written by native speakers of English. It also cannot limit its search to categories like “technical documentation” or “oral speech”10. Thirdly, the Google database is constantly changing. The example given above might not be exactly reproducible anymore when you read this.

The advantage of Google is its size of some thousands of millions of pages. Even if just 30% of these pages are written in English, this is still much more than the BNC has to offer. Nonetheless Google is extremely fast: the "the is" query returned its first ten matches in 1.49 seconds11. Furthermore, Google is up-to-date and contains modern vocabulary, whereas the BNC collection ends in 1994. For example, the BNC only contains two occurrences of World Wide Web.

2.8 Related Projects

2.8.1 Ispell and Aspell

Ispell [Kuenning] and Aspell [Atkinson] are both very popular Open Source spell checkers. Most Open Source word processors make use of these programs in one way or another. For example, KWord provides an integrated interface to Ispell and Aspell. OpenOffice.org comes with MySpell, which is based on Ispell. In both cases, the user does not notice that it is Ispell/Aspell which does the real work in the background.

Spell checkers compare each word of a text to their large lists of words. If the word is not in their list, it is considered incorrect. In other words, spell checking is a very simple process which does not know anything about grammar, style or context. It is mentioned here nonetheless, because it could be considered a subset of a complete grammar checker. The integration of a spell checker is described in section 3.4.

2.8.2 Style and Diction

Style and Diction are two classical Unix commands [Style/Diction]. Style takes a text file and calculates several readability measures like the Flesch Index, the fog index, the Kincaid score and others. It also counts words, questions and long sentences (more than 30 words by default). It is not an interactive command, but it allows specifying options to print, for example, all sentences containing nominalizations or sentences with passive voice. The complete matching sentences will then be printed, without further indication of where exactly the match is.

Diction takes a text and searches for certain words and phrases, for which it prints a comment. For example, the following text used as input for diction:

I thought you might find it interesting/useful, so I’m sending it to the list here.

...will produce the following result:

I thought you [might -> (do not confuse with "may")] find it [interesting -> Avoid using "interesting" when introducing something. Simply introduce it.] /useful, [so -> (do not use as intensifier)] I’m sending it to the list here.

As one can see at so, diction warns for every occurrence of certain words and gives a short statement about possible problems with the given phrase. Here are some more samples from the diction data file:

[Table not included in this excerpt: further sample entries from the diction data file.]

The diction data file for English contains 681 entries; its German data file contains 62 entries. Except for a few hardcoded exceptions like data is / data are, diction does not check grammar.

2.8.3 EasyEnglish

EasyEnglish is a grammar checker developed at IBM especially for non-native speakers. It is based on the English Slot Grammar. It finds errors by “exploring the parse tree expressed as a network” [Bernth, 2000]. The errors seem to be formalized as patterns that match the parse tree. Unfortunately [Bernth, 2000] does not explain what exactly happens if a sentence cannot be parsed and thus no complete tree can be built.

EasyEnglish can find wrong articles for countable/uncountable nouns (e.g. *an evidence) and missing subject-verb agreement amongst other mistakes. It has special rules for native speakers of Germanic languages and for native speakers of Japanese. For speakers of Germanic languages it checks for overuse of the progressive form (e.g. *I’m having children), wrong complement (e.g. *It allows to update the file instead of ...allows updating...) and false friends. There are other rules useful especially for speakers of Japanese. All rules are optional.

EasyEnglish does not seem to be publicly available, neither for free nor as a commercial tool.

2.8.4 Critique

Critique is a style and grammar checker that uses a broad-coverage grammar, the PLNLP English Grammar [Jensen et al, 1993, chapter 6]. It detects 25 different grammar errors from the following five categories: subject-verb agreement, wrong pronoun case (e.g. *between you and I instead of between you and me), wrong verb form (e.g. *seem to been instead of seem to be), punctuation, and confusion of similar words (e.g. you’re and your). Furthermore, Critique detects 85 different style weaknesses, like sentences that are too long, excessive complexity of noun phrase pre-modifiers, and inappropriate wording (e.g. short forms like don’t in formal texts).

Errors are shown to the user with the correct replacement if possible. The user can get an explanation of the problem, which is especially important for style issues for which the distinction between correct and incorrect is often not that simple. Critique can also display the parser’s result as a tree. All style and grammar checks can be turned on or off independently.

For each sentence, Critique first tries to analyze the complete sentence with its parser. If the parsing succeeds, the sentence is grammatically correct and it is then checked for style errors. The style errors are found by rules that work on the parsed text. If the parsing does not succeed on the first run, some rules are relaxed. If the sentence can then for example be parsed with a relaxed subject-verb agreement rule, a subject-verb agreement error is assumed. The style checking will then take place as usual.

Interestingly, the authors of [Jensen et al, 1993, chapter 6] suggest that ideally the parser should parse any sentence, even incorrect ones. This way all grammar and style checking could be done with the same component, namely rules that work on a (possibly partial) parse tree. This however is not possible, as a grammar with few constraints will lead to many different result trees and will thus become very slow. Still, some rules have been changed from being a condition in the grammar checker to being a style rule.

Critique does not seem to be publicly available.

2.8.5 CLAWS as a Grammar Checker

CLAWS (Constituent Likelihood Automatic Word-tagging System, [CLAWS]) is a probabilistic part-of-speech tagger. [Attwell, 1987] offers a suggestion on how to use such a tagger as a grammar checker. The idea is to get the POS tags of a word and its neighbors and then check how common these sequences are. If the most common combination of tags is still below a given threshold, an error is assumed. Possible corrections can then be built by substituting the incorrect word with similarly spelled words. The substituted word that leads to the most common POS tag sequence can then be suggested as a correction.

The advantage of this probabilistic approach is that it does not require hand-written rules. It can even detect errors which were not in the corpus originally used for training CLAWS. On the other hand, a threshold needs to be found that works best for a given text. Also, only errors which are reflected in the POS tags can be found. For example, sight and site are both nouns and thus this kind of probabilistic checker will not detect a confusion between them. Also, the error message cannot be accompanied by a helpful explanation.

CLAWS is available for a fee of £750. An online demo is available on its web site. The extension for detecting grammar errors does not seem to be available.

2.8.6 GramCheck

GramCheck is a style and grammar checker for Spanish and Greek, optimized for native speakers of those languages [R.-Bustamente, 1996, R.-Bustamente, 1996-02]. It is based on ALEP (Advanced Language Engineering Platform, [Sedlock, 1992]), a development environment for linguistic applications. ALEP offers a unification-based formalism and a graphical user interface that lets linguists develop new grammars.

GramCheck detects non-structural errors with a constraint relaxation technique. In its unification-based grammar, Prolog code tries to unify features. The code also builds the corrections, depending on which relaxation was necessary to parse the input text. Structural errors are detected by error patterns which are associated with a correction pattern.

GramCheck can detect errors in number and gender agreement, and incorrect omission or addition of some prepositions. It can detect style problems like overuse of the passive and of gerunds, and weaknesses in wording.

Neither GramCheck nor ALEP are publicly available.

2.8.7 Park et al.’s Grammar Checker

[Park et al, 1997] describe a grammar checker which is optimized for students who learn English as a second language. Students’ essays have been analyzed to find typical errors.

The checker is based on a Combinatory Categorial Grammar implemented in Prolog. In addition to the broad-coverage rules, error rules have been added that are applied if all regular rules fail. One of the error rules might for example parse a sentence like *He leave the office because the subject-verb agreement is relaxed in that rule. All error rules return a short error message which is displayed to the user. No correction is offered, as this would degrade the learning effect. The user interface is a text field on a web page in which the text can be typed and submitted.

The checker detects missing sentence fragments, extra elements, agreement errors, wrong verb forms and wrong capitalization at the beginning of a sentence. It is not publicly available.

2.8.8 FLAG

FLAG (Flexible Language and Grammar Checking, [Bredenkamp et al, 2000, Crysmann]) is a platform for the development of user-specific language checking applications. It makes use of components for morphological analysis, part-of-speech tagging, chunking and topological parsing.

FLAG detects errors with so-called trigger rules indicating the existence of a potential problem in the text. For potentially incorrect sentences confirmation rules are called which carry out a more complicated analysis. There are confirmation rules which advise that there is actually no problem, and others that advise that there is indeed an error. Each rule carries a weight, and if more than one rule matches, these weights are added up. Based on the final weight, FLAG then decides whether the rule really matches and thus whether there is an error in the sentence.

This two-step approach helps increase the system’s speed, as the trigger rules are rather simple and can easily be checked, whereas the more complicated and thus slower confirmation rules only need to be called for potentially incorrect sentences.

All of FLAG’s rules are declarative. It also supports terminology rules and can generally work for any language. So far, only a few rules for German have been implemented. The addition of a significant number of grammar rules is only listed in the “Future Work” section of [Bredenkamp et al, 2000].

FLAG is implemented in Java and C++ and has a graphical user interface for checking texts. FLAG is not publicly available.

[...]


1 The asterisk indicates an incorrect word or sentence.

2 The crossed-out word is incorrect; the bold word is a correct replacement. This sentence fragment was found on the Web.

3 [Tufis and Mason, 1998] refers to an older version that was implemented as a client/server system with the server written in C. The implementation is now completely in Java, but there is no evidence that the algorithm has changed, so the information provided in the paper should still be valid.

4 Some of these examples are taken from http://ace.acadiau.ca/english/grammar/index.htm.

5 Of course one would have to search for word meanings, not just for words if one seriously wanted to interpret the results. In this very case the proportions are so clear that it does not seem to be necessary.

6 yyyy-mm-dd is the ISO standard for dates, see http://www.cl.cam.ac.uk/~mgk25/iso-time.html.

7 http://www.hcu.ox.ac.uk/BNC/SARA/saraInstall.html. For example, the option to hand a configuration file to sarad is specified as -c, whereas it must be -p.

8 Bill Wilson collected more than 300 ill-formed sentences, but most of the errors are just spelling errors: ftp://ftp.cse.unsw.edu.au/pub/users/billw/ifidb. The Cambridge Learner Corpus contains more than 5 million words and errors are annotated, but the corpus is not publicly available: http://uk.cambridge.org/elt/corpus/clc.htm

9 For example: kde-core-devel@kde.org, kfm-devel@kde.org, kmail@kde.org, dev@openoffice.org. Archives for the former ones are available at http://lists.kde.org/, for the latter one at http://www.openoffice.org/servlets/SummarizeList?listName=dev.

10 Although one might try to come up with queries that find only such documents, e.g. “Al Gore speech”.

11 Google displays this number. Obviously it might take longer until the result appears on the user’s screen because of slow network connections etc.
