This paper presents an account of the statistical patterns in the development of 'do' forms in various sentence types in English. Unlike previous works on the rise of 'do'-support, it takes into account the evolution of 'do'-support in imperatives. We show that the development of 'do' forms in negative imperatives cannot be explained with a clause structure that has only one INFL projection and one NegP, as in Roberts (1985) and Kroch (1989b). We therefore propose a more articulated clause structure, which we argue is already necessary to explain the syntax of Middle English infinitivals. We argue that the syntax of negative infinitivals in Middle English can be accounted for if we posit two possible syntactic positions for negation and an intermediate functional projection, which we assume to be an Aspect Phrase (AspP), between the two negation projections. This articulated structure enables us to distinguish two types of verb movement: movement over the lower negation and movement over the higher negation. We show that the patterns in the development of 'do'-support in imperatives, as well as in questions and negative declaratives, can be explained if the loss of verb movement occurs in two steps in the history of English, with the loss of the higher movement preceding the loss of the lower movement.
In this paper we present evidence from various linguistic changes, most significantly the rise of the periphrastic auxiliary in early Modern English, that the time course of syntactic change is tightly constrained by the grammar of the changing language. Specifically, we give evidence that when one grammatical option replaces another with which it is in competition across a set of linguistic contexts, the rate of replacement, properly measured, is the same in all of them. This effect we call the "Constant Rate Hypothesis." The contexts generally differ from one another at each period in the degree to which they favor the spreading form, but they do not differ in the rate at which the form spreads. This result is surprising, since one might have expected the change to proceed faster in contexts where the advancing form is more common. Indeed, Bailey (1973), in developing his theory of language change, assumes that different rates must characterize different contexts, as have other scholars. We have, however, found quantitative evidence in several cases of syntactic change, which we present in the paper, for the Constant Rate Hypothesis. In addition, our results show that the grammatical analysis which defines the contexts of a change is quite abstract. We find that the set of contexts that change together is not defined by the sharing of a surface property, like the appearance of a particular word or morpheme, but rather by a shared syntactic structure, whose existence can only be the product of an abstract grammatical analysis on the part of speakers. Indeed, in some of the cases we discuss, the competition reflected in the changes under study occurs between entire grammatical subsystems. These competing subsystems have been proposed by syntacticians, on the basis of synchronic analyses, to characterize earlier and later stages of the languages in question, so that the results of our investigation of process turn out to be consistent with independently motivated structural analyses.
In our central case, the rise of periphrastic 'do', the richness of the available database (Ellegård's well-known study) allows us to see in detail the shaping of the process of change by the grammatical systems in competition.
It is possible with a Tree Adjoining Grammar to reproduce many of the syntactic analyses originally formulated by linguists in transformational terms. To the extent that these analyses are well-motivated empirically, this fact makes TAG interesting for use in developing computational learning and processing models, since the use of other non-transformational formalisms sometimes forces choices of linguistic description different from those ordinarily made by descriptive syntacticians. Thus, using TAG, one can take advantage in the construction of parsers and learners of the computational tractability of a mathematically restrictive formalism without having to reinvent empirical syntax in order to do so. At the same time, TAG analyses are not identical in every detail to their transformational counterparts, and it is interesting to compare them where they diverge. The differences arise because of a fundamental difference in the way that syntactic recursion is treated in the two frameworks. In TAG, recursive structures are generated by composing elementary syntactic objects, with the result that recursion is factored apart from the representation of local syntactic dependencies. By contrast, in transformational grammar, as in many other frameworks, recursive structure and local dependencies are represented together in a single, full representation of a complex sentence. Because of this difference in the treatment of recursion, it often turns out, when TAG is used to emulate a transformational analysis, that the TAG version has advantages, of both elegance and empirical coverage, over the original. This paper is a demonstration, in a new empirical domain, that of nominal constructions, of the advantages of TAG-based syntax.
By presenting a linguistically detailed account of these constructions and showing the advantages of using TAG to analyze them, we strengthen the case for the use of mathematically constrained and computationally tractable representational systems in competence-based as well as in computational linguistics.
The Abstract helps readers decide whether they want to read the rest of the paper, or it may be the only part they can obtain via electronic literature searches or in published abstracts.
The length of your Abstract should be kept to about 200-300 words maximum (a typical standard length for journals). Limit your statements concerning each segment of the paper (i.e.
Because on-line search databases typically contain only abstracts, it is vital to write a complete but concise description of your work to entice potential readers into obtaining a copy of the full paper. This article describes how to write a good computer architecture abstract for both conference and journal papers. Writers should follow a checklist consisting of: motivation, problem statement, approach, results, and conclusions. Following this checklist should increase the chance of people taking the time to obtain and read your complete paper.
An abstract must be a fully self-contained, capsule description of the paper. It can't assume (or attempt to provoke) the reader into flipping through looking for an explanation of what is meant by some vague statement. It must make sense all by itself. Some points to consider include: