Introduction
In linguistics, as in many fields, great advances are not always recognized as such; the key to entirely new ways of conceptualizing a field may be hidden away in the paragraphs of an article which, while quite evidently solid in its own right, does not draw explicit attention to the groundbreaking implications of one of its supporting arguments. This principle, I will argue, is admirably illustrated by Hoekstra and Kooij's (1988) article on the Innateness Hypothesis. After a lengthy discussion of apparent violations of the subjacency condition, and of various ways of explaining the violations away, the authors state (39):
There are more apparent violations of subjacency, and to the extent that the condition does make the correct predictions, then, it is hard to see how it could be learned as such. It is reasonable, therefore, to assume that subjacency is part of UG.

While these lines are buried within a paragraph serving only to bridge the discussion of subjacency to a discussion of parameters, we should not overlook their revolutionary import, for they lead inexorably to a basic axiom heretofore unacknowledged in the annals of linguistic theory:
The less evidence there is for a principle, the more likely it is that the principle is part of Universal Grammar.

This assertion, which I shall henceforth refer to as the Less Is More Axiom, or LIMA, opens up entirely new realms of linguistic research, for it in turn leads to the following postulate, which I shall call the Axiom of Infinitely Null Theory, or AINT:
Since there is an infinite number of principles which account for the surface data none of the time, there is an infinite number of principles in UG.

The consequences of this breakthrough for linguistic theory cannot be overstated; at the very least, it renders the possible set of articles on UG principles an unbounded set (though, of course, a well-formed one). The remainder of this paper will explore but one such principle, as an exercise in illustration: the Whankydoodle Constraint.
A New Limitation on Grammars
The Whankydoodle Constraint (henceforth WC) requires that all sentences in strict-syntax contain the form 'whankydoodle' directly dominated by CP. Reinstating the form (reasons for its non-occurrence will be discussed below) would thus lead to sentences such as the following:
1a. Whankydoodle I think that is a ferret.
1b. We decided whankydoodle that it was a ferret.
1c. Has whankydoodle he finished loading the catapult yet?
Even to a layperson, these sentences are transparently bad, so the LIMA can be considered fully validated. For the analyst, only two tasks remain: (1) accounting for the non-occurrence of whankydoodle in all other English sentences, and (2) accounting for the author's ability to include it in 1a-c.
Dealing with the lack of attested examples of CP-dominated whankydoodle is relatively simple. A second principle, the KP (Kludge Phrase) Superposition Constraint (cf. Weaselflinger, forthcoming), layers a second structural description onto every sentence, with the two structural frameworks co-indexing at the phonological interface. In this case, we need only posit that CP-dominated whankydoodle migrates to KP, using the same kind of mechanism as, doubtless, an uncountably large number of other elements that likewise do not occur. The facility and frequency with which KP Superposition accomplishes this for an infinite number of elements while remaining undetectable only strengthens its status as a universal element of phrase structure.
Accounting for the presence of the whankydoodle element in 1a-c is somewhat more complex, but certainly not egregiously so. A second element, [ ]1, is prefixed to whankydoodle, and blocks KP migration via preferential saturation. It is likewise needed in the analysis of a wide range of other elements, and can thus be said to be independently motivated.
Conclusion
The value of LIMA, AINT, and the WC is obvious: grammars governed by these constraints are more limited, and hence more learnable, than grammars without them. Further, a grammar with infinitely many constraints should be infinitely learnable. The shackles of observability have been forever shattered by LIMA, and linguistic theory can enter an entirely new domain of empiricism.
References

Hoekstra, Teun, and Jan G. Kooij. 1988. The innateness hypothesis. In Explaining Language Universals, ed. by John A. Hawkins, 31-55. Cambridge: Basil Blackwell.
Weaselflinger, Bjorn-Bob. Forthcoming. KP-superposition indexing and raising to Fnib. To appear in OINK Occasional Papers in the Arts, Sciences, and Miscellany, vol. 16.