Mass lexical comparison or mass comparison is a highly controversial method developed by the well-known linguist Joseph H. Greenberg to find genetic relationships among languages in the remote past, beyond the limits of the traditional comparative method, or in situations where there are too many languages to practically apply the latter without many generations of work.
Traditional historical linguistics
The comparative method
Since the development of comparative linguistics in the 19th century, a linguist who claims that two languages are related, in the absence of historical evidence, is expected to back up that claim by presenting general rules that describe the differences between their lexicons, morphologies, and grammars.
For instance, one could prove that Spanish is related to Italian by showing that many words of the former can be mapped to corresponding words of the latter by a relatively small set of replacement rules -- such as replacing initial es- with s-, final -os with -i, and so on. Many similar correspondences exist between the grammars of the two languages. Since those systematic correspondences are extremely unlikely to be random coincidences, the most likely explanation by far is that the two languages have evolved from a single ancestral tongue (Latin, in this case).
Most pre-historical language groupings that are widely accepted today -- such as the Indo-European, Algonquian, and Bantu families -- have been established in this way, although many -- such as Niger-Congo, and until quite recently Afro-Asiatic and Sino-Tibetan -- have not, and some families whose proponents claim to have proved them in this way (e.g. Nostratic) have not been widely accepted.
Limitations of the comparative method
However, besides systematic changes, languages are also subject to random mutations (such as borrowings from other languages, irregular inflections, compounding, and abbreviation) that affect one word at a time, or small subsets of words. For example, Spanish perro (dog), which does not come from Latin, cannot be rule-mapped to its Italian equivalent cane (the Spanish word can would be the Latin-derived equivalent but is much less used in everyday conversations, being reserved for more formal purposes).
As those sporadic changes accumulate, they increasingly obscure the systematic ones -- just as enough dirt and scratches on a photograph will eventually make the face unrecognizable. Presumably for this reason, the comparative method has not been able to provide reliable evidence of genetic relationship between languages that split more than about 10,000 years ago. Considering that humans have probably been speaking fully developed languages for at least 60,000 years (since the time Australia was first populated), it is hardly surprising that many languages and language families still have no known relationship to other groups.
Mass lexical comparison
In an effort to extend comparative linguistics beyond its present limits, and arrive at his broad super-family groupings, Greenberg invented a new statistical method, mass lexical comparison. In this method, one simply compares a large sample of words from one language A with its equivalents in the other language B, looking for similar sound patterns. Thus, for example, Spanish cabeza and Italian capo are similar to the extent that both contain the same consonant sound [k], similar vowel sounds [a], and similar consonants [b], [p], in the same sequence.
Departing from the traditional criterion, Greenberg did not look for any systematic trend in these similarities, trusting that a sufficiently large percentage S(A,B) of sufficiently similar pairs among the samples would be enough to prove a common origin for the two languages. This assumption is valid in principle, because S is expected to be higher for languages that have split off more recently, and to decrease as the split recedes into the past. The chief difficulty lies in deciding what constitutes "sufficient" similarity, particularly bearing in mind that many similarities are due to borrowing between languages and, far more commonly than is often realised, to coincidence.
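Greenberg never formalized his similarity judgments as an algorithm; he worked by inspection. Purely for illustration, the sketch below shows one hypothetical way such a measure S(A,B) could be made explicit: two meaning-matched words count as "similar" if their first two consonant sounds agree in broad phonetic class, and S is the fraction of similar pairs in the sample. The word lists, the consonant classes, and the two-consonant criterion are all assumptions for the example, not Greenberg's actual procedure.

```python
# Hypothetical formalization of a Greenberg-style similarity score.
# The class table and the "first two consonants" criterion are
# illustrative assumptions, not part of the method as published.

CONSONANT_CLASSES = {
    'p': 'labial', 'b': 'labial', 'm': 'labial', 'f': 'labial', 'v': 'labial',
    't': 'dental', 'd': 'dental', 'n': 'dental', 's': 'dental', 'z': 'dental',
    'c': 'velar',  'k': 'velar',  'g': 'velar',
}

def consonant_skeleton(word, n=2):
    """The first n consonants of a word, mapped to broad classes."""
    classes = [CONSONANT_CLASSES[ch] for ch in word if ch in CONSONANT_CLASSES]
    return tuple(classes[:n])

def similar(word_a, word_b):
    """Judge two words 'similar' if their consonant skeletons match."""
    ska, skb = consonant_skeleton(word_a), consonant_skeleton(word_b)
    return len(ska) > 0 and ska == skb

def S(sample_a, sample_b):
    """Fraction of meaning-matched word pairs judged similar."""
    pairs = [(sample_a[m], sample_b[m]) for m in sample_a if m in sample_b]
    return sum(similar(a, b) for a, b in pairs) / len(pairs)

spanish = {'head': 'cabeza', 'dog': 'perro', 'water': 'agua'}
italian = {'head': 'capo',   'dog': 'cane',  'water': 'acqua'}
print(S(spanish, italian))  # prints 0.6666666666666666
```

On this toy criterion, cabeza/capo and agua/acqua count as matches while perro/cane does not, giving S = 2/3; a real application would use hundreds of concepts, which is exactly where the significance question discussed below arises.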
From similarity to phylogeny
Assuming that the similarity measures are statistically significant, they can be used to decide the branching order of the languages on their presumed genetic tree. That is, if the computed similarity S(A,B) is greater than S(A,C) and S(B,C), one can take it as indication that A and B separated from C before separating from each other. In other words, there is a single branch of the tree that includes A and B but not C.
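The branching logic just described can be sketched as a simple agglomerative clustering: repeatedly join the pair of clusters with the highest similarity, since a high S indicates a recent split. The UPGMA-style averaging used for a merged cluster's similarities, and the label scheme, are assumptions made for the example; Greenberg established his groupings by inspection, not by any such algorithm.

```python
# Hypothetical sketch: recovering a branching order from a table of
# pairwise similarities by greedy joining (simple agglomerative
# clustering). Not Greenberg's actual procedure.

def build_tree(names, sim):
    """Greedily join the most similar pair of clusters.

    sim maps frozenset({x, y}) -> similarity; a higher value means a
    more recent split, so high-similarity pairs join lowest in the tree.
    """
    trees = {n: n for n in names}   # cluster label -> nested-tuple tree
    sim = dict(sim)
    while len(trees) > 1:
        pair = max(sim, key=sim.get)
        a, b = sorted(pair)
        label = a + '+' + b         # hypothetical label for the merged cluster
        trees[label] = (trees.pop(a), trees.pop(b))
        del sim[pair]
        # The merged cluster's similarity to each remaining cluster is the
        # average of its two members' similarities (a UPGMA-style assumption).
        for other in list(trees):
            if other == label:
                continue
            s = (sim.pop(frozenset({a, other})) +
                 sim.pop(frozenset({b, other}))) / 2
            sim[frozenset({label, other})] = s
    return next(iter(trees.values()))

# Invented similarity values for illustration only.
S = {frozenset({'Spanish', 'Italian'}): 0.60,
     frozenset({'Spanish', 'French'}):  0.35,
     frozenset({'Italian', 'French'}):  0.34}
print(build_tree(['Spanish', 'Italian', 'French'], S))
# prints ('French', ('Italian', 'Spanish')): Spanish and Italian split last
```

Because S(Spanish, Italian) exceeds both of the other values, Spanish and Italian are joined first, placing them on a branch that excludes French -- exactly the inference described in the text.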
Greenberg also observed that, just from statistical principles, the computed similarity between the lexicons of two sets of closely related languages would be more reliable than that computed from two languages alone. (This justifies the "mass" in the method's name.)
Thus, paradoxically, the lexical comparison method becomes more accurate as the investigation recedes into the past -- which offsets to some extent the increased level of statistical noise in the measurements. This stands in contrast to the traditional comparative method, which becomes less reliable as it is applied to broader language groups -- since the structural comparisons must be applied to increasingly dubious, inaccurate, and incomplete reconstructed proto-languages.
The mass lexical comparison method also has the advantage that it can reconstruct the broad phylogeny for a large set of languages directly from raw lexical samples, without the need to wait for detailed morphological studies of each language or the reconstruction of proto-languages for each branch -- which in the case of Native American languages, for example, would take an enormous amount of work.
Choosing the sample lexicon
Ideally, the sample lexicons should contain only words that are likely to have survived in either language since the time of their hypothetical common origin, and that are unlikely to have been replaced by borrowed or reinvented words. For studies that extend more than 5,000 years into the past, that criterion leaves only a few hundred concepts -- such as body parts, close family relations, common animals and plants, water, fire, sky, stone, spear, etc.
Words for "modern" concepts -- such as "wine", "horse", and "steel" -- may show spurious similarities between unrelated languages, due to the name being imported by a culture together with the thing; e.g. Spanish pan and Japanese pan ("bread"). Alternatively, the names of recently imported concepts may get invented separately in related languages, such as computadora ("computer") in Latin American Spanish and ordinateur in French. Either way, such words would only add noise and bias to the comparison.
Weaknesses of the method
Significance of the similarity
In theory, the reliability of Greenberg's method could be settled by statistical analysis; namely, by computing the probability that a given similarity level S could have arisen by chance coincidences between totally unrelated languages. Two languages should then be considered related only if the observed value of S is significantly greater than this "baseline" level.
Unfortunately, this computation is very difficult to do. For one thing, the similarity level S is expected to depend on the phonetic repertoires of the two languages; thus, for instance, one expects more chance resemblances between two languages that have few vowels and many consonants, than between a vowel-rich and a vowel-poor language. Similar biases can be expected when comparing languages that allow consonant clusters with those that don't, or polysyllabic languages with monosyllabic ones. It follows that deciding what would be a significant level of similarity would require a stochastic model for a "random lexicon" that took into account letter frequencies, syllable structure, and many other similar statistics.
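One way around building an explicit stochastic model of a "random lexicon" would be to estimate the baseline empirically: shuffle which word of language B is paired with which meaning, which destroys any genuine correspondence while preserving both languages' phoneme frequencies and word shapes, and see how often chance alone produces the observed similarity. The sketch below is such a permutation test; the word lists and the first-letter similarity criterion are toy assumptions, not anything proposed by Greenberg or his critics.

```python
# Permutation-test sketch of the "baseline" significance computation.
# The similarity criterion and word lists are illustrative assumptions.
import random

def match_rate(words_a, words_b, similar):
    """Fraction of aligned word pairs judged similar."""
    return sum(similar(a, b) for a, b in zip(words_a, words_b)) / len(words_a)

def chance_baseline(words_a, words_b, similar, trials=2000, seed=42):
    """Match rates between meaning-scrambled samples, over many trials."""
    rng = random.Random(seed)
    shuffled = list(words_b)
    rates = []
    for _ in range(trials):
        rng.shuffle(shuffled)   # scramble the meaning alignment only
        rates.append(match_rate(words_a, shuffled, similar))
    return rates

def similar(a, b):
    # Toy criterion (an assumption): words match if they share a first letter.
    return a[0] == b[0]

spanish = ['agua', 'fuego', 'piedra', 'perro', 'mano', 'noche']
italian = ['acqua', 'fuoco', 'pietra', 'cane', 'mano', 'notte']

observed = match_rate(spanish, italian, similar)          # 5/6 here
baseline = chance_baseline(spanish, italian, similar)
p = sum(r >= observed for r in baseline) / len(baseline)  # empirical p-value
print(observed, p)
```

Note that this sidesteps only part of the difficulty: the shuffle controls for the two languages' actual sound inventories, but a six-word sample is far too small for the p-value to mean much, and the choice of similarity criterion remains as subjective as ever.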
At the same time, the correspondences used in the method are often tenuous, at times requiring a match on only one phoneme, or even on a single phonetic feature (labial, dental, etc.). A wide semantic range is also allowed; for example, in his book on the American languages Greenberg compared words meaning arm, shoulder, armpit, forearm, elbow, and so on. Using similarly loose criteria, Lyle Campbell, a linguist specializing in the languages of the Americas and author of a review of Greenberg's book, was able to establish a correspondence between the proposed Amerind language and Finnish, and others were able to do the same with Latin and many other languages obviously not related to those of the Americas.
Also, some of the "ancient" concepts that are most suitable for inclusion in the sample lexicons may have been originally denoted by onomatopoeic words that imitate a natural sound associated with the concept. (Examples of originally onomatopoeic words in English include "crack", "crow", "cough", "gurgle", etc.) The independent use of this principle in two languages will tend to create similar word pairs that contribute to the similarity measure S but are not due to common origin.
Ideally, such words ought to be excluded from the sample lexicon; but the onomatopoeic origin of a word may be hard to recognize in its present form. Even basic words like "milk" or "wind" have been claimed to reflect the corresponding sounds (those of sucking and blowing, respectively). Unfortunately, the impact of these "natural false cognates" on the similarity measure is hard to estimate.
Semantic drift and subjectivity
Finally, in every language the same concept can often be expressed by two or more different words; and the meanings of words are known to drift over centuries just as much as their forms. Thus, for example, the meanings of corn and grain in English overlap to a large extent; and corn, which originally referred to cereals like wheat and barley, has come to mean chiefly "maize" in the United States.
As a consequence of these semantic shifts and synonymies, the construction of the representative lexicon for a language typically involves many choices that must often be made on subjective criteria. These choices may be unconsciously biased towards words that are similar to those previously chosen for other languages, thus artificially inflating the similarity measure S. Unfortunately, the impact of this factor, too, is hard to quantify.
Proponents of mass lexical comparison defend the technique on the grounds that Greenberg used it successfully in his classification of the languages of Africa. Critics respond that even if this is true, it is not a very good batting average, since Greenberg's other claims have not been accepted: in addition to the well-known rejection of Greenberg's Amerind family, they cite the failure of his Indo-Pacific hypothesis, and the fact that although Greenberg claimed to have shown that all the aboriginal languages of Australia are related (except for the languages of Tasmania, which he considered to belong to Indo-Pacific), this position is not considered tenable by experts on Australian languages. A second point is that his success in Africa was due largely to the fact that much of the previous work was very bad, involving such gross errors as classifying languages on the basis of whether their speakers herded cattle, and that his classification was highly derivative of other work, particularly that of Westermann.
Finally, Greenberg's African classification is by no means the success that it is made out to be. Specialists have grave doubts about the unity of both Nilo-Saharan and Khoi-San, two of Greenberg's four families. The third, Afro-Asiatic, was taken over wholesale from previous work, not proposed anew by Greenberg. A number of languages included by Greenberg in his four families are considered isolates today. Furthermore, current classifications differ in major respects in the subgrouping of the major language families. In sum, although Greenberg's African classification represented an advance on the classification well known at the time, it was neither as successful nor as original as Greenberg and his advocates suggest and does not serve as a good model for evaluating the use of his approach.
A further consideration is that, insofar as mass lexical comparison is a legitimate scientific method, it must work when applied by others, not just Joseph Greenberg. In fact, it has been used by many others, with no discernible difference in application, and has produced results that are either not accepted or are considered to be clearly wrong. Among the examples that we may cite are cases of languages being wrongly classified as Indo-European discussed by Poser and Campbell (1992). To take another example, virtually no one today accepts the proposal by Radin (1919) that all of the languages of North America are related.
Although mass lexical comparison has a few ardent proponents among linguists and somewhat greater acceptance among non-linguists, it is rejected by most historical linguists, who view the comparative method as the only legitimate way to establish pre-historical common ancestry for language.