max1461 · 2 years
Note
This is up your alley, I feel. How do you feel about computational methods for reconstruction, particularly in the context of attempting to solve chronology?
This is right up my alley—in fact, quantitative historical linguistics is sort of the main thing I want to do. However, I am still in the very early stages of learning about it, so sadly my answer will not be nearly as informed as I would like. I will come back to this question in one year's time, when hopefully I can give you a waaaaay better answer.
My impression as of now, though, is that computational methods in reconstruction have some promise, but the field currently has a lot of problems. At the most basic level, most of the algorithms people are using are ripped straight from biology. But language change doesn't work like genetic change; it is (thankfully) in some sense far more constrained, because sound change is systematic. I think the bare minimum for computational methods to be useful in historical linguistics is for people to leverage that language-specific structure in their algorithms. After all, the comparative method relies on the neogrammarian hypothesis (that sound change is regular and exceptionless) in order to work. If the algorithms people are using don't even make reference to the neogrammarian assumptions, there's a certain sense in which it's a priori impossible for them to do language comparison! It's like trying to prove a theorem without ever invoking the hypotheses; it can't be done almost by definition. So that's not a great sign, but IMO it's a surmountable problem.
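To make that concrete, here's a minimal sketch (in Python, with invented toy data) of what "making reference to the neogrammarian assumptions" buys you. The point is the representation: a sound change is a context-conditioned rule applied uniformly to every word in the lexicon, not an independent per-word mutation of the kind most bio-derived models assume.

```python
# A minimal sketch of the neogrammarian assumption: one rule, applied
# exceptionlessly across the whole lexicon. All proto-forms here are
# hypothetical toy data, not real reconstructions.

import re

def apply_sound_change(lexicon, pattern, replacement):
    """Apply one regular sound change, without exception, to every form."""
    rule = re.compile(pattern)
    return {gloss: rule.sub(replacement, form) for gloss, form in lexicon.items()}

# Toy stand-in for a Grimm's-law-style change: *p > f word-initially.
proto = {"father": "pater", "foot": "pods", "fish": "piskos"}
daughter = apply_sound_change(proto, r"^p", "f")
print(daughter)  # {'father': 'fater', 'foot': 'fods', 'fish': 'fiskos'}
```

A search procedure built over this representation hypothesizes whole rules and tests them against the entire lexicon at once; an edit-distance model borrowed from biology instead treats each word's changes as independent events, throwing away exactly the constraint that makes the comparative method work.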
The deeper concern I have is with cognate identification. Human researchers don't identify cognates on the basis of structural similarity alone; they do it based partly on semantics and partly on eliminating other possible etymologies. With a human's help, a computer can do the first part. But a computer definitely can't do the second part, at least not right now. So one has to hope that there's enough information embedded in the structure of the language itself, plus maybe semantic maps of some kind that we give the computer, for it to be able to figure out what corresponds with what. If language-external sources of cognate identification are doing too much of the legwork, a computer just won't be able to do it.
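As a toy illustration of that division of labor (all data, names, and thresholds below are invented), form similarity is the computable part, while the semantic side has to be handed over, for instance as a human-curated map of which senses may plausibly connect:

```python
# Toy cognate-candidate filter: form similarity is computed, but the
# semantic licensing (which senses can shift into which) is supplied by
# a human. Everything here is a hypothetical illustration.

from difflib import SequenceMatcher

# Human-supplied semantic map: senses a cognate pair may connect.
SEMANTIC_MAP = {
    "dog": {"dog", "wolf", "hound"},
    "water": {"water", "rain"},
}

def form_similarity(a, b):
    """Crude stand-in for a proper phonetic alignment score."""
    return SequenceMatcher(None, a, b).ratio()

def candidate_cognates(lex_a, lex_b, threshold=0.6):
    """Pair words whose forms look alike AND whose senses a human linked."""
    pairs = []
    for sense_a, word_a in lex_a:
        for sense_b, word_b in lex_b:
            semantically_ok = sense_b in SEMANTIC_MAP.get(sense_a, {sense_a})
            if semantically_ok and form_similarity(word_a, word_b) >= threshold:
                pairs.append((word_a, word_b, sense_a, sense_b))
    return pairs

lang_a = [("dog", "hund"), ("water", "wasser")]
lang_b = [("hound", "hound"), ("water", "water")]
print(candidate_cognates(lang_a, lang_b))
```

Note that this captures only the first half of the job. The second half, ruling out loans, chance resemblances, and other competing etymologies, is precisely the part that resists this kind of treatment.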
But that's all about using computers to identify sound changes and cognate sets. I'm not sure what you mean exactly by chronology? If you mean dating (of sound changes, of splits in a family, etc.), then I'm more skeptical, though I've thought about it less. It all comes down to the fact that we simply don't have any good way of quantifying "speed of language change" to begin with. If by chronology you mean the order in which different languages split off, then I think the answer is possibly yes. If the issues previously mentioned can be worked out, computers should be able to give us different potential family tree structures with different probability estimates, and so on. They could be very useful for this sort of thing.
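A back-of-the-envelope version of "different tree structures with different probability estimates" might look like the following: a toy scoring of topologies by shared innovations, with the scores normalized into probability-like weights. The data and the scoring rule are invented for illustration; real work would use proper likelihood models, not this count.

```python
# Toy subgrouping-by-shared-innovation score: rate each topology by how
# many cognate sets it explains as a single shared innovation, then
# normalize. Purely illustrative, with invented data.

from math import exp

# Each cognate set records which languages share it.
cognate_sets = [{"A", "B"}, {"A", "B"}, {"A", "C"}, {"A", "B", "C"}]

# Candidate topologies for three languages, keyed by the clade they predict.
topologies = {
    "((A,B),C)": {"A", "B"},
    "((A,C),B)": {"A", "C"},
    "((B,C),A)": {"B", "C"},
}

def score(clade):
    """Count cognate sets explainable as one innovation on this tree."""
    return sum(1 for s in cognate_sets if s == clade or len(s) == 3)

weights = {name: exp(score(clade)) for name, clade in topologies.items()}
total = sum(weights.values())
posteriors = {name: round(w / total, 3) for name, w in weights.items()}
print(posteriors)  # ((A,B),C) comes out most probable on this data
```

The interesting output isn't one winning tree but the whole distribution over trees, which is exactly the probabilistic framing I mean below.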
And, really, when I say "computers", I mostly just mean applying tools from probability theory. The fact that it's done by computers is sort of just a practical thing: computers can compare many different possible combinations of cognates, sound changes, etc. very fast. But I think the real source of new insight, if this general avenue of research ends up being fruitful, will come not from the computerization itself but from taking seriously the probabilistic assumptions that are already used implicitly in historical linguistics, and sussing out their full consequences.
19 notes
theteaisaddictive · 2 years
Text
the racism in this book is a hard fucking read, and knowing that it’s all what white people actually thought and believed makes it even worse
17 notes