I’ve been dinking around the web looking at what conlangs people have been building. There are a lot of planned pidgins with designs something like this:
Inflected languages are hard; analytic languages are easy.
Massive borrowing of loanwords is easy; an a priori vocabulary is hard.
Regular conjugations are good; irregularity is hard.
Ambiguity is bad; narrowly defined words are good.
Do these all stand up to nitpicking?
Word order rules are just as tricky as rules governing inflection. An analytic language requires building a mental model of a tree. Sometimes you branch left, sometimes you branch right. Sometimes you can shift a node of the tree and re-hang it on another node. Wouldn’t it be so much easier to give all the words affixes that encode the same information that word order encodes?
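The trade-off above can be sketched in code. This toy example (all the words, suffixes, and role tags are invented for illustration) contrasts the two strategies: an analytic parser that recovers who-did-what-to-whom purely from position, versus an inflected parser that reads the same information off affixes and doesn’t care about order at all.

```python
def parse_by_order(words):
    """Analytic strategy: roles come from position (assuming SVO order)."""
    subject, verb, obj = words
    return {"subject": subject, "verb": verb, "object": obj}

def parse_by_affix(words):
    """Inflected strategy: roles come from suffixes; word order is free."""
    roles = {"-a": "subject", "-i": "verb", "-o": "object"}
    parsed = {}
    for w in words:
        stem, suffix = w[:-2], w[-2:]
        parsed[roles[suffix]] = stem
    return parsed

# In the analytic scheme, scrambling the words changes the meaning:
assert parse_by_order(["dog", "bites", "man"]) != parse_by_order(["man", "bites", "dog"])

# In the inflected scheme, any order yields the same reading:
assert parse_by_affix(["dog-a", "bite-i", "man-o"]) == parse_by_affix(["man-o", "dog-a", "bite-i"])
```

Neither scheme is free: the analytic learner memorizes tree-building rules, the inflected learner memorizes the affix table.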
If massive borrowing of loanwords makes it easier for two people to learn a language, they probably already share two closely related natural languages. Maybe one should just learn the other’s. This is the same as dialect standardization, only on a broader scale. Massive borrowing of loanwords provides no benefit whatsoever to someone who speaks an entirely dissimilar language.
Irregular inflections are processed by the part of the brain that handles vocabulary, so using irregular conjugations, declensions, and the like actually distributes the processing of grammar across more of the brain.
A lexicon with a low level of ambiguity will also be a very large one. Words get memorized at a roughly linear rate, so more words means more time spent learning ’em.