Turing tests are conducted in writing. To pass, a machine must communicate persuasively in writing: it must generate language that makes sense, strikes the right level of formality, uses appropriate turns of phrase, and so on. It must also fabricate a plausible history for itself. That is a demanding task even for a human intelligence, and beyond a certain level of complexity it is impossible to do off-hand.
So, is a Turing test a fair way to evaluate machine intelligence? Passing as human seems to demand a greater capacity for storing, interpreting, and inventing information than simply holding a conversation of human caliber does.
If the point is for a machine to pass as human, does it even matter whether the test is fair?
What would it mean for humans if an artificial intelligence did pass a Turing test?