$\begingroup$

Would the Turing test be better formulated by assessing whether the creator (as opposed to a third party) was able to tell the difference between their program and a human?

A magician, through misdirection and sleight of hand, can make an audience believe they are witnessing true magic; however, the magician could only convince *themselves* if the magic were in fact real.

Would formulating the Turing test as such put a greater focus on creating “real” AI as opposed to the illusion of AI?

$\endgroup$

2 Answers

$\begingroup$

For the purpose of building AI on par with humans and releasing it to the market, it is not very relevant whether the creator, a single person, can be fooled.

Moreover, this would create a conflict of interest, as the creator may pretend to be fooled.

In conclusion: no.

$\endgroup$
$\begingroup$

Interesting twist on the classic. @Rexcirus makes a good point about the market, and indeed the rest of the world; his 'no' is well founded. To add to it, I would say that it is very easy for humans to fool themselves.

Fooling yourself is most prevalent in the young, and tends to wear off as they grow older and interact with people who say things like "you're fooling yourself". In that sense, your test may serve more as a learning process for developers, although today many an LLM dev seems reluctant to listen to that kind of feedback.

I would offer that your "personal Turing test" is a good starting point for an AI dev: if their product cannot pass it, they can begin the difficult journey of understanding why. If they opt to skip that test, it's no big deal; they'll hit the wall eventually!

Excellent point about magicians!

$\endgroup$
