Is AI “friend” or “frenemy”?

In my trilogy of pet hates, “A.I.” is my second most detested buzzword.

This might seem somewhat “curmudgeonly”, as ‘intelligent’ systems are

  1. Flavour of the month
  2. A great attractor of inward investment
  3. Practically a ‘must-have’ in any service provider’s offering, but therein lies the problem! Almost all providers claim to have an “A.I.” capability, but it is usually undefined, comes with little or no information on how it might have been “trained”, and offers none at all on the algorithm structure (if there is one)

So why am I being so “grumpy” about this term?

Well, briefly, the current marketing imperative of having an “A.I. capability” means that anything from a scanner to an Excel macro can fall under such a definition, without fear of a misrepresentation suit! Additionally, when there is a genuine algorithmic or machine-learning element to the capability on offer, there is no transparency about the code, the training data or the source files. For anyone who remembers the buyer-side antipathy to proprietary, “black-box” methodologies, the question is obvious: why are we insisting on repeating that mistake here?
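What might such transparency look like in practice? Purely as an illustration (this is a sketch, not any published standard, and every field name below is hypothetical), a provider could ship a machine-readable disclosure alongside each “A.I.” capability, so that buyers can compare offerings on the same terms:

    from dataclasses import dataclass, field, asdict
    import json

    # Hypothetical disclosure record for an "A.I." capability.
    # All field names are illustrative, not drawn from any published standard.
    @dataclass
    class CapabilityDisclosure:
        capability_name: str          # what the provider calls the feature
        technique: str                # e.g. "gradient-boosted trees", "LLM", "Excel macro"
        training_data_summary: str    # provenance and scope of the training data
        known_limitations: list[str] = field(default_factory=list)
        code_available: bool = False  # can buyers inspect the source?

    # Example values are invented purely for illustration.
    disclosure = CapabilityDisclosure(
        capability_name="Automated open-end coding",
        technique="fine-tuned transformer classifier",
        training_data_summary="Manually coded survey verbatims, 2019-2022",
        known_limitations=["lower accuracy on non-English verbatims"],
        code_available=False,
    )

    # Emit the disclosure in a form buyers and auditors can read.
    print(json.dumps(asdict(disclosure), indent=2))

Even a minimal declaration along these lines would let buyers ask sharper questions and spot the scanners and Excel macros masquerading as “A.I.”.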

While some find it easier to hide behind a “caveat emptor” approach, the legacy of our profession and our reputation for transparency and self-regulation lead me to say we MUST do better. Carlos Ochoa and Ezequiel Paura from Netquest were pioneers in this regard when they presented their PII algorithm at the ESOMAR Fusion conference in Dublin in 2018, with a follow-up session presented a year later in Madrid by their colleague Anna Bellido. This was the first time an algorithm’s structure, code and learning platform were openly provided to a public audience. More recently, ESOMAR has finally instigated a global workgroup to determine standards for A.I. – based on the excellent work of Judith Passingham and Mike Cooke, published more than 18 months ago. This work – and the resultant definition of what can (and cannot) be classified as A.I. – will be essential to maintaining citizen confidence in our profession’s deployment of artificially intelligent systems.

But why is this essential, you might ask? For two primary reasons...

  1. Simply put, A.I. still gets a lot wrong, and while it has the potential to change much of what our profession does today for the better, getting from here to there will require transparency and open curation to maintain trust and confidence in our sector. The ability of ChatGPT to tell “untruths”; the current actors’ and writers’ strikes over protecting their image and voice rights; the difficulty facial recognition systems have with certain ethnicities; and the extraordinary story of Microsoft’s Tay are all manifestations of A.I. that is insufficiently curated and goes wrong
  2. Secondly, for all the time, energy and funding that our profession may be investing in our versions of (good) A.I., there will be multiples of that level of investment going into (criminal) systems designed to dupe us, defraud us and swindle us. We need to be able to show we are the “white hats” in this regard

An educational TED talk by Yejin Choi, entitled “Why AI Is Incredibly Smart and Shockingly Stupid”, is a great place to start in understanding why we must work together to ensure that our professional cornerstones of rigour and quality are applied to this new methodology just as they have been to statistical research methodologies in the past: https://youtu.be/SvBR0OGT5VI

The advent of photographic and video-based social media platforms sparked an essential debate around “fake news” and the need for triangulation and verification. The same principles apply here, so that users and buyers of any service incorporating an A.I. element can be assured that they fully understand how that element was constructed and is applied, and be reassured that no hidden bias or synthetic data may contaminate the findings.

There are many excellent, considerate and rigorous developers of A.I. systems in our sector, all of whom would have no qualms in explaining what they do and how they got there. For those who are unwilling to share that information, ask yourself: “Why won’t they?”
