Beware the Unseen Problems with AI
What the artificially intelligent cannot see
If you haven’t heard the phrases ChatGPT, Large Language Model (LLM) or Generative AI recently, then consider yourself lucky. Artificial Intelligence (AI) is getting its mainstream spotlight, much as cryptocurrencies have in recent years, thanks to one successful major advancement in this technology space.
Some could argue that we’re closer to Artificial General Intelligence (AGI) than we have ever been before (think Jarvis from Iron Man, or any other human-like android from science fiction). But many others, including myself, can see through the smoke and call it what it is: artificially (un)intelligent.
The Unseen Prompts
With the prompt-response nature of ChatGPT and contextual invocation through GitHub CoPilot, AI still requires that first interaction from our human hands before our artificial counterparts spring to life with regurgitated output.
Machines need to be designed to interact through invisible cues. When a friend or family member looks sad, your first thought is to ask them “what’s the matter?”. As much as devices listen to our everyday drivel and watch our every prod of the screen or keyboard, they lack the ability to interact without physical invocation.
System designers could build cue processing on top of computer vision, but without contextual information it would risk sounding repetitive or insensitive. Tone of voice cannot be correctly picked up from text alone, and sarcastic comments can deceive machines into incorrect responses.
The Unseen Individuals
How useful AI is, is largely determined by its training dataset. If we’re led to believe that current LLMs have been trained on public websites and datasets, then there is plenty of the world that current iterations have not seen.
AI is at risk of ignoring nations with limited access to technology, whose people have been unable to document or share knowledge of their region with the outside world. Languages and dialect nuances not encountered by AI could also leave machines speaking double Dutch back to us.
Sensitivities in human religions and cultures require contextual safeguards when communicating, something AI must be trained to identify ahead of time.
The Unseen Governance
Driving a lot of modern privacy concerns is what information is being captured by technology companies. To deliver personalised responses, machines will have to capture the usual suspects: age, sex, location, and an assortment of your preferences. If this is used to retrain AI, what guarantees are in place to secure and safeguard it against exposure or cross-talk?
A further concern is the location of data, with information being transferred across regions to be sold or re-processed to sell you to advertisers. In an ever-connected world, there is no guarantee that your interactions with AI are not being watched by nation states and later used against you.
With all of these pre-trained models, the question of who owns the underlying information and the responses provided needs to be answered. If you prompt an AI to correct an essay or generate a paragraph, does the copyright belong to the LLM, to the AI developer, or does it now belong to you? Especially if you pay for the AI to provide you the work.
While knowledge can be made available to AI for training, building true understanding is difficult. Our minds are designed to learn and adapt to new information all the time, and machines will need to do the same. Even with that capability, however, there’s a risk they miss out on the ability to contextually understand and adapt.
Much like the ocean that surrounds us, for all that these large-scale neural models have seen in training on the surface, there’s a deep beneath that they have not seen.