Combining AAAAAAAAA and Intelligence

This article was inspired by a talk at Future Rewired 2023.

AI has become one of the defining buzzwords of the 21st century. Even though the term Artificial Intelligence sounds cool, at its roots AI is nothing but linear algebra, probabilities, and some program code. So why does AI attract so much hype? Will AI take over? How does AI work? And what types of AI are there?

If you joined us for Dr Patrizia Kaye's talk at Future Rewired 2023, 'Combining AAAAAAAAA and Intelligence', then you might already have a general idea of what Artificial Intelligence entails. If you didn't, this article covers some rudimentary knowledge, as well as some present-day applications and the benefits of adoption for businesses.

This article was written in collaboration with Dr Patrizia Kaye.

Starting at the beginning, what counts as AI?

'Artificial Intelligence simulates the human thinking process by a computer or other machines.' Ideally, AI will replicate the problem-solving and decision-making skills we possess as human beings.

Among others, applications of artificial intelligence include personalised shopping, AI-powered assistants such as Siri and Alexa, autonomous vehicles, spam filters, facial recognition, and even agriculture, where AI is used to identify defects and nutrient deficiencies in the soil. These applications are built on computer vision, robotics and machine learning.


Training an AI

In order to make an AI agent useful, we “train” it with data. Training is one of the most crucial and challenging stages of the AI development process and can determine a project’s success or failure.

The difficulty in training is the data. If there is a correlation in the data, the system can learn it, whether or not we are aware of that correlation. This may be exactly what we want if we are attempting to detect spam, but it can cause problems if certain data must be excluded from consideration.

For instance, Amazon.com Inc abandoned an AI recruiting tool it had been developing because the tool showed a bias against women. The team had been building computer programs to review job applicants’ resumes, giving applicants scores ranging from one to five stars. Unfortunately, Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, so the system taught itself that male candidates were preferable, penalising resumes that included the word “women’s” (as in “women’s chess club captain”) and downgrading graduates of two all-women’s colleges.
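A toy sketch of how this kind of bias can emerge, using entirely made-up data (none of it reflects Amazon's actual system): a naive scorer that rates resume words by how often they appeared in previously accepted versus rejected applications will penalise any word that merely correlates with past rejections, regardless of whether it says anything about ability.

```python
from collections import Counter

# Entirely made-up toy "historical" hiring data. The word "women's" carries
# no information about ability, but it happens to appear only in the
# rejected pile, mirroring a skewed historical record.
accepted = [
    "software engineer chess club captain",
    "software engineer robotics team lead",
    "engineer hackathon winner chess captain",
]
rejected = [
    "software engineer women's chess club captain",
    "engineer women's robotics society member",
]

def word_scores(accepted, rejected):
    """Score each word by how much more often it appears in accepted CVs."""
    acc = Counter(w for cv in accepted for w in cv.split())
    rej = Counter(w for cv in rejected for w in cv.split())
    return {w: acc[w] - rej[w] for w in set(acc) | set(rej)}

def score_cv(cv, scores):
    """Sum the learned word scores: higher means 'more like past hires'."""
    return sum(scores.get(w, 0) for w in cv.split())

scores = word_scores(accepted, rejected)

# Two identical CVs except for one irrelevant word; the scorer penalises it
# purely because of the correlation it found in the skewed data.
print(score_cv("engineer chess club captain", scores))           # higher
print(score_cv("engineer women's chess club captain", scores))   # lower
```

Nothing in this sketch "knows" anything about gender; the penalty falls out of the data alone, which is exactly why datasets need scrutiny before training.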


As we know, using AI can be risky

AI has been hailed as revolutionary and world-changing, but it’s not without drawbacks. When combined with the so-called “halo effect” (the tendency of people to trust technological solutions) and our own unconscious biases as humans, there is a risk that we may allow existing systemic biases and prejudices to be crystallised in an AI by failing to scrutinise datasets sufficiently.

Whether it’s the increasing automation of certain jobs, the spread of fake news and deep fake content, gender and racially-biased algorithms or autonomous weapons that operate without human oversight, it's no wonder that some feel uneasy about the rapid development of AI technology.

Adoption of Generative AI/ChatGPT - walking the line between exciting innovation and ethical conundrum

If you've spent any time online in the first few months of 2023, you may have already picked up on the buzz surrounding ChatGPT.

ChatGPT (Generative Pre-trained Transformer) is a chatbot from OpenAI that enables users to 'converse' with it in a way that's meant to mimic natural conversation. Users can ask questions and make requests in the form of prompts, and ChatGPT will provide a response. Such interfaces are gaining popularity both as an alternative to traditional search engines and as a tool for AI writing, amongst other things.

When any new technology breaks through into public awareness, there is a rush to capitalise on it in novel ways. But there is much more to human interaction than what is written in text, as Koko, a mental-health nonprofit, discovered when it experimented with using ChatGPT to compose mental-health support messages in a trial involving about 4,000 people. As soon as users learned that the messages were written with AI, they felt disturbed by the simulated empathy. This was despite the fact that the AI-composed messages (supervised by humans) were rated significantly higher than those written by humans on their own, and response times dropped by 50% to well under a minute.

A key question to ask here is: who is accountable if these AI programmes make harmful suggestions or produce harmful content? The user of the application, the developer, the validation and quality assurance team of the organisation behind the programme, or somebody else?

AI vs Accuracy

How accurate is artificial intelligence, and what should your organisation consider before making business decisions around it? Machine learning models capture statistical, not functional, relationships, and their accuracy depends on the training the system has received. Thus, sometimes they will be right, and sometimes they will be very, very wrong.
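A minimal sketch of the statistical-versus-functional distinction, using toy data and plain Python: a straight line fitted by least squares to points generated by y = x² looks accurate inside the range it was trained on, but is very wrong outside it.

```python
# Toy data: the true (functional) relationship is y = x squared,
# but we fit a straight line, i.e. a purely statistical summary.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x for x in xs]

# Ordinary least-squares fit of y = slope * x + intercept.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    return slope * x + intercept

# Inside the training range the model looks trustworthy...
print(predict(2.5), "vs true", 2.5 ** 2)    # close
# ...but on inputs unlike its training data, it is very, very wrong.
print(predict(10.0), "vs true", 10.0 ** 2)  # far off
```

The same failure mode applies to any model: accuracy measured on data similar to the training set says little about behaviour on data unlike it.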

For instance, let's take a look at citations. They are useful for providing the source of information and crediting authors in educational or professional settings, but outside of those circumstances readers rarely check the citations within articles. Unfortunately, large language models have been trained to know what citations look like and that they are expected, but the true meaning and context of a citation elude them.

Indeed, there are so many cases of such models referencing articles, books, people, and events that do not exist that the community has coined a term for them: hallucinations. That is hardly good ground on which to base decisions.

In business systems, rather than asking how accurate our artificial intelligence model is, the more important question is typically: how accurate does the AI system need to be? A good example is Netflix's competition, which offered a million-dollar prize to anyone who could improve its 'watch next' recommendation algorithm by 10%. After three years, Netflix picked and paid a winning team, but opted to use a lower-ranked, simpler, and less costly algorithm that improved accuracy by 8.43%. The winning algorithm was too costly to implement and would have held back innovations that could take place in the meantime.

What are the benefits of adoption for businesses?  

Depending on the business, using AI tools can streamline processes and help cut costs. If you are a smaller business thinking, "How am I supposed to adopt AI?", don't fret: you don't have to create your own AI programmes to start benefitting from AI. There are many existing programmes that aid with the automation of areas such as customer support, marketing, finance and administration.

Customer support can be streamlined with chatbots, customer intent prediction and the redirection of customers to the appropriate departments, easing consumers' pain points. Marketing efforts can be made more effective with data-driven marketing (read our ABCs of effective decision-driven marketing here) by personalising the customer experience, managing social media, consumer research, website analytics and more. Google already offers a variety of tools to support marketing teams, such as Google Analytics, AdWords and Trends, while tools from Microsoft, Dropbox and Google streamline employee-to-employee communication as well as file storage and maintenance.

All in all, with the right planning and development, AI technology could usher in a golden age of productivity, work satisfaction, and prosperity.

Want to know more about Future Rewired 2023?

Read more here.
