It’s a subject that has sparked thousands of sci-fi novels and movies. It has divided academics like Noam Chomsky and Paul Saffo, and brought tech superstars Elon Musk and Mark Zuckerberg to (Twitter) blows. The Churchill Club’s panel, ‘Implementing an AI solution - what you need to know’, was a much-needed reality check on the practicalities, applications, benefits, and limitations of AI solutions at present and in the near future.
The spectrum of AI subsets and applications is influencing processes and interactions across industries - from banking to healthcare. As the technology evolves, businesses are scrambling to catch up in skills, understanding and approach.
What is AI? And why can that be a tricky question to answer?
Where can we see AI being used now and what results is it yielding?
Where is AI headed in the future?
Should you build your own AI solutions or buy ‘out of the box’?
Is Australia ready and skilled-up for the AI revolution?
Jonathan Chang - Managing Director, Silverpond
Karin Verspoor - Professor, School of Computing and Information Systems, University of Melbourne
Soon-Ee Cheah - Data Scientist, Zendesk
Mark Moloney - General Manager, Big Data Analytics, Telstra
Breaking through the hype, what is AI?
The term ‘Artificial Intelligence’ was coined in 1956 by computer scientist John McCarthy to describe machines performing tasks that would require intelligence if done by humans. Over the past six decades, the term and the field of AI have evolved and expanded to include subsets such as Machine Learning and Deep Learning. It is these subsets that deliver much of the AI we interact with today, and provide fertile ground for advancements into the future.
There was some debate among panelists over the term ‘AI’ - some felt it was more representative of the history of computing, and preferred Machine Learning (ML) when citing specific examples. ML refers to the automation of analytical model building, where algorithms learn from data to solve specific tasks without being explicitly told where to look. It is increasingly finding applications in unsupervised learning, where no labelled task is specified. For example, asking Siri who coined the term ‘AI’ is a specified task, whereas an algorithm that groups facial images by their patterns - without being told which ones show smiles - is learning without supervision. Learning data representations characterises Deep Learning (DL), a subset of ML and a modern enrichment of Neural Networks.
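The supervised/unsupervised distinction can be sketched in a few lines of Python. This toy example (all data invented for illustration) gives one learner labelled points to fit a decision boundary, and gives another the same points without labels to cluster on its own:

```python
def supervised_threshold(points, labels):
    """Supervised: learn a decision boundary from labelled examples."""
    positives = [p for p, l in zip(points, labels) if l == 1]
    negatives = [p for p, l in zip(points, labels) if l == 0]
    # The midpoint between the two class means serves as the boundary.
    return (sum(positives) / len(positives) + sum(negatives) / len(negatives)) / 2

def unsupervised_two_means(points, iterations=10):
    """Unsupervised: split unlabelled points into two clusters (1-D k-means)."""
    c1, c2 = min(points), max(points)
    for _ in range(iterations):
        group1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        group2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(group1) / len(group1)
        c2 = sum(group2) / len(group2)
    return c1, c2

data = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]
labels = [0, 0, 0, 1, 1, 1]  # only the supervised learner sees these
boundary = supervised_threshold(data, labels)  # midpoint near 3.0
clusters = unsupervised_two_means(data)        # centres near 1.0 and 5.0
```

The clustering function recovers the two groups from the raw points alone - the essence of learning “without being told where to look”.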
Uniting these subsets is the use of data, the determination of insights, and a resulting action. An ‘altruistic algorithm’ is the ultimate objective, where planning and purpose meet to understand and achieve goals. Developing systems that can learn and augment as a human would is at the heart of what drives AI and all its subsets.
Where and how is AI being used today? What impact is it having?
AI is well and truly here; many of us have daily interactions with Siri, “Ok Google”, and Facebook chatbots. Each uses language and/or audio data and conversational interfaces to respond to specific tasks, but the reach of AI goes further than this.
Customer service certainly seems to be leading the way in reaping rewards from the application of AI. Institutional data is centralised and language analysis is used to enable chatbots to apply memory and context to cases, while conversational interfaces allow for feedback. The result is consistency in experience, and a collective ‘raising of the bar’ in service as the technology augments the way that agents work.
Gaming is another high-profile example of AI’s impact, and this year Google DeepMind’s AlphaGo beat the world number one in one of the most demanding strategic games, Go. The win was not unprecedented; 20 years ago IBM’s Deep Blue beat the reigning World Chess Champion, Garry Kasparov, in a match that put a spotlight on the stark contrast between human decision-making and ML. Both AlphaGo and Deep Blue were retired after their respective wins, having reached the peak of success in their arenas.
Precision medicine is wrestling with the complexities of using an individual’s biological and hereditary data to develop tailored healthcare and inform clinical decisions. It’s challenging the roles and training of healthcare professionals, but also showing promise in how a person’s unique data may be used to ensure their treatment is customised to their needs for maximum impact.
DL algorithms are also being developed to aid in the identification of illnesses and conditions. This year Stanford Medical researchers announced they had developed an algorithm that could test for skin cancer. Individuals would be able to perform the initial test using their smartphone, without needing to visit their doctor. Applications such as these have the promise of making healthcare more accessible for those who lack the means.
Creative industries are also finding a place for algorithms. Computer-generated movies have been developed using only a screenplay, and last year a short sci-fi film based entirely on a script written by an AI was released.
If AI is now undertaking roles and tasks previously performed by humans, what is our role moving forward?
DL is inspired by the brain, but only at a superficial level, and while algorithms are becoming increasingly sophisticated, it is debatable whether they truly understand or simply mimic. Though there are huge and exciting advancements in AI, there is still a seismic difference between a mathematical model and a neurological cell. The space between cell and model is where humans still have a role to play, both in working with AI solutions and in developing them.
Data science is emerging as a highly sought-after profession, and many large organisations - like Google and Facebook - are in an ‘arms race’ to accrue as much of the talent as they can. These individuals are viewed as the ultimate architects of AI solutions, designing the framework through which data is processed, insights are identified and action is taken. There is, however, a shortage of specialists, particularly in Australia. A shift in software engineering is required, and teaching and training from primary and secondary level education is being encouraged, in order to equip young people with the skills they need to excel in a technical economy.
Do you need a Data Scientist to develop an AI solution for your business from scratch? Or can you save in time and money by purchasing out-of-the-box solutions like IBM’s Watson?
In short, yes. While solutions such as Watson have made AI more accessible, a data specialist can apply specificity to ensure the solution is tailored to your market. Publicly available data and purchasable APIs are at their most valuable when they’re used as tools in the process. Both can do much of the heavy lifting in data collection, and save businesses from unnecessarily reinventing the wheel. However, not all data is created equal, and boxed solutions are created with a specific customer and industry in mind. A data scientist can direct the use and labelling of data, and choose the appropriate tools, based on the business’ needs.
Without the added value of a business’ own data and domain knowledge, you can undermine the solution and miss the opportunity to leverage your unique insights. If your business only has access to small data there are several ways that you can overcome this:
Transfer learning: applying knowledge gained from solving one task to related tasks
Topic modelling: identifying the semantic structures that occur in a collection of documents
Frequency analysis: studying the frequency of letters or groups of letters in text
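Of the three, frequency analysis is the easiest to illustrate. A minimal Python sketch (the sample sentence is purely illustrative) counts how often each letter appears in a small body of text:

```python
from collections import Counter

def letter_frequencies(text):
    """Return the relative frequency of each alphabetic character in text."""
    letters = [ch.lower() for ch in text if ch.isalpha()]
    counts = Counter(letters)
    total = sum(counts.values())
    return {letter: count / total for letter, count in counts.items()}

freqs = letter_frequencies("Machine learning learns from data")
top = max(freqs, key=freqs.get)  # the most frequent letter in this tiny sample
```

Even a sample this small yields a usable signal; over a larger corpus, the same counts can help characterise a language, an author, or a domain vocabulary when labelled training data is scarce.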
One panelist summarised the custom versus purchased debate best - you don’t buy a suit off the rack for your own wedding, you get it custom made. Investment in an AI solution should be commensurate with its importance, the value a business places on their domain knowledge, and their unique position in the market.
If you do decide to build custom, who owns the IP?
It is commonplace for businesses to own their data and the solutions that are developed for them by third-party data scientists and consultants. Suppliers own the tools they use to develop said solutions. Publicly available data are freely accessible, and purchased solutions are owned by their developers (e.g. Watson is owned by IBM).
Many ML applications are based on language and image processing. How are issues of bias and context being addressed?
Cultural nuances are difficult even for humans to identify. In cases such as sarcasm, individuals often flag its use in text messages or emails. With enough data, however, systems are able to identify these flags and act on the tone being employed. Using a data scientist and a business’ own data sets can also ensure cultural context is accounted for.
Bias has been a well-publicised flaw of AI; only recently a computer program used by US court officials was found to be biased against African-American prisoners. Such instances demonstrate the influence humans have over AI, as the unconscious biases of data scientists present themselves in the solutions they develop. Exhibitions of these inadvertent human biases are helping us to identify, challenge and improve them.
What other challenges should a business be aware of when developing an AI solution? How can they be overcome?
While the technology behind AI may be beyond anything we have come across in the past, its challenges are remarkably similar to those of any other new project a business may take on:
It will take longer than you imagine
Data collection, for example, is often underestimated, but this alone can take years. The key is to educate as you go and manage expectations - don’t undersell any aspect of the process.
Ensure you have a good team
Tech is one piece of the puzzle, but you need a good team to execute it. This doesn’t just pertain to technical skills or understanding, but also stakeholder management and domain knowledge. AI does not occur in a vacuum - investors and key business contacts need to understand and trust the process, and the business’ market advantage needs to be leveraged.
Packaging and testing are crucial
After years of data collection and building the solution, packaging and testing can often be overlooked. Planning and executing these steps correctly is important for assurance - confirming that the solution operates effectively within the context it was intended for.
Keeping your eye on the problem is the key to success throughout any AI project. Languages and tools are important, but the solution is still an answer to a question - be persistent, willing to learn and open to experimentation, and you will be well positioned to find and build it.