In 1920, the Czech playwright Karel Čapek debuted R.U.R.: ‘Rossum’s Universal Robots’, showcasing a new artificial race of manufactured laborers. They were called robots, a word drawn from robota, the Slavic term for the forced labor demanded of feudal peasants of the day.
In a sense, the narrative was set right at the beginning: robots – automatons – will eventually replace human labor, and in doing so will replace those who perform that labor. Today, under the banner of AI, these fears run rampant: luminaries like Stephen Hawking have repeatedly warned about the existential threats posed by AI, and research firms like McKinsey have estimated that roughly 60% of occupations contain activities that could be automated, with hundreds of millions of workers potentially displaced by 2030. AI encroaches on spheres we considered impossible to breach – be it board games like Go, or the writing of novels, or the generation of art. A noted example is AlphaGo, which not only ‘taught itself’ to play a game long considered impossible to master programmatically, but subsequently defeated the world’s top Go players and is now being applied to real-world tasks. The hope of an uncomplaining labor class has been replaced by the fear that AI will ultimately replace humans throughout the job market and make them redundant. [Core: THE 4TH INDUSTRIAL PROLETARIAT].
Adding to the fear is the vagueness of the term ‘AI’ and the difficulty of explaining such models. An algorithm is a series of instructions for solving a problem. It is conventionally considered to be something that can be expressed in a finite number of symbols, in a well-defined, formal language that the agent carrying out the instructions can understand. Knuth imbues it with certain added properties, illustrated in the sketch that follows this list:
- Finiteness. An algorithm must always terminate after a finite number of steps. Similar procedures which differ only in that they do not terminate can be described as computational methods.
- Definiteness. Each step of an algorithm must be precisely defined; the actions to be carried out must be rigorously and unambiguously specified for each case.
- Input. An algorithm has zero or more inputs: quantities that are given to it initially before the algorithm begins, or dynamically as the algorithm runs.
- Output. An algorithm has one or more outputs: quantities that have a specified relation to the inputs.
- Effectiveness. An algorithm is also generally expected to be effective, in the sense that its operations must all be sufficiently basic that they can in principle be done exactly and in a finite length of time by someone using pencil and paper.
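To ground these properties, here is a minimal Python sketch of Euclid’s greatest-common-divisor procedure – the example Knuth himself uses in The Art of Computer Programming – with comments mapping each property onto the code. The annotations and the sample call are ours, added purely for illustration.

```python
def gcd(m: int, n: int) -> int:
    """Euclid's algorithm (Knuth's Algorithm E), annotated with his properties."""
    # Input: two positive integers m and n, supplied before the algorithm begins.
    while n != 0:
        # Definiteness: each step is rigorously and unambiguously specified.
        m, n = n, m % n
    # Finiteness: the remainder strictly decreases, so the loop always terminates.
    # Effectiveness: every operation is basic arithmetic that could, in principle,
    # be carried out exactly with pencil and paper in finite time.
    return m  # Output: the greatest common divisor, a quantity with a specified relation to the inputs.

print(gcd(544, 119))  # 17
```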
Historically, computer algorithms have generally adhered to these principles. Opaqueness existed as a function of business practices and the legal protections around them (e.g., the FICO credit-scoring algorithm), but this opaqueness was legal in nature, not technical.
However, machine learning seeks to build computer programs that learn from experience in order to carry out a task. This is generally done by defining an algorithm that takes input data and generates an intermediate algorithm (in computer science, called a model), which then produces the desired output – a process referred to as training. In doing so, machine learning introduces technical opaqueness into algorithms: the initial training algorithm fulfills the definition above, but the model does not. As DARPA notes, such models are “opaque, non-intuitive and difficult for people to understand.” These properties have been subsumed into the term AI as a whole, leaving the impression of a hegemony of black-box models that are increasingly replacing humans – even in spheres of governance.
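As an illustration of this split, consider the following minimal sketch – a hypothetical example, not drawn from any system named in this report – which trains a tiny linear model by gradient descent. The training procedure satisfies Knuth’s criteria above, while its output, the model, is nothing more than a vector of learned numbers.

```python
# Hypothetical illustration: a transparent training algorithm that produces
# an opaque artifact (the "model"). Names and data are invented for this sketch.
import numpy as np

def train(inputs: np.ndarray, targets: np.ndarray,
          steps: int = 1000, lr: float = 0.1) -> np.ndarray:
    """Gradient descent for a linear model: finite, definite, effective."""
    weights = np.zeros(inputs.shape[1])
    for _ in range(steps):                      # Finiteness: a fixed number of steps.
        errors = inputs @ weights - targets     # Definiteness: exact arithmetic.
        weights -= lr * inputs.T @ errors / len(targets)
    return weights                              # Output: the learned "model".

# Toy data: each target is simply the sum of the two input features.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [0.0, 1.0]])
y = X.sum(axis=1)

model = train(X, y)
print(model)      # ~[1. 1.]: just numbers, with no rationale attached
print(X @ model)  # predictions close to y
```

The opacity described above emerges at scale: a modern deep-learning model is the same kind of artifact, but with millions or billions of such parameters and no accompanying human-readable explanation of why any particular value is what it is.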
There is a counternarrative, however weak that signal may be. Alex Bates, the author of Augmented Mind: AI, Superhumans, and the Next Economic Revolution, argues that despite all the progress made in deep learning in recent years, even today’s cutting-edge AI systems perform nothing more than sophisticated pattern matching. He posits that AI systems and humans have distinctly different but complementary strengths: AI can sift through mountains of data, perform billions of repetitive calculations and identify the patterns embedded in them, while humans can make intuitive leaps, meld ideas in unexpected ways and draw on interpersonal skills such as empathy and common sense for contextualization. Bates argues that if AI and humans tap into their unique strengths and work together in a symbiotic relationship, the result could be unique breakthroughs.
Such harmonies have already been achieved in various fields. In chess, long accustomed to being beaten by AI, the practice of Advanced Chess pairs humans and engines to produce collaborative grandmasters who top the charts. In Japanese literature, AI has been used to co-write a novel that placed prominently in a famous literary competition. Closer to home, one of the authors of this Megatrends project has trained an AI to write poetry that he then edits, creating an Instagram poet persona that generates large amounts of complex written work indistinguishable from that of a human poet. Research by the Harvard Business Review, spanning 1,500 companies, found that the most significant performance improvements come when humans and machines work together.
A recent report published by Cognizant, a professional services company, shows that Asia Pacific is already benefiting from the human + AI approach. Hong Leong Bank of Malaysia is using AI to help its associates identify the emotions of its customers, enabling them to deliver better service. Tokyo Electric Power Company Holdings Inc. (TEPCO) in Japan has incorporated AI for continuous inspection and predictive maintenance, freeing up engineers for more value-adding work. In manufacturing, the Korean company Hyundai has developed an industrial exoskeleton: a robotic wearable device that adapts to the user and location in real time, enabling the worker to perform a given job with superhuman endurance and strength.
These signals may very well rewrite the fear-based narrative of AI. According to research by UBS, a global wealth management company, the economic value created by artificial intelligence in the region will be between USD 1.8 trillion and USD 3 trillion by 2030. It is inevitable that the APAC region will tap into this value; but the combination of lessons learned from the West and the new AI policies now being developed opens a window for a collaborative future – one that may require radical rethinking of what work will look like in every domain, but also one that can embrace the 4th Industrial Revolution rather than fear it.
This report has been written by Yudhanjaya Wijeratne, Merl Chandana, Sriganesh Lokanathan and Shazna Zuhyle of LIRNEasia with commissioning by the UNDP Regional Innovation Centre (RIC) as an exploratory and intellectual analysis; the views and opinions published in this work are those of the authors and do not necessarily reflect or represent the official position or policy of the RIC, United Nations Development Programme or any United Nations agency or UN Member States.