The building blocks of digital technology are information theory (which encodes content in binary 0/1 format) and the transistor (essentially an on/off switch). Both were invented at the American research and development company Bell Labs during its heyday in the decade following WWII. Since then, each new and improved wave of digitisation has caused upheaval as it reaches particular markets and occupations. From the perspective of the whole production and consumption system, however, progress has been relatively slow and staggered compared with what we are likely to see in the future.
In the 1950s, computers at even the most advanced tech locations in the US filled two-storey buildings and performed only highly specialised, limited functions. It was not until the 1980s – when smaller mainframes became cheap and fast enough to take over routine operations – that digital technology effectively eliminated the labour market for clerical workers.
Automation, robots and digitally guided technologies started making inroads into manufacturing around this time. Although satellites have been used since the 1960s to provide market intelligence for producers (giving US farmers advice on which crops their competitors were growing, and in what quantities, for example), it took until the 2010s for satellite-aided location services to become ubiquitous and part of consumers’ daily lives.
More and more, digital disruption is being triggered by innovative software, such as travel search engines and language translation services, rather than hardware. Because software can be scaled up to mass distribution much faster than hardware can be manufactured, this shift accelerates the pace of disruption.
One form of software playing an increasingly important role is a type of artificial intelligence called ‘machine learning’. Computers are governed by algorithms made up of many rules of the form “if X, then do Y”. These rules are usually set by the programmers who wrote the algorithm’s code. Machine learning algorithms are different: such an algorithm can ‘learn’ from data by altering its own parameters, progressively improving its ability to detect patterns or predict future trends in the data (analogous to the way our brains learn from past experience).
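This idea of a program altering its own parameters can be sketched in a few lines of Python. The example below is our own minimal illustration, not any production system: the model starts with an arbitrary parameter and repeatedly nudges it to reduce its prediction error, gradually recovering the hidden rule (here, y = 2x) from the data alone.

```python
# Training data generated by a hidden rule, y = 2 * x.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]

w = 0.0             # the model's single adjustable parameter
learning_rate = 0.01

for _ in range(1000):
    for x, y in data:
        prediction = w * x
        error = prediction - y
        # Nudge the parameter in the direction that shrinks the error.
        w -= learning_rate * error * x

print(round(w, 3))  # converges close to 2.0
```

No one tells the program the rule is “multiply by two”; it infers that by trial and adjustment, which is the essence of learning from data rather than following hand-written rules.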
For example, machine learning algorithms have been used for the past two decades in spam filters. When we label emails as spam, we are generating a labelled dataset that can be used to train a machine learning algorithm to recognise the properties of emails that are usually associated with spam. The trained algorithm can then remove such emails automatically.
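The training step described above can be illustrated with a toy spam filter (a simplified naive Bayes approach of our own devising; the emails and words are invented). It counts how often each word appears in emails a user has labelled spam or legitimate (“ham”), then classifies a new email according to which label’s word frequencies best explain it.

```python
import math
from collections import Counter

# Tiny labelled dataset of the kind users create by marking emails as spam.
training = [
    ("win money now claim prize", "spam"),
    ("free prize win win click", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on monday with the team", "ham"),
]

# Count how often each word appears under each label.
word_counts = {"spam": Counter(), "ham": Counter()}
totals = {"spam": 0, "ham": 0}
for text, label in training:
    for word in text.split():
        word_counts[label][word] += 1
        totals[label] += 1

def classify(text):
    """Pick the label whose word frequencies best explain the email."""
    scores = {}
    for label in ("spam", "ham"):
        score = 0.0
        for word in text.split():
            # Add-one smoothing so unseen words don't zero out the score.
            p = (word_counts[label][word] + 1) / (totals[label] + 1)
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("claim your free prize"))   # likely "spam"
print(classify("agenda for the meeting"))  # likely "ham"
```

Every time a user flags another email, the counts update and the filter improves – which is why the labelled dataset generated by millions of users is so valuable for training.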
Machine learning has even begun transforming some of the oldest professions, such as medicine and the law, hitherto considered the preserve of nuanced interpretation and experiential know-how. The law has long resisted automation by computers and digital analytics, in part because of the non-routine nature of contracts and litigation. This is now changing, as machine learning methods partially automate tasks by detecting patterns and inferring rules from data.
eDiscovery is one such digital tool, used to assist lawyers in searching through emails and piles of office documents for the evidence needed to clinch a case (the proverbial needle in a haystack). Machine learning can transform the eDiscovery process by efficiently grouping similar documents based on their contents and metadata. Brainspace provides lawyers with an eDiscovery tool that increases the efficiency and accuracy of finding information pertinent to a court case. Alternatively, ROSS, a machine learning law tool, can answer legal research questions posed in natural language and monitor recent legal developments relevant to a particular case.
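The document-grouping step at the heart of such tools can be illustrated simply (our own sketch, not the method used by any particular product): represent each document as a bag of word counts, then measure the cosine similarity between those counts, so documents about the same subject score much higher than unrelated ones.

```python
import math
from collections import Counter

def vectorise(text):
    """Represent a document as a bag of word counts."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """1.0 for identical word distributions, 0.0 for no shared words."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

# Invented example documents.
docs = {
    "memo1": "merger contract signed by both parties",
    "memo2": "the contract for the merger was signed",
    "memo3": "office party catering order for friday",
}

vecs = {name: vectorise(text) for name, text in docs.items()}
# Documents on the same subject score far higher than unrelated ones.
print(cosine_similarity(vecs["memo1"], vecs["memo2"]))
print(cosine_similarity(vecs["memo1"], vecs["memo3"]))
```

Ranking millions of documents by similarity to a known piece of evidence lets a lawyer review the most promising candidates first, which is where the time savings come from.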
In medicine, machine learning algorithms are increasingly being used to help perform radiological diagnoses. They can be trained to classify medical scans as normal or diseased, or to quantify the size of diseased areas. In the area of brain cancer, Microsoft’s InnerEye research project has been investigating the use of an image analysis tool to measure the size of brain tumours.
As these machine learning methods save lawyers’ and medicos’ time, their labour productivity will rise, the content of their work will shift markedly, and demand for lawyers and medicos may fall. Handled sensibly by governments, this reduced demand will release workers for other occupations in, for example, the creative, scientific and caring industries.
Professor Beth Webster
Pro Vice-Chancellor of Swinburne University (Research Policy and Impact) and Director, Centre for Transformative Innovation
Dr Stephen Petrie, Data Scientist, Centre for Transformative Innovation
Mitchell Adams, Research Centre Manager, Centre for Transformative Innovation