
Beat the News with digital footprints

Every day we produce an almost unfathomable amount of data. Posting on Twitter, Facebook, Instagram and YouTube. Commenting in chat rooms; blogging; swapping stock tips and decorating hacks in niche forums. We broadcast what we’re eating, feeling and doing from our GPS-equipped smartphones, sharing maps of our runs, photos from shows, and news that gets us cranky or inspired.

The details of our passing moods are all there, creating a vital if virtual public pulse.

Dr Brenton Cooper’s Data to Decisions (D2D) CRC team checks this pulse and, by extracting signals from our collective digital footprint, shows where we’re going next.

Are we gearing up to strike? Or celebrate? Is disease spreading? What effect will an interest rate hike have? Are we about to toss out the government, or move money out of the market?

Whatever the social disruption, D2D CRC’s Beat The News™ forecasting system can issue a warning – before it happens. In March 2016, it accurately forecast the impact of an anti-coal port protest in Newcastle, NSW. That May, no ships could move during the protest blockade, which cost an estimated $20 million.


“This warning system tells you what might happen, when it will happen and why.”


Social media monitoring is already a billion-dollar industry, and Cooper, who is D2D CRC’s Chief Technology Officer, knows “there are plenty of tools that help you understand what’s happening right now. But this tells you what might happen, when it will happen and why.”

This sort of heads-up will be invaluable. D2D CRC’s first collaborators are Australia’s defence and national security agencies, whose analysts now have a Beat The News™ dashboard that sifts through about two billion data points a day.

“These are people paid to understand the political climate, but they can’t read everything,” explains Cooper. “That’s where machine-enablement certainly helps.”

Maybe the agencies are watching Indonesian politics and want to know if there might be some unrest in the capital, Jakarta. Beat The News™ analyses a huge volume of open-source information, combining structured and unstructured data from a wide range of sources. It geo-locates posts, extracts key words, topics, trends and hashtags, and measures sentiment.

“Once we’ve done those types of data enrichments, we then pump it through a variety of models,” says Cooper, “to automatically and accurately predict the answer.”
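
The CRC has not published the internals of Beat The News™, but the enrich-then-predict shape Cooper describes can be sketched in a few lines of Python. Everything below – the feature choices, the keyword list and the scoring rule – is an illustrative assumption, not the actual system; a production pipeline would replace the keyword count with trained sentiment, topic and event models run over billions of posts.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    latitude: float
    longitude: float
    timestamp: str

# Toy keyword list standing in for real sentiment/topic models.
UNREST_WORDS = {"protest", "strike", "blockade", "boycott"}

def enrich(post: Post) -> dict:
    """Derive simple features from one post: hashtags, unrest keywords, location."""
    words = [w.strip(".,!?").lower() for w in post.text.split()]
    hashtags = [w for w in words if w.startswith("#")]
    unrest_hits = sum(w.lstrip("#") in UNREST_WORDS for w in words)
    return {"hashtags": hashtags, "unrest_hits": unrest_hits,
            "location": (post.latitude, post.longitude)}

def unrest_score(posts: list) -> float:
    """Toy 'model': average number of unrest keywords per post."""
    if not posts:
        return 0.0
    features = [enrich(p) for p in posts]
    return sum(f["unrest_hits"] for f in features) / len(features)

posts = [
    Post("Join the port #blockade this weekend #protest", -32.93, 151.78, "2016-03-01"),
    Post("Lovely morning for a run along the harbour", -32.93, 151.78, "2016-03-01"),
]
print(unrest_score(posts))  # higher scores flag more unrest-related chatter
```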

The potential applications are many, so the CRC recently trademarked Fivecast™ – “as in forecast, only one better,” says Cooper – to take the system to market, whether through a spin-off company, licensing to a partner, or licensing the IP to a third party.

US company Dataminr has raised more than US$130 million from investors for its real-time analytics, but Cooper says Fivecast™ will offer a further capability – event prediction – making it, he says, the only predictive geopolitical risk analytics platform on the market. Corporate risk consultancies are already interested. Their clients include global mall conglomerates alert to anything that might stop people enjoying their shopping.

Find out more about Beat The News™ at d2dcrc.com.au

– Lauren Martin

You might also enjoy ‘Disrupting Terrorism and Crime’. Sanjay Mazumdar, CEO of the Data to Decisions CRC (D2D CRC), takes a look at what the national security sector can learn from Big Data disruption.

Top stories of the year

Featured image above: AI progress makes history – #2 of the top stories in STEM from 2016.

1. New way to cut up DNA

On October 28, a team of Chinese scientists made history when they injected the first adult human with cells genetically modified via CRISPR, a low-cost DNA editing mechanism.

Part of a clinical trial to treat lung cancer, this application of CRISPR is expected to be the first of many in the global fight against poor health and disease. 

2. AI reads scientific papers, distils findings, plays Go

Artificially intelligent systems soared to new heights in 2016, taking AI to number 2 on our list of top stories. A company called Iris created a new AI system able to read scientific papers, understand their core concepts and find other papers offering relevant information.

In the gaming arena, Google DeepMind’s AlphaGo program became the first AI system to beat a world champion, Lee Se-dol, at the board game Go. Invented in China, Go is thought to be at least 2,500 years old. It offers so many possible moves that, until this year, human intuition prevailed over raw computing power in finding winning strategies.

3. Scientists find the missing link in evolution

For a long time, the mechanism by which organisms evolved from single cells to multicellular entities remained a mystery. This year, researchers pinpointed a molecule called GK-PID, which underwent a critical mutation some 800 million years ago.

With this single mutation, GK-PID gained the ability to string chromosomes together in a way that allowed cells to divide without becoming cancerous – a fundamental enabler for the evolution of all modern life. GK-PID remains vital to successful tissue growth in animals today. 

4. Data can be stored for 13.8 billion years

All technology is subject to degradation from environmental influences, including heat. This means that until recently, humans had been without any form of truly long-term data storage.

Scientists from the University of Southampton made the top stories of 2016 when they developed a disc that can theoretically survive for longer than the universe has been in existence. Made of nano-structured glass, with the capacity to hold 360TB of data, and stable up to 1,000°C, the disc could survive for over 13.8 billion years. 

5. Mass coral bleaching of the Great Barrier Reef

The most severe bleaching ever recorded on the Great Barrier Reef occurred this year. Heavy loss of coral occurred across a 700km stretch of the northern reef, which had previously been the most pristine area of the 2,300km World Heritage site.

North of Port Douglas, an average of 67% of shallow-water corals became bleached in 2016. Scientists blame sea temperature rise, which was sharpest in the early months of the year, and which resulted in a devastating loss of algae that corals rely on for food. 

6. Climate protocol ratified – but Stephen Hawking warns it may be too late

On 4 November 2016, the Paris Agreement came into force. An international initiative to reduce greenhouse gas emissions and control climate change, the Paris Agreement required ratification by at least 55 countries representing 55% of global emissions in order to become operational.

So far 117 countries have joined the cause, with Australia among them. But some of the world’s greatest minds, including Stephen Hawking, believe time is running out if the human race is to preserve its planet. 

7. Young people kick some serious science goals

A group of high schoolers from Sydney Grammar succeeded in recreating a vital drug used to treat deadly parasites, for a fraction of the market price.

The drug, known as Daraprim, has been available for 63 years and is used to treat parasitic infections such as toxoplasmosis and malaria, including in patients whose immune systems are weakened by HIV. There was public outcry in September 2015 when Turing Pharmaceuticals raised the price of the drug from US$13.50 to US$750 per tablet.

In collaboration with the University of Sydney and the Open Source Malaria Consortium, a year 11 class at Sydney Grammar created the drug at a cost of only $2 per dose, and made their work freely available online.

8. Gravitational waves detected

A key prediction of Albert Einstein’s general theory of relativity was confirmed in February, when scientists announced the first direct observation of gravitational waves – ripples in space and time.

The detected waves were produced when two black holes merged into a single, much larger black hole. Gravitational waves carry important information about their origins, and about gravity itself, that helps physicists better understand the universe.

The gravitational waves were observed by the twin Laser Interferometer Gravitational-wave Observatory (LIGO) detectors in Louisiana and Washington state. Australian scientists helped to build some of the instruments used in their detection.

9. Moving away from chemotherapy

Researchers at University College London made a leap forward in cancer treatment when they found a way to identify cancer markers present across all cells that have grown and mutated from a primary tumour. They also succeeded in identifying immune cells able to recognise these markers and destroy the cancerous cells.

This breakthrough opens the door not only for better immuno-oncology treatments to replace the toxic drugs involved in chemotherapy, but also for the development of personalised treatments that are more effective for each individual.

10. New prime number discovered

The seventh largest prime number ever found was discovered in November. Over 9.3 million digits long, the number 10223 × 2^31172165 + 1 was identified by researchers who borrowed the computing power of thousands of collaborators around the world to search through possibilities, via a platform called PrimeGrid.

This discovery also takes mathematicians one step closer to solving the Sierpinski problem, which asks for the smallest positive odd number k such that k × 2^n + 1 is composite (that is, not prime) for every positive integer n. With this prime ruling out k = 10223, only five candidate values for the smallest Sierpinski number remain.
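
In standard notation, the new prime and the property that defines a Sierpinski number can be written out as follows; this simply restates the paragraph above in symbols.

```latex
% The prime reported above (more than 9.3 million digits):
\[ 10223 \times 2^{31172165} + 1 \]
% A positive odd k is a Sierpinski number when
\[ k \times 2^{n} + 1 \ \text{is composite for every integer } n \ge 1. \]
% The new prime shows k = 10223 is not a Sierpinski number, leaving five
% candidates below 78557, the conjectured smallest Sierpinski number.
```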

– Heather Catchpole & Elise Roberts

If you enjoyed this article on the top stories of the year, you might also enjoy:

Gravity waves hello

Have a story we missed? Contact us to let us know your picks of the top stories in STEM in 2016.

Honeybee health: a #dataimpact story

Featured image above: Environmental stressors which alter bee pollination, like extreme weather and pesticides, are assessed using large data sets generated by bees from all over the world via fitted micro-sensor ‘backpacks’. Credit: Giorgio Venturieri

Bee colonies are dying out worldwide and nobody is exactly sure why. The most obvious culprit is the Varroa mite which feeds on bees and bee larvae, while also spreading disease. The only country without the Varroa mite is Australia. However, experts believe that there are many factors affecting bee health.

To unravel this, CSIRO is leading the Global Initiative of Honeybee Health (GIHH) in gathering large sets of data on bee hives from all over the world. High-tech micro-sensor ’backpacks’ are fitted to bees to log their movements, similar to an e-tag. The data from individual bees is sent back to a small computer at the hive.

Researchers are able to analyse this data to assess which stressors – such as extreme weather, pesticides or water contamination – affect the movements and pollination of bees.

Maintaining honey bee populations is essential for food security as well as for securing economic returns from crops. Bee crop pollination is estimated to be worth up to $6 billion to Australian agriculture alone.

Currently 50,000 bees have been tagged, and there may be close to one million by the end of 2017. Researchers aim not only to improve the health of honey bees but also to increase crop sustainability and productivity through pollination management.

This article was first published by the Australian National Data Service on 10 October 2016. Read the original article here.

You might also enjoy:

Birth defects: a data discovery

 

Birth defects: a data discovery

Professor Fiona Stanley is well known for her work in using biostatistics to research the causes and prevention of birth defects, including establishing the WA Maternal and Child Health Research Database in 1977.

In 1989 Professor Stanley and colleague Professor Carol Bower used another database, the WA birth defects register, to source subjects for a study of neural tube defects (NTDs). The neural tube is what forms the brain and spine in a baby. Development issues can lead to common but incurable birth defects, such as spina bifida, where the backbone does not close properly over the spinal cord.

The researchers measured the folate intake of 308 mothers of children born with NTDs, other defects, and no defects. They discovered that mothers who take the vitamin folate during pregnancy are less likely to have babies with NTDs. Their data contributed to worldwide research that found folate can reduce the likelihood of NTDs by 70%.

After the discovery Professor Stanley established the Telethon Kids Institute where she continued to research this topic alongside Professor Bower. Together they worked on education campaigns to encourage pregnant women to take folate supplements.

Their great success came in 2009 when the Australian government implemented mandatory folic acid fortification of flour. The need for such legislation is now recognised by the World Health Organisation.

A 2016 review conducted by the Australian Institute of Health and Welfare found that since the flour fortification program’s introduction, levels of NTDs have dropped by 14.4%.

– Cherese Sonkkila

This article was first published by the Australian National Data Service on 12 September 2016. Read the original article here.

Read next: Big data, big business.  Whether it’s using pigeons to help monitor air quality in London or designing umbrellas that can predict if it will rain, information is becoming a must-have asset for innovative businesses.

Australian research funding infographic

Featured image above: CSIRO has received significant budget cuts in recent years. Credit: David McClenaghan

The election is rapidly approaching, and all major parties – Liberal, Labor and Greens – have now made announcements about their policies to support science and research.

But how are we doing so far? Here we look at the state of science and research funding in Australia so you can better appreciate the policies each party has announced.

The latest OECD figures show that Australia does not fare well compared with other OECD countries on federal government funding of research and development.

As a percentage of GDP, the government only spends 0.4% on research and development. This is less than comparable nations.


But looking at total country spending on research and development, including funding by state governments and the private sector, the picture is not so bleak: here Australia sits in the middle among OECD countries.


Over the years, there have been hundreds of announcements and new initiatives but this graph indicates that, in general, it has been a matter of rearranging the deck chairs rather than committing to strategic investments in research.

The Paul Keating Labor government made some investments. During the John Howard Liberal government’s years, there were ups and downs. The Kevin Rudd/Julia Gillard Labor governments were mostly up. And under Tony Abbott’s Liberal government, the graph suggests that science funding was mostly down.


Over the past decade, there have been some minor changes in funding to various areas, although energy has received the greatest proportional increase.


This pie chart reminds us that the higher education sector is a major provider of research and is highly dependent on government funding. It also tells us that business conducts a great deal of research.


The timeline below shows that the government does listen and respond when issues arise. Through different initiatives, it has recognised the importance of the National Collaborative Research Infrastructure Scheme (NCRIS), the Australian Synchrotron and sustainable medical research funding.

But, sadly, one must remember that funding is effectively being shifted from one domain to another, and it has seldom been the case that significantly new commitments are made. The balance of red and blue shows how one hand gives while the other takes funding away.


This useful graph highlights the fact that Australian Research Council (ARC) funding now amounts to little more than the National Health and Medical Research Council’s funding.

This is remarkable, given that the ARC funds all disciplines, including sciences, humanities and social sciences, while the NHMRC essentially focuses on human biology and health.


This graphic also highlights the lack of any sustained funding strategy. The only clear trend is that the investment in the ARC has gradually declined and the NHMRC has grown.

This, in part, reflects the undeniable importance of health research. But it is also indicative of effective and coherent organisation and communication by health researchers. This has been more difficult to achieve in the ARC space with researchers coming from a vast array of disciplines.

– Merlin Crossley, Deputy Vice-Chancellor Education and Professor of Molecular Biology, UNSW Australia
– Les Field, Secretary for Science Policy at the Australian Academy of Science, and Senior Deputy Vice-Chancellor, UNSW Australia
This article was first published by The Conversation on 22 June 2016. Read the original article here.

Biobank speeds autism diagnosis

The Autism CRC is building Australia’s first Autism Biobank, with the aim of diagnosing autism earlier and more accurately using genetic markers. Identifying children at high risk of developing autism at 12 months of age was “a bit of a holy grail”, says Telethon Kids Institute’s head of autism research Professor Andrew Whitehouse, who will be leading the Biobank. Researchers think the period between 12–24 months of age is “a key moment” in brain development, he adds.

Professor Andrew Whitehouse, Head of the Developmental Disorders Research Group at the Telethon Kids Institute

As with other neurodevelopmental disorders, a diagnosis of autism is based on certain behaviours, but these only begin to manifest at a diagnosable level between the ages of two and five. Whitehouse says while there are great opportunities for therapy at these ages, researchers believe an earlier diagnosis will make the therapy programs more effective. Some 12-month-old children already exhibit behaviours associated with the risk of developing autism, for example not responding to their name, but currently doctors can’t conclusively diagnose autism at this early age.

“If we can start our therapies at 12 months, we firmly believe they’ll be more effective and we can help more kids reach their full potential,” says Whitehouse.

The biology of autism varies greatly between individuals, and it appears a combination of environmental factors and genes are involved – up to 100 genes may play a role in its development. Studying large groups of people is the only way to get a full understanding of autism and potentially identify genes of importance.

To do this, the Biobank collects DNA samples from 1200 families with a history of autism – children with autism aged 2–17 years old, who are recruited through therapy service providers, and their parents – as well as samples from control families who do not have a history of autism.

DNA samples are taken at the Telethon Kids Institute and sent to the ABB Wesley Medical Research Tissue Bank to be analysed for genetic biomarkers. Credit: Telethon Kids Institute

The samples are then shipped to the ABB Wesley Medical Research Tissue Bank in Brisbane for the Biobank’s creation. Here, they are analysed for genetic biomarkers using genome-wide sequencing – determining DNA sequences at various points along the genome that are known to be important in human development. Whitehouse says they are also planning to conduct metabolomic and microbiomic analyses on urine and faeces.

“It’s the biggest research effort into autism ever conducted in Australia,” he says.

The goal is to use the results to develop a genetic test that can be conducted with 12-month-old children who are showing signs of autism. The samples will also be stored at the Biobank for future research.

The aim is to expand internationally, so that researchers can exchange data with teams around the globe who are doing similar work, thus increasing the sample size.

– Laura Boness

If your child has been diagnosed with autism and you would like to find out about participating in the Autism CRC Biobank, click here.

www.autismcrc.com.au

Algorithms making social decisions

In this Up Close podcast episode, informatics researcher Professor Paul Dourish explains how algorithms, as more than mere technical objects, guide our social lives and organisation, and are themselves evolving products of human social actions.

“Lurking within algorithms, unknown to people and certainly not by design, can be all sorts of unconscious biases and discriminations that don’t necessarily reflect what we as a society want.”

The podcast is presented by Dr Andi Horvath. Listen to the episode below, or read on for the full podcast transcript.

Professor Paul Dourish

Dourish is a Professor of Informatics in the Donald Bren School of Information and Computer Sciences at University of California, Irvine, with courtesy appointments in Computer Science and Anthropology.

His research focuses primarily on understanding information technology as a site of social and cultural production; his work combines topics in human-computer interaction, social informatics, and science and technology studies.

He is the author, with Genevieve Bell, of Divining a Digital Future: Mess and Mythology in Ubiquitous Computing (MIT Press, 2011), which examines the social and cultural aspects of the ubiquitous computing research program. He is a Fellow of the Association for Computing Machinery (ACM), a member of the Special Interest Group for Computer-Human Interaction  (SIGCHI) Academy, and a recipient of the American Medical Informatics Association (AMIA) Diana Forsythe Award and the Computer Supported Co-operative Work (CSCW) Lasting Impact Award.

Podcast Transcript

VOICEOVER: This is Up Close, the research talk show from the University of Melbourne, Australia.

HORVATH: I’m Dr Andi Horvath. Thanks for joining us. Today we bring you Up Close to one of the very things that shapes our modern lives. No, not the technology as such, but what works in the background to drive it: the algorithm, the formalised set of rules governing how our technology is meant to behave.

As we’ll hear, algorithms both enable us to use technology and to be used by it. Algorithms are designed by humans and just like the underpinnings of other technologies, say like drugs, we don’t always know exactly how they work. They serve a function but they can have side-effects and unexpectedly interact with other things with curious or disastrous results.

Today, machine learning means that algorithms are interacting with, or developing other algorithms, without human input. So how is it that they can have a life of their own? To take us Up Close to the elusive world of algorithms is our guest, Paul Dourish, a Professor of Informatics in the Donald Bren School of Information and Computer Science at UC Irvine. Paul has written extensively on the intersection of computer science and social science and is in Melbourne as a visiting Miegunyah Fellow. Hello, and welcome to Up Close.

DOURISH: Morning, it’s great to be here.

HORVATH: Paul, let’s start with the term algorithm. We hear it regularly in the media and it’s even in product marketing, but I suspect few of us really know what the word refers to. So let’s get this definition out of the way: what is an algorithm?

DOURISH: Well, it is a pretty abstract concept so it’s not surprising if people aren’t terribly familiar with it. An algorithm is really just a way of going about doing something, a set of instructions or a series of steps you’ll go through in order to produce some kind of computational result. So for instance, you know, when we were at school we all learned how to do long multiplication and the way we teach kids to do multiplication, well that’s an algorithm. It’s a series of steps that you can go through and you can guarantee that you’re going to get a certain kind of result. So algorithms then get employed in computational systems, in computer systems to produce the functions that we want.

HORVATH: Where do we find algorithms? If I thought about algorithm-spotting say on the way to work, where do we actually encounter them?

DOURISH: Well, if you were to take the train, for instance, algorithms might be controlling the rate at which trains arrive and depart from stations to try to manage a stable flow of passengers through a transit system. If you were to Google something in the morning to look up something that you were going to do or perhaps to prepare for this interview, well an algorithm not only found the information for you on the internet, but it was also used to sort those search results and decide which one was the one to present to you at the top of the list and which one was perhaps going to come further down. So algorithms are things that lie behind the operation of computer systems; sometimes those are computer systems we are using directly and sometimes they are computer systems that are used to produce the effects that we all see in the world like for instance, the flow of traffic.

HORVATH: So Paul, we use algorithms every day in everything, whether it’s work, rest, play, but are we also being used by algorithms?

DOURISH: Well, I guess there’s a couple of ways we could think about that. One is that we all produce data; the things that we do produce data that get used by algorithms. If we want to think about an algorithm for instance that controls the traffic lights and to manage the flow of people through the streets of Melbourne, well, the flow of people through the streets of Melbourne is also the data upon which that algorithm is working. So we’re being used by algorithms in the sense perhaps that we’re all producing the data that the algorithm needs to get its job done.

But I think there’s also a number of ways in which we might start to think that we get enrolled in the processes and effects of algorithms, so if corporations and government agencies and other sorts of people are making use of algorithms to produce effects for us, then our lives are certainly influenced by those algorithms and by the kinds of ways that they structure our interaction with the digital world.

HORVATH: So algorithms that are responsible for say datasets or computational use, the people who create them are quite important. Who actually creates these algorithms? Are they created by governments or commerce?

DOURISH: They can be produced in all sorts of different kinds of places and if you were in Silicon Valley and you were the sort of person who had a brand new algorithm, you might also be the sort of person who would have a brand new start-up. By and large, algorithms are produced by computer scientists, mathematicians and engineers.

Many algorithms are fundamentally mathematical at their heart and one of the ways in which computer scientists are interested in algorithms is to be able to do mathematical analysis on the kinds of things that computers might do and the sort of performance that they might have. But computer scientists are also generally in the business of figuring out ways to do things and that means basically producing algorithms.

HORVATH: One of the reasons we hear algorithms a lot these days is because they’ve caused problems, or at least confusion. Can you give us some tangible examples of where that’s happened?

DOURISH: Sure. Well, I think we see a whole lot of those and they turn up in the paper from time to time, and some are kind of like trivial and amusing and some have serious consequences. From the trivial side and the amusing side we see algorithms that engage in classification, which is an important area for algorithmic processing, and classifications that go wrong, places where an algorithm decides that because you bought one product you are interested in a particular class of things and it starts suggesting all these things to you.

I had a case with my television once where it had decided because my partner was recording Rocky and Bullwinkle, which is an old 1970s cartoon series from America featuring a moose and a squirrel, that I must be interested in a hunting program so it started recording hunting shows for me. So although they’re silly, they begin to show the way that algorithms have a role.

The more serious ones though are ones that begin to affect commerce and political life. A famous case in 2010 was what was called the flash crash, a situation in which the US stock market lost then suddenly regained a huge amount of value, about 10 per cent of the value of their system, all within half an hour, and nobody really knew why it happened. It turned out instead of human beings buying and trading shares, it was actually algorithms buying and trading shares. The two algorithms were sort of locked in a loop, one trying to offer them for sale and one trying to buy them up, and suddenly it spiralled out of control. So these algorithms, because they sort of play a role in so many different systems and appear in so many different places, can have these big impacts and in even those small trivial cases or ones that begin to alert us or tune us to where the algorithms might be.

HORVATH: Tell us about privacy issues; that must be something that algorithms don’t necessarily take seriously.

DOURISH: Well, of course the algorithm works with whatever data it has to hand, and data systems may be more or less anonymised, they may be more or less private. One of the interesting problems perhaps is that the algorithm begins to reveal things that you didn’t necessarily know that your data might reveal.

For example, I might be very careful about being tracked by my phone. You know, I choose to turn off those things that say for instance where my home is, but if an algorithm can detect that I tend to be always at the same place at 11 o’clock at night or my phone is always at the same place at 11 o’clock at night and that’s where I start my commute to work in the morning, then those patterns begin to build up and there can be privacy concerns there. So algorithms begin to identify patterns in data and we don’t necessarily know what those patterns are, nor are we provided necessarily with the opportunity to audit, control, review or erase data. So that’s where the privacy aspects begin to become significant.

HORVATH: Is there an upsurge about societal concerns about algorithms? Really, I’m asking you the question, why should we care about algorithms? Do we need to take these more seriously?

DOURISH: I think people are beginning to pay attention to the ways in which there can be potentially deleterious social effects. I don’t want to sit here simply saying that algorithms are dangerous and we need to be careful, but on the other hand there is this fundamental question about knowing what it is the algorithm is doing and being conscious of its fairness.

On the trivial side, there is an issue that arose around the algorithm in digital cameras to detect faces, when you want to focus on the face. It turned out after a while that the algorithms in certain phones looked predominantly for white faces but were actually very bad at detecting black faces. Now, those kinds of bias aren’t very visible to us, as the camera just doesn’t work. Those are perhaps where as a society we need to start thinking about what is being done for us by algorithms, because lurking within those algorithms, unknown to people and certainly not by design, can be all sorts of unconscious biases and discriminations that don’t necessarily reflect what we as a society want.

HORVATH: Are we being replaced by algorithms? Is this something that’s threatening jobs as we know it?

DOURISH: Well, I certainly see plenty of cases where people are concerned about that and talk about it, and there’s been some in the press in the last couple of years that talk for instance about algorithms taking over HR jobs in human resources, interviewing people for jobs or matching people for jobs. By and large though, lots of these algorithms are being used to supplement and augment what people are doing. I don’t think we’ve seen really large-scale cases so far of people being replaced by algorithms, although it’s certainly a threat that employers and others can hold over people.

HORVATH: Sure. Draw the connection for us between algorithms and this emerging concept of big data.

DOURISH: Well, you can’t really talk about one without the other; they go together so seamlessly. Actually, one of the reasons that I’ve been talking about algorithms lately is precisely because there’s so much talk about big data just now. The algorithms and the data go together. The data provides the raw material that the algorithm processes and the algorithm is generally what makes sense of that data.

We talk about big data not least in terms of this idea of being able to capture and collect, to get information from all sorts of sensors, from all sorts of things about the world, but it’s the algorithm that then comes in and makes sense of that data, that identifies patterns and things that we think are useful or interesting or important. I might have a large collection of data that tells me what everybody in Victoria has purchased in the supermarket for the last month, but it’s an algorithm that’s going to be able to identify within that dataset well, here are dual income families in Geelong or the sort of person who’s interested in some particular kind of product and amenable to a particular kind of marketing. So they always go together; you never have one without the other.

HORVATH: But surely there are problems in interpretation and things get lost in translation.

DOURISH: That’s a really interesting part of the whole equation here. It’s generally human beings have to do the interpretation; the algorithm can identify a cluster. It can say, look, these people are all like each other but it tends to be a human being who comes along and says now, what is it that makes those people like each other? Oh, it’s because they are dual income families in Geelong. There’s always a human in the loop here. Actually, the problem that we occasionally encounter, and it’s like that problem of inappropriate classification that I mentioned earlier, the problem is that often we think we know what the clusters are that an algorithm has identified until an example comes along that shows oh, that wasn’t what it was at all. So the indeterminacy of both the data processing part and the human interpretation is where a lot of the slippage can occur.

HORVATH: I’m Andi Horvath and you’re listening to Up Close. In this episode, we’re talking about the nature and consequences of algorithms with informatics expert Paul Dourish. Paul, given that algorithms are a formalised set of instructions, can’t they simply be written in English or any other human language?

DOURISH: Well, algorithms certainly are often written in English. There’s all sorts of ways in which we write them down. Sometimes they are mathematical equations that live on a whiteboard. They often take the form of what computer scientists call pseudo-code, which looks like computer code but isn’t actually executable by a computer, and sometimes they are in plain English. I used the example earlier of the algorithm that we teach to children for how to do multiplication; well, that was explained to them in plain English. So they can take all sorts of different forms. Really, that’s some of the difficulty about the notion of algorithm is this very abstract idea and it can be realised in many different kinds of ways.

HORVATH: So the difference between algorithms and codes and pseudo-codes are different forms of abstraction?

DOURISH: In a way, yes. Computer code is the stuff that we write that actually makes computers do things, and the algorithm is a rough description of what that code might be like. Real programs are written in specific programming languages. You might have heard of C++ or Java or Python, these are programming languages that people use to produce running computer systems. The pseudo-code is a way of expressing the algorithm that’s independent of any particular programming language. So if I have a great algorithm, an idea for how to produce a result or sort a list or something, I can express it in the pseudo-code and then different programmers who are working in different programming languages can translate the algorithm into the language that they need to use to get their particular work done.
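
As an illustration of the distinction Dourish draws, the schoolbook multiplication algorithm might first be set down as language-independent pseudo-code and then translated into one particular programming language. The Python function below is a minimal, illustrative sketch of that translation.

```python
# Pseudo-code (language-independent description of the algorithm):
#   to multiply A by B:
#     set total to 0
#     for each digit d of B, rightmost first, at position i:
#       add A * d, shifted left by i places, to total
#     return total
#
# The same algorithm translated into one concrete language, Python:
def long_multiply(a: int, b: int) -> int:
    total = 0
    for position, digit_char in enumerate(reversed(str(b))):
        total += a * int(digit_char) * (10 ** position)
    return total

print(long_multiply(123, 456))  # prints 56088
```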

HORVATH: Right. Now, I’ve heard one of the central issues is that we can’t really read the algorithm once it’s gone into code. It’s like we can’t un-cook the cake or reverse engineer it. Why is that so hard?

DOURISH: Well, we certainly can in some cases; it’s not a hard and fast rule. In fact, most computer science departments, like the one here at Melbourne, will teach people how to write code so that you can see what’s going on. But there are a couple of complications that certainly can make it more difficult.

The first is that the structure of computer systems requires that you do more things than simply what the algorithm describes. An algorithm is an idealised version of what you might do, but in practice I might have to do all sorts of other things as well, like I’m managing the memory of the computer and I’m making sure the network hasn’t died and all these things. My program has lots of other things in it that aren’t just the algorithm but are more complicated.

Another complication is that sometimes people write code in such a way that it hides the algorithm for trade secret purposes. I don’t want to have somebody else pick up on and get my proprietary algorithm or the secret sauce for my business or program, and so I write the software in a deliberately somewhat obscure way.

Then the other problem is that sometimes algorithms are distributed in the world, they don’t all happen in one place. I think about the algorithms for instance that control how data flows across the internet and tries to make sure there isn’t congestion and too much delay in different parts of the network. Well, those algorithms don’t really happen in one place, they happen between different computers. Little bits of it are on one computer and little bits of it are on the other and they act together in coordination to produce the effect that we desire, so it can be often hard to spot the algorithm within the code.

HORVATH: Tell us more about these curious features of algorithms. They almost sound like a life form.

DOURISH: Well, I think what often makes algorithms seem to take on a life of their own, if you will, is that intersection with data that we were talking about earlier, because I said data and algorithms go together. There is often a case for instance where I can know what the algorithm does but if I don’t know enough about the data over which the algorithm operates, all sorts of things can happen.

There’s a case that I like to use as an example that came from some work that a friend of mine did a few years ago where he was looking at the trending topics on Twitter, and he was working particularly with people in the Occupy Wall Street movement who were sure that they were censored because their movement, the political discussion around Occupy Wall Street, never became a trending topic on Twitter. People were outraged, how can Justin Bieber’s haircut be more important than Occupy Wall Street? When they talked to the Twitter people, the Twitter people were adamant that they weren’t censoring this, but nonetheless they couldn’t really explain in detail why it was that Occupy Wall Street had not become a trending topic.

You can explain the algorithm and what it does, you can explain the mathematics of it, you can explain the code, you can show how a decision is made, but that decision is made about a dataset that’s changing rapidly, that’s to do with everything that’s being tweeted online, everything that’s being retweeted, where it’s being retweeted, how quickly it’s being retweeted. What the algorithm does, even though it’s a known, engineered artefact, is still itself somehow mysterious.

So the lives that algorithms take on in practice for us when we encounter them in the world or when they act upon us or when they pop up in our Facebook newsfeed or whatever, is often unknowable and mysterious and lively, precisely because of the way the algorithm is entwined with an ever roiling dataset that keeps moving.

HORVATH: I love the term machine learning, and it’s really about computers interacting with computers, algorithms talking to other algorithms without the input of humans. That kind of spooks me. Where are we going?

DOURISH: Yeah. Well, I think the huge, burgeoning interest in machine learning has been spurred on by the big data movement. Machine learning is something that I was exposed to when I was an undergraduate student back more years ago than I care to remember; it’s always been there. But improvements in statistical techniques and the burgeoning interest in big data and the new datasets mean that machine learning has taken on a much greater significance than it had before.

What machine learning algorithms typically do is they identify again patterns in datasets. They take large amounts of data and then they tell us what’s going on in that. Inasmuch as we are generating more and more data and inasmuch as more and more of our activities move online and then become, if you like, “datafiable”, things that can now be identified as data rather than just as things we did, there is more and more opportunity for algorithms, and particularly for machine learning algorithms, to identify patterns within that.

I think the question, as we said, is to what extent one knows what a machine learning algorithm is saying about one. Indeed, even, as I suggested with the Twitter case, even for people who work in this space, even for people who are developing the algorithms, it can be hard for them to know. It’s that sort of issue of knowing, of being able to examine the algorithms, of making algorithms accountable to civic, political and regulatory processes, that’s where some of the real challenges are that are posed by machine learning algorithms.

HORVATH: We’re exploring the social life of algorithms with computer and social scientist Paul Dourish right here on Up Close. And yes, we’re coming to you no doubt thanks to several useful algorithms. I’m Andi Horvath. Let’s keep moving with algorithms. You say that algorithms aren’t just technical, that they’re social objects. Can you tell us a bit more what that means?

DOURISH: Well, I think we can come at this from two sides. One side is the algorithms are social as well as technical because they’re put to social uses. They’re put to uses that have an impact on our world. For example, if I’m on Amazon and it recommends another set of products that I might like to look at, or it recommends some and not others, there’s some questions in there about why those ones are just the right ones. Those are cases where social systems, systems of consumption and purchase and identification and so forth are being affected by algorithms. That’s one way in which algorithms are social; they’re put to social purposes.

But of course, the other way that algorithms are social is that they are produced by people and organisations and professions and disciplines and all sorts of other things that have a grounding in the social world. So algorithms didn’t just happen to us, they didn’t fall out of the sky, we have algorithms because we make algorithms. And we make algorithms within social settings, and they reflect our social ideas or our socially-constructed ideas about what’s desirable, what’s interesting, what’s possible and what’s appropriate. Those are all ways in which the algorithms are pre-social. They’re not just social after the fact but they are social before the fact too.

HORVATH: Paul, you’ve mentioned how algorithms are kind of opaque, but yet you also mention that we need to make them accountable, submit them to some sort of scrutiny. So how do we go about that?

DOURISH: This is a real challenge that a number of people have been raising in the last couple of years and perhaps especially in light of the flash crash, that moment where algorithmic processing produced a massive loss of value on the US stock market. There are a number of calls for corporations to make visible aspects of their own algorithms and processing so that it can be deemed to be fair and above board. If you just think for a moment about how much of our daily life in the commercial sector is indeed governed by those algorithms and what kind of impact a Google search result ordering algorithm has; there’s lots of consequences there, so people have called for some of those to be more open.

People have also called for algorithms to be fixed. This is one of the other difficulties is that algorithms shift and change; corporations naturally change them around. There was some outrage when Facebook indulged in an experiment in order to see whether they could tweak the algorithms to give people happier or less happy results and see if that actually changed their own mood and what kinds of things they saw. People were outraged at the idea that Facebook would tweak an algorithm that they felt, even though it obviously belonged to Facebook, was actually an important part of their lives. So keeping algorithms fixed in some sense is one sort of argument that people have made, and opening things up to scrutiny.

But the problem with opening things up to scrutiny is well, first, who can actually evaluate these things? Not all of us can. And also of course that in the context of machine learning, the algorithm identifies patterns in data, but what’s the dataset that we’re operating over? In fact, we can’t even really identify what those things are, we’re only saying there’s a statistical pattern and that some human being is going to come along and assign some kind of value to that. So some of the algorithms are inherently inscrutable. The algorithm processes data and we can say what it says about the data, but if we don’t know what the data is and we don’t know what examples it’s been trained on and so forth, then we can’t really say what the overall effect and impact is.

HORVATH: Will scrutiny of algorithms, whether we audit or control them, be affected by, say, intellectual property laws?

DOURISH: Well, this is a very murky area, and in particular it’s a murky area internationally, where there are lots of different standards in different countries about what kind of things can be patented, controlled and licensed and so forth. Algorithms themselves are patentable objects. Many people choose to patent their algorithms, but of course patenting something requires disclosing it and so lots of other corporations decide to protect their algorithms as trade secrets, which are basically just things you don’t tell anybody.

The question that we can ask about algorithms is actually also how they move around in the world and those intellectual property issues, licensing rights, patenting and so forth are actually ways that algorithms might be fixed in one place within a particular corporate boundary but also move around in the world. So no one has really I think got a good handle on the relationship between algorithms and intellectual property.

They are clearly valuable intellectual property, they get licensed in a variety of ways, but this is again one of these places where the relationship between algorithm and code is a kind of complicated one. We have developed an understanding of how to manage those things for code; we have a less good understanding right now of how to manage those things for algorithms. I should probably say, since we’re also talking about data, no idea at all about how to do this for data.

HORVATH: These algorithms, they’ve really got a phantom-like presence and yet they’ve got so much potential and possibility. They are practical tools that help with our lives. But what are the consequences of further depending upon the algorithms in our world?

DOURISH: I think it’s inevitable and not really problematic. From my perspective, algorithms in and of themselves are not necessarily problematic objects. Again, if we say that even the things that we teach our children for how to do multiplication are algorithms, there’s no particular problem about depending on that. I think again it’s the entwining of algorithms and data, and one of the things that an algorithmic world demands is the production of data over which those algorithms can operate, and all the questions about ownership and about where that algorithmic processing happens matter.

For example, one manifestation of an algorithmic and data-driven world is one in which you own all your data and you do the algorithmic processing and then make available the results if you so choose. Another version of that algorithmic and data-centred world is one in which somebody else collects data about you and they do all the processing and then they tell you the results, and there’s a variety of steps in between. So I don’t think the issue is necessarily about algorithms and how much we depend on algorithms. Some people have claimed we’re losing our own ability to remember things because now Google is remembering for us.

HORVATH: It’s an outsourced memory.

DOURISH: Yes, that’s right, or there’s lots of things about people using their Satnav and driving into the river, right, because they’re not anymore remembering how to actually drive down the road or evaluate the things in front of them, but I’m a little sceptical about those. I do think the question about how we want to harness the power of algorithmic processing, how we want to make it available to people, and how it should inter-function with the data that might be being collected from or about people, those are the questions that we need to try to have a conversation about.

HORVATH: Paul, I have to ask you, just like we use our brain to understand our brain, can we use algorithms to understand and scrutinise algorithms?

DOURISH: [Laughs] Well, we can and actually, we do. One of the ways in which we do already is that when somebody develops a new machine learning algorithm we have to evaluate how well it does. We have to know is this algorithm really reliably identifying things. We sort of pit algorithms against each other to try to see whether the algorithm is doing the right work and evaluate the results of other kinds of algorithms. So that actually already happens.

Similarly, as I suggested, on the internet the algorithm for congestion control is really a series of different algorithms happening in different places that work cooperatively (or not) in order to produce (or not) a smooth flow of data. But we don’t have to worry just yet, I think, about a sort of war between the algorithms or any kind of algorithmic singularity.

HORVATH: Paul, what do you mean by the singularity? Is this really a Skynet moment?

DOURISH: Well, the singularity is this concept that suggests that at some point in the development of intelligent systems, they may become so intelligent that they can design their own future versions and the humans become irrelevant to the process of development. It’s a scary notion; it’s one I’m a little sceptical about, and I think actually the brittleness of contemporary algorithms is a great example of why we’re not going to get there within any short time.
I think the question though is still how do we want to understand the relationship between algorithms and the data over which they operate? A great example is IBM’s Watson, which a couple of years ago won the Jeopardy TV show, and this was a real breakthrough for artificial intelligence. But on the other hand you’ve got to ask, what is it that Watson knows about? Well, a lot of what Watson knows it knows from Wikipedia and I’m not very happy when my students cite Wikipedia and I’m not terribly sure that I need to be afraid of the machine intelligence singularity that also is making all its inferences on the basis of Wikipedia.

HORVATH: Paul, thanks for being our guest on Up Close and allowing us to glimpse into the world of the mysterious algorithm. I feel like I’ve been in the movie Tron.

DOURISH: [Laughs] Yes, well, we don’t quite have the glowing light suits unfortunately.

HORVATH: We’ve been speaking about the social lives of algorithms with Paul Dourish, a Professor of Informatics in the Donald Bren School of Information and Computer Sciences at UC Irvine. You’ll find a full transcript and more info on this and all our episodes on the Up Close website. Up Close is a production of the University of Melbourne, Australia. This episode was recorded on 22 February 2016. Producer was Eric van Bemmel and audio recording by Gavin Nebauer. Up Close was created by Eric van Bemmel and Kelvin Param. I’m Dr Andi Horvath. Cheers.

VOICEOVER: You’ve been listening to Up Close. For more information visit upclose.unimelb.edu.au. You can also find us on Twitter and Facebook.

– Copyright 2016, the University of Melbourne.

Podcast Credits

Host: Dr Andi Horvath
Producer: Eric van Bemmel
Audio Engineer: Gavin Nebauer
Voiceover: Louise Bennet
Series Creators: Kelvin Param, Eric van Bemmel

This podcast was first published on 11 March 2016 by The University of Melbourne’s Up Close. Listen to the original podcast here

Four things to protect yourself from cyberattack

It’s easy to get lost in a sea of information when looking at cybersecurity issues – hearing about hacks and cyberattacks as they happen is a surefire way to feel helpless and totally disempowered.

What follows is a sort of future shock, where we become fatalistic about the problem. After all, 86% of organisations from around the world surveyed by PwC reported exploits of some aspect of their systems within a one year period. That represented an increase of 38% on the previous year.

However, once the situation comes into focus, the problem becomes much more manageable. There are a range of things that we can easily implement to reduce the risk of an incident dramatically.

For example, Telstra estimates that 45% of security incidents are the result of staff clicking on malicious attachments or links within emails. Yet that is something that could be fairly easily fixed.

Confidence gap

There is currently a gap between our confidence in what we can do about security and the amount we can actually do about it. That gap is best filled by awareness.

Many organisations, such as the Australian Centre for Cyber Security, American Express and Distil Networks provide basic advice to help us cope with future shock and start thinking proactively about cybersecurity.

The Australian Signals Directorate (ASD) – one of our government intelligence agencies – also estimates that adhering to its Top Four Mitigation Strategies would prevent at least 85% of targeted cyberattacks.

So here are some of the top things you can do to protect yourself from cyberattack:

1 Managed risk

First up, we need to acknowledge that there is no such thing as perfect security. That message might sound hopeless but it is true of all risk management; some risks simply cannot be completely mitigated.

However, there are prudent treatments that can make risk manageable. Viewing cybersecurity as a natural extension of traditional risk management is the basis of all other thinking on the subject, yet a report by CERT Australia states that 61% of organisations do not have cybersecurity incidents in their risk register.

ASD also estimates that the vast majority of attacks are not very sophisticated and can be prevented by simple strategies. As such, think about cybersecurity as something that can be managed, rather than cured.

2 Patching is vital

Patching is so important that ASD mentions it twice on its top four list. Cybersecurity journalist Brian Krebs says it three times: “update, update, update”.

Update your software, phone and computer. As a rule, don’t use Windows XP, as Microsoft is no longer providing security updates.

Updating ensures that known vulnerabilities are fixed, and software companies employ highly qualified professionals to develop their patches. It is one of the few ways you can easily leverage the knowledge of cybersecurity experts in the field.

3 Restricting access means restricting vulnerabilities

The simple rule to protect yourself from cyberattack is: don’t have one gateway for everything. If all it takes to get into the core of a system is one password, then all it takes is one mistake for the gate to be opened.

Build administrator privileges into your system so that people can only use what they are meant to. For home businesses it could mean something as simple as having separate computers for home and work, or not giving administrator privileges to your default account.

It could also be as simple as having a content filter on employee internet access so they don’t open the door when they accidentally click on malware.

4 Build permissions from the bottom up

Application whitelisting might sound complicated, but what it really means is “deny by default”: it defines, in advance, what is allowed to run and ensures that nothing else will.

Most people think of computer security as restricting access, but whitelisting frames things in opposite terms and is therefore much more secure. Most operating systems contain whitelisting tools that are relatively easy to use. When used in conjunction with good advice, the result is a powerful tool to protect a network.
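
To make “deny by default” concrete, here is a minimal sketch in Python of an allow-list check. The application names on the list are purely illustrative assumptions; in practice, whitelisting is enforced with operating-system tools rather than a script like this.

```python
# Minimal illustration of "deny by default": only items on the allow-list may run.
# The entries below are hypothetical examples, not a real policy.
ALLOWED_APPS = {
    "winword.exe",   # word processor approved by the administrator
    "excel.exe",     # spreadsheet approved by the administrator
    "chrome.exe",    # approved browser
}

def may_run(executable_name: str) -> bool:
    """Return True only if the executable is explicitly on the allow-list."""
    return executable_name.lower() in ALLOWED_APPS

# Anything not listed is refused, including software never seen before.
for candidate in ["chrome.exe", "unknown-malware.exe"]:
    verdict = "allowed" if may_run(candidate) else "denied by default"
    print(f"{candidate}: {verdict}")
```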

The Australian Signals Directorate released a video in 2012 with an overview of cyber threats.

Protect yourself from cyberattack: Simple things first

Following these basic rules covers the same ground as ASD’s top four mitigation strategies and substantially lowers your vulnerability to cyberattack. If you want to delve deeper, there are more tips on the ASD site.

There are many debates that will follow on from this, such as: developing a national cybersecurity strategy; deciding if people should have to report an incident; the sort of insurance that should be available; what constitutes a proportionate response to an attack; and a whole range of others.

Each of those debates is underpinned by a basic set of measures that needs to be implemented first. Future shock is something that can be overcome in this space, and there are relatively simple steps that can be put in place to make us more secure. Before embarking on anything complicated, you should at least get these things right.

This article was first published by The Conversation on 16 October 2015. Read the original article here.

A Remarkable Career

Compelled to move to Perth in 1972 because “there were no meaningful jobs in geoscience in the UK at the time”, John Curtin Distinguished Professor Simon Wilde carved out an illustrious career in the decades that followed his PhD at the University of Exeter.

“My work is largely focused on Precambrian geology, divided between Northeast Asia, the Middle East, India and Western Australia,” explains Wilde, from the Department of Applied Geology at Curtin University. In 2001, Wilde received extensive media attention for his discovery of the oldest object ever found on Earth – a tiny 4.4 billion-year-old zircon crystal dug up in the Jack Hills region of Western Australia.

His zircon expertise and vast knowledge of early-Earth crustal growth and rock dating have taken him to many of the key areas in the world where Archean (more than 2.5 billion-year-old) rocks are exposed. Of these international investigations, perhaps the most impressive have been his contributions to understanding the geology of North China. Part of the first delegation of foreign researchers to visit the Aldan Shield in Siberia in 1988, along with several top Chinese geoscientists, Wilde has since fostered friendships and collaborations with colleagues in five top Chinese universities, as well as the Chinese Academy of Sciences and the Chinese Academy of Geological Sciences.

“I have been to China more than 100 times and published more than 100 papers on Chinese geology, including major reviews of the North China Craton and the Central Asian Orogenic Belt, where I am a recognised expert.”

The Institute for Geoscience Research (TIGeR) at Curtin University is designated as a high-impact Tier 1 centre – the most distinguished research grouping within the university – providing a focus for substantial activity across a specific field of study. Wilde stepped down as Director in February 2015, having championed TIGeR research, provided advice and allocated funding for the eight years since the Institute was formed. He is confident that his research and the foundations he has built for the centre will continue to support innovative geoscience and exciting collaboration initiatives – in which he is certain to continue playing a major part.

Ben Skuse

Celebrating Australian success

At this year’s Knowledge Commercialisation Australia (KCA) awards, success lay with the University of Melbourne, which won Best Commercial Deal for the largest biotech start-up of 2014; the Melbourne office of the Defence Science and Technology Group, which won Best Creative Engagement Strategy for its ‘reducing red tape’ framework; and Swinburne University, which took the People’s Choice Award.

“These awards recognise research organisations’ success in creatively transferring knowledge and research outcomes into the broader community,” said KCA Executive Officer, Melissa Geue.

“They also help raise the profile of research organisations’ contribution to the development of new products and services which benefit wider society and sometimes even enable companies to grow new industries in Australia.”

Details of the winners are as follows:

The Best Commercial Deal award covers any form of commercialisation that provides value to the research institution and has significant long-term social and economic impact:

University of Melbourne – Largest biotech start-up for 2014

This award recognised Australia’s largest biotechnology deal of 2014: Shire Plc’s purchase of Fibrotech Therapeutics P/L – a University of Melbourne start-up – for US$75 million upfront and up to US$472 million in subsequent payments. Fibrotech develops novel drugs to treat the scarring prevalent in chronic conditions such as diabetic kidney disease and chronic kidney disease. The company is based on research by Professor Darren Kelly (Department of Medicine, St Vincent’s Hospital).

Shire is progressing Fibrotech’s lead technology through to clinical stages for focal segmental glomerulosclerosis, a condition known to affect children and teenagers with kidney disease. The original Fibrotech team continues to develop the unlicensed IP for eye indications in a new start-up, OccuRx P/L.

Best Creative Engagement Strategy showcases some of the creative strategies research organisations are using to engage with industry partners to share and create new knowledge:

Defence Science and Technology Group – Defence Science Partnerships (DSP) reducing red tape with a standardised framework

The DSP is a partnering framework between the Defence Science and Technology Group of the Department of Defence and more than 65% of Australian universities. It has reduced transaction times from months to weeks, with over 300 agreements signed totalling over $16m in 2014–15. The framework includes standard agreement templates for collaborative research, sharing of infrastructure, scholarships and staff exchanges, simplified intellectual property regimes and a common framework for costing research. The DSP was developed with the university sector through a collaborative, consultative approach.

The People’s Choice Award is open to the wider public, who vote on which commercial deal or creative engagement strategy deserves to win. This year’s winner, which also took out last year’s award, is:

Swinburne University of Technology – Optical data storage breakthrough leads the way to next-generation DVD technology – see ‘DVDs are the new cool tech’

Using nanotechnology, Swinburne Laureate Fellowship project researchers Professor Min Gu, Dr Xiangping Li and Dr Yaoyu Cao achieved a breakthrough in data storage technology, increasing the capacity of a DVD from a measly 4.7 GB to 1,000 TB. This discovery established the cornerstone of a patent-pending technique offering storage solutions for the big data era. In 2014, start-up company Optical Archive Inc. licensed the technology, and in May 2015 Sony Corporation of America purchased the start-up – despite it having no public customers or finished product on the market – on the strength of its people, its stage of development and the intellectual property within the company.

This article was shared by Knowledge Commercialisation Australia on 11 September 2015. 

From science fiction to reality: the dawn of the biofabricator

 

“We can rebuild him. We have the technology.”
– The Six Million Dollar Man, 1973

Science is catching up to science fiction. Last year a paralysed man walked again after cell treatment bridged a gap in his spinal cord. Dozens of people have had bionic eyes implanted, and it may also be possible to augment them to see into the infra-red or ultra-violet. Amputees can control bionic limb implants with thoughts alone.

Meanwhile, we are well on the road to printing body parts.

We are witnessing a reshaping of the clinical landscape wrought by the tools of technology. The transition is giving rise to a new breed of engineer, one trained to bridge the gap between engineering on one side and biology on the other.

Enter the “biofabricator”. This is a role that melds technical skills in materials, mechatronics and biology with the clinical sciences.


21st century career

If you need a new body part, it’s the role of the biofabricator to build it for you. The concepts are new, the technology is groundbreaking. And the job description? It’s still being written.

It is a vocation that’s already taking off in the US though. In 2012, Forbes rated biomedical engineering (equivalent to biofabricator) number one on its list of the 15 most valuable college majors. The following year, CNN and payscale.com called it the “best job in America”.

These conclusions were based on things like salary, job satisfaction and job prospects, with the US Bureau of Labor Statistics projecting massive growth in the number of biomedical engineering jobs over the next ten years.

Meanwhile, Australia is blazing its own trail. As the birthplace of the multi-channel Cochlear implant, Australia already boasts a worldwide reputation in biomedical implants. Recent clinical breakthroughs with an implanted titanium heel and jawbone reinforce Australia’s status as a leader in the field.

The Cochlear implant has brought hearing to many people. Dick Sijtsma/Flickr, CC BY-NC

I’ve recently helped establish the world’s first international Masters courses for biofabrication, ready to arm the next generation of biofabricators with the diverse array of skills needed to 3D print parts for bodies.

These skills go beyond the technical; the job also requires the ability to communicate with regulators and work alongside clinicians. The emerging industry is challenging existing business models.


Life as a biofabricator

Day to day, the biofabricator is a vital cog in the research machine. They work with clinicians to create solutions to clinical needs, and with biologists, materials engineers and mechatronics engineers to deliver them.

Biofabricators are naturally versatile. They are able to discuss clinical needs pre-dawn, device physics with an electrical engineer in the morning, stem cell differentiation with a biologist in the afternoon, and funding with a potential financier in the evening. Not to mention remaining conscious of regulatory matters and social engagement.

Our research at the ARC Centre of Excellence for Electromaterials Science (ACES) is only made possible through the work of a talented team of biofabricators. Their work spans the conduits we are building to regrow severed nerves, the electrical implant designed to sense an imminent epileptic seizure and stop it before it occurs, and the 3D-printed cartilage and bone implants fashioned to be a perfect fit at the site of injury.

As the interdisciplinary network takes shape, we see more applications every week. Researchers have only scratched the surface of what is possible with wearable or implanted sensors that keep tabs on an outpatient’s vitals and beam them back to the doctor.

Meanwhile, stem cell technology is developing rapidly. Developing the cells into tissues and organs will require prearrangement of cells in appropriate 3D environments and custom designed bioreactors mimicking the dynamic environment inside the body.

Imagine the ability to arrange stem cells in 3D surrounded by other supporting cells and with growth factors distributed with exquisite precision throughout the structure, and to systematically probe the effect of those arrangements on biological processes. Well, it can already be done.

Those versed in 3D bioprinting will enable these fundamental explorations.


Future visions

Besides academic research, biofabricators will also be invaluable to medical device companies in designing new products and treatments. Those engineers with an entrepreneurial spark will look to start spin-out companies of their own. The more traditional manufacturing business model will not cut it.

As 3D printing evolves, it is becoming obvious that we will require dedicated printing systems for particular clinical applications. The printer in the surgery for cartilage regeneration will be specifically engineered for the task at hand, with only critical variables built into a robust and reliable machine.

The 1970s TV show, Six Million Dollar Man, excited imaginations, but science is rapidly catching up to science fiction. Joe Haupt/Flickr, CC BY-SA

Appropriately trained individuals will also find roles in the public service, ideally in regulatory bodies or community engagement.

For this job of tomorrow, we must train today, and new opportunities such as the biofabrication masters degree are emerging. We must cut across the traditional academic boundaries that slow down such advances. We must engage with the community of traditional manufacturers whose skills can be built upon for next-generation industries.

Australia is also well placed to capitalise on these emerging industries. We have a traditional manufacturing sector that is currently in flux, an extensive advanced materials knowledge base built over decades, a dynamic additive fabrication skills base and a growing alternative business model environment.

– Gordon Wallace & Cathal D. O’Connell

This article was first published by The Conversation on 31 August 2015. Read the original article here.

Big data to solve global issues

Curtin University’s spatial sciences teams are using big data, advanced processing power and community engagement to solve social and environmental problems.

Advanced facilities and expertise at Perth’s Pawsey Supercomputing Centre support the Square Kilometre Array – a multi-array telescope due to launch in 2024 – and undertake high-end science using big data.

Individual computers at the $80 million facility have processing power in excess of a petaflop (one quadrillion floating point operations per second) – that’s 100,000 times the flops handled by your average Mac or PC.
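
For a sense of scale, a petaflop is 10^15 floating-point operations per second, while an ordinary desktop manages on the order of 10^10. The quick check below is a rough sketch of that comparison, using assumed ballpark figures for a home computer.

```python
# Order-of-magnitude check on the petaflop comparison (assumed figures).
petaflop = 1e15          # floating-point operations per second
typical_desktop = 1e10   # ~10 gigaflops, an assumed ballpark for a home PC

print(f"A petaflop machine is roughly {petaflop / typical_desktop:,.0f}x faster")
# -> roughly 100,000x
```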

Curtin University is a key participant in iVEC, which runs the Pawsey Centre, and a partner in the CRC for Spatial Information. As such, it is at the forefront of research efforts to use big data to solve global issues.

For instance, says the head of Curtin’s Department of Spatial Sciences Professor Bert Veenendaal, the university’s researchers are using Pawsey supercomputers to manage, compile and integrate growing volumes of data on water resources, land use, climate change and infrastructure.

“There is a rich repository of information and knowledge among the vast amounts of data captured by satellites, ground and mobile sensors, as well as the everyday usage information related to people carrying mobile devices,” he says.

“Increasing amounts of data are under-utilised because of a lack of knowhow and resources to integrate and extract useful knowledge,” he explains.

“Big data infrastructures coupled with increasing research in modelling and knowledge extraction will achieve this.”

Curtin’s projects include mapping sea-level rise and subsidence along the Western Australian coastline near Perth, generating high-resolution maps of the Earth’s gravity field and modelling climate over regional areas, such as Bhutan in South Asia, across multiple time scales.

Some research projects have the potential to expand and make use of big data in the future, particularly in the area of community development.

In one such project, the team worked with members of a rural community in the Kalahari Desert, Botswana, to collect information and map data using geographic information science. 

This helped the local community to determine the extent of vegetation cover in their local area, water access points for animals and how far the animals travelled from the water points to food sources.

Using this data, one local woman was able to create a goat breeding business plan to develop a herd of stronger animals. 

According to Veenendaal, there is potential for big data to be used for many regional and national issues. 

“Projects like this have the potential to provide data acquisition, analysis and knowledge that will inform intelligent decision-making about land use and community development on local, regional and national scales,” he says.

While procuring more funding for the Botswana project, Curtin’s researchers are planning future big data projects, such as applying global climate change models to regional areas across multiple time scales, and bringing together signals from multiple global navigation satellite systems, such as the USA’s GPS, China’s BeiDou and the EU’s Galileo. – Laura Boness

www.curtin.edu.au

www.crcsi.com.au 

www.ivec.org

Australia could lead in cybersecurity research

This article is part of The Conversation’s series on the Science and Research Priorities recently announced by the Federal Government. You can read the introduction to the series by Australia’s Chief Scientist, Ian Chubb, here.


Alex Zelinsky

Chief Defence Scientist, Defence Science and Technology

The national science and research priorities have been developed with the goal of maximising the national benefit from research expenditure, while strengthening our capacity to excel in science and technology.

Cybersecurity has been identified as a research priority due to Australia’s increasing dependence on cyberspace for national well-being and security. Cyberspace underpins both commercial and government business; it is globally accessible, has no national boundaries and is vulnerable to malicious exploitation by individuals, organised groups and state actors.

Cybersecurity requires application of research to anticipate vulnerabilities, strengthen cyber systems to ward off attacks, and enhance national capability to respond to, recover from, and continue to operate in the face of a cyber-attack.

Cyberspace is a complex, rapidly changing environment that is progressed and shaped by technology and by how the global community adopts, adapts and uses this technology. Success in cyberspace will depend upon our ability to “stay ahead of the curve”.

Research will support the development of new capability to strengthen the information and communications systems in our utilities, business and government agencies against attack or damage. Investment will deliver cybersecurity enhancements, infrastructure for prototype assessment and a technologically skilled workforce.

Accordingly, priority should be given to research that will lead to:

  1. Highly secure and resilient communications and data acquisition, storage, retention and analysis for government, defence, business, transport systems, emergency and health services
  2. Secure, trustworthy and fault-tolerant technologies for software applications, mobile devices, cloud computing and critical infrastructure
  3. New technologies for detection and monitoring of vulnerabilities and intrusions in cyber infrastructure, and for managing recovery from failure.

Alex Zelinsky is Chief Defence Scientist at the Defence Science and Technology Organisation.
Cybersecurity is becoming an increasingly important area for research in Australia.

Andrew Goldsmith
Director of the Centre for Crime Policy and Research, Flinders University

Sensible science and research on cybersecurity must be premised upon informed, rather than speculative “what if”, analysis. Researchers should not be beholden to institutional self-interest from whichever sector: government, business, universities, or security and defence agencies.

We need to be clear about what the cybersecurity threat landscape looks like. It is a variable terrain. Terms such as “cyber-terrorism” tend to get used loosely and given meanings as diverse as the Stuxnet attack and the use of the internet by disenchanted converts to learn how to build a pipe bomb.

We need to ask and answer the question: who has the interest and the capability to attack us and why?

References to “warfare” can be misleading. A lot of what we face is not “war” but espionage, crime and political protest. More than two decades into the lifecycle of the internet, we have not yet had an electronic Pearl Harbor event.

Cybersecurity depends upon human and social factors, not just technical defences. We need to know our “enemies” as well as ourselves better, in addition to addressing technical vulnerabilities.

We should be sceptical about magic bullet solutions of any kind. Good defences and secure environments depend upon cooperation across units, a degree of decentralisation, and built-in redundancy.

Andrew Goldsmith is Strategic Professor of Criminology at Flinders University.


Jodi Steel
Director, Security Business Team at NICTA

Cybersecurity is an essential underpinning to success in our modern economies.

It’s a complex area and there are no magic bullet solutions: success requires a range of approaches. The national research priorities for cybersecurity highlight key areas of need and opportunity.

The technologies we depend on in cyberspace are often not worthy of our trust. Securing them appropriately is complex and often creates friction for users and processes. Creation of secure, trustworthy and fault-tolerant technologies – security by design – can remove or reduce security friction, improving overall security posture.

Australia has some key capabilities in this area, including cross-disciplinary efforts.

The ability to detect and monitor vulnerabilities and intrusions and to recover from failure is critical, yet industry reports indicate that the average time to detect malicious or criminal attack is around six months. New approaches are needed, including improved technological approaches as well as collaboration and information sharing.

Success in translating research outcomes to application – for local needs and for export – will be greater if we are also able to create an ecosystem of collaboration and information sharing, especially in the fast-moving cybersecurity landscape.

Jodi Steel is Director, Security Business Team at NICTA.


Vijay Varadharajan
Director, Advanced Cyber Security Research Centre at Macquarie University

Cyberspace is transforming the way we live and do business. Securing cyberspace from attacks has become a critical need in the 21st century to enable people, enterprises and governments to interact and conduct their business. Cybersecurity is a key enabling technology affecting every part of the information-based society and economy.

The key technological challenges in cybersecurity arise from increased security attacks and threat velocity; securing large-scale distributed systems, especially “systems of systems”; large-scale, secure and trusted data-driven decision making; secure ubiquitous computing; and pervasive networking and global participation.

In particular, numerous challenges and opportunities exist in the emerging areas of cloud computing, the Internet of Things and Big Data. New services and technologies are emerging, and more are likely to emerge, at the intersection of these areas. Security, privacy and trust are critical for these new technologies and services.

For Australia to be a leader, it is in these strategic areas of cybersecurity that it needs to invest in research and development, leading to new secure, trusted and dependable technologies and services, as well as building capacity, skills and thought leadership for the cybersecurity of the future.

Vijay Varadharajan is Director of the Advanced Cyber Security Research Centre at Macquarie University.

Cybercrime is a growing problem, and it’ll take concerted efforts to prevent it escalating further. Brian Klug/Flickr, CC-BY NC

Craig Valli
Director of Security Research Institute at Edith Cowan University

ICT is embedded in every supply chain and every piece of critical infrastructure we now rely on for our existence on the planet. The removal or sustained disruption of ICT as a result of lax cybersecurity is something we can no longer overlook or ignore.

The edge between cyberspace and our physical world is blurring, with destructive attacks on physical infrastructure already occurring. The notion of the nation state, and its powers and abilities to cope with these disruptions, is also being significantly challenged.

The ransacking of countries’ intellectual property by cyber-enabled actors is continuing unabated, robbing us of our collective futures. These are some of the strong indicators that currently we are getting it largely wrong in addressing cybersecurity issues. We cannot persist in developing linear solutions to network/neural security issues presented to us by cyberspace. We need change.

The asymmetry of cyberspace allows a relatively small nation state to gain a significant advantage in cybersecurity, Israel being one strong example. Australia could be the next such nation, but not without significant, serious, long-term, collaborative investment by government, industry, academia and the community in growing the necessary human capital. This initiative is hopefully the start of that journey.

Craig Valli is Director of Security Research Institute at Edith Cowan University.


Liz Sonenberg
Professor of Computing and Information Systems, and Pro Vice-Chancellor (Research Collaboration and Infrastructure) at University of Melbourne

There are more than two million actively trading businesses in Australia and more than 95% have fewer than 20 employees. Such businesses surely have no need for full-time cybersecurity workers, but all must have someone responsible for deciding which IT and security products and services to acquire.

At least historically, new technologies have been developed and deployed without sufficient attention to the security implications. So bad actors have found ways to exploit the resulting vulnerabilities.

More research into software design and development from a security perspective, and research into better tools for security alerts and detection is essential. But such techniques will never be perfect. Research is also needed into ways of better supporting human cyberanalysts – those who work with massive data flows to identify anomalies and intrusions.

New techniques are needed to enable the separation of relevant from irrelevant data about seemingly unconnected events, and to integrate perspectives from multiple experts. Improving technological assistance for humans requires a deep understanding of human cognition in the complex, mutable and ephemeral environment of cyberspace.

The cybersecurity research agenda is thus only partly a technical matter: disciplines such as decision sciences, organisational behaviour and international law all must play a part.

Liz Sonenberg is Professor, Computing and Information Systems, and Pro Vice-Chancellor (Research Collaboration and Infrastructure) at University of Melbourne.


Sven Rogge
Professor of Physics and Program Manager at the Centre for Quantum Computation & Communication Technology at UNSW

Cybersecurity is essential for our future in a society that needs to safeguard information as much as possible for secure banking, safe transportation, and protected power grids.

Quantum information technology will transform data communication and processing. Here, quantum physics is exploited for new technologies to protect, transmit and process information. Classical cryptography relies on mathematically hard problems, such as factoring, which are so difficult that classical computers can take decades to solve them. Quantum information technology allows for an alternative approach that can deliver a solution on a meaningful timescale, such as minutes rather than years. It also allows for secure encoding and decoding governed by fundamental physics, which is inherently unbreakable, not just hard to break.

Internationally, quantum information is taking off rapidly underlined by large government initiatives. At the same time there are commercial investments from companies such as Google, IBM, Microsoft and Lockheed Martin.

Due to long-term strategic investments in leading academic groups, Australia remains at the forefront globally and enjoys a national competitive advantage in quantum computing and cybersecurity. We should use the fact that Australia is a world leader and global player in quantum information science to build many new high-technology industries for its future.

Sven Rogge is Professor of Physics at UNSW Australia.

This article was originally published on The Conversation and shared by Edith Cowan University on 10 July 2015. Read the original article here.


Read more in The Conversation Science and Research Priorities series.

The future of manufacturing in Australia is smart, agile and green

On the road: research can improve transport across Australia

Research priority: make Australia’s health system efficient, equitable and integrated

Robot automates bacteria screening in wine samples

A robotic liquid handling system at the Australian Wine Research Institute (AWRI) is automating the screening of large numbers of malolactic bacteria strains.

Using miniaturised wine fermentations in 96-well microplates, the Tecan EVO 150 robotic system is screening bacteria for malolactic fermentation (MLF) efficiency and response to wine stress factors such as alcohol and low pH.

The bacteria are sourced from the AWRI’s wine microorganism culture collection in South Australia and elsewhere.

The robot can prepare and inoculate multiple combinations of bacteria strains and stress factors in red or white test wine, and then analyse malic acid in thousands of samples over the course of the fermentation.

In one batch, for example, 40 bacteria strains can be screened for MLF efficiency and response to alcohol and pH stress in red wine, with over 6000 individual L-malic acid analyses performed.

The AWRI says that this high-throughput approach provides a quantum leap in screening capabilities compared to conventional MLF testing methods and can be applied to a range of other research applications.

Additionally, the phenotypic data obtained from this research is being further analysed with genomic information, which will identify potential genetic markers for the stress tolerances of malolactic strains.

First published at foodprocessing.com.au on 22 July 2015. Read the original article here.

This article was also published by The Lead on 22 July 2015. Read the article here.

Shark detection

Sharks have an incredible sense of smell, but it is their sense of hearing that could be one of the keys to protecting people at beaches, says a team of researchers led by Dr Christine Erbe from Curtin University’s Centre for Marine Science and Technology.

“We had this idea of trying to figure out what acoustic signatures humans make, whether the sharks can hear them, and, if appropriate, whether we can somehow interrupt that,” says Erbe. These interruptions could then potentially be used to ‘hide’ or ‘mask’ the noises people make in the water from the sharks.

Western Australia is a pertinent place to work on this project, given the debate over baited drum lines to cull sharks, and the project has been funded by Western Australia’s Department of Commerce.

Initial recordings have been made of people in a pool swimming and snorkelling past a hydrophone – a microphone designed to record or listen to underwater sound. Erbe’s team records people swimming and surfing at beaches to see how far their noises travel. These sounds can then be played to sharks in enclosures at Ocean Park Aquarium in Shark Bay to check for any responses.

“If we see responses from the sharks, the next step is to figure out if we can mask the sounds of people in the water using artificial signals,” says Erbe. These artificial signals are band-limited white noise, created digitally. “We can see which frequencies, or part of the human sound signature, could be detected by the sharks and calculate the range limits at which that might occur. We can then design masking signals that fill in around them so those frequencies can’t be detected,” she says. The team will test these masking signals by playing them back to the sharks at Ocean Park Aquarium.
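
To illustrate what “band-limited white noise, created digitally” can look like in practice, here is a minimal sketch that generates white noise and band-pass filters it so that energy remains only in a chosen band. The sample rate and band edges are hypothetical placeholders, not the frequencies used by Erbe’s team.

```python
# A minimal sketch of digitally creating band-limited white noise.
# The sample rate and band edges are illustrative assumptions only.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 48_000           # sample rate in Hz (assumed)
duration = 2.0        # seconds of noise to generate
low, high = 100, 800  # pass band in Hz (assumed, not the study's values)

rng = np.random.default_rng(seed=0)
white = rng.standard_normal(int(fs * duration))  # broadband white noise

# Design a Butterworth band-pass filter and apply it forwards and backwards
# (filtfilt) so the filtered noise has no phase distortion.
b, a = butter(4, [low, high], btype="bandpass", fs=fs)
band_limited = filtfilt(b, a, white)

print(f"Generated {band_limited.size} samples of {low}-{high} Hz noise")
```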

The outline of a shark shows clearly on a scanner used by the Curtin team.

This masking technique is different to other approaches where loud sounds are played at beaches to scare sharks away. The problem with the loud sound approach, says Erbe, is that it potentially interferes with an entire underwater ecosystem. The masking approach, on the other hand, is targeted at frequencies and levels that only sharks can hear in the surf zone. “We’re not looking at scaring the sharks away, we’re just limiting them from detecting humans,” she says.

According to Erbe, a multidisciplinary approach is crucial to solving problems such as shark mitigation, and her team ranges from physicists to acousticians, engineers and marine biologists.

Team member Dr Miles Parsons is leading another project on the sonar detection of sharks with the aim of building an early warning system. “The solution will have to be a combination of detecting sharks and preventing them detecting us,” says Erbe.

Ruth Beran

cmst.curtin.edu.au

Award-winning app boosts mental health help for youth

You are 16 years old and have a secret, which you’ve been carrying around for what feels like your whole life. You feel trapped so you turn to marijuana and alcohol to numb the pain. Your grades begin to slip and your parents are worried so they send you to a psychologist. During your first visit, the clinician in the waiting room starts asking questions, and all you can hear is your heartbeat ringing in your ears.

When it comes to receiving effective mental health treatment, early diagnosis and non-judgmental support are essential. In order to assess what types of treatment options are available, many clinicians start with a verbal assessment. However, this verbal assessment is a barrier for many young people, preventing treatment. Psychologist and PhD candidate Sally Bradford recognised that young people aged 12–25 could benefit from a different kind of assessment.

“They’re going into an environment where they’re expected to verbally relay everything that is going on in their lives – to tell their deepest, darkest secrets that they may have never said out loud before,” Bradford says. “It can take a long time for them to find the words – especially if the clinician doesn’t ask the right questions,” she says.

As part of her PhD focusing on the use of technology in face-to-face mental health care with young people, Bradford created an electronic psychosocial assessment app called “myAssessment” that helps clinicians evaluate young people quickly and easily. The new screening process speaks to the National Mental Health Commission’s review of Australia’s mental health system, which underscored the need to improve health services and support through innovative technologies.

“The app could be beneficial in any field where you’re needing groups of people to be truthful, and give answers in a way that they do not feel judged,” Bradford says.

Based on the strides Bradford made in youth mental health with the invention of myAssessment, she was awarded the $5000 top prize at the CRC Association Early Career Research Showcase at the CRCA’s Excellence in Innovation Awards Dinner in Canberra.

The app was developed in close conjunction with the Young & Well CRC, youth focus groups and clinicians, and subsequently trialled at a headspace Centre in Canberra over nine months in 2014.

“The app was designed with significant input from young people and clinicians, and puts their needs and requirements first. For clinicians, it follows an evidence-based format and doesn’t require changes to the way they currently provide services. For young people, it’s interactive, engaging, and easy to use,” Bradford says.


The way it works: a patient arrives for their appointment and, prior to seeing a clinician, completes myAssessment on an iPad in the waiting room. The app is a simple survey with a range of different response options. Topics include alcohol and drug habits, sexual preference, eating habits, and anxiety and depression. The questions are split into screening and probing questions. Screening questions take a yes or no answer that prompts further questioning: Do you drink? Smoke? Have you tried or used drugs? What have you tried?

A probing question allows for a more comprehensive understanding of the issue, such as ‘How do you (and your friends) take them?’ in the case of drugs. After the patient answers and submits these questions, a personalised ‘Clinician Summary’ details the patient’s risks and strengths, providing the clinician with a foundation for the first interview.
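
One way to picture the screen-then-probe flow is as a branching questionnaire, sketched below. The questions, field names and structure are illustrative assumptions only, not the actual design of myAssessment.

```python
# Illustrative sketch of a screening question that unlocks probing questions.
# The wording and structure are assumptions, not the myAssessment design.
QUESTIONS = [
    {
        "screen": "Have you tried or used drugs?",       # yes/no screening question
        "probes": [
            "What have you tried?",
            "How do you (and your friends) take them?",  # probing questions
        ],
    },
]

def run_assessment(answers: dict) -> list:
    """Collect responses and flag topics for the clinician summary."""
    flagged = []
    for q in QUESTIONS:
        if answers.get(q["screen"]) == "yes":
            # Only ask probing questions when the screen is positive.
            flagged.append({"topic": q["screen"], "follow_up": q["probes"]})
    return flagged

summary = run_assessment({"Have you tried or used drugs?": "yes"})
print(summary)  # would feed a 'Clinician Summary' of risks to discuss
```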


Bradford’s trials proved to be particularly enlightening, with an 87% response rate and three-quarters of patients reporting that myAssessment provided an “accurate” representation of themselves. The results also showed that young people were up to 10 times more likely to open up about drug and alcohol use, sexuality and self-harm when the app was used, compared to a verbal assessment with the same questions.

“There was a wealth of data generated over the course of the trial, which could be particularly useful for policy reform in the future,” Bradford says.

Kara Norton

Young & Well CRC 

Open your mind

Back in 1990, the internet was just a twinkle in the eye of a few scientists at The European Organization for Nuclear Research (CERN). Mobile phones were awkward bricks wielded by showy stockbrokers. Personal computers had not yet made the transition from the office to the home.

Fast forward 25 years, and more people have access to mobile phones than working toilets. Technology has revolutionised global communications, culture and business. Video chat software Skype has more than 300 million active users.

While three billion of us already have internet access, Google plans to supply the rest using high-altitude balloons (Project Loon) and solar powered drones (Project Titan) to beam wi-fi across developing nations.

Even language is no longer the barrier it used to be, with the advent of real-time translation technologies enabling communication without a human translator. As of January 2015, we are using Google Translate to make one billion translations per day.

So what do the next 25 years have in store? “The general trend is that technology is becoming more and more a part of everyday life,” says Professor Rafael Calvo, a software engineer at the University of Sydney. While some are questioning how technology may be affecting us adversely, Calvo is researching how computers may be able to contribute positively to our mental health. “Positive computing is changing the design of technologies to take into account the wellbeing and happiness of people,” he says.

For example, games have been designed to encourage ‘pro-social’ behaviours. In one study at Stanford, researchers built a game where players were either given the power to fly like Superman or take a virtual helicopter ride. After playing, the participants who had the superpower were more likely to help someone in need.

Though computers are traditionally seen to have a blindspot for emotions, recent advances are paving the way for computers to notice and adapt to our moods – a phenomenon called affective computing. “Some new cameras have a setting where they only take a photo when you smile,” says Calvo.

Calvo’s team has developed software to assist moderators of Australia’s leading online youth mental health service, ReachOut.com. It can detect when someone is depressed, and possibly at risk of suicide, and alert a human moderator. His group has also teamed up with the Young and Well CRC to build an online hub where young people can download apps to help improve their wellbeing.

For Calvo, this technology represents a transformation in how software is being made – aiming to improve wellbeing, not just productivity. “Our work is centred on influencing how people develop software. Australia leads the world in this field.”

New technologies could also change the way we learn, says Professor Judy Kay from the University of Sydney. Kay and her team are exploring the use of touchscreen tabletops in the classroom as tools for students to work together. They can also help teachers monitor each group’s work. “This technology can distinguish the actions and speech of each person in a group to determine how well the group is progressing and how well they collaborate,” she says.

The movie Her presents a future in which we will have intelligent virtual personal assistants to help organise our lives. We can already tell Siri to “Call Mum” or ask Google if we need an umbrella today. But this is only the beginning.

Meet Anna Cares. She’s a friendly brunette who lives inside your tablet or smartphone as an intelligent virtual agent. Developed by Clevertar (a spin-out from the computer science labs at Flinders University), Anna is being developed for the aged care space. She can already remind you to take your medication and give timely advice based on the weather.

Dr Martin Luerssen is an artificial intelligence specialist from Flinders who works on the project. He says intelligent assistant technology has been enabled by the convergence of several advances over the past 10 years, including astonishing progress in computational and sensing capabilities, as well as speech and language technologies. Meanwhile, affective computing approaches are bringing improvements to understanding human gestures and expressions. “This enables us to create very natural, human-like interactions,” says Luerssen.

“By 2040, we expect that there will be more Australians retired than working – we cannot afford not to have this kind of technology,” adds Professor David Powers from Flinders.

We already use voice-operated technology, but now an app called Focus, developed by the Smart Services CRC, enables you to interact hands-free with a smartphone using eye movement alone – for example, you can increase font size with the blink of an eye.

“Australia leads the world in this field.”

By 2040, it is plausible we will be able to control computers with our minds using brain-computer interfaces (BCI), such as a cap covered in electrodes that can transmit brainwaves to a computer via electroencephalogram (EEG). In 2006, technology by BrainGate enabled patients with total ‘locked-in’ syndrome (where a patient is aware but cannot move or communicate verbally due to paralysis) to move a computer cursor just by thinking, thereby giving them a way to communicate. In 2010, Australian entrepreneur Tan Le unveiled a commercially available EEG headset, enabling anyone with careful concentration to give their computer simple instructions with their thoughts.

But the process is slow. “At the moment, typing with BCI can take seconds per character,” says Powers. Flinders University researchers are working on new technologies where users can type by thinking of words rather than just characters, speeding up the process.

In a field where the sudden emergence of a new technology can change the entire landscape in just a year or two, who knows how we will be communicating in 2040?

“One thing I can say with confidence is that we are very bad at predicting the future!” says Kay.

– Cathal O’Connell

youngandwellcrc.org.au

smartservicescrc.com.au

Drone used to drop beneficial bugs on corn crop

Photograph courtesy of Ausveg and Vegetables Australia

During his Summer Science Scholarship at UQ, Mr Godfrey investigated whether drones could be used to spread the beneficial Californicus mite – a predatory mite that feeds on pest leaf-eating mites – onto crops infested with two-spotted mites.

Godfrey said two-spotted mites ate chlorophyll in leaves, reducing plant vigour and crop yield.

“As corn grows, it is very difficult to walk between the crop to spread beneficial bugs,” he said.

“A drone flying over the crop and distributing the insects from above is a much more efficient and cost-effective method.”

Godfrey began his project at the Agriculture and Remote Sensing Laboratory at UQ’s Gatton Campus, learning how drones function, before spending time at Rugby Farms to gain insight into potential uses for drones.

“I built a specific drone for the project, tailoring the number of propellers, stand, and size of the motor to suit the drone’s application,” he said.

“My initial concept for the ‘Bug Drone’ came from a seed spreader, and in the end I built an attachment to the drone that can be used to spread the mites over the crop from the air.”

Initial designs using a cylinder-shaped container to hold the mites weren’t practical, as it couldn’t hold enough of the predatory mites to make the process efficient.

“I used corflute material to make a large enough storage device for the mites,” Mr Godfrey said.

“The seed spreader then acts as the distributer as it has a small motor powering it.”

The device is controlled remotely from the ground.

“We’ve tested the product at Rugby Farms and I’ve successfully proved the concept that drones can be used to spread beneficial bugs,” Mr Godfrey said.

“There is still a lot of work to be done, but the most difficult part is to work out how to control the volume of bugs being distributed at the one time.

“The next step is to monitor the crops and to see what happens after the bugs have been dropped.

“Remote sensing with precision agriculture is an interesting field, and it has opened my eyes to the career opportunities in this field,” he said.

Students can study precision agriculture at The University of Queensland Gatton in a course run by Associate Professor Kim Bryceson who also manages the Agriculture and Remote Sensing Laboratory.

Data discoveries

In the environment, big data can be used to discover new resources, and monitor the health of the resources we rely on, such as clean water and air. ANSTO is at the forefront of big data analysis and precision modelling in environmental studies at both national and international scales.

Particle accelerators are used to analyse samples at a molecular level with extremely high precision. At ANSTO, they have been integral to identifying a potential water source in the Pilbara area in northern WA, as well as measuring air quality in Australian and Asian cities.

Despite its remoteness, the Pilbara contains major export centres, such as Port Hedland, which rely heavily on sustainable use of water. In March 2014, ANSTO’s Isotopes for Water project released the results of their investigation into water quality, sustainability and the age of groundwater in the arid Pilbara region, to determine its viability as a future water resource to support the growth of the area.

“A large, potentially sustainable resource was verified by using nuclear techniques,” explains Dr Karina Meredith of ANSTO, who leads the project investigating water sources. “The outcome of this seven-year study provides a greater degree of certainty of water supply for the Pilbara.”

By calculating the age of water, ANSTO researchers can determine whether it can be drawn off sustainably, and where replacement (known as ‘recharging’) will be sufficient to maintain reservoir levels. Levels of carbon-14 in groundwater decay naturally over time, and by measuring minute traces of this radiocarbon in the groundwater with ANSTO’s STAR accelerator, scientists like Meredith can tell how old the water is. “We’ve found it’s about 5000 years old, and what was really interesting is that one of the areas had waters that were approximately 40,000 years old,” says Meredith.

Her calculations show it will be OK to drink the 5000-year-old water, as the reservoir is sufficiently recharged by water from cyclones. The 40,000-year-old vintage won’t be flowing through kitchen taps, however, as this region isn’t recharged fast enough, she says.
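
The underlying arithmetic is the exponential decay of carbon-14, which has a half-life of roughly 5,730 years. The sketch below shows the standard age calculation; the remaining-fraction values are chosen purely to illustrate ages of the order reported, and are not measured data.

```python
# Radiocarbon dating in one line of maths: t = -(half-life / ln 2) * ln(ratio),
# where 'ratio' is the carbon-14 remaining relative to modern levels.
import math

HALF_LIFE_C14 = 5730.0  # years (approximate half-life of carbon-14)

def radiocarbon_age(remaining_fraction: float) -> float:
    """Return the age in years implied by the surviving carbon-14 fraction."""
    return -(HALF_LIFE_C14 / math.log(2)) * math.log(remaining_fraction)

# Illustrative values only: ~55% of modern carbon-14 gives roughly 5,000 years,
# while ~0.8% remaining corresponds to roughly 40,000 years.
for fraction in (0.55, 0.008):
    print(f"{fraction:.3f} remaining -> about {radiocarbon_age(fraction):,.0f} years")
```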

ANSTO’s particle accelerators are being used to analyse air pollution in cities such as Manila in the Philippines.

For more than a decade, Dr David Cohen of ANSTO has used the same accelerators to track down the sources of fine particle air pollution in Australian and Asian cities. Air pollution particles come in different sizes, but fine particles are the most damaging to human health – they penetrate deep into the lungs and have been linked to cardiovascular disease.

Cohen is the data coordinator of an international study of fine particle air pollution that takes samples in cities across 15 countries in Asia and Australasia. Combining the fingerprints detected using STAR with wind back trajectories, he’s shown that the air in Hanoi, for example, can contain dust from the Gobi Desert in Mongolia and pollution from Chinese coal-fired power stations some 500–1500 km away.

In addition, to reveal the sources of air pollution nationally, Cohen’s team has recently completed a study of the Upper Hunter region of NSW, which found significant fingerprints from domestic wood burning.

“In winter, up to 80% of the fine particles were coming from wood,” says Cohen. “So the most effective way to reduce winter air pollution would be to regulate burning wood.”

www.ansto.gov.au

Using algorithms to capture risk

IN THE HEALTH SECTOR, big data has been harnessed with remarkable success. One high-profile example is Google’s Flu Trends website, reported in a 2009 paper in the journal Nature as accurately predicting the spread of epidemics based on the frequency of disease-related search queries.

Associate Professor Trish Williams, who heads the eHealth Research Group at Edith Cowan University in Joondalup, WA, says that unlike a lot of health research, projects using big data don’t focus on ‘cause and effect’. Instead, they tap into the huge potential of predictive analytics.

That’s an area where collaborative research can come to the fore, she says. Williams adds that big data research is most effective when done by cross-disciplinary teams who can both interpret information and present the findings to a broad audience.

“In health, it is really important that the semantics of the data are well-understood before you start analysing things,” she says. “You’ve also got to work out how to use some very big datasets, perhaps in ways that they weren’t necessarily intended to be used.”

“We’re working to improve the algorithms that detect what kind of problem the person has.”

This conundrum is very familiar to Associate Professor Jane Burns, CEO of the Young and Well CRC. When her team compared the results of a national survey that used ‘traditional’ computer-assisted telephone interviews with those from a similar Facebook survey, they expected both datasets would reveal similar trends.

“We found that the results were not similar at all; the internet results showed far higher levels of psychological distress,” she says, adding that there’s no sure way to work out which survey style had less bias. “Possibly, people are far more honest over the internet than they are over a telephone interview.”

Researching suicide indicators in social media is in its early stages, with researchers from the Young and Well CRC working with key industry partners such as Facebook, Twitter and Google.

“We’re trying to understand from a suicide prevention perspective, how we might be able to use big data to understand trends in the way in which people respond to things, to see if we can look to algorithms to capture some of the risks,” says Burns.

Social networking media holds a wealth of information on the public’s mental health.

With more than 500 million short messages going out through the Twitter network daily, Burns says that finding algorithms to uncover keywords for suicide risk is a huge challenge.

Included in the research is suicide contagion – where one suicidal act within a community increases the likelihood of more occurring. Burns says a key focus of the research into suicide contagion, along with identifying early warning symptoms or signs, is initiating support networks.

Within the Young and Well CRC, Associate Professor Rafael Calvo of the University of Sydney is working to design tools that help moderators in online health-focused communities, such as youth mental health support service ReachOut.com, to provide appropriate feedback and support for their members.

Thousands of forum posts can be automatically processed, generating a report that prioritises more serious problems so moderators can respond immediately. The team has also developed suggested ‘intervention’ templates, which link to helpful resources.

“We have built the interface for the moderator, and we’re now working on improving the algorithms that detect what kind of problem the person has,” Calvo says.
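
The prioritisation idea can be pictured as ranking posts by the apparent urgency of the problem detected. The keyword weights below are a deliberately simplified, hypothetical stand-in for the trained models such a system would actually use.

```python
# Toy triage of forum posts by urgency so moderators see serious cases first.
# The keywords and weights are hypothetical; real systems use trained models.
URGENCY_TERMS = {
    "suicide": 10,
    "self-harm": 8,
    "hopeless": 5,
    "anxious": 2,
}

def urgency_score(post: str) -> int:
    """Sum the weights of any urgency-related terms found in the post."""
    text = post.lower()
    return sum(weight for term, weight in URGENCY_TERMS.items() if term in text)

posts = [
    "Feeling a bit anxious about exams this week",
    "I feel hopeless and have thought about suicide",
]

# Highest-scoring posts go to the top of the moderator's report.
for post in sorted(posts, key=urgency_score, reverse=True):
    print(f"[score {urgency_score(post):2d}] {post}")
```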

One of the hopes for big data analysis is to uncover measurable biological indicators for devastating mental health disorders.

Social media is just one of the big data examples in health. At the CRC for Mental Health, researchers are looking for biomarkers – measurable biological indicators that might enable early intervention for people at risk of Alzheimer’s disease, mood disorders, schizophrenia and Parkinson’s disease. Datasets include the Australian Imaging, Biomarker & Lifestyle Flagship Study of Ageing, which has genomic information for more than 1500 people – some with normal cognitive function, others with mild cognitive impairment and others who have been diagnosed with Alzheimer’s disease.

Dr Noel Faux, a bioinformatician at the Florey Department of Neuroscience and Mental Health, says that the vast amounts of information already available include blood measurements of thousands of hormones and proteins. Cognitive and clinical assessments are also being gathered.

His team is working with software developer Arcitecta to help researchers capture clinical data on-site and feed it into a data repository that can be used by multiple research institutions.

www.youngandwellcrc.org.au
www.mentalhealthcrc.com
au.reachout.com

 

Tracking health

HealthTracks, a web-based tool built by the CRC for Spatial Information, has been used by researchers at Western Australia’s Department of Health to merge health data with spatially-based datasets. The aim is to identify populations at risk of disease and gaps in the location of essential health services.

So far, hospital and regional health data has been combined with public datasets via the WA Landgate Shared Land Information Platform. When rolled out nationally, the tool will include modular enhancements for the analysis of mental health, child health and environmental health data.

bit.ly/1klzui7

The data deluge

WHEN A POWERFUL magnitude 7 earthquake devastated the Republic of Haiti on 12 January 2010, more than 200,000 people were killed. Around 3 million people were affected by the earthquake and its aftershocks, which destroyed 250,000 homes and 30,000 commercial buildings.

Around 630,000 people left the chaos of Haiti’s capital, Port-au-Prince, in search of shelter, water and sanitation. Many of these people used Haiti’s four main mobile phone providers, via its 6 million mobile phone lines, to call friends or relatives in rural areas. Those calls enabled Swedish medical researchers at the Karolinska Institutet in Stockholm to track their movements and to identify areas at risk of potential cholera outbreaks.

The researchers worked with Haiti’s largest mobile phone operator, Digicel, to analyse the call history of 2 million mobile phone users, before and after the earthquake. The results, published in PLoS Medicine and Proceedings of the National Academy of Sciences in 2011 and 2012 respectively, found that “people seemed to have travelled to where they had significant social bonds and support”. More specifically, most Haitians fleeing Port-au-Prince went to the same locations where they had spent Christmas and New Year. The study showed that large-scale movements after earthquakes and other disasters are not chaotic, but often highly predictable, and could be used to improve the efficiency of aid distribution.
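The published papers describe the method in full; a heavily simplified sketch of the underlying idea is shown below, with invented records and area names. For each anonymised subscriber, find the area where they placed most calls before and after the quake, then tally the flows between areas.

```python
# Simplified mobility sketch: infer each subscriber's dominant area before
# and after the earthquake from (anonymised) call records, then count flows.
# Records, identifiers and area names are invented for illustration.
from collections import Counter
from datetime import date

QUAKE = date(2010, 1, 12)

# (subscriber_id, call_date, tower_area)
calls = [
    ("a1", date(2010, 1, 5), "Port-au-Prince"),
    ("a1", date(2010, 1, 20), "Artibonite"),
    ("b2", date(2010, 1, 8), "Port-au-Prince"),
    ("b2", date(2010, 1, 25), "Port-au-Prince"),
]

def dominant_area(records):
    """Area in which a subscriber placed the most calls."""
    return Counter(area for _, _, area in records).most_common(1)[0][0]

flows = Counter()
for sub in {c[0] for c in calls}:
    before = [c for c in calls if c[0] == sub and c[1] < QUAKE]
    after = [c for c in calls if c[0] == sub and c[1] >= QUAKE]
    if before and after:
        flows[(dominant_area(before), dominant_area(after))] += 1

for (origin, destination), count in flows.items():
    print(f"{origin} -> {destination}: {count} subscriber(s)")
```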

The Haiti research is an example of how big data analysis can be used for humanitarian purposes or ‘data philanthropy’ in developing countries. A May 2014 report by the Bill and Melinda Gates Foundation suggests that mobile phone data is “one of the only large-scale, digital data sources that touch large portions of low-income populations,” and if analysed “under proper protections and anonymisation protocols, it can be used to enhance the lives of poor people around the world”.

Port-au-Prince, Haiti, January 2010: a study of Haitian phone records showed that people’s movements after a disaster are often highly predictable.

MOBILE PHONES ARE just one source of big data in a world where global satellite navigation, online transactions, sensors, digital closed-circuit cameras, radar monitoring and aerial surveys using pre-programmed drones generate hundreds of exabytes (billions of gigabytes) of data a year. In 2010, The Economist published a series of features on ‘the data deluge’, warning that keeping up with this flood was difficult enough, but “analysing it, to spot patterns and extract useful information, is harder still”. A 2011 report by the McKinsey Global Institute estimated that the volume of global data would grow by 40% a year, while global spending on data information management was growing at just 5% annually. Technology researcher Gartner estimated that big data analytics drove US$28 billion of global IT spending in 2012, and predicted expenditure would exceed US$230 billion by 2016.

In Australia, the Data to Decisions Cooperative Research Centre (D2D CRC) has received $25 million from the federal government and $62.5 million from industry and research participants to address big data challenges. It follows a review of the data analytics and management capacity of Australia’s public service, including defence and federal law enforcement agencies.

Based in Adelaide, the D2D CRC will focus on three research areas: data storage and management, analytics and decision support, and law and policy for big data analysis including issues such as privacy. Participants include Deakin University, the Australian Federal Police, the Attorney-General’s Department, the Department of Defence, the University of South Australia, the University of Adelaide, UNSW Australia, BAE Systems and SAS. The CRC will also develop research links with leading US universities and data analysts.

“We’re dealing with vast volumes of raw, unstructured data. For defence and national security, it’s like looking for the proverbial needle in a haystack.”

DR SANJAY MAZUMDAR, the CRC’s chief executive, says the bid to establish the $88 million research venture arose from discussions about future challenges to Australia’s national security and a shortage of skills in data intelligence applications and analytics. Australia urgently needs to build a skilled workforce to manage, extract and analyse data. The CRC aims to produce 48 PhD students across areas that include health care, IT, government services, law, manufacturing and defence intelligence systems. It will also train 1000 data scientists through its Education and Training Program, and work with universities to build on existing degrees in business and data analytics.

Australia’s defence and national security sectors face “the most imminent and complex” challenges from the global data deluge, Mazumdar says. British mathematician and global data analytics expert Clive Humby has described data as “the new oil”, but Mazumdar points out that, like oil, data needs to be processed to extract maximum benefit.

“We’re dealing with vast volumes of raw, unstructured data. For defence and national security agencies, it’s like looking for the proverbial needle in a haystack. In a time-critical situation, you need to be able to extract actionable intelligence from that data, and to do that, you need advanced data analysis programs that can process and filter that data quickly, accurately and efficiently,” he says.

Even when massive datasets have been processed and analysed, there’s still a need to cross-reference and present findings as visualisations – tables, charts, graphs, keywords and heat maps – that condense the data to manageable and easily assimilated information. Otherwise, Mazumdar says, we may be “drowning in data but starved for information”.

He explains that it’s not only defence and national security agencies that will benefit from expanding Australia’s skills in data analytics. “In mining, for example, the biggest costs are around exploratory drilling to obtain samples for analysis. There’s a possibility that geophysical data from satellite images could be used to pinpoint where deposits are likely to occur, and that could be immensely cost-saving.”

Mazumdar says the CRC will provide a variety of big data users – from government departments and utilities to universities and private industry – with the “tools, techniques and workforce to unlock the value of their data to make more informed and efficient decisions”.

Advanced machine learning and data retrieval systems are critical research areas for big data management. Mazumdar uses the example of image extraction during an attempted terrorist attack, when police and defence intelligence may need to analyse hours, days or even weeks of footage from closed-circuit cameras.

“If you can teach a computer to look for certain combinations of things in that high-volume data stream, you will get a faster result that will inform real-time decisions a lot more quickly,” he says.

The CRC will develop next-generation data storage and large-scale processing software from commercial open-source data management systems, such as Hadoop. Mazumdar says new systems of data mining and machine learning could reduce the time required to analyse high-volume data streams, including satellite imagery. It’s not a question of automating decisions via a machine, but of using data analytics to strip out non-essentials and collate relevant material.

“The human eye and brain are very good at processing complex information from images, but they wouldn’t cope with such a high volume of material. If we can teach a computer to look for certain combinations of things, like the shape of an aircraft, for example, the machine can sort the images in the data pipeline into a smaller, more manageable dataset.”
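One way to read Mazumdar's example is as a filter stage in a processing pipeline: a cheap automatic detector scores every frame, and only high-scoring frames are passed on for human review. The sketch below assumes a stand-in detect() function; in practice this would be a trained image classifier.

```python
# Schematic filter stage: score every frame with a detector and keep only
# likely matches, shrinking the stream to a reviewable candidate set.
# detect() is a deterministic stub standing in for a real trained model.
import random
import zlib

def detect(frame: bytes) -> float:
    """Stand-in detector returning a confidence score in [0, 1]."""
    rng = random.Random(zlib.crc32(frame))  # stub, not a real classifier
    return rng.random()

def filter_stream(frames, threshold: float = 0.9):
    """Yield only the frames the detector scores above the threshold."""
    for frame in frames:
        if detect(frame) >= threshold:
            yield frame

if __name__ == "__main__":
    stream = (f"frame-{i}".encode() for i in range(100_000))
    candidates = list(filter_stream(stream))
    print(f"kept {len(candidates)} of 100,000 frames for human review")
```

With a 0.9 threshold the stub keeps roughly a tenth of the stream; a real detector tuned for, say, aircraft shapes would aim for a similar reduction with far fewer false positives.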

THE RIGHT TO privacy, and the question of who owns the data generated by social media, mobile phones, ATMs and iPad apps, is a hotly contested topic.

A data management issues paper was developed by the Australian Government Information Management Office (AGIMO) to identify and discuss privacy and security implications around the use of government agency data. The AGIMO estimates that 90% of the data in the world today was generated over the past two years alone, and that the total will be 44 times greater by 2020. But who should have the right to use such data, under what circumstances, and what controls should be placed on its use?

The AGIMO argues that private companies such as banks, online retailers, insurance companies and social media sites, including Twitter and Facebook, harvest huge volumes of customer data, which is analysed and used to create new client services. Government departments and agencies could also use data analytics to improve services, but they’re bound by a range of legislative controls relating to privacy, security and public trust. In Australia, they must obtain and use information according to the Privacy Act, the Telecommunications Act, Freedom of Information laws and others.

The Gates Foundation report gives an example of mobile phone data use that generated controversy. When health researchers at Harvard University obtained Kenyan mobile records to track the spread of malaria in 2012, it provoked a storm of protest from people who had unknowingly contributed to the study. The researchers had obtained anonymised records for every call and text message sent by 15 million Kenyan mobile phone subscribers over a year, and used the data to identify regions where malaria infection had originated, so medical aid could be targeted more effectively.

Despite the humanitarian nature of the research and reassurances that callers could not be identified from data provided, Kenyan media claimed the study had breached privacy. The Gates Foundation report says the incident shows that “even with the best of intentions and adherence to rules”, researchers need to consider privacy issues when collecting data.

Professor Louis De Koker, of Deakin University’s School of Law, was founding director of the Centre for the Study of Economic Crime at the University of Johannesburg in South Africa. He will lead the D2D CRC’s Law and Policy program, which combines senior law and socio-legal researchers from the Deakin School of Law and UNSW Australia Law.

“Big data analysis challenges existing privacy principles because our current framework is built around the notion that you own your data and anyone who wants to use it needs your consent to do so,” he says. “It also assumes that data can be effectively de-identified, whereas data analytics can now enable re-identification.”
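De Koker's point about re-identification can be illustrated with a toy example: a 'de-identified' health record can sometimes be linked back to a named person by matching quasi-identifiers such as postcode, birth year and sex against a public dataset. All records below are fictional.

```python
# Fictional re-identification sketch: join a 'de-identified' dataset to a
# public register on quasi-identifiers; a unique match reveals the person.
deidentified = [
    {"postcode": "5000", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "6230", "birth_year": 1990, "sex": "M", "diagnosis": "diabetes"},
]

public_register = [
    {"name": "Jane Citizen", "postcode": "5000", "birth_year": 1984, "sex": "F"},
    {"name": "John Smith", "postcode": "6230", "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

for record in deidentified:
    matches = [p for p in public_register
               if all(p[k] == record[k] for k in QUASI_IDENTIFIERS)]
    if len(matches) == 1:  # a unique match re-identifies the individual
        print(f"{matches[0]['name']} -> {record['diagnosis']}")
```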

“In many cases, the data you generate from, for example, activity on social media sites or Google searches, may be analysed and produce more data and a deeper understanding about you or about communities of which you are a member. In addition, that data would often be stored in another country. What are the laws that apply to that secondary data, and how do you enforce any breaches of rights that you may have in relation to that data? How do we harness such data to improve national security while protecting Australians from abuse of their data? Those are the kinds of policy questions that need to be looked at.”

He says privacy concerns reflect both a substantial increase in the volume of data being collected and social concerns and fears about being spied on.

“Unlike the days when you filled in a form and physically handed it to someone, people often don’t know what kind of data is being collected, how and when it’s being collected, who is using it to draw conclusions about you or which decisions by government or private companies relating to you are affected,” he says.

“These days, we take the presence of surveillance cameras very much for granted because they’re everywhere – they’re in shops, airports and at ATMs. But what are the implications when we combine these data sources with sophisticated data analysis? How can we harness the benefits of such data and protect society against abuse?”

In 2013, Shoalhaven City Council installed CCTV cameras in the NSW South Coast town of Nowra as part of a crime prevention program. The surveillance cameras were installed in public places, including shops, parking lots and parks. But a resident challenged their use and argued before the Administrative Decisions Tribunal that it was not the council’s role to collect evidence for the purpose of prosecuting crime. The tribunal upheld the resident’s complaint, ruling that council signage near the camera did not adequately inform people about privacy implications. It also ruled that the council had not established that filming people was “reasonably necessary” to prevent crime.

De Koker says there’s also debate around the adequacy of the protection afforded by giving people notice and gaining their consent to collect and use data, for example when customers sign online agreements relating to social media, software downloads and apps. A report to President Barack Obama in May 2014 on big data and privacy by the President’s Council of Advisors on Science and Technology stated that “each individual app, program or web service” is legally required to ask people to give consent for data collection practices. “Only in some fantasy world do users actually read these notices and understand their implications before clicking to indicate their consent,” the report says.

Mazumdar and De Koker say the D2D CRC will explore opportunities and challenges posed by high-volume data harvesting and analytics in consultation with legal and national security experts.

“All governments are grappling with this issue… There is a lot of good that can come from big data analysis, but we need to balance our expectations and concerns,” De Koker says.

“In the modern world, one of the biggest threats to both national security and personal privacy is a person sitting in a room with a laptop.”

THE ADVISORY COUNCIL’S report to President Obama suggests future generations, who will have grown up with digital technologies, “may see little threat in scenarios that individuals today would find threatening”. It describes a future in which “digital assistants”, in the form of data collection cameras, film a woman packing her suitcase for a business trip. The bag is placed outside for pick-up, with her digital assistants sending the delivery instructions. The suitcase won’t be stolen because a streetlight camera is watching it and every item inside it has a tiny electronic tag that can be tracked and found within minutes.

Her world possibly “seems creepy to us”, but she has “accepted a different balance among the public goods of convenience, privacy and security than most people would today,” the report says.

“In the modern world, one of the biggest threats to both national security and personal privacy is a person sitting in a room with a laptop,” says Mazumdar. “That threat will only grow as the world becomes increasingly reliant on digital technology. We’re already detecting sophisticated ways to hide data and connections online. We need to improve our national capacity to detect and respond to that hidden information, but also to ensure control of that capacity, to respect and protect the rights of users online.”

www.d2dcrc.com.au

Antarctic robots trawl for climate data

Research led by ARC Future Fellow Dr Guy Williams and published in November 2014 provides the most complete picture yet of Antarctic sea ice thickness and structure.

The data was collected by an Autonomous Underwater Vehicle (AUV) deployed during a two-month exploration in late 2012 as part of an international collaboration between polar scientists, including the Antarctic Climate and Ecosystems CRC (ACE CRC). It’s hoped the work will help explain the ‘paradox’ of Antarctic sea ice extent, which has grown slightly during the past 30 years. This is in stark contrast to Arctic sea ice, which has shown a major decline.

Previously, measurements were made via drill holes in the ice, supplemented by visual observations from icebreakers as they crashed and ploughed through the sea ice zone, said Williams.

In contrast, the AUV gathers information by travelling beneath the ice, producing 3D maps of the underside of the ice based on data captured by a multi-beam sonar instrument. Complex imagery of an area the size of several football fields can be compiled in just six hours.

The manual drill estimates of thickness have never exceeded 5–6 m, but the AUV regularly returned thicknesses over 10 m and up to 16 m.
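As a rough sketch of how thickness is derived from an upward-looking sonar: the ice draft at each ping is the vehicle's depth minus the measured range to the ice underside, and total thickness can be approximated from the draft because roughly 90% of floating sea ice sits below the waterline. The depths and ranges below are invented; the real survey produced dense 3D maps rather than a handful of pings.

```python
# Rough illustration of deriving ice draft and thickness from an AUV's
# upward-looking sonar. All numbers are invented for the example.
auv_depth_m = 30.0                                # vehicle depth below the surface
sonar_ranges_m = [24.0, 21.5, 18.0, 25.5, 14.2]   # measured range up to the ice

drafts = [auv_depth_m - r for r in sonar_ranges_m]  # ice below the waterline
mean_draft = sum(drafts) / len(drafts)

# Sea ice floats with roughly 90% of its thickness submerged, so thickness
# is approximately draft / 0.9 (a common rule-of-thumb approximation).
mean_thickness = mean_draft / 0.9

print(f"drafts (m): {[round(d, 1) for d in drafts]}")
print(f"mean thickness estimate: {mean_thickness:.1f} m")
```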

Autonomous Underwater Vehicles (above) as well as data-gathering seals are revealing surprising global climate effects in the Antarctic.

“This sort of thick ice would simply never be sampled by drilling or observations from ships,” said Williams. “We measured the thickness of 10 double football fields, and found that our traditional method [manual drill lines] would have underestimated the volume by over 20%.”

The researchers can’t yet say that overall Antarctic sea ice thickness is underestimated by this amount. They’ll need to use the AUV over much longer scales – across distances of 1000 km, for example – and directly compare the results with those from traditional methods.

The AUV is one of two new innovative information sources being used by ACE CRC scientists to explore Antarctic sea ice processes and change. They’ve also begun tapping into environmental data gathered in the Southern Ocean by elephant seals. These marine mammals can dive deeper than 1500 m and travel thousands of kilometres in a season.

During the past decade, ecologists and biologists have been fitting them with specialised oceanographic instruments, provided by Australia’s Integrated Marine Observing System, to observe where and when they forage.

“These seals had been going to places we could only dream of going with a ship,” said Williams. The first major breakthrough from the seal-gathered data came last year with the confirmation of a new source of Antarctic bottom water, the cold dense water mass created by intense sea ice growth that ultimately influences climate worldwide.

It’s the fourth source of this influential water mass to be identified, and scientists had been looking for it for more than 30 years.

– Karen McGhee

www.acecrc.org.au