Tag Archives: University of Melbourne

agricultural research

Transplanting refugee knowhow

Featured image credit: new agricultural research brings community together to improve maize production. Credit: Facebook/Sunraysia Burundian Garden

Far from their native home in eastern Africa, a group of former refugees have brought their traditional farming methods to their new home in Victoria’s north.

Armed with shovels and hoes and some seeds, Mildura’s Twitezimbere Burundian community have planted a crop of maize – a traditional staple food in their home country – which is not only connecting them to the greater Mildura community, but is also connecting researchers with new agricultural methods.

These methods, employed in the northern Victorian township that’s known for its hot temperatures and vast food-growing industry, are helping researchers from the University of Melbourne and the University of Wollongong understand new ways to grow and support crops beyond current techniques. This is especially important in an era of increasingly erratic weather patterns.

agricultural research
Work in the maize field in Mildura. Credit: Facebook/Sunraysia Burundian Garden

Dr Olivia Dun, from the School of Geography at the University of Melbourne, says she and fellow researchers – Professor Lesley Head from the University of Melbourne and Dr Natascha Klocker from the University of Wollongong – have been able to gain important insights into different agricultural methods and crops that could be adopted in Australia.

With funding from an Australian Research Council Discovery Project, the researchers are exploring how people from ethnically diverse backgrounds value nature, how they practise agriculture and how they transfer skills from their home country to the Australian landscape.

“One of the reasons we’re doing this is because when we talk about migrants in relation to the environment, they’re often portrayed as a drain – extra people needing resources. It’s a population debate that frames migrants very negatively,” says Dun.

“So, we wanted to challenge that: these are people with skills, and migrants are not often asked about their knowledge and skills relating to nature.”

Drawing on this untapped resource has brought the Burundian community multiple benefits: it has not only provided the 100 members of the community with the main ingredient for their traditional dishes, but has also enabled them to connect with the broader Mildura community.

One of the Burundian participants, Joel, said of why he wanted to farm: “I looked and saw [that] this town is a town of farmers. So I thought it will suit me. Because I did not study, I don’t have a degree, I don’t expect to go and work in an office”.

Another agricultural research participant, Joselyne, says she sees Mildura as a “place to grow”.

And grow it has. The tiny maize seeds were planted in September 2016 and by February 2017 had flourished into a soaring crop in which people could get lost. Dun says its success has delighted everyone involved, but especially the local Burundians.

The Republic of Burundi is an east African country that has been blighted by a recent history of colonisation and bloody civil wars. Unlike in Australia, the majority of Burundi’s population lives in rural areas, so farming and agriculture are significant economic and practical components of life in the land-locked country.

Maize is a staple food for Burundians, along with sweet potato, cassava and wheat.

agricultural research
Preparing the soil for the first day of maize planting. Credit: Olivia Dun and Rachel Kendrigan

“Joel, Joselyne and other members of the Burundian community are extremely accomplished and knowledgeable farmers,” says Dun. “Through their interactions with more established farmers in Mildura, this project provides really exciting opportunities to learn about their farming methods.

“It’s been built on such a strong foundation of mutual respect and a willingness to learn from other cultures, which has been inspiring to see,” adds Klocker.

The community will consume about 10% of the maize fresh, and the rest will either be sold or dried and milled into flour to make ugali – a traditional East African dish. The success of the crop has opened the path for them to think about developing their own small business, selling maize to the Mildura community.

“They feel proud and it’s connected them to the general Australian community in Mildura in a very positive way,” says Dun.

The agricultural research project has been a group effort, made possible thanks to the generous access to one acre of land provided by Sunraysia Produce, support from Sunraysia Local Food Future’s members and Food Next Door program, Sunraysia Mallee Ethnic Communities Council and Mildura Development Corporation.

Dun says keeping this farming tradition alive has been particularly good for the younger kids amongst Mildura’s Burundian community, many of whom have grown up in Australia and have now been able to interact with this crop and learn how their families farmed in Africa.

Thanks to the success of the agricultural research pilot, more businesses and community groups are seeking to get involved and cultivate more under-utilised land. Vietnamese, Tamil, Nepalese, Hazara and young Anglo-Australian groups across Mildura want to get involved in the next farming scheme, with talks underway to cultivate and establish a community farm on a recently-donated 20-acre parcel of land.

– Alana Schetzer, University of Melbourne

The Food Next Door program needs help kick-starting the community farm and is currently looking for support and financial donations. If you would like to help, please email sunraysialocalfoodfuture@gmail.com

This article was first published by Pursuit. Read the original article here.

AI psychologists

AI psychologists are ready now

The Psychology Network has created one of the world’s first AI psychologists, an artificial ADHD coach called Amy. 

While communication is changing all around us, psychological practice has not fundamentally changed for more than a hundred years. Psychologists deliver services by talking to people in an office environment or out in the field.

While the nature of psychological assessment and therapy may have changed over the years, the formal setting has not: a professional (the psychologist) and a client generally talk one-on-one or in the presence of others.

However, with the arrival of artificial intelligence (AI) in the workplace, the delivery of psychological services is set to change dramatically. Mobile phone apps, for instance, can analyse speech and language to detect indicators of depression and provide instant feedback to both psychologists and clients.

AI psychologists available around the world, 24/7

Although online versions of cognitive-behaviour therapy have been available for more than a decade, what is emerging now are “AI psychologists” – programs that are empowered by vast knowledge bases on mental health and how to solve very human problems.

These programs talk to people in ways that are almost indistinguishable from the ways that human psychologists do. Importantly, they are available anytime, everywhere (on your mobile phone, for example) – and they cost as little as $2/hr. This is psychological expertise on tap, 24/7.

But can psychological therapy work without a shared human experience? Will it be possible for a client to form a bond that is reassuring and goes beyond simply using a mobile app?

I think so. By way of example, a few weeks ago I drove a rental car through a large European city – a place I was visiting for the first time. Given peak-hour traffic, narrow streets and a lot of construction, the experience would have been enough to trigger high stress levels. However, I learned to trust the reassuring voice of my navigation system and the whole experience was as stress-free as I could have hoped for.

Although this is not an example of an AI system, it illustrates the commonplace experience of a machine-generated voice inducing relaxation in a stressful context.

Can humans compete with AI psychologists?

The voices of AI psychologists are now for sale. It is difficult to see how human psychologists can compete with AI psychologists that offer cost-effective coaching and therapy around the clock to thousands of clients at the same time.

By way of example, Tess is a “psychological AI” developed by X2AI, Inc., a corporation based in Delaware. According to X2AI, the program “administers highly personalised psychotherapy, psycho-education, and health-related reminders, on-demand, when and where the mental health professional isn’t”.

Furthermore, the company states that “interaction with Tess is solely through conversation, exclusively via existing communication channels, such as SMS, Facebook Messenger, web browsers, and several other platforms.” The current fee is US$1 per patient per month.

Meet Amy, AI ADHD coach

Amy is an artificial Attention Deficit Hyperactivity Disorder (ADHD) coach developed by the Psychology Network Pty Ltd. Amy has extensive medical and psychological knowledge and the built-in capacity to acquire additional knowledge from mental health experts, which she goes on to apply in her coaching.

Amy’s primary mode of communication is conversation. However, she also provides videos, images and text to educate her users. During conversation, Amy analyses mood problems from the speech and language of her clients. Her knowledge bases are updated frequently to include the latest facts about mental health and ADHD, plus the clinical experience of practicing psychologists.

How does Amy work?

Let’s assume the user experiences challenges such as restlessness and concentration problems. The corresponding symptoms trigger a problem solving process conducted by Amy, the AI system.

The goal is obviously to reduce or eliminate these symptoms, but in psychology it is never that simple. We also want the user to be safe, we want to avoid relapses, and we generally support multiple goals, including integration into a family or other social network, and a lifestyle that is healthy and productive.

Amy uses “heuristic search” to determine a path from the starting state (symptoms) to multiple goal states. The path – made up of intermediate states – consists of a selection of psychological methods that have proven useful, such as brain training and relaxation techniques.

All of this is textbook artificial intelligence. The first AI problem solvers were developed more than 50 years ago. What is new is the availability of vast knowledge bases such as SNOMED and YAGO, which can be used as background knowledge. In addition, AI systems can learn how to solve people’s personal problems from human psychologists.
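The textbook technique described above can be sketched in a few lines. The following is a minimal illustration only, not the Psychology Network’s actual implementation: the states, method names, costs and heuristic estimates are all invented for the example.

```python
import heapq

def heuristic_search(start, goals, successors, heuristic):
    """A*-style best-first search from a start state to any goal state.

    successors(state) -> iterable of (method, next_state, cost)
    heuristic(state)  -> estimated remaining cost to reach a goal
    Returns the list of methods applied along the found path, or None.
    """
    # Frontier entries: (g + h, g, state, path-so-far)
    frontier = [(heuristic(start), 0, start, [])]
    seen = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state in goals:
            return path
        if state in seen:
            continue
        seen.add(state)
        for method, nxt, cost in successors(state):
            if nxt not in seen:
                heapq.heappush(
                    frontier,
                    (g + cost + heuristic(nxt), g + cost, nxt, path + [method]),
                )
    return None

# Hypothetical state graph: states are symptom profiles, edges are
# psychological methods with an effort cost (all names invented).
graph = {
    "restless+unfocused": [("relaxation training", "unfocused", 2),
                           ("brain training", "restless", 4)],
    "unfocused": [("brain training", "stable", 3)],
    "restless": [("relaxation training", "stable", 2)],
}
# Crude heuristic: estimated remaining effort per state.
estimates = {"restless+unfocused": 4, "unfocused": 2, "restless": 2, "stable": 0}

plan = heuristic_search(
    "restless+unfocused",
    goals={"stable"},
    successors=lambda s: graph.get(s, []),
    heuristic=lambda s: estimates[s],
)
print(plan)  # ['relaxation training', 'brain training']
```

The heuristic guides the search toward the cheapest sequence of interventions; in a real system the state space, costs and estimates would come from the clinical knowledge base rather than a hand-written dictionary.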

What’s next for psychology?

Psychological practice, as we know it, is a thing of the past. The question is, how can professionals and organisations adjust?

There are still parts of psychological therapy that should not be automated, such as assessing the risk of self-harm. Furthermore, AI systems are hungry for knowledge, and the best systems incorporate not only machine learning but human expertise as well.

There are many opportunities for practicing psychologists to contribute to the development of specialised AI psychologists.

Dr Joachim Diederich

Director, Psychology Network Pty Ltd

Honorary Professorial Fellow, Centre for Mental Health, University of Melbourne

Honorary Professor, School of Information Technology and Electrical Engineering, University of Queensland

Read next: Managing Director of Information Gateways, Simon Maxwell, paints a picture of what future living will look like in the era of autonomous vehicles. 

Spread the word: Help Australia become a digital-savvy nation! Share this piece on digital disruptors using the social media buttons below.

More Thought Leaders: Click here to go back to the Thought Leadership Series homepage, or start reading the Women in STEM Thought Leadership Series here.

quantum MRI

Quantum MRI machine to enable drug discovery

Featured image above: Visualisation of a quantum MRI machine. Credit: University of Melbourne

Researchers at the University of Melbourne have developed a way to radically miniaturise a Magnetic Resonance Imaging (MRI) machine using atomic-scale quantum computer technology.

Capable of imaging the structure of a single bio-molecule, the new system would overcome significant technological challenges and provide an important new tool for biotechnology and drug discovery.

The work was published in Nature Communications, and was led by Professor Lloyd Hollenberg at the University of Melbourne, working closely with researchers at the ARC Centre of Excellence for Quantum Computation and Communication Technology (CQC2T) to design the quantum molecular microscope.

The team proposes using atomic-sized quantum bits (qubits), normally associated with the development of quantum computers, as highly sensitive quantum sensors to image the individual atoms in a bio-molecule.

“Determining the structure of bio-molecules such as proteins can often be a barrier to the development of novel drugs,” says Hollenberg, Thomas Baker Chair in Physical Biosciences at the University of Melbourne.

“By using quantum sensing to image individual atoms in a bio-molecule, we hope to overcome several issues in conventional biomolecule imaging,” Hollenberg says.

State-of-the-art techniques create a crystal of the molecule to be studied and use X-ray diffraction to determine the molecule’s average structure. However, the crystallisation and averaging processes may lead to important information being lost. Also, not all bio-molecules can be crystallised – particularly proteins associated with cell membranes, which are critical in the development of new drugs.

“Our system is specifically designed to use a quantum bit as a nano-MRI machine to image the structure of a single protein molecule in its native hydrated environment,” says Hollenberg.

“As part of our research in quantum computing we have also been working on the nearer-term applications of atomic-based quantum technology investigating the use of a single quantum bit as a highly sensitive magnetic field sensor.”

Atomic qubits can be made to exist in two states at the same time, a disturbingly strange property that not only underpins the power of a quantum computer, but also the sensitivity of qubits as nano-sensors.

“In a conventional MRI machine large magnets set up a field gradient in all three directions to create 3D images; in our system we use the natural magnetic properties of a single atomic qubit,” says University of Melbourne PhD researcher Mr. Viktor Perunicic, who was the lead author on the paper.

“The system would be fabricated on-chip, and by carefully controlling the quantum state of the qubit probe as it interacts with the atoms in the target molecule, we can extract information about the positions of atoms by periodically measuring the qubit probe, and thus create an image of the molecule’s structure,” says Perunicic.

“The system could be constructed and tested relatively quickly using diamond-based qubits. However, to capture really high resolution molecular images in the longer term, CQC2T’s silicon-based qubits might have the advantage because they have very long quantum coherence,” says Hollenberg.

“The construction of such a quantum MRI machine for single molecule microscopy could revolutionise how we view biological processes at the molecular level, and could lead to the development of new biotechnology and a range of clinical applications.”

This article on the design of a quantum MRI machine was first published by The Melbourne Newsroom on 12 October 2016. Read the original article here.

eureka prize 2016

Eureka Prize Winners of 2016

Featured image above: Winners of the 2016 UNSW Eureka Prize for Scientific Research, Melissa Little and Minoru Takasato from the Murdoch Childrens Research Institute. Credit: Australian Museum

Regenerating kidneys, smart plastics, artificial memory cells and a citizen science network that tracks falling meteors. These and many other pioneering scientific endeavours have been recognised in the 2016 annual Australian Museum Eureka Prizes, awarded at a gala dinner in Sydney.

Having trouble with a kidney? It may not be long before you can simply grow a new one. This is the ultimate ambition behind the research of the 2016 UNSW Eureka Prize for Scientific Research winners, which was awarded to Melissa Little and Minoru Takasato from the Murdoch Childrens Research Institute.

They have developed a method of growing kidney tissue from stem cells, and their kidney “organoids” develop all the different types of cells that are needed for kidney function. The kidney tissue is currently used in the lab to model kidney disease and to test new drugs, but one day the technique could be developed to regrow replacement kidneys for transplant.

For his work using the latest in 3D printing and materials technology to develop a world centre for electromaterials science, Gordon Wallace, from the University of Wollongong, received the 2016 CSIRO Eureka Prize for Leadership in Innovation and Science.

Some of the materials he and his team are developing include structures that are biocompatible, meaning they can be used inside the body without causing an adverse reaction. These structures can be used to promote muscle and nerve cell growth. Other projects include artificial muscles made using carbon nanotubes.

The CSIRO’s Lisa Harvey-Smith has been one of the most vocal and energetic proponents of science in the media and among the general public, especially Indigenous communities. It is for her work as the face of the Australian Square Kilometre Array Pathfinder (ASKAP) and communicating astronomy to the public that Harvey-Smith was awarded the 2016 Department of Industry, Innovation and Science Eureka Prize for Promoting Understanding of Australian Science.

Have you ever seen a meteor streak across the sky and wondered where it landed? Phil Bland, from Curtin University, certainly hopes you have. He and his team set up the Desert Fireball Network, which allows members of the public to track meteors as they fall, helping them to identify where they land, and where they came from.

For this, Bland and his team were awarded the 2016 Department of Industry, Innovation and Science Eureka Prize for Innovation in Citizen Science.

But not all the awards went to seasoned researchers. Some were reserved for the next generation of scientific pioneers.

Hayden Ingle, a Grade 6 student from Banksmeadow Primary School in Botany, received the 2016 Sleek Geeks Science Eureka Prize for Primary Schools for his video production, The Bluebottle and the Glaucus. It tells the remarkable tale of a little known sea predator, the tiny sea lizard, or Glaucus atlanticus, and its fascinating relationship with the bluebottle.

Speaking of predators, a video by Claire Galvin and Anna Hardy, Year 10 students at St Monica’s College, Cairns, won the 2016 Sleek Geeks Science Eureka Prize for Secondary Schools for exploring the eating habits of the Barn Owl.

They examined “owl pellets”, which contain the indigestible components of the owl’s last meal, and used them to identify its prey.

Other winners of the 2016 Eureka Prize

Ewa Goldys from Macquarie University and the ARC Centre of Excellence for Nanoscale BioPhotonics and Martin Gosnell from Quantitative Pty Ltd have been awarded the ANSTO Eureka Prize for Innovative Use of Technology for their development of hyperspectral imaging technology, which enables the colour of cells and tissues to be used as a non-invasive medical diagnostic tool.

For his discovery and development of novel treatments for serious brain disorders, Michael Bowen, from the University of Sydney, is the winner of the Macquarie University Eureka prize for Outstanding Early Career Researcher. His research has established oxytocin and novel molecules that target the brain’s oxytocin system as prime candidates to fill the void left by the lack of effective treatments for alcohol-use disorders and social disorders.

For developing a new generation of armoured vehicles to keep Australian soldiers safe in war zones, Thales Australia and Mark Brennan have won the 2016 Defence Science and Technology Eureka Prize for Outstanding Science in Safeguarding Australia.

Patricia Davidson is Dean of the Johns Hopkins University School of Nursing in Maryland, and has mentored more than 35 doctoral and postdoctoral researchers, working tirelessly and with passion to build the capacity of early career researchers, an achievement that has won her the 2016 University of Technology Sydney Eureka Prize for Outstanding Mentor of Young Researchers.

For taking basic Australian research discoveries and developing them into a new cancer therapy that was approved by the US Food and Drug Administration in April this year, David Huang and his team from the Walter and Eliza Hall Institute of Medical Research have won the 2016 Johnson & Johnson Eureka Prize for Innovation in Medical Research. The drug, venetoclax, was approved for a high-risk sub-group of patients with Chronic Lymphocytic Leukemia and is now marketed in the US.

For creating a three part documentary that portrayed both the good and the evil of uranium in a series seen around the world, Twisting the Dragon’s Tail, Sonya Pemberton, Wain Fimeri and Derek Muller, won the 2016 Department of Industry, Innovation and Science Eureka Prize for Science Journalism.

Sharath Sriram, Deputy Director of the A$30 million Micro Nano Research Facility at RMIT University, has won the 2016 3M Eureka Prize for Emerging Leader in Science for his extraordinary career – during which he and his team have developed the world’s first artificial memory cell that mimics the way the brain stores long term memory.

For bringing together a team with skills ranging from mathematical modelling to cell biology and biochemistry, Leann Tilley and her team from the University of Melbourne have won the 2016 Australian Infectious Diseases Research Centre Eureka Prize for Infectious Disease Research. They have uncovered an important mechanism by which the malaria parasite has developed resistance to what was previously a widely used and successful malaria treatment.

For recruiting an international team of scientists to measure trace elements in the oceans from 3.5 billion years ago to the present day, to understand the events that led to the evolution and extinction of life in the oceans, Ross Large from the University of Tasmania and researchers from as far as Russia and the US have won the 2016 Eureka Prize for Excellence in Interdisciplinary Research.

For conducting the world’s first survey of plastic pollutants, which has given us a confronting snapshot of the impacts on marine wildlife of the 8.4 million tonnes of plastic that enters the oceans each year, Denise Hardesty, Chris Wilcox, Tonya Van Der Velde, TJ Lawson, Matt Landell and David Milton from CSIRO in Tasmania and Queensland have won the 2016 NSW Office of Environment and Heritage Eureka Prize for Environmental Research.

The Functional Annotation of the Mammalian Genome (FANTOM5) project produced a map that is being used to interpret genetic diseases and to engineer new cells for therapeutic use. The team led by Alistair Forrest from the Harry Perkins Institute of Medical Research has won the 2016 Scopus Eureka Excellence in International Scientific Collaboration Prize.

– Tim Dean

This article on the Eureka Prize 2016 winners was first published by The Conversation on 31 August 2016. Read the original article here.

engineering music video

Engineering music video inspires girls

Featured video above: NERVO’s engineering music video aims to get girls switched onto careers in engineering. 

Eight top universities – led by the University of New South Wales – have launched a song and music video by Australia’s twin-sister DJ duo NERVO to highlight engineering as an attractive career for young women.

NERVO, made up of 29-year-old singer-songwriters and sound engineers Miriam Nervo and Olivia Nervo, launched the video clip for People Grinnin’ worldwide on Friday 15 July.

In the futuristic video clip, a group of female engineers create android versions of NERVO in a high-tech lab, using glass touchscreens and a range of other technologies that rely on engineering, highlighting how it is embedded in every facet of modern life.

The song and video clip are part of Made By Me, a national collaboration between UNSW, the University of Wollongong, the University of Western Australia, the University of Queensland, Monash University, the University of Melbourne, the Australian National University and the University of Adelaide together with Engineers Australia, which launched on the same day across the country.

It aims to challenge stereotypes and shows how engineering is relevant to many aspects of our lives, in an effort to change the way young people, particularly girls, see engineering. Although a rewarding and varied discipline, it has for decades suffered from gender disparity and a chronic skills shortage.

NERVO, the Melbourne-born electronic dance music duo, pack dancefloors from Ibiza to India and, according to Forbes, are one of the world’s highest-earning acts in the male-dominated genre. They said the Made by Me project immediately appealed to them.

“When we did engineering, we were the only girls in the class. So when we were approached to get behind this project it just made sense,” they said.

“We loved the chance to show the world that there is engineering in every aspect of our lives,” they said. “We’re sound engineers, but our whole show is only made possible through expert engineering: the makeup we wear, the lights and the stage we perform on.”

“Engineering makes it all possible, including the music that we make.”

Alexandra Bannigan, UNSW Women in Engineering Manager and Made By Me spokesperson, said the project highlights the varied careers of engineers, and the ways in which engineers can make a real difference in the world. 

“When people think engineering, they often picture construction sites and hard hats, and that perception puts a lot of people off,” she said. “Engineering is more than that, and this campaign shows how engineering is actually a really diverse and creative career option that offers strong employment prospects in an otherwise tough job market.”

She noted that the partner universities, which often compete for the best students, see the issue as important enough to work together.

“We normally compete for students with rival universities, but this is such an important issue that we’re working together to break down those perceptions,” she said.

Made By Me includes online advertising across desktop and mobiles, a strong social media push, a website telling engineering stories behind the video, links to career sites, as well as the song and video, to be released by Sony globally on the same day. Developed by advertising agency Whybin/TBWA, the campaign endeavours to change the way young people, particularly girls, see engineering.

“We needed to find a way to meet teenagers on home turf and surprise them with an insight into engineering that would open their minds to its possibilities,” said Mark Hoffman, UNSW’s Dean of Engineering. “This is what led to the idea of producing an interactive music video, sprinkled with gems of information to pique the audience’s interest in engineering.”

UNSW has recently accelerated efforts to attract more women into engineering, more than tripling attendance at its annual Women in Engineering Camp, in which 90 bright young women in Years 11 and 12 came to UNSW from around Australia for a week this year to explore engineering as a career and visit major companies like Google, Resmed and Sydney Water. It has also tripled the number of Women in Engineering scholarships to 15, valued at more than $150,000 annually.

Hoffman, who became Dean of Engineering in 2015, has set a goal to raise female representation among students, staff and researchers to 30% by 2020. Currently, 23% of UNSW engineering students are female (versus the Australian average of 17%), which is up from 21% in 2015. In industry, only about 13% of engineers are female, a ratio that has been growing slowly for decades.

“Engineering has one of the highest starting salaries, and the average starting salary for engineering graduates has been actually higher for women than for men,” said Hoffman. “Name another profession where that’s happening.”

Australia is frantically short of engineers: for more than a decade, the country has annually imported more than double the number who graduate from Australian universities.

Some 18,000 engineering positions need to be filled annually, and almost 6,000 are filled by engineering students who graduate from universities in Australia, of whom the largest proportion come from UNSW in Sydney, which has by far the country’s biggest engineering faculty. The other 12,000 engineers arrive in Australia to take up jobs – 25% on temporary work visas to alleviate chronic job shortages.

“Demand from industry has completely outstripped supply, and that demand doubled in the past decade,” said Hoffman. “In a knowledge driven economy, the best innovation comes from diverse teams who bring together different perspectives. This isn’t just about plugging the chronic skills gap – it’s also a social good to bring diversity to our technical workforce, which will help stimulate more innovation. We can’t win at the innovation game if half of our potential engineers are not taking part in the race.”

UNSW has also created a new national award, the Ada Lovelace Medal for an Outstanding Woman Engineer, to highlight the significant contributions to Australia made by female engineers.

This information was first shared by the University of New South Wales on 14 July 2016. Read the original article here.

Data driven communities

Featured image above: the AURIN Map provides a geospatial map publicly available online. Credit: Dr Serryn Eagelson, AURIN

Ildefons Cerdà coined the term ‘urbanisation’ during his Eixample (‘expansion’) plan for Barcelona, which almost quadrupled the size of the city in the mid-19th century.

Cerdà’s revolutionary scientific approach calculated the air and light inhabitants needed, occupations of the population and the services they might need. His legacy remains, with Barcelona’s characteristic long wide avenues arranged in a grid pattern around octagonal blocks offering the inhabitants a city in which they can live a longer and healthier life.

Since Cerdà’s time, urban areas have come a long way in how they are planned and improved, but even today disparities are rife in terms of how ‘liveable’ different areas are. “Liveability is something that I’ve been working on most recently,” says Dr Serryn Eagelson, Data, Business and Applications Manager for the Australian Urban Research Infrastructure Network (AURIN).

Eagelson describes her work in finding new datasets as a bit like being a gold prospector. “It encompasses walkability, obesity, clean air, clean water – everything that relates to what you need in order to live well.”

In collaboration with more than 60 institutions and data providers, the $24 million AURIN initiative, funded by the Australian Government and led by The University of Melbourne, tackles liveability and urbanisation using a robust research data approach, providing easy access to over 2,000 datasets organised by geographic areas. AURIN highlights the current state of Australia’s cities and towns and offers the data needed to improve them.

“We have provided AURIN Map to give communities the opportunity to have a look at research output,” says Eagelson. Normally hidden away from public eyes, the information in the AURIN Map can be viewed over the internet and gives communities an unprecedented opportunity to visualise and compare the datasets on urban infrastructure they need to lobby councils and government for improvements in their area.

Recently, AURIN has teamed up with PwC Australia – the largest professional services company in the world – to pool skills, tools and data. “We’re also working with PwC in developing new products,” adds Eagelson. “It’s quite complicated but PwC’s knowledge is giving us new insights into how data can be used for economic policy.”

The Australian National Data Service (ANDS) also has strong links with AURIN, having undertaken a number of joint projects on topics such as how ‘walkable’ neighbourhoods are, which can then be used to plan things like public transport accessibility (even down to where train station entrances and exits should be located); urban employment clusters, which can aid decision-making on the location of businesses; and disaster management, where the collaborators developed a proof-of-concept intelligent Disaster Decision Support System (iDDSS) to provide critical visual information during natural disasters like floods or bushfires.

“I’m probably most excited by a project releasing the National Health Service Directory – a very rich dataset that we’ve never had access to before,” says Eagelson. “It even includes the languages spoken by people who run those services, and that data’s now being used to look at migrants to Australia, where they move from suburb to suburb, and how their special health needs can be best catered for – so this information has a big public health benefit.”

This article was first published by the Australian National Data Service in May 2016. Read the original article here.


Predicting property yields

Featured image above: property rental ratios in Melbourne ranging from 7.5% (Blue) to 1.5% (Red) annually. Credit: University of Melbourne

Researchers from the University of Melbourne have created a system to model and predict house values and rental rates at the individual property level. Comparing these two values offers insight into rental yields in the market: an important metric that buyers, sellers, investors and renters can use to make informed choices.

Dr Gideon Aschwanden and Dr Andy Krause from the Melbourne School of Design in the Faculty of Architecture, Building and Planning say that rental yields are a critical driver of rental and housing costs and act as a key indicator for property bubbles.

“In this volatile Melbourne property market, buyers want to ensure the safety of their investment. Our recent analysis of property sales and rental returns will better inform investors with location information, helping them to invest their money more securely,” says Aschwanden.

According to the researchers, rental yields across the property market as a whole need to be properly evaluated, as they may be a leading indicator of bubble creation. By understanding changes in yields, safety measures can be enacted that may help prevent or dampen a sudden collapse in the market. Buyers’ decisions are driven by costs.

First home buyers who rent out their property to help pay off the mortgage until they can afford to live in it need to estimate their likely rental income and property yield. Using a unique dataset of home sales and rentals from the Australian Property Monitors, the researchers investigated the spatial and temporal changes in residential rental yields across the Melbourne metropolitan region from June 2010 to June 2015.

Using data supplied by Domain, they matched properties that have been both sold and rented during the study period. After adjusting for market changes, these two observations were compared, to develop a property-specific estimate of rental yield.
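The matched-pair comparison described above comes down to simple arithmetic: a property’s gross rental yield is its annual rent divided by its sale price. The sketch below illustrates that calculation with invented figures – the suburbs, rents and prices are hypothetical, not the researchers’ data or code.

```python
# Sketch of a matched-pair gross rental yield estimate (invented data).

def gross_yield(weekly_rent, sale_price):
    """Annualise the weekly rent and express it as a fraction of the sale price."""
    return (weekly_rent * 52) / sale_price

# Hypothetical matched observations: each property was both sold and
# rented during the study period.
matched = [
    {"suburb": "Carlton", "weekly_rent": 450, "sale_price": 610_000},
    {"suburb": "Geelong", "weekly_rent": 380, "sale_price": 395_000},
]

for prop in matched:
    y = gross_yield(prop["weekly_rent"], prop["sale_price"])
    print(f"{prop['suburb']}: {y:.1%}")
```

A real analysis would also adjust each observation for market movement between the sale date and the rental date, as the researchers describe.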

“Starting with the entire metropolitan region, we then calculated yields at the level of local government area (LGA), statistical local area level 1 (SLA1), post code, suburb, and at a property-specific level,” says Aschwanden.

“Looking at the entire metropolitan region, our rental yield calculations allowed us a direct look at variations by neighbourhood, street and even specific building, in the case of apartments,” he says.

The detailed analysis showed that apartments offer higher yields than detached houses. This difference has widened over time: yields on houses fell by more than 0.5% from June 2013 to June 2015, while yields on apartments have held relatively steady since 2013.

“Looking at influencing factors, location shows the highest impact. Within the metropolitan area of Melbourne, a 6% spread of rental yields is visible, ranging between 1.5% and 7.5%. This is much larger than the 0.5% decline observed over the last three years, or the 0.5% effect of distance to train stations.”

Evidence showing variation within postcodes will help investors to make a much more refined evaluation of the decision to purchase a property. Changes within and between localities may have a more significant impact on returns than changes over time.

This article was first published by The Melbourne Newsroom on 18 May 2016. Read the original article here.


Thrill of discovery

The thrill of discovery is what biochemist Marilyn Anderson relishes in her work. “It’s a feeling you can’t even imagine: when you’re the first person to solve a problem,” she says.

Anderson is a Professor of Biochemistry at the La Trobe Institute for Molecular Science (LIMS) and the Chief Science Officer of Hexima, a biotechnology company embedded in LIMS. Anderson co-founded Hexima in 1998 following her discovery of naturally occurring insecticidal and antifungal molecules in the reproductive parts of plants.

The team at Hexima are exploiting these molecules to develop genetically modified crops that are protected from insect predation and fungal infections – a game changer for agriculture. Research in this area is ongoing, as insects are developing resistance to the commonly used Bt toxin, an insecticide produced by the bacterium Bacillus thuringiensis, and new insecticidal genes are needed. “It’s a huge market,” says Anderson.

“We will not be able to feed and clothe humanity if we don’t have insect and fungal-resistant plants.”

Anderson did a BSc (Hons) at the University of Melbourne and then completed her PhD in biochemistry at La Trobe University. Her enthusiasm for this field is clear: “I’m still knocked over by just how amazing biology is, and how things have evolved to work”.

After graduating, Anderson was drawn to “the revolution of the time – the beginning of gene cloning and molecular biology”. She moved to the USA and worked on diabetes at the University of Miami before transferring to Cold Spring Harbor to conduct cancer research. “We were paving the way. It was extremely exciting because while I was at Cold Spring Harbor the first oncogenes, or cancer-causing genes, were discovered,” she says.

Expertise in molecular biology was internationally sought after at the time and was the crux of much interdisciplinary research. In 1982 Anderson was offered a job with Laureate Professor Adrienne Clarke AC at the Plant Cell Biology Research Centre at the University of Melbourne. “That was a big switch for me,” says Anderson. “I’d been working on cancer and this was a botany school.” Together, Anderson and Clarke were able to discover the gene that prevents self-pollination, or inbreeding, in flowering plants.

Now a leader in the scientific community, Anderson is not only a director at Hexima; she is also on the La Trobe University Council and was inducted into the 2014 Victorian Honour Roll for Women for her scientific achievements.

Gender equality and supporting women in science are two things Anderson is passionate about. “There’s a lot of work to be done just to give women equal opportunity,” she says. “There are many talented female scientists here at Hexima, and I enjoy mentoring women and helping them through the early stages of their career.”

Anderson conducts workshops with secondary students that focus on women in science, and she’s part of Supporting Women in Science (SWIS), a new association at La Trobe that gives guidance to female postgraduate researchers in STEM.

“This is a proactive program to direct universities to pay more attention to gender diversity.”


Anderson will be speaking at Women in Science, an event hosted by La Trobe University for Melbourne Knowledge Week in May 2016. The panel discussion will centre on the underrepresentation of women in STEM careers. The MC will be science journalist Robyn Williams. Panel speakers will also include NHMRC Biomedical Fellow of the Peter MacCallum Cancer Centre, Misty Jenkins; Head of La Trobe’s School of Engineering and Mathematical Sciences, Wenny Rahayu; and nanotechnology research assistant and nominee for Women’s Weekly Women of the Future Award in 2015, Elana Montagner. For more information and to register for the event, head to www.latrobe.edu.au/womeninscience.


Cherese Sonkkila

CO₂ cuts nutrition


Climate change is affecting the Earth, through more frequent and intense weather events, such as heatwaves and rising sea levels, and is predicted to do so for generations to come. Changes brought on by anthropogenic climate change, from activities such as the burning of fossil fuels and deforestation, are impacting natural ecosystems on land and at sea, and across all human settlements.

Increased atmospheric carbon dioxide (CO₂) levels – which have jumped by a third since the Industrial Revolution – will also have an effect on agriculture and the staple plant foods we consume and export, such as wheat.

Stressors on agribusiness, such as prolonged droughts and the spread of new pests and diseases, are exacerbated by climate change and need to be managed to ensure the long-term sustainability of Australia’s food production.

Researchers at the Primary Industries Climate Challenges Centre (PICCC), a collaboration between the University of Melbourne and the Department of Economic Development, Jobs, Transport and Resources in Victoria, are investigating the effects of increased concentrations of CO₂ on grain yield and quality to reveal how a more carbon-enriched atmosphere will affect Australia’s future food security.

An aerial view of the Australian Grains Free Air CO₂ Enrichment (AGFACE) project, where researchers are investigating the effects of increased concentrations of carbon dioxide on grain yield and quality.

Increasing concentrations of CO₂ in the atmosphere significantly increase water efficiency in plants and stimulate plant growth, a process known as the “fertilisation effect”. This leads to more biomass and a higher crop yield; however, elevated carbon dioxide (eCO₂) could decrease the nutritional content of food.

“Understanding the mechanisms and responses of crops to eCO₂ allows us to focus crop breeding research on the best traits to take advantage of the eCO₂ effect,” says Dr Glenn Fitzgerald, a senior research scientist at the Department of Economic Development, Jobs, Transport and Resources.

According to Fitzgerald, the research being carried out by PICCC, referred to as Australian Grains Free Air CO₂ Enrichment (AGFACE), is also being done in a drier environment than anywhere previously studied.

“The experiments are what we refer to as ‘fully replicated’ – repeated four times and statistically verified for accuracy and precision,” says Fitzgerald. “This allows us to compare our current growing conditions of 400 parts per million (ppm) CO₂ with eCO₂ conditions of 550 ppm – the atmospheric CO₂ concentration level anticipated for 2050.”

The experiments involve injecting CO₂ into the atmosphere around plants via a series of horizontal rings that are raised as the crops grow, and the process is computer-controlled to maintain a CO₂ concentration level of 550 ppm.
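The computer control described here is, in essence, a feedback loop: measure the CO₂ concentration and inject more when it falls below the target. The toy model below illustrates that idea with a simple proportional rule; the gain, the wind-loss rate and every other number are invented, and this is not the actual AGFACE control software.

```python
# Toy feedback loop holding CO₂ near a 550 ppm target (illustrative only).

TARGET_PPM = 550.0
GAIN = 0.5  # fraction of the shortfall corrected per step (invented value)

def injection_step(measured_ppm, target_ppm=TARGET_PPM, gain=GAIN):
    """Return how much CO₂ (in ppm) to inject this step; never negative."""
    shortfall = target_ppm - measured_ppm
    return max(0.0, gain * shortfall)

# Simulate a ring starting at ambient ~400 ppm, with some CO₂ lost to
# wind each step.
ppm = 400.0
for _ in range(20):
    ppm += injection_step(ppm)  # inject toward the target
    ppm -= 2.0                  # invented wind loss per step

print(f"{ppm:.0f} ppm")  # settles just below the 550 ppm target
```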

Horizontal rings injecting carbon dioxide into the atmosphere as part of the AGFACE project. Credit: AGFACE

“We’re observing around a 25–30% increase in yields under eCO₂ conditions for wheat, field peas, canola and lentils in Australia,” says Fitzgerald.


Pests and disease

While higher CO₂ levels boost crop yields, there is also a link between eCO₂ and an increase in viruses that affect crop growth.

Scientists at the Department of Economic Development, Jobs, Transport and Resources have been researching the impact of elevated CO₂ levels on plant vector-borne diseases, and they have observed an increase of 30% in the severity of the Barley Yellow Dwarf Virus (BYDV).

Higher CO₂ levels are linked with an increase in the severity of Barley Yellow Dwarf Virus.

Spread by aphids, BYDV is a common plant virus that affects wheat, barley and oats, and causes yield losses of up to 50%.

“It’s a really underexplored area,” says Dr Jo Luck, director of research, education and training at the Plant Biosecurity Cooperative Research Centre. “We know quite a lot about the effects of drought and increasing temperatures on crops, but we don’t know much about how the increase in temperature and eCO₂ will affect pests and diseases.

“There is a tension between higher yields from eCO₂ and the impacts on growth from pests and diseases. It’s important we consider this in research when we’re looking at food security.”


This increased yield is due to more efficient photosynthesis and because eCO₂ improves the plant’s water-use efficiency.

With atmospheric CO₂ levels rising, less water will be required to produce the same amount of grain. Fitzgerald estimates about a 30% increase in water efficiency for crops grown under eCO₂ conditions.

But nutritional content suffers. “In terms of grain quality, we see a decrease in protein concentration in cereal grains,” says Fitzgerald. The reduction is due to a decrease in the level of nitrogen in the grain, which occurs because the plant is less efficient at drawing nitrogen from the soil.

The same reduction in protein concentration is not observed in legumes, however, because of the action of rhizobia – soil bacteria in the roots of legumes that fix atmospheric N₂ and provide an alternative mechanism for making nitrogen available.

“We are seeing a 1–14% decrease in grain-protein concentration [for eCO₂ levels] and a decrease in bread quality,” says Fitzgerald.

“This is due to the reduction in protein and because changes in the protein composition affect qualities such as elasticity and loaf volume. There is also a decrease of 5–10% in micronutrients such as iron and zinc.”

This micronutrient deficiency, referred to as “hidden hunger”, is a major health concern, particularly in developing countries, according to the International Food Policy Research Institute’s 2014 Global Hunger Index: The challenge of hidden hunger.

There could also be health implications for Australians. As the protein content of grains diminishes, carbohydrate levels increase, leading to food with higher caloric content and less nutritional value, potentially exacerbating the current obesity epidemic.

The corollary of the work being undertaken by Fitzgerald is that in a future CO₂-enriched world, there will be more food, but it will be less nutritious. “We see an increase in crop growth on one hand, but a reduction in crop quality on the other,” says Fitzgerald.

Fitzgerald says more research into nitrogen-uptake mechanisms in plants is required in order to develop crops that, when grown in eCO₂ environments, can capitalise on increased plant growth while maintaining nitrogen, and protein, levels.

For now, though, while an eCO₂ atmosphere may be good for plants, it might not be so good for us.

– Carl Williams

www.piccc.org.au

www.pbcrc.com.au

Algorithms making social decisions


In this Up Close podcast episode, informatics researcher Professor Paul Dourish explains how algorithms, as more than mere technical objects, guide our social lives and organisation, and are themselves evolving products of human social actions.

“Lurking within algorithms, unknown to people and certainly not by design, can be all sorts of unconscious biases and discriminations that don’t necessarily reflect what we as a society want.”

The podcast is presented by Dr Andi Horvath. Listen to the episode below, or read on for the full podcast transcript.

Professor Paul Dourish

Dourish is a Professor of Informatics in the Donald Bren School of Information and Computer Sciences at University of California, Irvine, with courtesy appointments in Computer Science and Anthropology.

His research focuses primarily on understanding information technology as a site of social and cultural production; his work combines topics in human-computer interaction, social informatics, and science and technology studies.

He is the author, with Genevieve Bell, of Divining a Digital Future: Mess and Mythology in Ubiquitous Computing (MIT Press, 2011), which examines the social and cultural aspects of the ubiquitous computing research program. He is a Fellow of the Association for Computing Machinery (ACM), a member of the Special Interest Group on Computer–Human Interaction (SIGCHI) Academy, and a recipient of the American Medical Informatics Association (AMIA) Diana Forsythe Award and the Computer Supported Co-operative Work (CSCW) Lasting Impact Award.

Podcast Transcript

VOICEOVER: This is Up Close, the research talk show from the University of Melbourne, Australia.

HORVATH: I’m Dr Andi Horvath. Thanks for joining us. Today we bring you Up Close to one of the very things that shapes our modern lives. No, not the technology as such, but what works in the background to drive it: the algorithm, the formalised set of rules governing how our technology is meant to behave.

As we’ll hear, algorithms both enable us to use technology and to be used by it. Algorithms are designed by humans and just like the underpinnings of other technologies, say like drugs, we don’t always know exactly how they work. They serve a function but they can have side-effects and unexpectedly interact with other things with curious or disastrous results.

Today, machine learning means that algorithms are interacting with, or developing, other algorithms without human input. So how is it that they can have a life of their own? To take us Up Close to the elusive world of algorithms is our guest, Paul Dourish, a Professor of Informatics in the Donald Bren School of Information and Computer Sciences at UC Irvine. Paul has written extensively on the intersection of computer science and social science and is in Melbourne as a visiting Miegunyah Fellow. Hello, and welcome to Up Close.

DOURISH: Morning, it’s great to be here.

HORVATH: Paul, let’s start with the term algorithm. We hear it regularly in the media and it’s even in product marketing, but I suspect few of us really know what the word refers to. So let’s get this definition out of the way: what is an algorithm?

DOURISH: Well, it is a pretty abstract concept so it’s not surprising if people aren’t terribly familiar with it. An algorithm is really just a way of going about doing something, a set of instructions or a series of steps you’ll go through in order to produce some kind of computational result. So for instance, you know, when we were at school we all learned how to do long multiplication and the way we teach kids to do multiplication, well that’s an algorithm. It’s a series of steps that you can go through and you can guarantee that you’re going to get a certain kind of result. So algorithms then get employed in computational systems, in computer systems to produce the functions that we want.
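Dourish’s schoolroom example can be made concrete. The sketch below spells out long multiplication as explicit steps – take each digit of one number, form a shifted partial product, and sum the partial products – which is exactly the sense in which the written method is an algorithm.

```python
# Long multiplication as explicit steps: one partial product per digit
# of b, shifted by that digit's place value, then summed.

def long_multiply(a: int, b: int) -> int:
    result = 0
    shift = 0
    while b > 0:
        digit = b % 10                      # next digit of b (ones place first)
        result += digit * a * 10 ** shift   # add the shifted partial product
        b //= 10                            # move to the next digit
        shift += 1
    return result

print(long_multiply(123, 456))  # prints 56088, the same as 123 * 456
```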

HORVATH: Where do we find algorithms? If I thought about algorithm-spotting say on the way to work, where do we actually encounter them?

DOURISH: Well, if you were to take the train, for instance, algorithms might be controlling the rate at which trains arrive and depart from stations to try to manage a stable flow of passengers through a transit system. If you were to Google something in the morning to look up something that you were going to do or perhaps to prepare for this interview, well an algorithm not only found the information for you on the internet, but it was also used to sort those search results and decide which one was the one to present to you at the top of the list and which one was perhaps going to come further down. So algorithms are things that lie behind the operation of computer systems; sometimes those are computer systems we are using directly and sometimes they are computer systems that are used to produce the effects that we all see in the world like for instance, the flow of traffic.

HORVATH: So Paul, we use algorithms every day in everything, whether it’s work, rest, play, but are we also being used by algorithms?

DOURISH: Well, I guess there’s a couple of ways we could think about that. One is that we all produce data; the things that we do produce data that get used by algorithms. If we want to think about an algorithm for instance that controls the traffic lights and to manage the flow of people through the streets of Melbourne, well, the flow of people through the streets of Melbourne is also the data upon which that algorithm is working. So we’re being used by algorithms in the sense perhaps that we’re all producing the data that the algorithm needs to get its job done.

But I think there’s also a number of ways in which we might start to think that we get enrolled in the processes and effects of algorithms, so if corporations and government agencies and other sorts of people are making use of algorithms to produce effects for us, then our lives are certainly influenced by those algorithms and by the kinds of ways that they structure our interaction with the digital world.

HORVATH: So algorithms that are responsible for say datasets or computational use, the people who create them are quite important. Who actually creates these algorithms? Are they created by governments or commerce?

DOURISH: They can be produced in all sorts of different kinds of places and if you were in Silicon Valley and you were the sort of person who had a brand new algorithm, you might also be the sort of person who would have a brand new start-up. By and large, algorithms are produced by computer scientists, mathematicians and engineers.

Many algorithms are fundamentally mathematical at their heart and one of the ways in which computer scientists are interested in algorithms is to be able to do mathematical analysis on the kinds of things that computers might do and the sort of performance that they might have. But computer scientists are also generally in the business of figuring out ways to do things and that means basically producing algorithms.

HORVATH: One of the reasons we hear algorithms a lot these days is because they’ve caused problems, or at least confusion. Can you give us some tangible examples of where that’s happened?

DOURISH: Sure. Well, I think we see a whole lot of those and they turn up in the paper from time to time, and some are kind of like trivial and amusing and some have serious consequences. From the trivial side and the amusing side we see algorithms that engage in classification, which is an important area for algorithmic processing, and classifications that go wrong, places where an algorithm decides that because you bought one product you are interested in a particular class of things and it starts suggesting all these things to you.

I had a case with my television once where it had decided, because my partner was recording Rocky and Bullwinkle – an old 1970s cartoon series from America featuring a moose and a squirrel – that I must be interested in a hunting program, so it started recording hunting shows for me. So although they’re silly, they begin to show the way that algorithms have a role.

The more serious ones though are ones that begin to affect commerce and political life. A famous case in 2010 was what was called the flash crash, a situation in which the US stock market lost then suddenly regained a huge amount of value – about 10 per cent – all within half an hour, and nobody really knew why it happened. It turned out that instead of human beings buying and trading shares, it was actually algorithms buying and trading shares. Two algorithms were locked in a loop, one trying to offer shares for sale and one trying to buy them up, and suddenly it spiralled out of control. So these algorithms, because they play a role in so many different systems and appear in so many different places, can have these big impacts, and even the small, trivial cases begin to alert us, or tune us, to where the algorithms might be.

HORVATH: Tell us about privacy issues; that must be something that algorithms don’t necessarily take seriously.

DOURISH: Well, of course the algorithm works with whatever data it has to hand, and data systems may be more or less anonymised, they may be more or less private. One of the interesting problems perhaps is that the algorithm begins to reveal things that you didn’t necessarily know that your data might reveal.

For example, I might be very careful about being tracked by my phone. You know, I choose to turn off those things that say for instance where my home is, but if an algorithm can detect that I tend to be always at the same place at 11 o’clock at night or my phone is always at the same place at 11 o’clock at night and that’s where I start my commute to work in the morning, then those patterns begin to build up and there can be privacy concerns there. So algorithms begin to identify patterns in data and we don’t necessarily know what those patterns are, nor are we provided necessarily with the opportunity to audit, control, review or erase data. So that’s where the privacy aspects begin to become significant.

HORVATH: Is there an upsurge about societal concerns about algorithms? Really, I’m asking you the question, why should we care about algorithms? Do we need to take these more seriously?

DOURISH: I think people are beginning to pay attention to the ways in which there can be potentially deleterious social effects. I don’t want to sit here simply saying that algorithms are dangerous and we need to be careful, but on the other hand there is this fundamental question about knowing what it is the algorithm is doing and being conscious of its fairness.

On the trivial side, there was an issue that arose around the face-detection algorithms in digital cameras, used to decide where to focus. It turned out after a while that the algorithms in certain phones looked predominantly for white faces but were actually very bad at detecting black faces. Now, those kinds of bias aren’t very visible to us; the camera just doesn’t work. Those are the places where, as a society, we need to start thinking about what is being done for us by algorithms, because lurking within those algorithms, unknown to people and certainly not by design, can be all sorts of unconscious biases and discriminations that don’t necessarily reflect what we as a society want.

HORVATH: Are we being replaced by algorithms? Is this something that’s threatening jobs as we know it?

DOURISH: Well, I certainly see plenty of cases where people are concerned about that and talk about it, and there’s been some in the press in the last couple of years that talk for instance about algorithms taking over HR jobs in human resources, interviewing people for jobs or matching people for jobs. By and large though, lots of these algorithms are being used to supplement and augment what people are doing. I don’t think we’ve seen really large-scale cases so far of people being replaced by algorithms, although it’s certainly a threat that employers and others can hold over people.

HORVATH: Sure. Draw the connection for us between algorithms and this emerging concept of big data.

DOURISH: Well, you can’t really talk about one without the other; they go together so seamlessly. Actually, one of the reasons that I’ve been talking about algorithms lately is precisely because there’s so much talk about big data just now. The algorithms and the data go together. The data provides the raw material that the algorithm processes and the algorithm is generally what makes sense of that data.

We talk about big data not least in terms of this idea of being able to capture and collect, to get information from all sorts of sensors, from all sorts of things about the world, but it’s the algorithm that then comes in and makes sense of that data, that identifies patterns and things that we think are useful or interesting or important. I might have a large collection of data that tells me what everybody in Victoria has purchased in the supermarket for the last month, but it’s an algorithm that’s going to be able to identify within that dataset well, here are dual income families in Geelong or the sort of person who’s interested in some particular kind of product and amenable to a particular kind of marketing. So they always go together; you never have one without the other.

HORVATH: But surely there are problems in interpretation and things get lost in translation.

DOURISH: That’s a really interesting part of the whole equation here. It’s generally human beings who have to do the interpretation; the algorithm can identify a cluster. It can say, look, these people are all like each other, but it tends to be a human being who comes along and says now, what is it that makes those people like each other? Oh, it’s because they are dual income families in Geelong. There’s always a human in the loop here. Actually, the problem that we occasionally encounter – and it’s like that problem of inappropriate classification that I mentioned earlier – is that often we think we know what the clusters are that an algorithm has identified, until an example comes along that shows oh, that wasn’t what it was at all. So the indeterminacy of both the data-processing part and the human interpretation is where a lot of the slippage can occur.

HORVATH: I’m Andi Horvath and you’re listening to Up Close. In this episode, we’re talking about the nature and consequences of algorithms with informatics expert Paul Dourish. Paul, given that algorithms are a formalised set of instructions, can’t they simply be written in English or any other human language?

DOURISH: Well, algorithms certainly are often written in English. There’s all sorts of ways in which we write them down. Sometimes they are mathematical equations that live on a whiteboard. They often take the form of what computer scientists call pseudo-code, which looks like computer code but isn’t actually executable by a computer, and sometimes they are in plain English. I used the example earlier of the algorithm that we teach to children for how to do multiplication; well, that was explained to them in plain English. So they can take all sorts of different forms. Really, that’s some of the difficulty with the notion of an algorithm: it’s a very abstract idea and it can be realised in many different kinds of ways.

HORVATH: So algorithms, code and pseudo-code are different forms of abstraction?

DOURISH: In a way, yes. Computer code is the stuff that we write that actually makes computers do things, and the algorithm is a rough description of what that code might be like. Real programs are written in specific programming languages. You might have heard of C++ or Java or Python, these are programming languages that people use to produce running computer systems. The pseudo-code is a way of expressing the algorithm that’s independent of any particular programming language. So if I have a great algorithm, an idea for how to produce a result or sort a list or something, I can express it in the pseudo-code and then different programmers who are working in different programming languages can translate the algorithm into the language that they need to use to get their particular work done.

HORVATH: Right. Now, I’ve heard one of the central issues is that we can’t really read the algorithm once it’s gone into code. It’s like we can’t un-cook the cake or reverse engineer it. Why is that so hard?

DOURISH: Well, we certainly can in some cases; it’s not a hard and fast rule. In fact, most computer science departments, like the one here at Melbourne, will teach people how to write code so that you can see what’s going on. But there are a couple of complications that certainly can make it more difficult.

The first is that the structure of computer systems requires that you do more things than simply what the algorithm describes. An algorithm is an idealised version of what you might do, but in practice I might have to do all sorts of other things as well, like I’m managing the memory of the computer and I’m making sure the network hasn’t died and all these things. My program has lots of other things in it that aren’t just the algorithm but are more complicated.

Another complication is that sometimes people write code in such a way that it hides the algorithm for trade secret purposes. I don’t want somebody else to pick up my proprietary algorithm or the secret sauce for my business or program, and so I write the software in a deliberately somewhat obscure way.

Then the other problem is that sometimes algorithms are distributed in the world, they don’t all happen in one place. I think about the algorithms, for instance, that control how data flows across the internet and try to make sure there isn’t congestion and too much delay in different parts of the network. Well, those algorithms don’t really happen in one place, they happen between different computers. Little bits of it are on one computer and little bits of it are on the other and they act together in coordination to produce the effect that we desire, so it can often be hard to spot the algorithm within the code.

HORVATH: Tell us more about these curious features of algorithms. They almost sound like a life form.

DOURISH: Well, I think what often makes algorithms seem to take on a life of their own, if you will, is that intersection with data that we were talking about earlier, because I said data and algorithms go together. There is often a case for instance where I can know what the algorithm does but if I don’t know enough about the data over which the algorithm operates, all sorts of things can happen.

There’s a case that I like to use as an example that came from some work that a friend of mine did a few years ago where he was looking at the trending topics on Twitter, and he was working particularly with people in the Occupy Wall Street movement who were sure that they were censored because their movement, the political discussion around Occupy Wall Street, never became a trending topic on Twitter. People were outraged: how can Justin Bieber’s haircut be more important than Occupy Wall Street? When they talked to the Twitter people, the Twitter people were adamant that they weren’t censoring this, but nonetheless they couldn’t really explain in detail why it was that Occupy Wall Street had not become a trending topic.

You can explain the algorithm and what it does, you can explain the mathematics of it, you can explain the code, you can show how a decision is made, but that decision is made about a dataset that’s changing rapidly, that’s to do with everything that’s being tweeted online, everything that’s being retweeted, where it’s being retweeted, how quickly it’s being retweeted. What the algorithm does, even though it’s a known, engineered artefact, is still itself somehow mysterious.

So the lives that algorithms take on in practice for us when we encounter them in the world or when they act upon us or when they pop up in our Facebook newsfeed or whatever, is often unknowable and mysterious and lively, precisely because of the way the algorithm is entwined with an ever roiling dataset that keeps moving.

HORVATH: I love the term machine learning, and it’s really about computers interacting with computers, algorithms talking to other algorithms without the input of humans. That kind of spooks me. Where are we going?

DOURISH: Yeah. Well, I think the huge, burgeoning interest in machine learning has been spurred on by the big data movement. Machine learning is something that I was exposed to when I was an undergraduate student back more years ago than I care to remember; it’s always been there. But improvements in statistical techniques and the burgeoning interest in big data and the new datasets mean that machine learning has taken on a much greater significance than it had before.

What machine learning algorithms typically do is they identify, again, patterns in datasets. They take large amounts of data and then they tell us what’s going on in that. Inasmuch as we are generating more and more data, and inasmuch as more and more of our activities move online and then become, if you like, “datafiable”, things that can now be identified as data rather than just as things we did, there is more and more opportunity for algorithms, and particularly for machine learning algorithms, to identify patterns within that.
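As a toy illustration of what “identifying patterns in datasets” can mean in code (our own sketch, with made-up numbers), a few lines of k-means-style clustering will find two groups in a set of points without ever being told where those groups are:

```python
def kmeans_1d(points, k=2, iters=20):
    """Very small k-means sketch: repeatedly assign each point to its
    nearest centre, then move each centre to the mean of its points."""
    centres = points[:k]  # naive initialisation: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        # Move each centre to the mean of its cluster (keep it if empty).
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
print(kmeans_1d(data))  # roughly [1.0, 10.0]
```

The “pattern” here, two clumps of values, was never stated in advance; it emerges from the data, which is exactly why the output can surprise even the people who wrote the algorithm.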

I think the question, as we said, is to what extent one knows what a machine learning algorithm is saying about one. Indeed, even, as I suggested with the Twitter case, even for people who work in this space, even for people who are developing the algorithms, it can be hard for them to know. It’s that sort of issue of knowing, of being able to examine the algorithms, of making algorithms accountable to civic, political and regulatory processes, that’s where some of the real challenges are that are posed by machine learning algorithms.

HORVATH: We’re exploring the social life of algorithms with computer and social scientist Paul Dourish right here on Up Close. And yes, we’re coming to you no doubt thanks to several useful algorithms. I’m Andi Horvath. Let’s keep moving with algorithms. You say that algorithms aren’t just technical, that they’re social objects. Can you tell us a bit more what that means?

DOURISH: Well, I think we can come at this from two sides. One side is the algorithms are social as well as technical because they’re put to social uses. They’re put to uses that have an impact on our world. For example, if I’m on Amazon and it recommends another set of products that I might like to look at, or it recommends some and not others, there’s some questions in there about why those ones are just the right ones. Those are cases where social systems, systems of consumption and purchase and identification and so forth are being affected by algorithms. That’s one way in which algorithms are social; they’re put to social purposes.

But of course, the other way that algorithms are social is that they are produced by people and organisations and professions and disciplines and all sorts of other things that have a grounding in the social world. So algorithms didn’t just happen to us, they didn’t fall out of the sky, we have algorithms because we make algorithms. And we make algorithms within social settings, and they reflect our social ideas or our socially-constructed ideas about what’s desirable, what’s interesting, what’s possible and what’s appropriate. Those are all ways in which the algorithms are pre-social. They’re not just social after the fact but they are social before the fact too.

HORVATH: Paul, you’ve mentioned how algorithms are kind of opaque, but yet you also mention that we need to make them accountable, submit them to some sort of scrutiny. So how do we go about that?

DOURISH: This is a real challenge that a number of people have been raising in the last couple of years and perhaps especially in light of the flash crash, that moment where algorithmic processing produced a massive loss of value on the US stock market. There are a number of calls for corporations to make visible aspects of their own algorithms and processing so that it can be deemed to be fair and above board. If you just think for a moment about how much of our daily life in the commercial sector is indeed governed by those algorithms and what kind of impact a Google search result ordering algorithm has; there’s lots of consequences there, so people have called for some of those to be more open.

People have also called for algorithms to be fixed. This is one of the other difficulties: algorithms shift and change, and corporations naturally change them around. There was some outrage when Facebook indulged in an experiment in order to see whether they could tweak the algorithms to give people happier or less happy results and see if that actually changed their own mood and what kinds of things they saw. People were outraged at the idea that Facebook would tweak an algorithm that they felt, even though it obviously belonged to Facebook, was actually an important part of their lives. So keeping algorithms fixed in some sense is one sort of argument that people have made, and opening things up to scrutiny.

But the problem with opening things up to scrutiny is well, first, who can actually evaluate these things? Not all of us can. And also of course that in the context of machine learning, the algorithm identifies patterns in data, but what’s the dataset that we’re operating over? In fact, we can’t even really identify what those things are, we’re only saying there’s a statistical pattern and that some human being is going to come along and assign some kind of value to that. So some of the algorithms are inherently inscrutable. The algorithm processes data and we can say what it says about the data, but if we don’t know what the data is and we don’t know what examples it’s been trained on and so forth, then we can’t really say what the overall effect and impact is.

HORVATH: Will scrutiny of algorithms, whether we audit or control them, be affected by, say, intellectual property laws?

DOURISH: Well, this is a very murky area, and in particular it’s a murky area internationally, where there are lots of different standards in different countries about what kind of things can be patented, controlled and licensed and so forth. Algorithms themselves are patentable objects. Many people choose to patent their algorithms, but of course patenting something requires disclosing it and so lots of other corporations decide to protect their algorithms as trade secrets, which are basically just things you don’t tell anybody.

The question that we can ask about algorithms is actually also how they move around in the world and those intellectual property issues, licensing rights, patenting and so forth are actually ways that algorithms might be fixed in one place within a particular corporate boundary but also move around in the world. So no one has really I think got a good handle on the relationship between algorithms and intellectual property.

They are clearly valuable intellectual property, they get licensed in a variety of ways, but this is again one of these places where the relationship between algorithm and code is a kind of complicated one. We have developed an understanding of how to manage those things for code; we have a less good understanding right now of how to manage those things for algorithms. I should probably say, since we’re also talking about data, no idea at all about how to do this for data.

HORVATH: These algorithms, they’ve really got a phantom-like presence and yet they’ve got so much potential and possibility. They are practical tools that help with our lives. But what are the consequences of further depending upon the algorithms in our world?

DOURISH: I think it’s inevitable and not really problematic. From my perspective, algorithms in and of themselves are not necessarily problematic objects. Again, if we say that even the things that we teach our children for how to do multiplication are algorithms, there’s no particular problem about depending on that. I think again it’s the entwining of algorithms and data, and one of the things that an algorithmic world demands is the production of data over which those algorithms can operate, and all the questions about ownership and about where that algorithmic processing happens matter.

For example, one manifestation of an algorithmic and data-driven world is one in which you own all your data and you do the algorithmic processing and then make available the results if you so choose. Another version of that algorithmic and data-centred world is one in which somebody else collects data about you and they do all the processing and then they tell you the results, and there’s a variety of steps in between. So I don’t think the issue is necessarily about algorithms and how much we depend on algorithms. Some people have claimed we’re losing our own ability to remember things because now Google is remembering for us.

HORVATH: It’s an outsourced memory.

DOURISH: Yes, that’s right, or there’s lots of things about people using their Satnav and driving into the river, right, because they’re not anymore remembering how to actually drive down the road or evaluate the things in front of them, but I’m a little sceptical about those. I do think the question about how we want to harness the power of algorithmic processing, how we want to make it available to people, and how it should inter-function with the data that might be being collected from or about people, those are the questions that we need to try to have a conversation about.

HORVATH: Paul, I have to ask you, just like we use our brain to understand our brain, can we use algorithms to understand and scrutinise algorithms?

DOURISH: [Laughs] Well, we can and actually, we do. One of the ways in which we do already is that when somebody develops a new machine learning algorithm we have to evaluate how well it does. We have to know is this algorithm really reliably identifying things. We sort of pit algorithms against each other to try to see whether the algorithm is doing the right work and evaluate the results of other kinds of algorithms. So that actually already happens.
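That head-to-head evaluation can be as simple as scoring rival algorithms on the same held-out examples. A made-up sketch (the two “classifiers” here are invented purely for illustration):

```python
def evaluate(classifier, examples):
    """Fraction of (input, expected_label) pairs the classifier gets right."""
    correct = sum(1 for x, label in examples if classifier(x) == label)
    return correct / len(examples)

# Two rival "algorithms" for labelling numbers as 'big' or 'small':
threshold_rule = lambda x: 'big' if x > 5 else 'small'
parity_rule    = lambda x: 'big' if x % 2 == 0 else 'small'

held_out = [(1, 'small'), (3, 'small'), (7, 'big'), (9, 'big'), (6, 'big')]
print(evaluate(threshold_rule, held_out))  # 1.0
print(evaluate(parity_rule, held_out))     # 0.6
```

Here one algorithm (the evaluation harness) is quite literally judging two others, which is the everyday, unremarkable version of algorithms scrutinising algorithms.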

Similarly, as I suggested, on the internet the algorithm for congestion control is really a series of different algorithms happening in different places that work cooperatively, or not, in order to produce, or not, a smooth flow of data. So we don’t have to worry just yet, I think, about a sort of war between the algorithms or any kind of algorithmic singularity.

HORVATH: Paul, what do you mean by the singularity? Is this really a Skynet moment?

DOURISH: Well, the singularity is this concept that suggests that at some point in the development of intelligent systems, they may become so intelligent that they can design their own future versions and the humans become irrelevant to the process of development. It’s a scary notion; it’s one I’m a little sceptical about, and I think actually the brittleness of contemporary algorithms is a great example of why we’re not going to get there within any short time.

I think the question though is still how do we want to understand the relationship between algorithms and the data over which they operate? A great example is IBM’s Watson, which a couple of years ago won the Jeopardy TV show, and this was a real breakthrough for artificial intelligence. But on the other hand you’ve got to ask, what is it that Watson knows about? Well, a lot of what Watson knows, it knows from Wikipedia. I’m not very happy when my students cite Wikipedia, so I’m not terribly sure that I need to be afraid of a machine intelligence singularity that is making all its inferences on the basis of Wikipedia.

HORVATH: Paul, thanks for being our guest on Up Close and allowing us to glimpse into the world of the mysterious algorithm. I feel like I’ve been in the movie Tron.

DOURISH: [Laughs] Yes, well, we don’t quite have the glowing light suits unfortunately.

HORVATH: We’ve been speaking about the social lives of algorithms with Paul Dourish, a professor of informatics in the Donald Bren School of Information and Computer Sciences at UC Irvine. You’ll find a full transcript and more info on this and all our episodes on the Up Close website. Up Close is a production of the University of Melbourne, Australia. This episode was recorded on 22 February 2016. Producer was Eric van Bemmel and audio recording by Gavin Nebauer. Up Close was created by Eric van Bemmel and Kelvin Param. I’m Dr Andi Horvath. Cheers.

VOICEOVER: You’ve been listening to Up Close. For more information visit upclose.unimelb.edu.au. You can also find us on Twitter and Facebook.

– Copyright 2016, the University of Melbourne.

Podcast Credits

Host: Dr Andi Horvath
Producer: Eric van Bemmel
Audio Engineer: Gavin Nebauer
Voiceover: Louise Bennet
Series Creators: Kelvin Param, Eric van Bemmel

This podcast was first published on 11 March 2016 by The University of Melbourne’s Up Close. Listen to the original podcast here

Text mining gold

Karin Verspoor, Associate Professor in the Department of Computing and Information Systems at the University of Melbourne and Deputy Director of the University of Melbourne Health and Bioinformatics Centre, describes her early fascination with computers and exposure to multiple languages as key drivers for her becoming a computational linguist.

“When I was nine years old my parents bought me a programmable games console, and I discovered that I really enjoyed getting computers to do things from my imagination – it appealed to my logic and creativity.”

Karin went on to study BASIC – a high-level computer programming language developed for non-scientists that was popularised in the 1980s when the home computer market exploded.

Born in Senegal on the west coast of Africa to Dutch parents, Karin’s formative experience with the games console drove her study for an undergraduate degree with double major in Computer Science and Cognitive Sciences at Rice University in Houston, Texas. “I was drawn to the question of how to get computers to think and understand language,” Karin says.

“It was the perfect course because it combined computing, psychology, philosophy and linguistics.”

On completing her undergraduate studies, Karin swapped the heat of Texas for the cooler climate of Scotland, where she undertook a Master’s degree and PhD in Cognitive Science and Natural Languages at the University of Edinburgh. After finishing the PhD and doing a short stint as a research fellow at Macquarie University in Sydney working on the Dynamic Document Delivery project, which looked at generating natural language texts on demand, Karin left academe for a very different world: the business of start-ups.

“It was arguably the most exciting period of my career – I was involved in two start-ups with amazing ideas,” Karin says. “One of them was trying to build a thinking machine that was going to predict the stock market. It was crazy and so much fun, but it died after the dotcom bubble crash.”

Although the second start-up was much more successful, Karin missed the world of research and so took up a position at the prestigious Los Alamos National Laboratory in New Mexico, where she was able to leverage her business experience and pursue applied research in computational methods for the extraction and retrieval of knowledge from databases and information systems.

“Los Alamos was the home of the human genome project, and it was there I got into computational biology,” explains Karin. “I started working on text mining in the published molecular literature, which eventually led me to the University of Colorado and an opportunity to work exclusively in biomedical text mining.”

Text mining is the analysis of natural language text – such as English or French – by a computer. It’s used to discover and extract new information by linking together data from different written sources to generate new facts or hypotheses.

Karin’s current work at the University of Melbourne involves applying text mining to the field of biomedical research. “The rate of scientific publications is dramatically increasing in the biomedical space,” explains Karin. “The most important biomedical research repository, PubMed, hosted by the United States National Library of Medicine, has indexed over 25 million research publications.”

The multi-disciplinary nature of current biomedical research combined with the huge amounts of published material means that scientists today must stay abreast of a much broader range of literature to stay up-to-date.

“We’re looking to develop an automated computer system that analyses words to discover the relationships between them – to provide researchers with a tool that allows them to ask more structured questions and receive more targeted information,” Karin says.
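A heavily simplified sketch of that idea is co-occurrence counting: terms that keep turning up in the same sentences are flagged as candidate relationships. The gene and drug names below are invented purely for illustration; real biomedical text mining systems are far more sophisticated:

```python
from collections import Counter
from itertools import combinations

# Toy "literature": each string stands in for a sentence from a paper.
sentences = [
    "geneA is inhibited by drugX in tumour cells",
    "drugX also binds geneB",
    "geneA expression rises after drugX withdrawal",
]
terms = {"geneA", "geneB", "drugX"}  # a tiny dictionary of known entities

pairs = Counter()
for sentence in sentences:
    found = sorted(terms & set(sentence.split()))  # entities in this sentence
    pairs.update(combinations(found, 2))           # count each co-occurring pair

# Pairs that co-occur most often are candidate relationships for a
# researcher (or a downstream algorithm) to investigate.
print(pairs.most_common())  # [(('drugX', 'geneA'), 2), (('drugX', 'geneB'), 1)]
```

Linking entities across many such sentences, drawn from millions of papers, is what lets a text mining system surface connections no single researcher would have time to read.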

– Carl Williams

Gravity waves hello

Featured image above credit: NASA/C. Henze

For the first time, scientists have observed ripples in the fabric of spacetime called gravitational waves, arriving at the earth from a cataclysmic event in the distant universe. This confirms a major prediction of Albert Einstein’s 1915 general theory of relativity and opens an unprecedented new window onto the cosmos.

Gravitational waves carry information about their dramatic origins and about the nature of gravity that cannot otherwise be obtained. Physicists have concluded that the detected gravitational waves were produced during the final fraction of a second of the merger of two black holes to produce a single, more massive spinning black hole. This collision of two black holes had been predicted but never observed.

The gravitational waves were detected by twin Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors, located in Louisiana and Washington in the USA. The discovery, accepted for publication in the journal Physical Review Letters, was made by the LIGO Scientific Collaboration (which includes the Australian Consortium for Interferometric Gravitational Astronomy (ACIGA) and the GEO600 Collaboration) and the Virgo Collaboration.

Australian scientists from The Australian National University (ANU), the University of Adelaide, The University of Melbourne, the University of Western Australia (UWA), Monash University and Charles Sturt University (CSU) contributed to the discovery and helped build some of the super-sensitive instruments used to detect the gravitational waves.

Leader of the Australian Partnership in Advanced LIGO Professor David McClelland from ANU, says the observation would open up new fields of research to help scientists better understand the universe.

“The collision of the two black holes was the most violent event ever recorded,” McClelland says.

“To detect it, we have built the largest experiment ever – two detectors 4000 km apart with the most sensitive equipment ever made, which has detected the smallest signal ever measured.”

Associate Professor Peter Veitch from University of Adelaide says the discovery was the culmination of decades of research and development in Australia and internationally.

“The Advanced LIGO detectors are a technological triumph and the discovery has provided undeniable proof that Einstein’s gravitational waves and black holes exist,” Veitch says.

“I have spent 35 years working towards this detection and the success is very sweet.”

Professor David Blair from UWA says the black hole collision detected by LIGO was invisible to all previous telescopes, despite being the most violent event ever measured.

“Gravitational waves are akin to sound waves that travel through space at the speed of light,” Blair says.

“Up to now humanity has been deaf to the universe. Suddenly we know how to listen. The universe has spoken and we have understood.”

With its first discovery, LIGO is already changing how astronomers view the universe, says LIGO researcher Dr Eric Thrane from Monash University.

“The discovery of this gravitational wave suggests that merging black holes are heavier and more numerous than many researchers previously believed,” Thrane says.

“This bodes well for the detection of large populations of distant black holes, research carried out by our team at Monash University. It will be intriguing to see what other sources of gravitational waves are out there, waiting to be discovered.”

The success of LIGO promises a new epoch of discovery, says Professor Andrew Melatos, from The University of Melbourne.

“Humanity is at the start of something profound. Gravitational waves let us peer right into the heart of some of the most extreme environments in the Universe, like black holes and neutron stars, to do fundamental physics experiments under conditions that can never be copied in a lab on Earth,” Melatos says.

“It is very exciting to think that we now have a new and powerful tool at our disposal to unlock the secrets of all this beautiful physics.”

Dr Philip Charlton from CSU says the discovery opened a new window on the universe.

“In the same way that radio astronomy led to the discovery of the cosmic microwave background, the ability to ‘see’ in the gravitational wave spectrum will likely lead to unexpected discoveries,” he says.

Professor Susan Scott, who studies General Relativity at ANU, says observing this black hole merger was an important test for Einstein’s theory.

“It has passed with flying colours its first test in the strong gravity regime which is a major triumph.”

“We now have at our disposal a tool to probe much further back into the Universe than is possible with light, to its earliest epoch.”

Australian technology used in the discovery has already spun off into a number of commercial applications. Examples include the test and measurement system Moku:Lab, developed by Liquid Instruments; vibration isolation for airborne gravimeters used in geophysical exploration; and high-power lasers for remote mapping of wind fields and for airborne searches for methane leaks in gas pipelines.

This information was first shared by Monash University on 12 February 2016. Read their news story here

Search engine collaboration

Lead researcher Associate Professor Falk Scholer is delighted with the $US56,000 Google Faculty Research Award for the project in the area of information retrieval, extraction, and organisation.

“It’s particularly exciting to receive support for this kind of research into search engine effectiveness from a leader in web search, like Google,” Scholer says.

The Google award will fund user-study experiments and support a top research student to work on the project, titled “Magnitude Estimation for the Evaluation of Search Systems”.

Scholer is running the project in collaboration with Professor Andrew Turpin, now of the University of Melbourne, but a former leader of RMIT’s celebrated Information, Search and Retrieval group (ISAR), which is ranked second in Asia/Oceania for Information Retrieval research.

“The project will be looking at a new approach for measuring whether users are satisfied with the results that they get from search engines,” he says.

“The aim is to enable more precise measurement of search effectiveness, and therefore allow future improvements to search systems to be identified more easily and reliably, supporting the faster development of impactful search technology.”

The current leader of ISAR, Professor Mark Sanderson, says the award underlines how information retrieval research at RMIT is well regarded internationally.

“Understanding how to improve search engines is an important research field here at RMIT, and getting support from Google is a big boost for us,” he says.

“I’m sure we’d all join in congratulating Falk, and wish him the best of luck with the project.

“It’s great to receive global recognition like this, especially as it follows on from his paper being selected as one of the top five presented at SIGIR 2015 – the world’s foremost information retrieval conference.”

SIGIR, the Association for Computing Machinery’s Special Interest Group on Information Retrieval, is the major international forum for the presentation of new research results and for the demonstration of new systems and techniques in Information Retrieval.

Scholer’s SIGIR paper, “The Benefits of Magnitude Estimation Relevance Assessments for Information Retrieval Evaluation”, foreshadowed the project that has now won the Google award.

“The paper at SIGIR reported on an initial study in the area and the Google grant will enable us to investigate evaluation using magnitude estimation more deeply, in particular in the context of web search,” he says.

RMIT is ranked in the world’s top 100 universities for computer science and information systems. Find out more.

This article was first published by RMIT on 1 February 2016. Read the original article here.

Medical Research

Passage of the Medical Research Future Fund Bill

The successful passage of legislation to establish the Medical Research Future Fund (MRFF) Bill 2015 will significantly benefit the health and wellbeing of thousands of Australians. It will also strengthen Australia’s position as a global leader in medical research, says Professor James McCluskey, Deputy Vice Chancellor Research at The University of Melbourne.

“The full $20 billion accumulated in the fund will double Australia’s investment in medical research. This will allow more commercial spinoffs to be captured for the benefit of Australians through innovation, leading to economic activity and new, highly-skilled jobs,” says McCluskey.

With an initial contribution of $1 billion from the uncommitted balance of the Health and Hospitals Fund, and $1 billion provided per year until it reaches $20 billion, the MRFF will support basic and applied medical research – and will be the largest of its kind in the world.

To ensure the MRFF meets the needs of the medical research community, amendments to the Bill include directing funding towards translational research, which attracts added research funding from the commercial sector. Also included are suggestions by the Australian Greens, such as ensuring that funding for the National Health and Medical Research Council will not be shifted to the MRFF.

By providing an alternative source of funding to the National Health and Medical Research Council (NHMRC), the MRFF will make Australia more competitive with other countries that already have multiple funding agencies.

The UK, for example, has the Medical Research Council – the equivalent of the NHMRC – as well as the $40 billion Wellcome Trust, a charitable foundation that invests in medical research. The USA also has a number of very generous funding sources, such as the Bill and Melinda Gates Foundation, the National Institutes of Health and the Howard Hughes Medical Institute.

Researchers from the health, university, industry and independent medical research institute sectors will be able to access the MRFF. It may also fund interdisciplinary areas such as medical physics, big data analytics and other fields contributing to national health and medical outcomes.

“Importantly, MRFF will also include initiatives that are currently not well supported by public research funding schemes,” says McCluskey. “For example, joint research with government or pharma [the pharmaceutical industry] in the development of new drugs and medical devices.”

The exact fields to be targeted will be determined by the Minister for Health, Sussan Ley. Advice will come from an independent board of experts, including the CEO of the NHMRC and eight members with expertise in medical research and innovation, health policy, commercialisation, philanthropy, consumer issues, and the translation of research into applications in frontline medical practice. The Minister will announce the members of the board shortly.

The MRFF will be established following Royal Assent of the Bill.

– Carl Williams