The research was published and presented on May 29 at the annual IEEE International Conference on Robotics and Automation (ICRA 2023) – the premier international robotics conference – held this year in London.
Lead researcher, QUT PhD student Connor Malone, said there were many Visual Place Recognition (VPR) techniques and positioning methods available, each tackling a different problem and each working better in some circumstances than others.
“Sometimes a robot needs to operate in places where environmental conditions change, you might have snow, rain or lighting conditions, or even just temporal or structural changes with buildings. And so different techniques tend to tackle different problems,” Mr Malone said.
“What we are proposing is a system that can switch between those different techniques in response to different problems in the environment. So rather than the impossible goal of having one solution that does everything, we use the solutions that are already made to make a more robust system.”
“A naive approach would be to run all of these different techniques in parallel and use the ones that appear to be working better at a particular time, but this is very computationally intensive,” Mr Malone said.
“We run a single, known high-performance technique all the time and can predict – without having to run them all – which of the other techniques to add in to get the best performance.
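The idea of running one baseline technique on every frame while cheaply deciding what to add can be sketched as follows. Everything here – the feature dictionary, the `cheap_selector` heuristic, and the two toy techniques – is illustrative and not the published system:

```python
from typing import Callable, Dict

# Illustrative technique registry: each "technique" maps an image (here, a
# plain feature dict) to a match confidence. These are stand-ins for real
# VPR methods, used only to show the switching structure.
def sunny_technique(features: Dict[str, float]) -> float:
    return 1.0 - features.get("darkness", 0.0)

def night_technique(features: Dict[str, float]) -> float:
    return features.get("darkness", 0.0)

TECHNIQUES: Dict[str, Callable[[Dict[str, float]], float]] = {
    "sunny": sunny_technique,
    "night": night_technique,
}

def cheap_selector(features: Dict[str, float]) -> str:
    """Stand-in for the learned predictor: pick a technique from cheap
    image statistics, without running every technique first."""
    return "night" if features.get("darkness", 0.0) > 0.5 else "sunny"

def localise(features: Dict[str, float]) -> float:
    # Run only the selected technique, not all of them in parallel.
    chosen = cheap_selector(features)
    return TECHNIQUES[chosen](features)
```

In the actual system the selector is itself learned from data rather than hand-written; the heuristic above simply stands in for that prediction step.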
“This system could potentially be used on any sort of autonomous vehicle platform. A lot of the testing and data sets that we used were from self-driving car applications.”
“The particular focus of this system is about getting more bang for your buck. So, making cheap platforms, with cheap sensors and not a lot of computer power,” Mr Malone said.
The research was conducted by QUT PhD student Connor Malone; Dr Tobias Fischer, a lecturer in the School of Electrical Engineering & Robotics; former QUT Research Fellow Stephen Hausler, now a research scientist at CSIRO; and Professor Michael Milford, Joint Director of the QUT Centre for Robotics and Australian Research Council Laureate Fellow. It involved reviewing many data sets, each generally consisting of many images.
“We reviewed sequential images as a vehicle drove through an environment and labelled each image with the particular techniques that would work for it,” Mr Malone said.
“We then developed and trained systems we call ‘neural networks’, which are in essence AI systems, to learn for a particular image which technique is going to work best,” Mr Malone said.
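The labelling-and-training loop described above can be illustrated with a deliberately tiny stand-in. The real system trains neural networks on image data; this sketch uses synthetic one-dimensional "images" and a perceptron-style learner purely to show the structure: label each image with the technique that worked best, then train a classifier to predict that label from the image alone:

```python
import random

# Hedged sketch: all data and names below are synthetic and illustrative,
# not the paper's actual training setup.
random.seed(0)

LABELS = ["day_technique", "night_technique"]

def make_sample():
    darkness = random.random()
    label = 1 if darkness > 0.5 else 0   # which technique "worked best"
    return [darkness, 1.0], label        # feature plus a bias term

def train(samples, epochs=50, lr=0.1):
    """Perceptron-style learner: a minimal proxy for the neural network."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    # Given a new image's features, name the technique expected to work best.
    return LABELS[1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0]

data = [make_sample() for _ in range(200)]
weights = train(data)
```

After training, `predict(weights, ...)` maps an unseen image's features straight to a technique choice, which is what makes the switching decision cheap at run time.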
“The AI system is learning which of these conditions that it is having to account for – so whether it’s a difference in the appearance of a place, the lighting conditions, or seasonal changes,” Mr Malone said.
“The old approach can drive up the cost of the computer hardware or slow down the speed at which the robot can operate, which is not good from a commercial or usability perspective.”
“Everybody is trying to go for the holy grail of one system that fits everything, and thus we have ended up with many different systems that are good at different things. We use a switching mechanism: as the images come in, it switches between different techniques, but it is done in a very computationally cheap way.
“It does not take a lot of hardware and resources to actually do this. And the time that it takes to decide the switching is exceedingly small,” Professor Milford said.
The research is partially funded by Amazon via an Amazon Research Award, with additional support from Professor Milford’s ARC Laureate Fellowship and QUT Robotics.