Artificial Intelligence Analyses Distortions In Spacetime A Whopping 10 Million Times Faster

By Rajendra | Sep 1, 2017

Artificial intelligence isn't just good for customer service chatbots and personal assistants on your mobile; advances in the field are also helping to revolutionise scientific research.

Scientists from the Department of Energy's SLAC National Accelerator Laboratory and Stanford University have shown that a form of AI known as neural networks can accurately analyse complex distortions in spacetime a whopping ten million times faster than traditional methods.

"Analyses that typically take weeks to months to complete, that require the input of experts and that are computationally demanding, can be done by neural nets within a fraction of a second, in a fully automated way and, in principle, on a cell phone's computer chip," said postdoctoral fellow Laurence Perreault Levasseur, a co-author of a study published in Nature.

The team at the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of SLAC and Stanford, used the neural networks to look at images of strong gravitational lensing, where a picture of a far-flung galaxy is multiplied and distorted by the gravity of a massive object that's closer to us, such as a galaxy cluster. These distortions allow scientists to figure out how mass is distributed in space and how that distribution changes over time, both of which are properties linked to the invisible dark matter that makes up 85% of the matter in our Universe, and to dark energy.
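The characteristic scale of the lensing described above can be illustrated with a back-of-the-envelope calculation: for a point-mass lens, the angular size of the distorted ring of images (the Einstein radius) follows from the lens mass and the distances involved. A minimal sketch, where the mass and distances are illustrative placeholders and not values from the study:

```python
import math

# Einstein radius of a point-mass lens:
#   theta_E = sqrt( (4 G M / c^2) * D_ls / (D_l * D_s) )
# All input numbers below are illustrative assumptions, not from the paper.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
Gpc = 3.086e25         # one gigaparsec in metres

M = 1e14 * M_sun       # a galaxy-cluster-scale lens mass (assumed)
D_l = 1.0 * Gpc        # distance to the lens (assumed)
D_s = 2.0 * Gpc        # distance to the background galaxy (assumed)
D_ls = 1.2 * Gpc       # lens-to-source distance (assumed)

theta_E = math.sqrt(4 * G * M / c**2 * D_ls / (D_l * D_s))  # radians
arcsec = math.degrees(theta_E) * 3600
print(f"Einstein radius ~ {arcsec:.1f} arcsec")
```

For cluster-scale masses this comes out at tens of arcseconds, which is why strong lensing by clusters is visible in telescope images at all.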

Previously, neural networks have been used in astrophysics for simple applications, such as determining whether a picture showed gravitational lensing or not. But this experiment went far beyond that.

"The neural networks we tested, three publicly available neural nets and one that we developed ourselves, were able to determine the properties of each lens, including how its mass was distributed and how much it magnified the image of the background galaxy," said the study's lead author Yashar Hezaveh, a NASA Hubble postdoctoral fellow at KIPAC.

As our ability to peer further and further across the Universe improves, so does the volume of data we acquire, and sifting through it all becomes a monumental task.

The Large Synoptic Survey Telescope (LSST), for example, whose 3.2-gigapixel camera is currently under construction at SLAC, is expected to increase the number of known strong gravitational lenses from a few hundred today to tens of thousands.

"We won't have enough people to analyse all these data in a timely manner with the traditional methods," Perreault Levasseur said. "Neural networks will help us identify interesting objects and analyse them quickly. This will give us more time to ask the right questions about the universe."

As the name suggests, neural networks are modelled on how the human brain works, where a dense network of neurons quickly processes and analyses information.
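The idea in the paragraph above can be made concrete: a neural network is layers of simple "neurons" whose connection weights are adjusted during training until the outputs match known answers. A minimal self-contained sketch in plain NumPy, fitting a toy one-dimensional curve rather than lens images (this is an illustration of the technique, not the study's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn y = sin(x), standing in for "image in, lens properties out".
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X)

# One hidden layer of 16 neurons with tanh activation, one linear output.
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(5000):
    h = np.tanh(X @ W1 + b1)       # hidden-layer activations
    pred = h @ W2 + b2             # network output
    err = pred - y
    loss = float(np.mean(err ** 2))
    # Backpropagation: gradients of the mean-squared error.
    g_pred = 2 * err / len(X)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ g_h;    gb1 = g_h.sum(0)
    # Gradient-descent weight updates: this is the "learning".
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"final mean-squared error: {loss:.4f}")
```

The networks in the study are far deeper and take images as input, but the training loop, compute predictions, measure error, nudge the weights, is the same principle at a much larger scale.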

"The amazing thing is that neural networks learn by themselves what features to look for," said KIPAC staff scientist Phil Marshall, a co-author of the paper. "This is comparable to the way small children learn to recognise objects. You don't tell them exactly what a dog is; you just show them pictures of dogs."

But in this case, Hezaveh said, "It's as if they not only picked photos of dogs from a pile of photos, but also returned information about the dogs' weight, height and age."


The scientists used the Sherlock high-performance computing cluster at the Stanford Research Computing Center for this test, but one of the neural networks they tested was designed to work on iPhones, raising the possibility that these complex deductions could be done at high speed on a scientist's mobile phone.

Source: Forbes