
Google's Hinton outlines new AI advance that requires less data

By Rajendra | Nov 6, 2017

Google's Geoffrey Hinton, an artificial intelligence pioneer, on Thursday outlined an advance in the technology that improves the rate at which computers correctly identify images while relying on less data.

Hinton, an academic whose previous work on artificial neural networks is considered foundational to the commercialization of machine learning, detailed the approach, known as capsule networks, in two research papers posted anonymously on academic websites last week.

The approach could mean computers learn to identify a face in a photograph taken from an angle different from those in its bank of known images. It could also be applied to speech and video recognition.

"This is a much more robust way of identifying objects," Hinton told attendees at the Go North technology summit hosted by Alphabet's Google, presenting evidence for a thesis he had first proposed in 1979.

In the work with Google researchers Sara Sabour and Nicholas Frosst, individual capsules - small groups of virtual neurons - were instructed to identify parts of a larger whole and the fixed relationships between them.

The system then confirmed whether those same features were present in images the system had never seen before. Artificial neural networks mimic the behavior of neurons to enable computers to operate more like the human brain.
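The capsule idea can be illustrated with a small NumPy sketch, loosely based on the routing-by-agreement procedure described in the capsule-networks papers. This is not Hinton's actual code: the array shapes, iteration count, and helper names here are illustrative assumptions. Each lower-level capsule (a group of neurons) emits a vector prediction for each higher-level capsule; predictions that agree are routed together, and a squashing function keeps each output vector's length below 1 so it can act as a presence probability.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squashing nonlinearity: short vectors shrink toward zero,
    # long vectors approach (but never reach) unit length, so the
    # vector's norm can encode the probability that the entity the
    # capsule represents is present in the input.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def route(u_hat, n_iters=3):
    # u_hat: predictions from lower capsules for each higher capsule,
    # shape (n_lower, n_higher, dim).
    n_lower, n_higher, dim = u_hat.shape
    b = np.zeros((n_lower, n_higher))  # routing logits, start uniform
    for _ in range(n_iters):
        # Coupling coefficients: each lower capsule distributes its
        # output across higher capsules via a softmax over its logits.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        # Weighted sum of predictions per higher capsule, then squash.
        s = (c[..., None] * u_hat).sum(axis=0)  # (n_higher, dim)
        v = squash(s)
        # Agreement update: predictions that align with the current
        # output get a larger share of the routing in the next round.
        b = b + (u_hat * v[None]).sum(axis=-1)
    return v

# Illustrative run: 8 lower capsules voting for 4 higher capsules,
# each with a 16-dimensional pose vector.
v = route(np.random.randn(8, 4, 16))
```

The squashed output norms stay strictly below 1 by construction, which is the property that lets a capsule's vector length stand in for "is this part present?" while its orientation encodes the part's pose.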

Hinton said early testing of the technique had come up with half the errors of current image recognition techniques.

The bundling of neurons working together to determine both whether a feature is present and its characteristics also means the system should require less data to make its predictions.

"The hope is that maybe we might require less data to learn good classifiers of objects, because they have this ability of generalizing to unseen perspectives or configurations of images," said Hugo Larochelle, who heads Google Brain's research efforts in Montreal.

"That's a big problem right now that machine learning and deep learning needs to address, these methods right now require a lot of data to work," he said.

Hinton likened the advance to work two of his students developed in 2009 on speech recognition using neural networks that improved on existing technology and was incorporated into the Android operating system in 2012.

Still, he cautioned it was early days. "This is just a theory," he said. "It worked quite impressively on a small dataset" but now needs to be tested on larger datasets, he added. Peer review of the findings is expected in December.

Source: ET Tech