
Facebook's AI training models can now process 40,000 images a second

Jun 9, 2017

Artificial intelligence researchers at Facebook have figured out how to train their AI models for image recognition at eye-popping speeds.

The company announced the results of the effort to speed up training time at the Data@Scale event in Seattle this morning. Using Facebook's custom GPU (graphics processing unit) hardware and some new algorithms, researchers were able to train their models on 40,000 images a second, making it possible to get through the ImageNet dataset in under an hour with no loss of accuracy, said Pieter Noordhuis, a software engineer at Facebook.
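The article doesn't show Facebook's implementation, but the general recipe behind that kind of throughput is synchronous data-parallel training: a large minibatch is split across many GPUs, each replica computes gradients on its slice, and the gradients are combined before every update. The sketch below illustrates that idea in PyTorch; the ResNet-50 model, the hyperparameters, and the train_step helper are placeholder assumptions for illustration, not Facebook's actual setup.

```python
# Illustrative sketch only: the article does not describe Facebook's code.
# It shows synchronous data-parallel training, where one large minibatch is
# split across the available GPUs each step to raise images processed per
# second. Model and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet50(num_classes=1000)
if torch.cuda.device_count() > 1:
    # Replicate the model on every available GPU; each replica processes a
    # slice of the minibatch, and gradients are summed before the update.
    model = nn.DataParallel(model)
if torch.cuda.is_available():
    model = model.cuda()

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

def train_step(images, labels):
    """One synchronous step over a (potentially very large) minibatch."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Larger per-step minibatches are what make it worthwhile to spread work across more GPUs, which is the usual lever for pushing throughput toward figures like 40,000 images a second.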

"You don't need a proper supercomputer to replicate these results," Noordhuis said.

The system learns to associate images with words, a technique called "supervised learning," he said. Thousands of images in a training set are each assigned a description (say, a cat), and the system is shown all of the images along with their classifications. Researchers then present the system with images of the same object (say, a cat) but without the description attached. If the system recognizes that it's looking at a cat, it has learned to associate imagery with descriptive words.
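As a concrete illustration of the supervised setup Noordhuis describes (labeled images during training, unlabeled images at prediction time), here is a minimal, self-contained sketch. The tiny linear model, the random tensors, and the two-class "cat / not-cat" labels are invented purely for demonstration.

```python
# Minimal illustration of supervised learning: the model is trained on images
# that carry labels (e.g. "cat"), then asked to predict labels for images it
# has never seen. All data here is a toy stand-in.
import torch
import torch.nn as nn

# 100 labeled "training" images and 10 unlabeled "test" images, each 3x32x32,
# with 2 classes (0 = not-cat, 1 = cat).
train_images = torch.randn(100, 3, 32, 32)
train_labels = torch.randint(0, 2, (100,))
test_images = torch.randn(10, 3, 32, 32)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training: the system sees images *with* their labels attached.
for _ in range(20):
    optimizer.zero_grad()
    loss = criterion(model(train_images), train_labels)
    loss.backward()
    optimizer.step()

# Inference: the system sees new images *without* labels and must guess.
predictions = model(test_images).argmax(dim=1)
print(predictions)  # predicted class per test image
```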

The breakthrough allows Facebook AI researchers to start working on even bigger datasets, like, say, the billions of things posted to its website every day. It's also a display of Facebook's hardware expertise: the company made sure to note that its hardware designs are open source. "This means that for others to reap these benefits, there's no need for incredibly advanced TPUs," it said in a statement, throwing some shade at Google's recent TPU announcement at Google I/O.

Facebook plans to release more details about its AI training work in a research paper published to its Facebook Research page.


Source: GeekWire