To understand big data, convert it to sound

By satyamkapoor | Mar 28, 2018

Researchers exploring a radical concept argue that humans are far better at picking out changes in data patterns by ear than by looking at two-dimensional graphics. They believe that servers full of big data could become far more comprehensible if the numbers were taken off computer screens and printouts and sonified, that is, converted into sound.


The reasoning is that when you listen to music, nuances such as a wrong note almost jump out at you. Researchers at Virginia Tech suggest the same may apply to number crunching: listening to data could make it easier to spot anomalies in a data set or to grasp it as a whole.


The team is testing the theory with a recently built 129-loudspeaker array installed in a giant immersive cube inside Virginia Tech's science lab and performance space, the Moss Arts Center.


How researchers are testing their big data theory

The test subjects are data sets from Earth's upper atmosphere, with each piece of atmospheric data converted into a unique sound. The audio is varied through changes in pitch, amplitude and volume.
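To give a feel for the kind of mapping described here, the following is a minimal sketch in Python, not the researchers' actual system: it turns a one-dimensional data series into a sequence of sine tones whose pitch and loudness track the data values, so an anomalous value stands out as an audible jump. The function name sonify, the frequency range and the tone length are illustrative assumptions.

```python
# Hypothetical sonification sketch: map each value in a 1-D data series to a
# short sine tone whose pitch and amplitude scale with the value, then write
# the result to a mono WAV file.
import wave
import numpy as np

SAMPLE_RATE = 44100     # samples per second
TONE_SECONDS = 0.15     # duration of the tone for each data point

def sonify(values, out_path="sonified.wav", low_hz=220.0, high_hz=880.0):
    values = np.asarray(values, dtype=float)
    # Normalise the data to 0..1 so it can be mapped onto pitch and loudness.
    span = values.max() - values.min()
    norm = (values - values.min()) / span if span else np.zeros_like(values)

    t = np.linspace(0, TONE_SECONDS, int(SAMPLE_RATE * TONE_SECONDS), endpoint=False)
    tones = []
    for x in norm:
        freq = low_hz + x * (high_hz - low_hz)   # higher values -> higher pitch
        amp = 0.2 + 0.6 * x                      # higher values -> louder tone
        tones.append(amp * np.sin(2 * np.pi * freq * t))
    signal = np.concatenate(tones)

    # Convert to 16-bit PCM and write a mono WAV file.
    pcm = (signal * 32767).astype(np.int16)
    with wave.open(out_path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(pcm.tobytes())

# Example: a single spike in an otherwise smooth series becomes an audible jump.
series = np.concatenate([np.sin(np.linspace(0, 6, 200)), [5.0],
                         np.sin(np.linspace(6, 12, 200))])
sonify(series)
```

A spatial, multi-loudspeaker setup like the one described below would add direction as a further dimension of the mapping, something a simple stereo or mono rendering cannot convey.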


According to the university, the school's immersive Cube houses one of the largest multichannel audio systems in the world, producing sound in a 360-degree, three-dimensional format.

Spatial, immersive representation of big data through sound remains a largely unexplored area of research, but it offers a genuinely new perspective.


Source: HOB