I write columns on news related to bots, especially in the categories of artificial intelligence, bot startups, and bot funding. I am also interested in recent developments in the fields of data science, machine learning, and natural language processing.
Google Brain, the search giant's internal artificial intelligence division, has been making substantial progress on computer vision techniques that let software parse the contents of hand-drawn images and then recreate those drawings on the fly. The latest release from the division's AI experiments series is a new web app that lets you collaborate with a neural network to draw doodles of everyday objects. Start with any shape, and the software will then auto-complete the drawing to the best of its ability using predictions and its past experience digesting millions of user-generated examples.
The software is called Sketch-RNN, and Google researchers first announced it back in April. At the time, the team behind Sketch-RNN revealed that the underlying neural net is being continuously trained using human-made doodles sourced from a different AI experiment first released back in November called Quick, Draw! That program asked human users to draw various simple objects from a text prompt, while the software attempted to guess what it was every step of the way. Another spinoff from Quick, Draw! is a web app called AutoDraw, which identified poorly hand-drawn doodles and suggested clean clip art replacements.
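The doodles these experiments collect and learn from are stored not as pixels but as sequences of pen movements, which is what lets a recurrent network predict a drawing stroke by stroke. A minimal sketch of that idea, assuming a "stroke-3" style encoding of (dx, dy, pen-lifted) steps, as described in the Sketch-RNN paper (the function name and sample doodle here are illustrative):

```python
def to_stroke3(strokes):
    """Convert a doodle, given as a list of strokes (each a list of (x, y)
    points in absolute coordinates), into stroke-3 format: a flat sequence
    of (dx, dy, pen_lifted) steps, where dx/dy are offsets from the
    previous point and pen_lifted marks the end of a stroke."""
    steps = []
    prev = (0, 0)
    for stroke in strokes:
        for i, (x, y) in enumerate(stroke):
            lifted = 1 if i == len(stroke) - 1 else 0  # pen up after last point
            steps.append((x - prev[0], y - prev[1], lifted))
            prev = (x, y)
    return steps

# A two-stroke doodle: a short horizontal line, then a separate vertical one.
doodle = [[(0, 0), (5, 0)], [(5, 5), (5, 10)]]
print(to_stroke3(doodle))
# → [(0, 0, 0), (5, 0, 1), (0, 5, 0), (0, 5, 1)]
```

Because each step is relative to the last, the same sequence describes the same shape anywhere on the canvas, which makes the data far easier for a sequence model to learn from than raw images.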
All of these programs improve over time as more people use them and keep feeding the AI learning mechanism instructive data. The end goal, it appears, is to teach Google software to contextualize real-world objects and then recreate them using its understanding of how the human brain draws connections between lines, shapes, and other image components. From there, Google could reasonably deploy even better versions of its existing image recognition tools, or perhaps even train future AI algorithms to help robots tag and identify their surroundings.
In the case of this new web app, users can work alongside Sketch-RNN to see how well it takes a starting shape and transforms it into the object they're trying to draw. For instance, select "pineapple" from the drop-down list of preselected subjects and start with just an oval. From there, Sketch-RNN attempts to make sense of the object's orientation and decides where to doodle in the fruit's thorny protruding leaves:
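In outline, the interaction works as a generation loop: the user's starting strokes seed the model, which then repeatedly emits the next pen step until it decides the drawing is complete. A minimal sketch of that loop, where a hypothetical `predict_next` stands in for the trained recurrent network (here it simply traces an oval, purely to show the loop's shape):

```python
import math

def predict_next(history, step):
    """Stand-in for the RNN: emit (dx, dy, pen_lifted) steps tracing an
    ellipse, then signal 'drawing finished' after one lap."""
    if step >= 12:
        return None  # model decides the doodle is complete
    angle = 2 * math.pi * step / 12
    return (math.cos(angle), 0.5 * math.sin(angle), 0)

def autocomplete(seed_steps, max_steps=100):
    """Extend the user's starting strokes one predicted step at a time."""
    steps = list(seed_steps)
    for step in range(max_steps):
        nxt = predict_next(steps, step)
        if nxt is None:
            break
        steps.append(nxt)
    return steps

completed = autocomplete([(0.0, 0.0, 0)])  # one seed step plus 12 generated
print(len(completed))
```

In the real system, the seed strokes condition the network's hidden state and each next step is sampled from a learned distribution rather than computed from a formula, which is why the app can complete the same oval differently depending on which subject you select.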