Nand Kishor is the Product Manager of House of Bots. After finishing his studies in computer science, he ideated and re-launched a Real Estate Business Intelligence Tool, creating one of the leading business intelligence tools for property price analysis in 2012. He also writes, researches, and shares knowledge about Artificial Intelligence (AI), Machine Learning (ML), Data Science, Big Data, and the Python language.
12 tips for designing and managing an AI-driven product
Here's a question that will keep future Artificial Intelligence (AI) entrepreneurs up at night: How do you manage a product when the software starts writing itself?
We're not quite there yet, but as we build smarter, more complex software with elements driven by AI, we're also making less predictable software. We know that AI will bring more capabilities to software, but it will also make software harder to design and manage, since it will sometimes behave in unplanned ways. This is a phenomenon that comes with building complex systems, and that's where software is heading. This is where complexity theory meets software.
For most of us who have been entrepreneurs, executives, engineers, and product managers in the software industry, we have designed and managed software for decades safely assuming a reasonable level of input-output certainty: when we feed in data, we can easily determine what the correct output should be. That's because we have mostly worked on simple systems. If you entered A and B, C would come out; if you don't get C, you know you have a defect that needs to be addressed. With simple systems, you can run the same set of test cases over and over again and expect the same outputs every time.
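That input-output certainty is what makes traditional regression testing work. As a minimal sketch (the function here is invented purely for illustration), a deterministic system lets the same fixed test cases pass run after run:

```python
# A hypothetical pricing function: with a simple, deterministic system,
# the same inputs always yield the same output, so a fixed test suite
# written once stays valid forever.
def compute_total(price, quantity):
    return price * quantity

# Entering A and B always produces C; anything else is a defect.
assert compute_total(10, 3) == 30
assert compute_total(10, 3) == 30  # repeat the same test: same answer
```

It is exactly this property that self-learning systems give up.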
Intelligent agents and other dynamic AI-based systems turn this concept on its head, because self-learning software continually adapts its outputs based on inputs from interactions with other systems and people. Some systems today have already gotten quite complex (especially in the enterprise), but introducing more AI-based algorithms will push complexity beyond where we've been in the past. We'll have systems that go from being difficult to decipher to outright indecipherable. And with intelligent agents, we're massively increasing the number of potential inputs (sometimes the input could be any combination of words in an entire language), which in turn dramatically increases the number of ways to interpret the input and widens the array of possible outputs.
For example, neural nets produce outputs from inputs, but between the input and the output sits a black box of computation. We won't know exactly why those outputs were generated from those particular inputs. And new training (how the algorithm updates its learning) means that the outputs may change given the same inputs. So dynamic updates from a continuously learning piece of software mean there will be layers of learning happening in real time that affect outputs in ways that won't be predictable. Some of these outputs will be fed into other parts of the system, creating additional layers of complexity. We are moving toward more complex system design. The term for the new, unexpected behavior produced by complex systems is emergence, and our software will only exhibit more emergent behavior as we make it more complex.
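A toy model makes the point concrete. This sketch (plain NumPy, not any particular framework) uses a one-weight "model" updated by gradient steps: after a single new interaction arrives and the model learns from it, the same query produces a different answer than it did before.

```python
import numpy as np

# Minimal sketch of a continuously learning system: one parameter,
# updated by gradient descent on squared error as new data arrives.
rng = np.random.default_rng(0)
w = rng.normal()  # model parameter, randomly initialized

def predict(x, w):
    return w * x

def train_step(w, x, y, lr=0.1):
    # one gradient step on the squared error (y - w*x)^2
    grad = -2 * x * (y - w * x)
    return w - lr * grad

x_query = 2.0
before = predict(x_query, w)
w = train_step(w, x=1.0, y=3.0)  # a new interaction arrives; the model learns
after = predict(x_query, w)
print(before != after)  # the same input now yields a different output
```

The fixed-test-suite assumption from the previous section no longer holds: any test that pinned the exact output of `predict` would break after the training step.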
This is more of an observation and an area for planning than a concern for me. We work with people every day who are unpredictable. No one knows exactly why people do what they do from moment to moment, yet we have found ways for humans to collaborate and get work done. Likewise, for software, we'll need to think through these issues as we build systems that become more complex. So, based on experience, I've put together some fundamental tips that can help with the issues above, as well as others that arise when building AI-driven products and AI-based intelligent agents. Note: depending on what you are building, you may need to ignore or alter some of these tips based on your particular goals.
1. Domain focus
Limiting your domain can help limit complexity. So it's a good idea to simplify and focus the things you do have control over, like your software's domain of expertise. Keep your product constrained to a narrow domain at first (focused on a logical set of jobs to do for the customer and a logical body of knowledge around one expertise, for example), and learn before you expand into other domains.
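One simple way to enforce a narrow domain is an explicit allow-list of intents: anything outside it is declined rather than answered. This is a hedged sketch with invented names (the intents borrow from the real-estate example in the bio above), not a prescription for any particular agent framework.

```python
# Constrain the agent to a small, known set of jobs; everything else
# is politely refused, which keeps the space of inputs (and surprises)
# small while you learn from real usage.
SUPPORTED_INTENTS = {"price_lookup", "listing_search", "market_trend"}

def handle(intent, query):
    if intent not in SUPPORTED_INTENTS:
        return "Sorry, that's outside what I can help with."
    return f"Handling {intent}: {query}"

print(handle("price_lookup", "2-bed flat in Austin"))
print(handle("tax_advice", "capital gains?"))  # out of domain: declined
```

The refusal branch is as important as the happy path: it is what keeps the product from drifting into domains you haven't designed or tested for.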
2. Learning feedback loops
Every interaction is a chance to learn. Your systems should learn something from all (or almost all) interactions with humans and other systems. Feedback loops are needed for your software to self-correct and learn, and they also give you the information you need to adjust your product and plan for the future. Within your domain, be cognizant of what to optimize for at a high level, but don't over-optimize too soon. Although an AI product can be murky as you explore product feedback loops, choose a more general, large set of capabilities at first, then look for the problems you will be solving for the user. As people use the product, your optimizations can be based on actual customer usage over time.
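As an illustrative sketch of such a feedback loop (all names here are invented), the system can record every interaction together with explicit user feedback and use the accumulated signal to self-correct, in this case by preferring response templates that earn thumbs-up over those that earn thumbs-down:

```python
from collections import defaultdict

feedback_log = []         # raw interaction log, kept for later analysis
score = defaultdict(int)  # running feedback score per response template

def record(query, response, thumbs_up):
    # every interaction teaches the system something
    feedback_log.append((query, response, thumbs_up))
    score[response] += 1 if thumbs_up else -1

def best_response(candidates):
    # self-correction: prefer templates with the best accumulated feedback
    return max(candidates, key=lambda r: score[r])

record("hours?", "We're open 9-5.", True)
record("hours?", "Check the website.", False)
print(best_response(["We're open 9-5.", "Check the website."]))
```

The raw log matters as much as the running score: it is the data that later tells you, as the tip suggests, which problems your users actually need solved before you over-optimize for any one of them.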