Artificial intelligence’s “black box” decision-making presents challenges for AI and machine learning innovators who want to file for patents. Practitioners need to consider the unique properties of AI technology to secure meaningful and enforceable patent protection for these inventions.
The problem: “Black box” AI algorithms
Machine learning relies on training a piece of software to make decisions by providing feedback on the output it produces while processing a set of training data. The programmers create the initial structure of the software and define the feedback heuristics used to train it, but the software produced by the training process is often a jumble of weights and interconnections between nodes in a neural network or some similarly human-illegible chunk of math. Thus, while AI systems created via ML often exhibit highly effective decision-making, they do so without providing their creators (or anyone else) any meaningful insight as to the underlying logic of the system.
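The point above can be made concrete with a small sketch. The example below (illustrative only, not from the article) trains a tiny neural network on the XOR function using plain gradient descent; all the names, dimensions, and data are hypothetical. The trained "logic" ends up stored entirely as numeric weights, which is exactly the human-illegible artifact the paragraph describes.

```python
import math
import random

random.seed(0)

# Toy training set: the XOR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# A minimal 2-input -> 3-hidden-unit -> 1-output network with
# randomly initialized weights (the "initial structure" the
# programmers create).
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [random.uniform(-1, 1) for _ in range(3)]
w2 = [random.uniform(-1, 1) for _ in range(3)]
b2 = random.uniform(-1, 1)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

lr = 0.5  # learning rate: how strongly each piece of feedback nudges the weights
for _ in range(5000):
    for (x1, x2), target in data:
        # Forward pass: compute the network's output for this example.
        h = [sigmoid(w1[j][0] * x1 + w1[j][1] * x2 + b1[j]) for j in range(3)]
        o = sigmoid(sum(w2[j] * h[j] for j in range(3)) + b2)
        # Backward pass: feedback on the output adjusts every weight slightly.
        d_o = (o - target) * o * (1 - o)
        for j in range(3):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_o * h[j]
            w1[j][0] -= lr * d_h * x1
            w1[j][1] -= lr * d_h * x2
            b1[j] -= lr * d_h
        b2 -= lr * d_o

# After training, the network's decision-making lives entirely in these
# numbers; nothing here reads as a rule like "output 1 when inputs differ".
print("hidden weights:", [[round(w, 2) for w in row] for row in w1])
print("output weights:", [round(w, 2) for w in w2], "bias:", round(b2, 2))
```

Even in this four-example toy, the result is just a table of floating-point values; inspecting them tells an observer essentially nothing about the decision rule the network has learned, which is the core difficulty for describing such systems in a patent specification.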
To read the full article by Stacy Rush and Matthew Norward (both former associates), please visit the Canadian Lawyer Magazine's website.