In this post we’re picking up where the previous article left off: it set out to decode the complex terminology around what Applica can do.
The emerging field of AI is often a mystery to non-experts. It’s a challenge to learn what the terms mean, let alone how they’re used, and they are often used erroneously. That’s why we want to clarify some important concepts and simplify talking about what we do and how we do it. Because for our technology to be useful to our clients, we first need to help them understand what it is they are (and aren’t) looking for.
In Part 1 we tackled the difference between deep learning and machine learning, then went on to discuss the hassle of coding for template-based systems using features and rules, which are basically the search criteria and action sequences, respectively, that an AI relies on to do its job, and which until recently were the norm in document automation. But then next-wave solutions like ours came along, and manual input of features and rules is finally becoming a thing of the past.
So how does deep learning do away with this kind of external input? Unlike actual human intelligence, which is holistic by comparison, all artificial intelligence is feature-based, and this includes deep learning. But what makes deep learning special is that it is capable of identifying features and rules all by itself, based on exposure to data alone. Deep learning works by essentially “teaching itself” to spot relevant data and group it accordingly. What is so hard to imagine, and to explain, is that even the experts can’t visualize the exact scheme by which the AI encodes and stores these “instructions.” What’s important is that it does so reliably, and more quickly than any human could ever identify a feature or rule, type a command, and hit “enter.”
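To make that contrast concrete, here is a minimal sketch in PyTorch, offered purely as an illustration and not as a description of Applica’s actual system. The hand-written rule at the top needs a human to name the feature; the small network below it is only shown labeled examples and adjusts its own internal weights.

```python
# Illustrative sketch only: a hand-written rule vs. a model that learns its own features.
import torch
import torch.nn as nn

# Old approach: a human engineer writes the feature and the rule by hand.
def is_invoice_rule(text: str) -> bool:
    return "invoice" in text.lower() or "amount due" in text.lower()

# Deep learning approach: the network is only shown examples and labels;
# its hidden layer learns whatever internal features separate the classes.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy stand-ins for vectorized documents and their labels.
features = torch.randn(100, 64)        # 100 "documents", 64 numbers each
labels = torch.randint(0, 2, (100,))   # 0 = not an invoice, 1 = invoice

for _ in range(50):                    # a few passes over the data
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()                    # the network adjusts its own weights;
    optimizer.step()                   # no human ever writes the features
```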
A good example of the way deep learning generates its own internal analysis comes from natural languages, whose vocabulary and grammar can likewise be taught either based on instruction (which is the equivalent of externally identifying and listing features and rules) or based on exposure to actual speech and writing (the equivalent of letting the AI loose on the right training data). Humans often do better with the latter, which explains how four years of high school French can be quite ineffectual as compared to one immersive summer in Paris. Deep learning is the same way, and, in fact, automating translation was one of the pioneering fields for deep learning-based technology, which is now being applied to new areas, including document automation.
Deep learning is often described in terms of something called generativity, which refers to the way the AI can “guess what comes next.” In other words, it generates content by predicting what is most likely to follow a given sequence. In the case of questions, it predicts answers. Generative models are contrasted with the older, so-called discriminative models, in which the machine discriminates, or selects, among mutually exclusive options, such as zeroes and ones. Here, learning is limited to distinguishing between specific categories, classes, or labels. (Think classifying documents as contracts or numbers as dollar amounts.) Generative training is more free-form, and that’s why deep learning-based generative models lend themselves to predictive typing, answering spontaneous questions, summarizing text, or translation. Of course, for AI to predict anything with accuracy, it needs to know a lot about the right things. And that’s where good training data comes in.
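Here is a rough sketch of the two styles using off-the-shelf Hugging Face pipelines; the example text, labels, and question are invented, and these generic models are not Applica’s. The discriminative model can only choose among the labels it is handed, while the generative model writes out whatever it predicts should come next.

```python
# Illustrative only: discriminative vs. generative behavior via Hugging Face pipelines.
from transformers import pipeline

# Discriminative: choose among mutually exclusive labels for a document.
classifier = pipeline("zero-shot-classification")
print(classifier(
    "Payment of $4,200 is due within 30 days of receipt.",
    candidate_labels=["contract", "invoice", "resume"],
))

# Generative: predict what comes next, e.g. answer a free-form question.
generator = pipeline("text-generation")
print(generator("Q: When is the payment due?\nA:", max_new_tokens=20))
```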
Since deep learning generative models don’t require human engineers to analyze features and rules or to enter them into the system by hand, the engineer’s role shifts from writing code to selecting what to show the machine. The work is now less like building and more like gardening, breeding, or genetic engineering. And the importance of data selection cannot be overestimated, because data must be chosen (and rejected!) in such a way that the machine can best learn what to do. This is a problem of quality, not quantity. Terabytes of Reddit posts won’t help your deep learning model analyze business documents the way several thousand well-chosen business documents will. That’s part of the way Applica is doing things differently. Our training sets are simply best in class because we give our AI the right nourishment in the form of healthy data.
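As a toy illustration of that curation step (the vocabulary, threshold, and sample texts below are made up for the example and say nothing about how Applica actually builds its training sets), a filter might keep documents that resemble the business texts we want to model and discard off-topic chatter.

```python
# Illustrative sketch only: curating training data for quality, not quantity.
BUSINESS_TERMS = {"invoice", "contract", "liability", "payable", "warranty", "agreement"}

def looks_like_business_document(doc: str) -> bool:
    """Keep documents that resemble the business texts we want the model to learn from."""
    words = [w.strip(".,;:").lower() for w in doc.split()]
    if not words:
        return False
    in_domain = sum(w in BUSINESS_TERMS for w in words)
    return in_domain / len(words) >= 0.05   # reject off-topic text such as forum chatter

candidates = [
    "This agreement limits the liability of either party under the contract.",
    "lol did anyone watch the game last night",
]
training_set = [doc for doc in candidates if looks_like_business_document(doc)]
print(training_set)   # only the contract-like text survives the filter
```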
What does all this mean for your business? Not only can you gain speed, accuracy, and scalability with regard to your document processing needs, but you can also move your best people to more important and more fulfilling work. And there’s more. If you’ve heard of “long tails,” you may know that they refer to those trailing horizontal curves that approach zero in graphs charting various phenomena. The typical example is word use frequency: very high numbers for very few items (the, you, was), a steep drop, and then the signature “long tail,” which extends to include thousands and thousands of rare words (liminal, repudiate, anon).
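The shape is easy to reproduce for yourself. The short sketch below, with a sample sentence invented for the purpose, counts word frequencies and prints the steep drop from a few very common words to a long tail of words that appear only once.

```python
# A tiny illustration of the long tail: a handful of words dominate,
# and a long run of rare words trails off toward frequencies of 1.
from collections import Counter

text = (
    "the contract says the supplier shall invoice the buyer and the buyer "
    "shall repudiate nothing that the liminal clause anon permits"
)
counts = Counter(text.split())

for word, count in counts.most_common():
    print(f"{word:10} {'#' * count}")   # common words first, then the long tail of singletons
```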
In AI-based document processing, the long tail principle has also applied: only a small number of document types could be processed in volumes significant enough to be worth coding for, and the rest were not viable for delegating to machines. The programming costs were simply too high to offset the human costs, lag times, and inaccuracy of doing things manually. But now there’s Applica, making automation possible for all those documents that were out of reach for our predecessors. And that spells progress for more than just your company’s workflows and turnaround times: it’s a path to entirely new business ideas and revenue streams. Because our deep learning doesn’t just find what you tell it to look for: it can show you what’s there to be found.
Want to find out more about how Applica can help put deep learning to work for you? Contact us today and get started on a better tomorrow.
This was Part Two in a two-part post.