The terminology around what Applica can do is complex. Misinformation is rampant across the entire emergent field of AI, and it’s hard enough to keep track of what the terms mean, let alone to notice when they’re being used incorrectly, which happens often. That’s why we’ve been creating content that makes it easier to understand how we do what we do. We know that for our clients to find us, we need to help them understand what it is they are, and aren’t, looking for.
In this two-part series of posts, we will explain how the Applica deep learning generative language model differs from other technologies that sometimes describe themselves using the same terms—and why it’s crucial to note the difference. We’ll start by banishing a pervasive myth about deep learning and go on to discuss feature- and rule-based systems, generativity, and the notion of those so-called “long tails” in our groundbreaking line of work.
First, let’s make sure deep learning and machine learning are being defined correctly: not as opposites, but as inherently connected. The relationship between these often contrasted technologies is not “either-or” but rather “one is a special case of the other.” Machine learning is the broad category of systems that learn patterns from data rather than following explicitly programmed instructions, whereas deep learning is a narrower, especially powerful sub-type of machine learning. What makes deep learning special? In brief, it does not require engineers to hand-code lists of features and the rules for handling them. Other types of machine learning require that the machine be taught and asked specific things, all of which must be planned and executed by humans. The “deep” in deep learning refers to the many stacked layers of the neural networks it uses; those layers learn useful features on their own, which makes the learning process largely self-governing and independent of the people writing and running the software. It is wrong and, in fact, misleading to contrast deep learning with machine learning, as so often happens in simple sales messaging. What makes sense instead is to distinguish deep learning from legacy rule-based and template-based solutions, from non-neural machine learning, or from all the various kinds of machine learning that are “non-deep.”
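For readers who like to see what “layers” means in code, here is a deliberately tiny, generic sketch in PyTorch. It is purely illustrative and has nothing to do with Applica’s actual TILT architecture; the layer sizes and the example task named in the comments are invented for the example.

```python
import torch
from torch import nn

# Illustrative only: a small "deep" model is simply a stack of layers.
# The hidden layers learn their own intermediate features from raw input,
# rather than being handed hand-coded features by a programmer.
model = nn.Sequential(
    nn.Linear(128, 64),   # raw input, e.g. an encoded snippet of a document
    nn.ReLU(),
    nn.Linear(64, 32),    # hidden layers build up learned representations
    nn.ReLU(),
    nn.Linear(32, 1),     # output, e.g. a score for "is this a payment amount?"
)

x = torch.randn(4, 128)   # a toy batch of four raw input vectors
print(model(x).shape)     # torch.Size([4, 1])
```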
In light of this, it’s fair to say that Applica’s proprietary TILT model is fully deep-learning-based. It is at the forefront of what machine learning has been able to achieve in the field of meaning extraction and business workflow improvement.
Second, we mentioned features and rules, which are the bread and butter of the old-school approach. Features and rules can be likened to the nouns and verbs IT engineers use to tell a machine what to do. What’s crucial here is not only that the right question is asked of the data available, but also that the data is defined in terms of the right features (otherwise some “right” data won’t get collected and/or some “wrong” data will). Features are anything that can be used to group data, for example “numerical” or “more than 800 characters in length” or “inserted in ink by signer” or “in footer of page” or “not in footer of page.” Then, the rules for handling the data need to be correctly phrased and sequenced. Look for X, then do Z. If P, don’t Q; otherwise Q everything. And here is where even good coding often fails: while you’re looking for X, don’t skip Y, a common misspelling of X, or x, or Xx, or Xy, and so on. This kind of precision requires downright inhuman infallibility on the part of the programmer, or absolute perfection from the highly imperfect machine learning solutions of the past.
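To make the nouns-and-verbs analogy concrete, here is a minimal, purely hypothetical sketch of hand-written features and rules of the kind described above. None of this is Applica code; the predicates, thresholds, and rule ordering are invented for illustration, and they show how much a human engineer has to anticipate up front.

```python
# Hypothetical, hand-coded features: each one is something an engineer must
# think of, name, and define in advance.
def is_numeric(token: str) -> bool:
    return token.replace(",", "").replace(".", "").isdigit()

def is_long_text(text: str) -> bool:
    return len(text) > 800                      # "more than 800 characters in length"

def in_footer(y: float, page_height: float) -> bool:
    return y > 0.9 * page_height                # "in footer of page"

# Hypothetical, hand-coded rules: they must be phrased and sequenced correctly,
# because the machine will follow them literally ("look for X, then do Z").
def handle(token: str, y: float, page_height: float) -> str:
    if in_footer(y, page_height):
        return "ignore"                         # rule: skip anything in the footer
    if is_numeric(token):
        return "extract_amount"                 # rule: numeric tokens are candidates
    return "skip"                               # everything the rules didn't foresee

print(handle("1,250.00", y=100, page_height=1000))   # extract_amount
print(handle("1,25O.00", y=100, page_height=1000))   # skip: the typo ("O") slips through
```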
In more practical terms, here’s a look at features using an example: Imagine you need to extract all price quotes, bills due, and amounts paid from a set of somewhat differing documents—all of which were filled out by humans, so they’re likely to reflect the ways humans do things imperfectly. Some of the figures you need to collect are preceded by a dollar sign or by the abbreviation USD. In others, this designation may follow the relevant numbers. In others still, the label might be missing altogether. And there may be variability in the way cent amounts are indicated—a period here, a comma there, superscript, a fraction, or nothing at all. Some amounts will be scribbled illegibly or might contain typos. Others will have errors—the simple kind, easy to see and to correct based on context. Others will be false in insidious ways, because there’s either sophisticated fraud or convoluted math involved. If an engineer is doing the coding, all of the above variability must be spelled out using language the machine can understand. Even with straightforward inquiries, it’s easy to miss the mark at the features-defining stage.
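As a hypothetical illustration of how quickly that variability piles up, here is what one attempt at a hand-written rule for those amounts might look like as a regular expression. It is not how Applica extracts amounts; even after covering leading and trailing labels and two cent separators, it still misses spelled-out amounts and does nothing about illegible scans, typos, or fraud.

```python
import re

# Hypothetical hand-written rule for dollar amounts. Every branch below is one
# more case a programmer had to remember, and plenty of real cases remain uncovered.
AMOUNT_PATTERN = re.compile(
    r"""
    (?:\$\s?|USD\s?)?        # optional leading "$" or "USD" label
    \d+(?:[,\s]\d{3})*       # integer part, with optional thousand separators
    (?:[.,]\d{2})?           # optional cents, written with a period or a comma
    (?:\s?(?:USD|\$))?       # optional trailing label
    """,
    re.VERBOSE,
)

samples = ["$1,200.50", "1200,50 USD", "USD 1 200.50", "twelve hundred dollars"]
for s in samples:
    match = AMOUNT_PATTERN.search(s)
    print(s, "->", match.group(0) if match else "missed")
```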
Let’s remember that artificial intelligence is not like human intelligence at all: we humans can understand many things without precisely knowing why and how we understand them. Thus, whether we’re identifying dollar amounts, recognizing something as a dog, or decoding the intentions behind facial expressions, we get by without explicitly defined features and rules. Conventional AI doesn’t.
Luckily, Applica does not require users to input lists of features or rules. This is thanks to our technology being built on a special kind of generativity, which we will define in Part Two of this post.
Interested in how Applica can help your company level up and put deep learning to work for your business model? Contact us today and start planning your strategy for tomorrow.