Insights on Data Science: Hyperbolic Embeddings


An Interview with the Data Science team @ ClearAccessIP

 Big things are happening at ClearAccessIP, not the least of which is a major update to our machine learning (“ML”) algorithms. The team is on the cutting edge of ML and is applying hyperbolic embedding, a recently developed technique for representing data in curved space, to problems in patent analysis and valuation, even though the technique has only been introduced and developed over the past year.

The following contains excerpts from an interview with our data science team:

Tell us about Hyperbolic Embedding as it relates to ClearAccessIP.

Hyperbolic embedding is a cutting-edge technique in machine learning. In recent years there have been a number of developments in natural language processing involving representing characters, words, and sequences of words as vectors. Traditionally these vectors are embedded in a flat Euclidean space; hyperbolic embeddings place them in a curved space instead. The main advantage is that volumes in hyperbolic space grow exponentially, rather than polynomially, as a function of radius, which makes the space a natural fit for hierarchical data. Early research in the area has demonstrated that this allows for achieving as good or sometimes even better results using vectors with far fewer coordinates, which reduces computational time and memory requirements.
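To make the geometry concrete, here is a minimal sketch (not ClearAccessIP's production code) of the distance function on the Poincaré ball, one common model of hyperbolic space. It uses only NumPy, and the example points are arbitrary, chosen purely to show how distances stretch near the boundary of the ball.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Hyperbolic distance between two points inside the unit Poincaré ball.

    d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    """
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_diff / max(denom, eps))

# Two pairs of points with the same Euclidean separation (0.2)...
near_origin = poincare_distance([0.0, 0.0], [0.2, 0.0])
# ...but the second pair sits near the boundary, where volume "piles up".
near_boundary = poincare_distance([0.75, 0.0], [0.95, 0.0])

print(round(near_origin, 2))    # ~0.41
print(round(near_boundary, 2))  # ~1.72, roughly 4x larger for the same Euclidean gap
```

That rapidly expanding room near the boundary is what lets hierarchical, tree-like relationships fit into far fewer coordinates than a flat Euclidean embedding would need.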

Very few researchers have even begun to explore using these concepts in their work, but we’ve been working closely with Stanford professors to apply them to problems in patent analysis and valuation. These promise to be some of the first meaningful applications of this technology to find their way into industry.

Why would a company want to utilize hyperbolic embedding for machine learning?

Hyperbolic embedding gives us comparable or better results faster, because it takes less computing power to produce them. We all have limited resources and budgets, and getting equivalent output with traditional methodologies would be vastly more expensive, if not impossible.

What unique challenges do you face in analyzing the patent corpus when compared with other datasets and fields?

There is no database of training data for patents, at least not at the scale we would require, so we have to figure it out on our own. Unsupervised learning methods are the standard approach for training word and paragraph vectors, but these techniques are still new and far from perfect, so we have to be creative about how we combine and supplement them to produce better results.
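As an illustration of the kind of unsupervised training referred to here, the following is a minimal sketch of learning paragraph vectors with gensim's Doc2Vec. The toy corpus and parameters are placeholders, not ClearAccessIP's actual data or pipeline.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy "patent paragraphs" -- real training would use a large claims/spec corpus.
paragraphs = [
    "a battery cell comprising a lithium metal anode",
    "a method for wireless charging of an electric vehicle",
    "an antenna array configured for beamforming",
]

# Unsupervised: the only "labels" are document tags, not human annotations.
corpus = [
    TaggedDocument(words=text.split(), tags=[i])
    for i, text in enumerate(paragraphs)
]

model = Doc2Vec(corpus, vector_size=64, window=5, min_count=1, epochs=40)

# Embed an unseen paragraph and retrieve its nearest training documents.
query = "charging circuitry for an electric car".split()
vector = model.infer_vector(query)
print(model.dv.most_similar([vector], topn=2))
```

Because the model learns only from the text itself, the quality of the resulting vectors has to be judged downstream, which is exactly where the vetting problem described next comes in.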

Additionally, once we get the results, vetting them is difficult without an attorney’s perspective, and that’s assuming attorneys even agree about what a good result looks like. Patent attorneys and patent agents in prosecution firms, litigation houses, and licensing organizations expect different outputs. In fact, that’s a big part of the reason we don’t have training data: for many of the problems we attack, the “right” answer depends on who you’re talking to. Even showing seemingly objective measures and language in documents like enablement reports still leads to different perspectives on the breadth of results.

But for us, the lack of training data makes these problems particularly exciting. It forces innovation and original insight; we can’t necessarily just throw more data at a model to improve its results. To give a comparison, many computer vision models rely on huge datasets (such as ImageNet) that are the product of decades of image analysis and hand labeling. For marketing in social networks, there are years of human interactions to analyze, and there is a good metric for determining how well your personalized ads performed: clicks. But this doesn’t exist in patents to the same degree. Creating training data has been prohibitively expensive, but with cutting-edge tools like Snorkel, we may be able to do so in a more cost-effective manner.
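As a deliberately simplified example of the Snorkel-style approach, the sketch below combines a few hypothetical labeling heuristics into probabilistic training labels. The labeling functions and the "is this claim software-related" task are invented for illustration; they are not ClearAccessIP's actual labeling functions.

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, NOT_SOFTWARE, SOFTWARE = -1, 0, 1

# Hypothetical, noisy heuristics written by domain experts instead of hand labels.
@labeling_function()
def lf_mentions_processor(x):
    return SOFTWARE if "processor" in x.claim_text.lower() else ABSTAIN

@labeling_function()
def lf_mentions_compound(x):
    return NOT_SOFTWARE if "chemical compound" in x.claim_text.lower() else ABSTAIN

@labeling_function()
def lf_cpc_computing(x):
    # CPC class G06 covers computing; treat it as weak evidence only.
    return SOFTWARE if x.cpc_code.startswith("G06") else ABSTAIN

df_train = pd.DataFrame({
    "claim_text": [
        "A processor configured to execute instructions stored in memory.",
        "A chemical compound for treating inflammation.",
        "A system comprising a processor and a display.",
    ],
    "cpc_code": ["G06F", "A61K", "G06T"],
})

# Apply the labeling functions, then let the label model reconcile their
# agreements and conflicts into probabilistic labels for a downstream classifier.
lfs = [lf_mentions_processor, lf_mentions_compound, lf_cpc_computing]
L_train = PandasLFApplier(lfs=lfs).apply(df=df_train)

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train=L_train, n_epochs=200, seed=42)
print(label_model.predict_proba(L=L_train))
```

The appeal of this pattern is that domain knowledge (here, imaginary rules about claim language and CPC codes) is captured once as code rather than spent on labeling documents one at a time.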

Where do you see the future of ML?

Things are only going to get better as people have more time to research, develop, and refine methods. At ClearAccessIP we’re excited to get our hands dirty and test out applications of new ideas and research. The big advancements will come as more research is published on the topic and we’ve had time to see what works for us and what doesn’t. This research can only happen if venture capital and corporations continue to invest in making AI more available in our everyday lives, and I’m confident that the benefits of this research will shine through.

Learn more about our data science team and their developments in machine learning-based patent search here.